“Is TDD dead?” – No, it’s still the Future (But Ruby-on-Rails is dead.)

Here are some thoughts on the Google+ hangout series “Is TDD dead?” with David Heinemeier Hansson (Ruby on Rails), Martin Fowler (do I have to list anything?) and Kent Beck (author of the book TDD by Example). I’d like to provide a short summary of their opinions and discuss the key points in relation to other experiences and some empirical data.

For those who are on a short coffee break: Of course TDD is not dead, Heinemeier Hansson just likes to code like in the good ol’ days of wizard programming ;-).

Short Summary of Beck’s, Heinemeier Hansson’s and Fowler’s points

Kent Beck, author of TDD by Example, obviously advocates TDD. He explains how it fits his way of tackling complex problems and how it helps to overcome anxiety when facing complicated tasks. A developer is allowed to have confidence that his code will work, and he claims TDD provides just that. The question of too many tests and the test-to-production-code ratio is interesting, as he explains why the ratio depends on the type of project and basically on the amount of coupling between components. Beck points out that deleting tests is not only valid but also necessary – something he already mentions in his book.

The cornerstone of TDD is that breaking problems into smaller problems always yields a next step, some achievable task at hand. Chop up a hard task until it’s easy to write a test, come up with a solution, then refactor to have a SOLID foundation for the next layer. (OK, I made the SOLID up myself ;-).)

Fun quote in response to Heinemeier Hansson’s claim that unit-testing always leads to bad, layer-bloated design (Martin Fowler laughing really hard):

David, you know a lot more about driving a car than me. But here in Oregon, if you get out of your car and you are some place that you don’t wanna be, getting a new car is not gonna fix that.

I have to remember that one ;-).

David Heinemeier Hansson states with firm conviction that TDD does not fit his way of working – he simply does not like TDD. He claims that code written with TDD is not better than conventionally tested code, often even worse because of the use of mocks. He presents a bad example of code, supposedly derived via TDD. That’s where Beck responds with the fun quote. In the last episode he argues that because it is so difficult to scientifically prove the superiority of one approach over another, we shouldn’t even try.

He points out that he once tried TDD, but when it came to web programming he found it just didn’t work for him. He firmly relies on the regression test suite, though. To summarize, he rejects unit-testing more or less as a whole, which includes TDD.

Martin Fowler points out that he likes to have self-testing code that can be shown to work with the press of a button. He doesn’t mind where it comes from, which does include TDD. He says that getting the right amount of test coverage, for an individual and for a team, is a calibration process – you have to find the patterns where you screw up and write a test for them next time. That may lead to an overshoot when developers tend to cover every feature. He also points out that it is always valid to reject practices that do not fit, and that one should not apply practices without reflecting on them – self-evident but still important.

On personalities

I must admit that I find it extremely hard to listen to Heinemeier Hansson. His way of arguing is very subjective, without leaving much room for other opinions. Every statement – or should I call it rant? – serves as a kind of (lame) excuse for his way of working, which he claims is just as good as or even superior to other approaches. It sometimes feels like Fowler and Beck are discussing Evolution with a Creationist. He is probably very convincing to those who do not have a (computer) science background and for whom the anxiety of leaving the good ol’ unstructured trial-and-error process behind for something unknown might cause them to fail. I hope that my discomfort with the personality does not influence the conclusions too much.

A matter of discipline

Going with Heinemeier Hansson, from what I understood: if the problem is too hard, or it’s too boring to figure out the details, it is all right to go with trial and error. Yes, doing HTTP/HTML/database stuff can be boring at times and you want quick results. But with an approach like TDD, finding the right abstractions can be fun while still being disciplined engineering. Heinemeier Hansson states that the ActiveRecord pattern cannot be well unit-tested in isolation. But bad testability is just an indicator that ActiveRecord is probably an anti-pattern.

Heinemeier Hansson claims that TDD leads to too many tests (“overtesting”), which makes it hard to refactor. I’ve seen this happen without TDD being taken to extremes, so again this is not related to TDD but to coverage requirements (“more than 80% unit-test coverage”). Some CI test-coverage plugins report the implicit public no-arg constructor of a class full of static helper methods as uncovered. A class of helper methods should trigger an architecture alert by itself, but to get 100% test coverage you can be sure that some developer is writing a test to make the warning go away. Or he creates a private no-arg constructor. I don’t know what’s worse.
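To illustrate (a contrived example of my own, not from the talks): the coverage tool flags the constructor of a utility class, and the “fix” is either a pointless test or a constructor that exists only to be hidden.

public final class PathHelpers {

    private PathHelpers() {
        // never called; exists only to satisfy the coverage metric
    }

    public static String normalize(String path) {
        return path.replace('\\', '/');
    }
}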

He also falls for a common misunderstanding. Having good black-box tests does not make white-box tests irrelevant. I don’t mean those beginner’s white-box unit-tests that peek around in objects using reflection to do state verification, but a minimal setup that puts the tested component in the center of attention.

Imagine the development of a car. Even if you test a car under all circumstances in the real world, this does not eliminate the need to test the engine, the gear and the electronics separately (and the subcomponents they are made of). After that you will want to test various combinations in integration setups. Even if there were only one type of engine and gear, it would simply be a waste of resources to do only high-level black-box testing, because most problems would have shown up earlier in a much cheaper test. Additionally, testing error cases or disaster recovery is often just not possible without a special setup. Wasting high-end gears to test your high-end engines in a high-end chassis on a desert race track is nothing you want to do on a regular basis, if only because it takes too long.

Maintaining the developer test suite(s) is as hard a problem as maintaining the production code. It may even be viable to have a separate role for that, something like a Test-Code Manager or Architect. I sometimes find myself in a situation where I discover that somebody else already wrote a test that I could have extended or adapted and refactored. It was just in an unexpected class, had a misleading name, or I was just too lazy to search for it. In general I found that test code quality is often way below production code quality. In part this is because writing tests in some circumstances (like web applications) is more a chore than the fun part. You would have to refactor big parts of the application or even drop a bad framework to make the tests look good. Hardly anybody does that.

Where you would reject a 50-line production code method, developers easily get away with a 50-line @Test method that mainly replicates the fixture and verification logic from the previous @Test method. By writing tests first, test quality usually improves as well, because tests aren’t written in a hurry to meet some metric. When tests are considered first-class citizens, developers tend to take greater care to make them as good as they can.
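A trivial sketch of what I mean (JUnit 4 assumed): the fixture is built in one @Before method instead of being replicated in every @Test method, so each test only states what it verifies.

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Before;
import org.junit.Test;

public class RecentFilesFixtureTest {

    private Deque<String> recentFiles;

    @Before
    public void setUpFixture() {
        // the shared fixture lives in one place instead of being copied into every test
        recentFiles = new ArrayDeque<>();
        recentFiles.push("a.txt");
        recentFiles.push("b.txt");
    }

    @Test
    public void newestEntryComesFirst() {
        assertEquals("b.txt", recentFiles.peek());
    }

    @Test
    public void sizeReflectsAllEntries() {
        assertEquals(2, recentFiles.size());
    }
}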

Empirical background on TDD

The software industry lives off its unproven best practices. That’s why Heinemeier Hansson can claim that TDD is dead – there is simply little empirical evidence on whether TDD is better. But there are some studies that are at least close enough to back up the benefits. Continuous testing, that is, automatically executing unit-tests after each change in the IDE, was actually studied empirically; some background information can be found here. Reducing the time between introducing an error or misconception and fixing it seems fundamental to me. As TDD per se can speed up this cycle from minutes (or hours) to a few seconds, it is valid to assume that TDD done right is an improvement over test-last approaches.

Bridging the gap between exploration and engineering

Sometimes there is the need to explore systems, libraries and frameworks. One answer is already in TDD by Example: in the chapter Red Bar Patterns there is the Learning Test – write a test for externally produced software. This is not the only way; visualization by writing a simple UI is a good start. But to verify this knowledge, just create a test.

As an example, setting the read-only flag using the DOS view on a folder in Windows does not make the folder unwritable. But removing the permission WRITE_DATA for the current user or group does. Here’s where TDD kicks in – state what you expect – but don’t expect that what you state is necessarily correct.
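A minimal sketch of such a learning test (my own, JUnit 4 and Java NIO2 assumed, Windows only since it uses the DOS attribute view): it pins down the expectation that the DOS read-only flag on a directory does not prevent creating files in it.

import static org.junit.Assert.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.DosFileAttributeView;

import org.junit.Test;

public class ReadOnlyFolderLearningTest {

    @Test
    public void dosReadOnlyFlagDoesNotMakeAFolderUnwritable() throws Exception {
        Path folder = Files.createTempDirectory("learning-test");
        DosFileAttributeView dosView =
                Files.getFileAttributeView(folder, DosFileAttributeView.class);
        dosView.setReadOnly(true);
        try {
            // expectation under test: file creation still succeeds
            Path file = Files.createFile(folder.resolve("still-writable.txt"));
            assertTrue(Files.exists(file));
        } finally {
            dosView.setReadOnly(false); // allow cleanup of the temp directory
        }
    }
}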

Conclusions

There are certainly some points from the 3+ hours of video that I dropped or missed. The talks clearly showed me that TDD is not dead. Given a focus on quality and longevity of a software product, it is more alive than ever. It delivers excellent results in the engineering categories of correctness, robustness, maintainability and extensibility. It is a natural barrier against rushing a system into production that is just not well designed. In the aircraft and space industry there is a saying:

We do not build anything until we know how to test it.

If you ever worked in the industry you know they mean every single unit and every little component down to a screw.

Classical engineering is also one of the main sources from which software engineers can gather knowledge. There are 150+ years of experience, from building cars to power plants to spaceships. If you are a serious software engineer you will want to benefit from this experience.

Why do we write tests in the first place?

We want our software to be a delight for the customer. It should deploy without error, it should start without any error we could have prevented, and it should help the customer do what he wants it to do. If there is some undesired situation, like a full disk, low memory or an unreachable server, we want to tell the interested stakeholders as precisely as possible what happened, at an appropriate level of detail. And obviously, stack overflows, off-by-one errors or null references should not occur in a final product – because we designed its contract carefully and tested it. Designing the contract becomes much easier with TDD because for a short time we can change our perspective from provider to client.
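A hypothetical sketch of that perspective switch (all names are mine, JUnit 4 assumed): the test is written from the client’s point of view and states how an unreachable server should be reported, before any provider code exists.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

public class ReportServiceContractTest {

    @Test
    public void reportsTheUnreachableHostToTheCaller() {
        // hypothetical endpoint stub that always refuses connections
        ReportService service = new ReportService(new UnreachableEndpoint("reports.acme.local"));

        try {
            service.fetchDailyReport();
            fail("expected a ServiceUnavailableException");
        } catch (ServiceUnavailableException e) {
            // the client decides what it needs to know: which host failed
            assertEquals("reports.acme.local", e.getHost());
        }
    }
}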

Some approaches on the other hand, like the one Heinemeier Hansson describes as his own, are hardly engineering. Resisting a disciplined process because it’s not fun (or because it shows your design is no good) sounds a bit like handicraft work to me. You simply cannot rely on it. You cannot promise a customer that it takes 12 person-days to implement a feature when there was some self-proclaimed code wizard at work before.

The major personal driver for me in spreading TDD is that my work gets easier when I can rely on results yielded by a disciplined approach. It happened and still happens too often that I have to work with code that is just not finished; there are some alibi unit-tests but they only cover the happy path and a few obvious error cases. More often than not, I run into the untested or badly designed parts. I don’t like that. It’s not fun. You might have had fun developing that code. I have to clean up the mess. I have to go to our customer and break our promises. Who wants that?

That’s why TDD is not dead, but still the future. It’s alive and kickin’. On the other hand, despite its undisputed impact on web development in general, Ruby-on-Rails is dead. It showed that it does not scale, is hard to test, slow and hard to maintain. I call it the Deus-Ex-Machina or “Where the hell does that method come from?” problem. Ironically, with TDD, the design of Ruby-on-Rails might have been up to the challenge.



Using standard OO techniques instead of Property Files for I18N and L10N

Using property files / message bundles for I18N / L10N and configuration purposes is problematic. Here’s why:

  • Property files are not refactoring safe.
  • They are in fact interfaces but rarely treated as such.
  • They are often a dump for everything that needed to be configurable a day before the iteration ends.
  • They’re often not documented well enough, or not documented at all, because documenting property files is a hassle (no standards, no Javadoc)…
  • …or they consist entirely of comments and commented out statements.
  • They make testing harder than necessary.
  • Often properties are created ad-hoc.
  • Validation logic and default handling inside the application is error-prone.
  • Missing definitions lead to runtime errors, undetected errors or awkward messages like “Error: No {0} in {1} found”.
  • Sometimes the file is not loaded because someone cleaned up src/main/resources.
  • Most property files are loaded on startup or only once.

How can we get rid of property files? I’d like to show you a straightforward solution in Java that will work well if you are not afraid of recompilation. Let me give you an example. This is what is common practice in Java:

# Find dialog
dialog.find.title=Suchen
dialog.find.findlabel.text=Suchen nach
dialog.find.findbutton.text=A very long text that needs to be wrapped but the developer does not know that it's possible - this really happens!
...

Most applications I’ve seen have endless definitions of properties for the UI. I swear I have never ever seen a non-developer change these property files!

The alternative is so simple I’m almost afraid to show it, but it has been extremely useful.

Define a name for each value and add it as a simple read-only property to an interface. Provide concrete implementations for each required language / locale.

 

package com.acme.myapp.presentation.find;

public interface FindDialogResources {
     public String getDialogTitle();
     public String getFindLabelText();
     public String getFindButtonText();
     public String getFindNextButtonText();
     public String getCancelButtonText();
     public String getIgnoreCaseText();
}

// Implementation in myapp-ui-resources-de_DE.jar

package com.acme.myapp.resources;

public class FindDialogResourcesBundle implements FindDialogResources {
    public String getDialogTitle() { return "Suchen"; }
    public String getFindLabelText() { return "Suchen nach"; }
    // ... remaining methods analogous
}

// alternative to enable dynamic change of language:

public class FindDialogResources_de_DE implements ...

public class FindDialogResources_en_US implements ...

Resources that are subject to I18N and L10N must be externalized. But the way of externalization should conform to good software engineering practice.

I like the approach of linking a different resource JAR to smart client applications but you can also load classes by package- or name prefix to enable dynamic switching of languages (for web apps).

Advantages over property files:

  • Clean, minimal, intention-revealing interface (obviously).
  • Refactoring safe.
  • Statically typed and automatically linked by the JVM.
  • Missing definitions are compile-time errors, missing values are easily detected using reflection-based tests.
  • No assumptions about the final implementation.
  • Self-documenting
  • Interface defined in the same module.
  • Implementations can be delivered in other modules or (OSGi) fragments / language packs.

The builder that creates a UI entity requires a concrete implementation of this specific interface as a dependency. This principle can be applied to web applications as well, for example by exposing the resources as a managed bean. Declaring values in JavaBeans style comes in handy for auto-completion in Facelet templates.

public class SwingFindDialogFactory {
    private final FindDialogResources resources;

    public SwingFindDialogFactory(FindDialogResources resources) { // depend on the interface, not on a concrete bundle
        requireNonNull(resources); // precondition (static import of java.util.Objects.requireNonNull)
        this.resources = resources;
    }

    public FindDialog createInstance(...) {

        ...
        final JLabel findLabel = new JLabel(resources.getFindLabelText());
        final FindButtonAction findAction = new FindButtonAction(resources.getFindButtonText(), resources.getFindNextButtonText());
        ...
    }
}

...

// web app with JSF 2:

@Model
public class Resources implements FindDialogResources {

    @Inject
    private MyApplication application; // hypothetical application service that resolves the current user's locale

    private FindDialogResources delegate;

    @PostConstruct
    public void initResources() {
        delegate = application.getLocalizedFindDialogResourcesForPrincipal();
    }

    public String getDialogTitle() {
        return delegate.getDialogTitle();
    }

    // ... remaining getters delegate in the same way
}

// usage in facelet definition

<h:form>
 ...
 <h:outputText value="#{resources.findLabelText}" />
 <h:inputText value="#{myUseCaseForm.name}" />
 ...
</h:form>

As a bonus, you can:

  • Fall back to a default language (e.g. en_US) if a translation is not yet complete, using a decorator.
  • Automatically validate each resource-bundle implementation in the CI pipeline for completeness, for example by invoking all methods reflectively and checking the results for non-empty values (a pretty good indicator of whether someone got a call during translation…). See the sketch after this list.
  • You can easily generate and update the manual using simple programs written in Java as I’ve shown in other posts.
  • You can even wrap property files or other data sources if this is required by company policy.
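Here is a minimal sketch of such a completeness check (my own illustration, JUnit 4 and Java 8 assumed, imports of the application types omitted): it walks over the interface reflectively and fails if any getter of the German bundle returns null or an empty string.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;

import java.lang.reflect.Method;

import org.junit.Test;

public class FindDialogResourcesCompletenessTest {

    @Test
    public void germanBundleHasNoMissingValues() throws Exception {
        FindDialogResources bundle = new FindDialogResourcesBundle();

        for (Method getter : FindDialogResources.class.getMethods()) {
            if (getter.getParameterCount() == 0 && getter.getReturnType() == String.class) {
                String value = (String) getter.invoke(bundle);
                assertNotNull(getter.getName() + " returned null", value);
                assertFalse(getter.getName() + " returned an empty value", value.trim().isEmpty());
            }
        }
    }
}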

My favorite text book on Database Architecture

[Book cover: Datenbanksysteme – Konzepte und Techniken der Implementierung]

As I noted in previous posts, I consider typical relational database systems – although ubiquitous – to be uneconomical for typical transactional applications, as they tend to dominate the development process. O/R mapping, schema migrations and database testing are quite complicated compared to using an object store that provides optimized access and only the relevant functionality.

Most textbooks on database architecture I have seen have a strong focus on the relational model, only mentioning internal data representations in a few sentences and providing almost no discussion on implementation variants.

The (German) textbook by Theo Härder and Erhard Rahm, Datenbanksysteme – Konzepte und Techniken der Implementierung (Springer, 2001), provides quite a lot of insight into the principal internal workings of data storage systems. It is pleasantly different from those books that center around query optimization and the (insufficient) modeling capabilities of the relational model.

The descriptions of implementation options are thorough and well suited to lay the foundations of correct, compact and maintainable small- to largest-scale object stores. Sadly, I currently know of no book in English that provides this kind of information.

I am quite sure that the future of object persistence is neither SQL nor NoSQL, as they do not provide the core requirements in an elegant, minimalistic way. Instead, relational and NoSQL databases provide a lot of functionality that is not required in domain-driven designed object models but must be paid for and maintained.

These are some of the requirements that should be in focus when implementing an object store / persistence mechanism:

  • Objects and their behaviour are more important than data independence. Application integration is done using (store- and forward) messaging.
  • The object storage service extracts relevant state from objects (usually using their class definitions). Objects are recreated using their internal state and their class definitions (see the sketch after this list).
    • This process should be refinable, but a minimalistic configuration should be able to persist suitable objects without the need for annotations or the like.
    • Suitable means limiting the set of types, cycle-free references (or not), correct usage of transient attributes etc. The mechanism must reject non-suitable objects.
  • Constraints can and must only be defined in the O-O language using common design-by-contract mechanisms such as pre-conditions and class invariants. The system must not require a data definition language. Although some means of internal representation is needed, this must by default be derivable from the application source code.
  • The persistence mechanism must support all native and user-defined data-types of the application-development language.
  • There should be a simple binary data representation mechanism which should be used as a default. This leaves more compact representations and compression as options that might be applied when performance or space requirements are not met.
  • In most cases a query language is not required. Most object models that result from a domain-driven design process should be designed to only require key-based and index-optimized specific searches that are provided by respective repositories.
  • The set of classes stored should be finite. The required classes (including their dependencies) can be stored as a module inside the data store itself, taking the role of the data dictionary. To compete with pseudo-maintainable XML documents, an object store should require minimal knowledge and dependencies in its archived form. Minimal knowledge means: a bootstrap loader running on a virtual machine, such as the JVM, that contains all the knowledge needed to recreate the objects contained in the object store.
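A minimal sketch (my own illustration, not from the book) of the kind of API these requirements point to: key-based access only, state extraction and recreation driven by the class definitions, no query language.

// Hypothetical object-store facade; constraints such as pre-conditions and
// class invariants stay in the stored domain classes themselves.
public interface ObjectStore {

    /** Extracts the relevant state of the object (via its class definition) and persists it under the key. */
    <T> void store(Object key, T object);

    /** Recreates the object from its persisted state and its class definition. */
    <T> T load(Object key, Class<T> type);

    boolean contains(Object key);

    void remove(Object key);
}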

Notice that focusing on objects instead of tables or (text) documents reveals that relational and NoSQL databases are just implementation variants of the super-set of all object stores. By not exposing the lower layers, DBMS vendors have an easy way to irritate and lock in customers to their products, as most functionality of commercial DBMSs is (and in fact should!) never be used. “Database tricks” like optimizer hints and specific SQL commands are used in quite a lot of projects I have seen but are the least desirable. As Johannes Siedersleben noted in “Moderne Softwarearchitektur”, only textbook SQL is one line; in practice an SQL query is more like a page (which is sad even when it is generated using JPA-QL and the like).

Configurable object stores might be harder to sell, but because of their simplicity they can be expected to be more robust and easier to use, test and maintain. The book is a great help in factoring out the components that are required for effective object stores.

Project templates: Declarative Maven Archetypes vs. Rolling your own Generator (with NIO2)

Abstract

A Maven archetype is a template that enables developers to create several Maven projects with common features, such as Maven settings, dependencies and default resources. Creating archetypes can be cumbersome if done seriously and may be difficult to test. A lot of public archetypes seem to be outdated and not very flexible. This article describes experiences with the archetype mechanism and reminds us that a few simple Java objects can do the same trick using basic Java APIs and the StringTemplate v4 template library, while being easily testable and requiring no knowledge of yet another Maven plug-in.

My basic-java-archetype

Years ago I started putting every idea, learning test and snippet into a single project in my workspace (an “experimental”, “playground” or “prototype” project). This worked well until I really wanted to use the outcome ;-). If an idea evolves, it is sensible to create a separate project to have the cohesive parts testable and in one place, with minimal dependencies. Additionally, I leave ideas unfinished, even non-compiling or with failing unit-tests, which prevents using the Infinitest plug-in that supports continuous unit-testing.

One of my most valuable assets is a simple Maven archetype for Java projects that I created in 2011. Creating a Maven archetype was the obvious solution to the problem. A simple template that sets up common dependencies and plug-ins. I chose a minimal variant, only including a reference to a parent POM for setting up basic plug-ins (compiler source/target 1.6 and so on), and two POM projects including common run-time and test dependencies (this helps to keep the parent POM unpolluted).

Maven Archetypes are heavy-weight

What turned out to be tricky was that, unless you decide to work with snapshots, you need a stable, released parent POM, released dependency POMs and an actually working template project to begin with, as it makes no sense to create a non-working template. These need to go into version control (and additionally the POMs are required in the artifact repository – Nexus, Artifactory etc.).

Secondly, you need an archetype project, which is also put into version control because it needs to be released and installed in your central Maven repository. From my experience it is best to keep templates free of any example code (such as HelloWorldService.java or MyFirstUnitTest.java). As I create a lot of small projects, these get deleted right away and only bother me. You could add an “example archetype” to aid new developers if you feel your organization requires it, but keep a “real”, minimal template for the experienced staff.

This turned out to be non-trivial, because there’s a lot that can go wrong. Items need to be Maven-released in the right order, bugs require re-releases, and you might also discover Maven (plug-in) bugs along the way. “Make simple things simple and hard things possible” is definitely not the Maven way. Except for some default resources required by the Unitils test library, I wanted to keep the directories clean, that is, empty. The resources plug-in omits empty directories by default (there is a configuration setting) and I think some other plug-in wasn’t happy either. It was definitely hard work to create a working archetype with everything released and working flawlessly, although it included only three files and a few directories. But it was still worth the effort; I use it to this very day.

The archetype mechanism is basically not very flexible. I observed that some organizations tend to use obsolete or inefficient archetypes because it is regarded as a hassle to improve and refactor them. Parent POMs are several versions ahead of what is declared in the archetype, so there are several versions in use, depending on what the developers thought was the current parent POM version. With me it’s the same. There are some settings in the parent POM and in the template I’d like to change. But as I get along with it, it’s low priority.

Java generation scripts

Learning from Ruby and Python, I’ve found a way that suits my requirements much better. Currently I am working out several ideas regarding continuous delivery and automatic deployment processes, which involve a lot of file-handling tasks. Java NIO2 provides near-native possibilities for file-system handling, including POSIX attributes. I wrote a lot of small learning tests using UnitilsIO to get familiar with it. Writing small, script-like components feels a lot like Ruby and Python, without sacrificing compile-time type checking.

What is great about NIO2 is that it lends itself very well to scripting. It seems heavily inspired by commons-io, making it possible to create files with one line:

Files.write(aPath, myRenderedContent.getBytes(StandardCharsets.UTF_8), StandardOpenOption.CREATE);

Using commons-cli it is easy to create a clearly structured command-line parser (that is even unit-testable) if you need one. Using StringTemplate it is straightforward to create a custom Maven pom.xml. Directories are simply created using

Files.createDirectories(Paths.get("src", "main", "java"));
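Putting these pieces together, a generator can be little more than a main method. Here is a minimal sketch (class name, groupId and template are my own, StringTemplate v4 assumed; $…$ delimiters are used so the template does not clash with the XML angle brackets):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import org.stringtemplate.v4.ST;

public class SimpleMavenJarGenerator {

    public static void main(String[] args) throws Exception {
        Path projectDir = Paths.get(args[0]);
        if (Files.exists(projectDir)) {
            // refuse to generate into an existing directory
            throw new IllegalStateException("Directory already exists: " + projectDir);
        }
        Files.createDirectories(projectDir.resolve(Paths.get("src", "main", "java")));
        Files.createDirectories(projectDir.resolve(Paths.get("src", "test", "java")));

        ST pom = new ST(
                "<project>\n"
              + "  <modelVersion>4.0.0</modelVersion>\n"
              + "  <groupId>$groupId$</groupId>\n"
              + "  <artifactId>$artifactId$</artifactId>\n"
              + "  <version>1.0-SNAPSHOT</version>\n"
              + "</project>\n", '$', '$');
        pom.add("groupId", "com.acme");
        pom.add("artifactId", projectDir.getFileName().toString());

        Files.write(projectDir.resolve("pom.xml"),
                pom.render().getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE_NEW);
    }
}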

From my experience it is sufficient to have one archetype project per team or department that includes several project-generation classes and some common building blocks to ease directory handling. I prevent creation of projects in existing directories (say /basedir/myproject/ already exists). Failure to create any file or directory results in termination of the generator – no sophisticated error handling is required because generating projects should be a simple process.

You can use package- or class names to encode different versions of the same generator. Your archetype project may look like:

MyDepartmentsArchetypes:

src/main/java/
    com/myorg/mydep/archetypes/
        SimpleMavenJARGeneratorv1.java
        SimpleMavenJARGeneratorv2.java
    com/myorg/mydep/archetypes/simplemaven/v1/
        SimpleMaven.java
src/test/java/
    ... // don't forget to test archetypes – it may pay off to use
        // integration tests that fire up a Maven instance with a separate
        // configuration and a canned local repo!

Note that custom Java archetype-generation scripts are easy to test, document and debug because you’re using existing infrastructure. And OO’s “tell, don’t ask” principle may come in handy when solving really complex project and deployment situations. If Java is not your primary language, all of this can of course be applied elsewhere too.

Please comment if you have some hints or practices on creating Maven archetypes and if you find the plain Java approach useful and applicable.

A metaphor to differentiate between Software Engineering and Programming

As a kid I learned how to solve jigsaw puzzles from my grandma. I remember her telling me “Try to find the corners” and “Try to find some pieces that fit together. We will see later where they go.” Some time ago it occurred to me that solving a jigsaw puzzle has something in common with software engineering. I find myself trying to find some edges – things that I know for sure won’t change. If I find some pieces that fit together, I put them together right away and set them aside. Bit by bit the big picture appears.

How is programming different? A jigsaw puzzle consists of numerous small, often oddly shaped, interlocking and tessellating pieces (http://en.wikipedia.org/wiki/Jigsaw_puzzle). The way the pieces are shaped indicates what belongs together. Imagine all pieces were the same size but square, non-interlocking and non-tessellating. Depending on the motif, solving the puzzle might be easy – or difficult to impossible. It is easy when you already have knowledge about the motif, its size and what it should look like. If you don’t, it takes a lot more time because there are no hints at the very small level.

The science, the practice and – hopefully – the art of software engineering provide the little nuts and bolts that make it possible to solve the puzzle of creating correct, flexible and maintainable systems. The principles of class design (Command-Query Separation, Single-Responsibility Principle, Operand-Option Principle and others) provide the necessary guidance to create small, coherent parts that will find their place in the big picture. Best practices including an approved refactoring catalog as well as design- and architecture patterns complete the set of basic engineering tools.

Plain programming is what is covered in SCJP exams and the like. I consider these exams not very valuable, given that they are very easy if you know how to memorize. In the books covering the SCJP, unit-testing, for example, is not even mentioned, which is just irresponsible given that this is a basic qualification even for entry-level programmers. But in some areas these kinds of exams seem to be in great demand. A programmer (or a team) who does not see the nuts and bolts will get a result on simple problems, but will finally end up in the “big ball of mud”.

Each system I work with is a giant puzzle where the common software engineering principles help to find connecting pieces and, usually quite early, the big picture. The collected set of software-engineering principles makes it an almost mechanical task to identify coding, design and architecture problems. Saving time in this part frees time to do thorough requirements analysis and find out what the customer really needs – which can be quite puzzling :-).