Heartbleed received major coverage in mainstream media. Initial disbelief that such a severe defect could hit major companies such as banks and well-known hosting services quickly turned into a search for a scapegoat. The poor guy, who just wanted to apply the knowledge gained in his PhD studies, did not know that this kind of marketing – look, I co-authored OpenSSL – might backfire on him.
Conspiracy fantasies of failed programmers
Self-styled German security expert Felix von Leitner (“fefe”), a figurehead of the German nerd scene, publicly accused the author who introduced the bug of having done it on purpose, paid by intelligence agencies.
Are the critics better programmers?
Comparing Leitner’s own programming style (which can be reviewed here: https://erdgeist.org/cvsweb/Fefe/, for example the infamous blog.c) with that of OpenSSL, and especially with the Heartbleed commit (http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=4817504), reveals that he is lucky to have found a second career doing the blogging stuff. No unit tests, no application of object-oriented principles, and C as an application programming language – guess where the little bugs come from? Obviously, criticizing is still so much easier than doing better.
Preventing future Heartbleeds
Now, what can we do to prevent such disasters in the future – besides opposing managers who set unrealistic deadlines? I am sure that nobody is willing to pay for the redevelopment and formal verification of software that basically works somehow. A practical approach that can be applied easily is required. Wise answers are easily found by re-reading Meyer’s Object-Oriented Software Construction.
Certainly the OpenSSL code that introduced Heartbleed was not correct. Meyer states:
Correctness is the ability of software products to perform their exact tasks, as defined by their specification.
Was there a specification for delivering a payload in a Heartbeat message? Did it state how this payload should be structured? Moreover, the patch that introduced the Heartbleed bug does not contain automated, self-checking unit tests.
Design-By-Contract, Assertions, Run-time checks
Even if there were a specification, how can we verify that the implementation meets it? Meyer fostered the idea of Design by Contract: stated by assertions, enforced by run-time checks. Using Design by Contract is difficult in a procedural programming style, as in typical Unixoid C programs, but extremely elegant and effective in object-oriented programs.
An object like the incoming heartbeat message would have been rejected by assertions derived from a solid specification. Even with an unsatisfying specification, a run-time check on an out-of-bounds index access would have led to an exception.
Assuming assertions are in place, conservative C coders will still remove them in production builds. Hoare comments on this:
“It is absurd to make elaborate security checks on debugging runs, when no trust is put in the results, and then remove them in production runs, when an erroneous result could be expensive or disastrous. What would we think of a sailing enthusiast who wears his life-jacket when training on dry land but takes it off as soon as he goes to sea?”
There is nothing more to say.
Even if performance is very important in SSL code, from a customer’s perspective a correct and robust implementation is more desirable than ever. Michael Schweitzer and Lambert Strether put it this way:
“An object-oriented program without automatic memory management is roughly the same as a pressure cooker without a safety valve: sooner or later the thing is sure to blow up!”
Heartbleed shows once again that procedural software development is not safe and never will be. What helps is thinking in objects, using the appropriate tools, and better training (and self-training) of developers. As a start, read (or re-read) Object-Oriented Software Construction today!
- Premature optimization is still the root of all evil.
- Software without a specification that it can be checked against is not production ready – even and especially if it is complicated system software.
- Minimalism prevents creation of features that can be exploited by malware.
- Relying on software you did not check is dangerous.
Beware, the next Heartbleed will come soon. Some rants on the development style in the Unix/C community can be found here. While “hacking” code in the sense of prototyping and evaluation is a good thing, publishing it without thorough rework and automated, self-checking tests – have a look at von Leitner’s CVS repository again – is just bad engineering.