Archive for April, 2007

Ah, readers… How I hate th… mmmh, never mind

Writing technical articles is really an art.

It’s not enough to start writing with a very clear goal in mind: you actually need to find a way to take your readers there in a gentle and intuitive way.

The main problem with readers is that they get so easily distracted, so writing your article really boils down to making sure that nothing in the various examples, text or code snippets that you are showing will make them react in a direction that is not the one you are trying to pursue.

Take my previous article, “Unit or functional?”.

In that post, I decided to show the importance of functional tests, and in order to make that point, I made up a code snippet where the ordering of method invocation (which is usually not captured by unit tests) is significant. I initially used the following code:

public class Main {
  public void g() {}
  public void f() {}

  public static void main(String[] argv) {
    Main m = new Main();
    m.f(); // f() must be called before g()
    m.g();
  }
}
Once I was satisfied with the article, I reread it one last time before posting it and wondered whether the use of meaningless method names such as f() and g() was going to weaken my argument. I wanted to make sure that anyone reading my code would immediately see that these methods must be run in a certain order, so that I wouldn’t even need to remind readers of it throughout the article.

So I decided to replace f()/g() with names that would be a little more explicit. I picked openFile()/filterFile(). These names are generic enough, and it’s pretty obvious to anyone who can read English that you should open a file before you can filter it.

All the while, I was convinced that I was overthinking the entire problem. Surely, the readers would see that the code is just an example, that the point is not about what this code does, but about the simple fact that by nature, functions and methods *must* be called in a certain order, or your programs break (if you’re not convinced, just go back to whatever code you were working on yesterday and swap two method invocations. Your application will break).

Armed with the conviction that the world needs more explicit and realistic code snippets (boy.kissGirl() anyone?), I went through my draft and replaced f()/g() with openFile()/filterFile().

And sure enough, it was a mistake. The next thing I knew, readers were posting comments about how filterFile() should throw an exception, or should invoke openFile() internally, and so on… Some readers are so obsessed with the tree that they completely miss the forest.

It’s very humbling, in a sense, and as the author of this blog, the fault is entirely mine. If I can’t find the right phrasing and interesting examples that will make my readers think, I am not doing my job correctly.

Hopefully, the book that Hani and I are now done writing will not suffer from this problem, but you, dear readers, will be the judges.

And by the way, I was just kidding. I don’t really hate you, I’m sorry I said that.

Unit or functional?

Is it possible to have 100% unit test coverage and yet have an application that fails?


Consider the following code:

public class Main {
  public void openFile() {}
  public void filterFile() {}

  public static void main(String[] argv) {
    Main m = new Main();
    m.filterFile(); // bug: openFile() should have been called first
    m.openFile();
  }
}
We have tests for openFile() and filterFile() and they all pass. However, since main() should be calling openFile() first, our application is broken.

The problem here is that we only have unit tests but no functional (or “end-to-end”) tests. In this case, a single functional test would suffice: it would invoke main() and make sure that the application does what it’s expected to do. Note also that, just like testing openFile() and filterFile(), having one test that invokes main() results in 100% coverage as well, but it should be clear by now that 100% coverage doesn’t always mean that the code you are testing actually works.
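To make this failure mode concrete, here is a minimal, self-contained sketch (the class and method names are mine, not from the article) where each method behaves correctly in isolation, yet the application breaks when they are called in the wrong order:

```java
// Each method works on its own and would pass its unit test,
// but the end-to-end result depends on the order of invocation.
public class OrderDemo {
    private boolean opened = false;

    public void openFile() {
        opened = true;
    }

    // filterFile() only produces a useful result if the file was opened first.
    public boolean filterFile() {
        return opened;
    }

    // Broken wiring: filterFile() is invoked before openFile().
    public static boolean runBroken() {
        OrderDemo d = new OrderDemo();
        boolean ok = d.filterFile(); // too early: nothing is open yet
        d.openFile();
        return ok; // false: the application as a whole is broken
    }

    // Correct wiring: open first, then filter.
    public static boolean runFixed() {
        OrderDemo d = new OrderDemo();
        d.openFile();
        return d.filterFile(); // true
    }
}
```

A unit test exercising openFile() and filterFile() individually would pass in both versions; only a test that runs the whole sequence end to end catches the difference.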

Does this mean that unit tests are useless and that we should only write functional tests?

Not exactly, but I definitely want to make something very clear: while unit tests have been receiving a lot of exposure these past years, functional tests are ultimately the only way to guarantee that your application works the way your customers expect.

Let’s write a few tests for the code above to make things clearer:

import org.testng.annotations.Test;

public class MainTest {
  @Test(groups = { "unit", "fast" })
  public void openFileShouldWork() { ... }

  @Test(groups = { "unit", "slow" })
  public void filterFileShouldWork() { ... }

  @Test(groups = { "functional", "slow" })
  public void mainShouldWork() { ... }
}

For illustration purposes, I’m assuming that filterFile() is slower to run than openFile(), so I put its test in the “slow” group. This test class is fairly extensive: it covers unit and functional tests, and achieves more than 100% coverage.

More than 100% coverage? Is this even possible? I know it sounds a bit strange, but it simply means that running all the tests in this class will result in certain portions of our main application being run several times, which is not always a bad thing. Another way to look at it is that running only the group “functional” will result in 100% coverage (as will running only the group “unit”).
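With TestNG, selecting a single group to run can be expressed in a testng.xml along these lines (the suite and test names here are illustrative):

```xml
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Functional suite">
  <test name="Functional tests only">
    <groups>
      <run>
        <include name="functional"/>
      </run>
    </groups>
    <classes>
      <class name="MainTest"/>
    </classes>
  </test>
</suite>
```

Swapping the include to name="unit" runs only the unit tests; either selection alone covers the whole application.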

Assume that we only run the group “functional” and the test fails. We now know that our application is broken, but we don’t know exactly where: main() invokes openFile() and filterFile(), and either could be broken. We won’t find out until we start testing them individually.

If we run the entire class, not only will the functional test fail, but at least one of the unit tests will break, therefore pointing us directly to what’s wrong in our application. The virtue of unit tests is therefore to help us, the programmers, pinpoint errors when they occur. But keep in mind that traditionally, unit tests are of no interest to users: they are just here to help developers.

Ideally, you want your application to be covered by both unit and functional tests, but if you are pressed for time and need to choose between implementing a unit test or a functional test, I recommend going for the latter (because it serves your users) and then, down the road, implementing more unit tests as you see fit (for your own comfort).

Note: this topic (and many others) is covered in greater detail in our upcoming book.

Ah… Linux… How I hate thee

I recently upgraded the Linux workstation that I use at work, and like every Linux upgrade I have ever gone through in the past fifteen years, it’s been as painful as ever.

Well, actually I’m not really talking about the upgrade itself, which was fairly smooth, but about the inevitable thought that precedes booting the new machine (“Let’s see how much Linux has improved in the past two years”), invariably followed by “Well, it still sucks”.

First of all, notice that I’m carefully not disclosing what distribution I’m using, because I know from experience that such a rant is always followed by swarms of comments saying “You should have used distribution X, you idiot!”. So there, I hope at least that I’ll be spared this outpouring.

So what am I unhappy about, exactly?

First of all, the very basics. While I lost faith in the Linux user interface a long time ago, I still believe it’s a tremendous operating system for server operations, but from a user perspective, it’s just not there. I’m currently running a full build as I write this, and my 3 GB multi-core machine chokes and freezes now and then, sometimes causing the cursor to pause or the menus to fail to appear for up to an entire second.

Isn’t Linux supposed to be the king of time slicing, hyperthreading and process scheduling? How come I barely notice that a compilation is occurring when I’m using my XP desktop? Oh, and for that matter, my MacBook Pro is barely better than Linux in that area.

Application installation is still a joke on Linux.

Whether you have to go through pages of manuals for rpm, alien or dpkg, the end result is that most of your installations end abruptly and quietly, with barely any indication of whether they succeeded or failed. Of course, figuring out when you need to use sudo is another exercise left to the reader, and some of these installations still leave program files created with the wrong owner. Needless to say, such packages will fail in very mysterious ways.

A few days after the upgrade, I’m still not able to play the CD in my tray. Right now, I just press the play button and nothing happens. Needless to say, I already gave up on playing MP3s, much less ripping them (I’ll happily go back to iTunes or Windows Media Player for that).

Just for the fun of it, I tried to Linuxify myself for a few hours with some highly recommended (and critically acclaimed) Linux programs for everyday tasks, such as Konqueror and XMMS.

Did you actually see what these look like? Here is a teaser:

[Screenshot: the XMMS player window]

Yup, that’s it. Linux’s award-winning, top-of-the-line, even-better-than-Winamp MP3 player. Good luck finding the menus (there are none; why do they feel the need to copy Windows Media Player’s idiocy?). Oh, and the best part? It doesn’t seem to support playlists. I can play a File or a Directory, but no playlist. Lovely.

On the desktop side, the fonts are as horrendous as they ever were, opaque window moving still leaves a conspicuous trail on the screen as you drag a window around (what year is this, 1992?), and window switching with ALT-TAB is a sad joke compared to its beautiful Vista counterpart (and also to Witch on Mac OS, because I still think the default Mac OS window switching is pretty bad, although it’s much better than Linux’s).

Oh, and just as I was about to post this, I noticed that my Firefox was pinning the CPU at 100% and taking up 400 MB of heap space. Granted, it’s not really Linux’s fault, but it’s certainly another deterrent to add to the long list of reasons why Linux is not for you.