Archive for December, 2003

EJB restrictions

Bob explains the reasons behind some of the restrictions of the EJB specification.

While it is certainly healthy to understand the rationale behind these kinds of
decisions, Bob omits a critical piece of information when he advises that
these rules can be bent when you know what you are doing:  these
restrictions are also relied upon by the container.

For example, the fact that EJBs are not allowed to create threads makes it
possible for the container to attach important information to ThreadLocal
variables (clustering, transaction and security context, etc.).  After reading Bob’s
article, you might decide that since your application will only ever run in a
single JVM, you can safely violate the rule and create threads in your EJB.
But by doing that, you are going to break the container in disastrous ways.
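
To make this concrete, here is a minimal sketch (all class and variable names
are mine, purely hypothetical) of how a container might rely on ThreadLocal,
and why a thread created inside a bean never sees that context:

public class ThreadLocalDemo {
    // Hypothetical stand-in for the container's per-thread context
    private static final ThreadLocal TRANSACTION_ID = new ThreadLocal();

    public static void main(String[] args) {
        // The container sets the context on the thread it manages...
        TRANSACTION_ID.set("tx-42");

        // ...so bean code running on that thread sees it:
        System.out.println("Container thread sees: " + TRANSACTION_ID.get());

        // But a thread created by the bean gets its own, empty ThreadLocal slot,
        // so the transaction/security context is simply gone:
        new Thread(new Runnable() {
            public void run() {
                System.out.println("Bean-created thread sees: " + TRANSACTION_ID.get());  // null
            }
        }).start();
    }
}

The no-thread rule is precisely what lets the container assume that whatever it
attached to the current thread is still there when your bean code runs.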

Bob gives the example of an over-zealous architect who decided to follow the
rule literally and, by doing so, introduced a big performance problem in the
application.  This is indeed unfortunate, but there are better and more
standard ways to address this problem than using a static field (the two most
obvious ones are to keep this information in the servlet session, or in a
Stateful Session Bean if your application doesn’t have a Web tier).
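
As an illustration, here is a minimal sketch of the servlet-session approach
(the attribute key and the helper method are hypothetical): the expensive data
is computed once per user and cached in the session instead of a static field.

import javax.servlet.http.*;

public class AccountServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        HttpSession session = request.getSession();
        Object rates = session.getAttribute("exchangeRates");   // hypothetical key
        if (rates == null) {
            rates = loadExchangeRates();                        // hypothetical, expensive lookup
            session.setAttribute("exchangeRates", rates);
        }
        // ... use the cached data to render the response
    }

    private Object loadExchangeRates() {
        return new java.util.HashMap();   // stand-in for the real lookup
    }
}

This keeps the cached data scoped to one user and one container-managed
session, so it does not fight the container the way a static field can.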

That being said, Bob’s entry is a must-read for anybody who has ever wondered
where these odd limitations in the EJB specification came from.

User interfaces and innovation

I just read two interesting articles contrasting the approaches that Microsoft and Apple take to designing user interfaces. The author makes the point that Microsoft has innovated much more than Apple in graphical user interfaces, and in general more than people care to admit.

The article focuses on the task-based aspect of Windows XP and Longhorn, and Paul Thurrott makes some very good points. Here are some other interesting innovations Microsoft has made since Windows 95, off the top of my head:

  • Flat toolbar buttons. Nobody notices them any more because they have become mainstream, but if you remember, toolbars used to have standard 3D buttons when they started getting popular. The Microsoft usability labs then noticed that the extra border of the buttons made the UI look more cluttered, something that users confirmed. So they got rid of the extruded border and added a “colorization” when the user hovers over them. It was a bold move back then, but who would be confused by a flat button these days?
  • The tooltipped thumb in scrollbars. I think this first appeared in PowerPoint, was quickly adopted by Acrobat Reader and then spread to the whole Office suite. The idea is that when you use the scrollbar to move quickly through a large document, you don’t have a very good idea of what page you are going to land on when you stop moving the thumb. The solution was to create a tooltip that stays persistent (as opposed to regular tooltips, which usually disappear after a couple of seconds) and to update it with the page number as you scroll. I am pretty sure hardly anyone ever noticed this, but it added a great deal of usability.
  • The wheel mouse. Not really a software GUI innovation per se, but definitely something that radically changed the way we use a mouse.

I am sure there are a lot more. The bottom line is certainly not that Microsoft has innovated more than Apple but that we should give credit where credit is due. Especially when said company is a monopoly and has little incentive to innovate at all. Or so we would think.

Refactoring inner glow

I just went through an intense refactoring session for
EJBGen.  There is more to come, but I
have basically overhauled the entire implementation in preparation for upcoming
features and new integrations.  The interesting thing about this is: 
users will never notice.

Some users are already using the improved version but of course, they have no
idea how different it is from the previous drop.  To them, it’s the
same tool as the previous version, hopefully without any regressions.

There is something strangely satisfying about this but it’s not about the
intensity of the session, nor about the size of the changes.  Sure, adding
features that users are eagerly expecting is also a fulfilling experience, but
not as much as rewriting a complex piece of software from the ground up all the
while knowing that you are covered by your regression tests.

Am I the only one feeling like this?

AOP refactoring

Ramnivas just published the second part of his AOP refactoring series, and I
have a quick remark on the second example (listings 3 and 4), where he abstracts
the concurrency mechanism into an aspect.

I agree that the business logic that takes care of the concurrency belongs in
a separate aspect, but not the pointcut definition. I think this is a very good
example where the pointcut definition should be specified inside the code, not
separately.

Determining if a method should get a read or a write lock is fundamentally
tied to the code inside the method, so I believe this should be specified on the
method itself, with an annotation. Using a separate pointcut is inherently
fragile because you have to spell out the method names inside it, which is
error-prone because:

  • Method names can change.
     
  • Their internal logic can change (you used to need just a read-only lock
    and now you need a write lock).

Both of these changes force you to remember to modify your pointcut
definition as well. I believe that having an annotation such as

@ReadLock
public float getBalance() {
    // …
}

makes it easier for the developer to maintain his code.
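
For the record, here is a sketch of what such an annotation could look like,
assuming a JSR-175 style metadata facility (the names are mine, not Ramnivas’):

import java.lang.annotation.*;

// Hypothetical marker annotation: the locking policy is declared on the
// method itself, right next to the code it describes.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ReadLock {
}

// A @WriteLock counterpart would be declared the same way.

The concurrency aspect would then pick out "any method carrying @ReadLock"
instead of maintaining a hard-coded list of method names, so renaming a method
or switching it from a read lock to a write lock never leaves the pointcut
silently out of date.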
 

Something wonderful has happened

It took me a while to get my first virus but, as some of you already know, it
happened not long ago.
Well, it happened again, but things were a little bit worse this time.

A few days ago, my home network started acting up, mysteriously crawling to a
halt to the point where 90% of my packets couldn’t even reach my gateway. 
I soon identified the faulty machine and I disabled the network interface until
I had time to deal with the problem, because solving it would probably require
me to resort to a packet sniffer.  I finally found some time to investigate
the issue.

My first quick attempt was to selectively kill tasks and see if the network
came back to normal, but this method didn’t produce any results.

The last time I used a packet sniffer was about fifteen years ago, on a Unix
machine.  If you’ve never used one, it’s quite enlightening, if not scary. 
Things have progressed quite a bit since then, but except for a fancy
graphical interface, the basic idea is the same:  your machine needs to be in
promiscuous mode (the default in Windows XP and 2000, which makes things easier
and is not a problem in a home network).  Of course, you need to be using a
hub and not a switch, or you won’t see all the packets broadcast through
your network.

A quick search revealed a host of packet sniffers on Windows and I settled on
AnalogX’s PacketMon.  It’s free, offers some basic filtering capabilities
and fits the bill for my simple problem.

I launched the program on another machine, re-enabled the network interface
on the patient and blam! the screen immediately started scrolling at an alarming
rate.  The verdict was pretty clear:  my machine was sending a stream
of ICMP requests to IP addresses in decreasing order.  There is no better
way to spell "infected by a virus".

I downloaded an anti-virus, disabled the network interface and started the
long and painful scan.  In the meantime, I did some research on the Web and,
based on the symptoms and the ports and packets used, my suspicion quickly
narrowed down to the Welchia virus, which the anti-virus soon confirmed.

Fortunately, getting rid of it is straightforward and only requires a full
scan of your hard drive.  You can also download a separate remover, which
will accomplish the same job.

Interestingly, even after I removed the virus and confirmed with the packet
sniffer that everything was back to normal, my machine was still the target of a
lot of requests (both TCP and ICMP) from a wide variety of geographic locations. 
My firewall doesn’t have many ports open and, since these requests were now
initiated from the outside, they were of no concern to me, but I ended up
wondering whether these requests came from other randomly infected machines on
the Internet or whether they were buddy machines that the virus had identified
and started exchanging information with.

 

Groovy and renaming imports

I really like what Groovy is turning into.  Closures are definitely the
most important thing in my eyes, but I can understand that the value of such a
feature can be puzzling until you have tried a language like Ruby.  I have
also been very impressed by the choice of features that James has made so far
and I agree with pretty much all of them.  Except maybe for
this one.

import java.util.Date;
import java.sql.Date as SqlDate

I have seen "import renaming" at work in Eiffel, and it’s not pretty. 
It’s a good example of a feature that sounds good in theory but ends up making
your code very hard to read and maintain.  Note that Eiffel was worse in
that respect because it also allowed you to rename methods when you import them
(a feature that, unfortunately, Ruby supports as well).

What problem are we trying to solve exactly?

Sometimes, you have to create a class whose name already
exists, either in java.lang or elsewhere.  And before you jump the gun and
say "just make sure it doesn’t happen", you need to realize that you don’t
always have a choice, since these classes might have been generated by a tool
that doesn’t allow you to customize its mapping (such as an XML-Java mapping
tool, for example).

I would argue that in such a case, it’s much clearer to require the developer
to fully qualify both colliding names in order to clear up the ambiguity:

java.lang.Number n1 = …;
com.foo.Number n2 = …;

instead of using a random renaming which is local to the compilation unit
and can vary depending on who is writing it:

MyNumber n1 = …;   // java.lang or com.foo?  Need to check the import

I will also observe that in my experience, such cases are rare.  Adding
a feature to a language in order to address a rare case is a dangerous slope
that Groovy should avoid at all costs.

I think Groovy has tremendous potential, which is already showing thanks to
its ability to compile to bytecode directly, hence making it useful to Groovy
fans and non-fans alike.  But Groovy’s biggest challenge still lies ahead
and James and his team will have to be very vigilant that any new feature they
add to it obeys the "80/20" rule:  solve 80% of the problems with 20% of
the work.

Based on what I’ve seen so far, I am confident James is up to the task.

More on debuggers

There were quite a few reactions to my previous entry.  I’d like to add a few comments of my own.

First of all, Rob wrote:

[Debuggers are] a little *too* clever sometimes. A print statement does not (usually) interrupt the flow of execution. A breakpoint does.

That’s why the best way to do this with a debugger is to install a watchpoint at a location and do a println there. It won’t stop the code but it will print your debug output. No interruption, no spurious println in your code, and exactly the effect you were looking for.

Have you ever blown an hour or two debugging the right code in the wrong context because you didn’t realize that the breakpoint would get hit more than once, and so you used the first hit instead of the 17th?

Sure, the very first time I used a debugger. About 20 years ago 🙂

Besides, good luck finding the 17th println in your console.

At least, I can tell a debugger to stop the 17th time the breakpoint is hit. With your method, I need to manually and visually discard the first 16 printlns.

Mike said:

Cedric, I completely disagree with just about everything you say here. I find debuggers are enormous time wasters, and often cause enormous ancillary problems – timeouts, missing multi-threading problems, etc.

The big problems with debuggers are very well captured in "The Practice of Programming". Debug sessions are ephemeral, and they suck in multi-request/multi-thread environments. Debuggers can’t generally be used to diagnose production problems in-flight.

While this is true, I am having a hard time seeing how debuggers fare worse than println debugging for this kind of task.

As for the ephemeral aspect, yes, but fixing a bug is also, by nature, ephemeral. A debugging session might result in one or several fixes, which you might want to capture in a regression test once you are done.

That’s why I really don’t understand people who say "don’t use debuggers, use unit tests". If you only use one of these two tools, you are not doing your job correctly. They are complementary.

In an ideal world, every developer uses test-driven development or at least writes regression or unit tests for any bug they fix and feature they add. In my world, you can never convince most developers to write a test to save their life, but at least, they understand the importance of finding bugs.

When you’re running your code through a debugger, what you’re seeing is the code right now, in certain paths, as it exists at that moment.

True, and once again, how is that different from a line in a log file?

Think interfaces and factories with pluggable backends, and you can see that understanding the underlying design in detail, in your head, is far more important than being able to stumble through a debug session – because that session is only a brief moment in time. Not that I advocate something so primitive as println’s – use a logging package, for shrike’s sake!

No difference in my mind: you are just using a different API with more powerful output capabilities. Which is still missing the point.

As a final note – debug sessions do not help the poor slob who next has to debug the code – again because a debug session is ephemeral. Stop wasting your time in a debug, and craft logging information that will far outlast the current version (and possibly even outlast you!).

I have certainly never said that debugging should replace logging, which is yet another orthogonal issue.

In my opinion, a good developer uses:

  • A debugger
  • Logging
  • Tests

As I said above, if you are not using all three of these tools, your productivity is not as high as it could be.

The Power of Debuggers

Rob Martin doesn’t like debuggers:

A good debugger is a very capable tool. With it, an experienced developer can step through very complex code, look at all the variables, data structures, and stack frames; even modify the code and continue. And yet, for all their power, debuggers have done more to damage software development than help it.

Rob goes on:

…a debugger is a tool of last resort. Once you have exhausted every other avenue of diagnosis, and have given very careful thought to just rewriting the offending code, *then* you may need a debugger.

The problem is that by then, you might have missed your deadline and wasted precious time manually adding and removing all sorts of logging code, not to mention the headache of staring at the screen waiting for the output.

Andreas chimes in with a mind-boggling analogy:

A debugger is like a telescope looking at the night sky and you have to find the North Star.

Talk about a leaky abstraction (how about a telescope where you can type in the name of the star you want to watch, and which will possibly rotate the Earth for you so you can observe it?  Now that’s a more accurate analogy).

People who campaign against debuggers typically think that debuggers are only good to find bugs.  That’s an unfortunately common misconception.

A debugger is also very powerful at making sure a bug doesn’t happen in the first place.  I don’t know about you, but pretty much every time I write a reasonable chunk of code, I run it first inside a debugger.  No stupid println statements, just the hard facts and the naked truth about all the variables I created.

Now let’s turn to the alternative chosen by Rob and Andreas, which is so deeply flawed that it’s hard to know where to begin:  println debugging.  Here are a few facts about this practice:

  • A println is tedious and error prone.  You need to know exactly where to insert your println and make sure you remove it before you ship.  No matter how good you are, I’ll bet my little kitten Trogdor that shipping with a debug statement has happened to you at least once.
     
  • A println is human, hence fragile.  This debugging method is only as good as your eyes, or your brain.  No matter how good you are at reading scrolling consoles or log files, your eyes will get tired.  Really quickly.  And at some point, I guarantee that you will miss something essential.  Which you will forget to remove from your production code (have I already mentioned that?).
     
  • A println cannot be automatically tested.  Whenever you feel the need for println debugging, I am betting that you should write a regression test to make sure the bug is fixed.  println’s don’t really lend themselves to this kind of exercise and you will probably find yourself changing your println statements to something more testable, like values in a HashMap (see the sketch after this list).
     
  • A println carries your bias.  This is the worst thing about println’s:  they are typically only found in places where you think there might be a bug.  If you created three variables in a method, it’s pretty common to only print one or two of them, because you don’t really need to check that third variable, you just know it contains the right value.  Right?
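
As a concrete illustration of that third point, here is a minimal sketch
(class, method and key names are hypothetical, JUnit 3 style): the values you
would otherwise have printed are recorded in a map that a regression test can
assert against.

import java.util.*;
import junit.framework.TestCase;

public class BalanceTest extends TestCase {
    // Replaces the println: intermediate values are recorded instead of printed
    private final Map recorded = new HashMap();

    private float computeBalance(float deposits, float withdrawals) {
        float balance = deposits - withdrawals;
        recorded.put("balance", new Float(balance));
        return balance;
    }

    public void testBalanceIsRecorded() {
        computeBalance(100f, 25f);
        assertEquals(new Float(75f), recorded.get("balance"));
    }
}

Unlike a println, this check runs every time the test suite runs, long after
the debugging session is over.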

A debugger shows no mercy.  It will expose every single variable you created, including arrays, hashmaps or Properties.  When you are running your code through a debugger, you don’t just think it’s working.  You know it.

Don’t underestimate the power of debuggers: they play a huge part in the software quality cycle.

It’s all about ease of use

It seems hard to believe now, but at some point in the distant past, Windows and X
Window were very close competitors, and proponents of both windowing systems
would spend hours debating the virtues of each.

Depending on how old you are, you might never even have heard of X Window, so
allow me a brief introduction to this technology.  X Window was developed
at MIT by Bob Scheifler.  The development pretty much started when
bitmap screens were emerging and a handful of developers decided that UNIX
workstations should have a graphical user interface.

X Window is low-level and powerful at the same time.  It defines all the
standard basic operations you would expect from a windowing API (drawing shapes,
handling colors and fonts) but nothing else.  Concepts such as buttons,
menus and other widgets had to be developed separately (this is what toolkits
such as Motif and Athena provided later on).

X Window also supported a very intriguing and quite innovative idea at the
time:  remote displays.  In short, the whole API was basically
"headless", to use the Java terminology.  The API assumed two very distinct
parts, client and server, allowing an X client running on one machine to
display its windows on any workstation running an X server.

X Window’s flexibility was also its downfall.  By allowing virtually any
kind of window interface to be implemented, it created a market that started out
fragmented and never recovered.  Trust me when I say that it was almost
impossible even for coworkers to use each other’s workstations without being
utterly confused by the various key remappings, window shapes and mouse button
bindings that each user had configured.

Dave captured this phenomenon extremely well in his short review of Eric Raymond’s
book "The Art of UNIX Programming":

So, by providing policy, the designers of Windows and Mac interfaces have
provided their end-users with a consistent look and feel, and a base set of
application behaviors. By instead focusing on mechanism and ignoring policy, the
designers of X allowed developers to experiment, but gave the users of X
applications a very inconsistent interface experience. Arguing one approach is
better than the other is pretty pointless: they