Archive for October, 2003

Bertrand Meyer’s interview

Bertrand Meyer, the inventor of Eiffel, recently gave an interview on Artima. He makes reasonable points in the first two parts.
And then he mentions Eiffel (which he never fails to do).

I learned Eiffel in college a long time ago and it was quite an ordeal.  I
suspect the tools have improved quite a bit since then, but the
language itself remains a puzzling mix of very questionable design choices for
an object-oriented language, such as the renaming of inherited features, with a
syntax that will make you miss C++.  A lot of these features made sense back when we
didn’t know better and Java didn’t exist, but I would argue that by now, we have
a pretty good idea of which object-oriented concepts are sound and which ones
should stay in the realm of academia.  As far as I
can tell, Eiffel still hasn’t learned these lessons, but that’s not
the point of this posting.

Eiffel is vastly different from the C++/Java family
of languages, and there are undoubtedly dozens of different things you could
discuss in such an interview, but instead, Bertrand tries to make the point that
assigning fields is evil:

For example, in just about every recent object-oriented language, you have
the ability, with some restriction, of directly assigning to a field of an
object: x.a = 1, where x is an object, a
is a field. Everyone who has been exposed to the basics of modern methodology
and object technology understands why this is wrong.

Well, no.  Just saying "everyone understands why this is wrong" is not going to
suddenly make it a reality.
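For readers who have never seen the argument spelled out, it goes like this: writing x.a = 1 bypasses any invariant the class might want to maintain, while funneling every update through a method lets the class enforce its invariants. A minimal Java sketch (the Account class and its members are my illustration, not Meyer's):

```java
// Encapsulated state: the field is private, so clients cannot write
// "account.balance = -100" and break the class invariant.
public class Account {
    private int balance;   // never assigned directly from outside

    // The only way to change the state, so the invariant can be checked.
    public void deposit(int amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("negative deposit");
        }
        balance += amount;
    }

    public int balance() {
        return balance;
    }
}
```

Whether every field deserves this ceremony is, of course, exactly what is in dispute here.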

Without even discussing whether this is a good point, reducing
most of the complexity of today’s software to this simple concept is
laughable. And the quasi non-existence of Eiffel in the industrial world since
its inception (almost twenty years now) doesn’t seem to make Bertrand Meyer
doubt the validity of the choices he made when he designed Eiffel.

Eiffel sports some very amusing archaisms, such as the absence of
overloading.  The justification is worth a read:

Readability is also enhanced when overloading is not possible. With
overloading you would need to consider the type of the arguments as
well as the type of the target before you can work out which feature
is called. With multiple inheritance and dynamic binding this is
awkward for a compiler and error-prone for a human.

"Awkward for a compiler"?  Now that’s a good one.  Code is code. 
It might be a difficult problem to tackle, but refusing to implement overloading
while choosing to support multiple inheritance of implementations makes me
wonder about the sanity of the ISE engineers.

Even now,
Bertrand Meyer is certainly still convinced that overloading is evil:

This kind of apparent short-term convenience buys a lot of long-term complexity,
because you have to find out in each particular case what exactly is the
signature of every variant of an operation.

Well, this is called a "paradigm shift".  And by the way, the
paradigm shifted at least ten years ago, and I can guarantee that by now, programmers are quite used
to reading polymorphic, overloaded methods.  Especially with modern IDE’s.
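To make the disputed feature concrete, here is what overloading looks like in Java: the compiler picks a variant from the static types of the arguments, with no runtime guesswork (class and method names are mine):

```java
// Three overloads of the same method name; the compiler selects one
// at each call site based on the static type of the argument.
public class Printer {
    public String print(int i)    { return "int: " + i; }
    public String print(String s) { return "string: " + s; }
    public String print(Object o) { return "object: " + o; }
}
```

A call like print(42) binds to the int variant at compile time; casting the argument to Object switches the binding. That is all the "considering the type of the arguments" work the Eiffel documentation objects to.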

Besides, I happen to know for a fact the original motivation for leaving overloading out
of Eiffel (my teacher was a close friend of Bertrand Meyer).  Back then,
Eiffel compiled to C, and overloading is not supported in C.  Of course,
that never stopped the original C++ compiler (cfront) from solving the problem,
but it looks like the ISE engineers decided that this problem was too hard
for them.  And they haven’t revisited their choice since 1985.

Finally, one last gem:

“if it looks too good to be true, then it must not be true,” which is certainly the stupidest utterance ever proffered by humankind. This is the kind of cliche
we hear from many people, and it’s just wrong in the case of Eiffel.

Bertrand can rest assured: anyone who sees an Eiffel program for the first time will definitely not think “it’s too good to be true”.


Very abstract classes

Rickard is making a few changes in his views on AOP, more specifically on dynamicity:

Basically, each class (be it a normal class or aspect implementation) simply declares what it needs in terms of other interfaces in the “implements” clause, but does not actually implement them.

I find this idea puzzling. While it does indeed help solve the static typing problems, since your classes and advice now really implement the interfaces that the AOP framework might have introduced on them, you are still left with the problem of object identity.

For example, in his example, the class PersonMixin declares that it implements Person and Author, but the class is declared abstract and the Java code only implements Person. The implementation of Author will be supplied by the AOP framework in a mix-in.
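In Java terms, the idiom seems to boil down to the sketch below. The names PersonMixin, Person and Author come from Rickard's example; the member signatures are my guesses:

```java
// Hypothetical interfaces known to the AOP framework.
interface Person { String name(); }
interface Author { String lastBook(); }

// The class declares both interfaces but only implements Person;
// Author is left abstract, to be supplied by the framework in a
// generated subclass (via CGLIB in Rickard's case).
abstract class PersonMixin implements Person, Author {
    private final String name;

    PersonMixin(String name) { this.name = name; }

    // Implemented by hand.
    public String name() { return name; }

    // No lastBook() here: the mix-in provides it at weave time.
}
```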

I don’t know about you, but my first reaction is that this reminds me a lot of how CMP 2.0 works. You work on an abstract class that only contains your business logic and all the abstract methods are generated by the container in a subclass. The difference with Rickard’s AOP approach is that he uses CGLIB to implement the concrete class and, more importantly, that the generated subclass can be controlled by a pointcut model. So far, so good.

However, unless I am missing something important, I just don’t see how this could work beyond a toy application where objects can be populated statically. The problem is that at runtime, you just don’t know what instance of Author you will be talking to. How do you make sure it’s the right one?

In the case where this class can be populated statically, you can use something like Spring’s BeanFactory to get you started. However, I suspect that in the real world, the Author object is populated at runtime. Its value is part of your business logic, and I don’t see how you can guarantee this in an AOP framework, unless you start putting the said business logic in your pointcut definition. An idea which, frankly, scares me.

Maybe you could envision having an advanced version of PicoContainer that knows about identities of objects and allows you to look them up, but do we really want to go down the path of defining primary keys in Java objects just so that we can use them in-process?

I wonder if this idea isn’t simply going too far. AOP makes us think about proper division of responsibilities and concerns, but this idiom is a step toward splitting your business logic between Java code and… some other undefined place that depends on your AOP framework.

Rickard, if I missed something, please let me know.

By the way, all these problems magically go away with AspectJ, which combines:

  • Static typing.
  • A powerful pointcut model.
  • Familiar OO concepts such as inheritance and polymorphism.

Flaws in Ruby

I am in general very fond of Ruby. It’s a very appealing language, allowing all kinds of object-oriented designs not available in Java and other traditional languages. However, there is no such thing as a perfect language, and there are a few details in Ruby that bother me. For example:

  • The Perl heritage. Ruby uses variables named $`, $', etc… This hacker parlor trick is the worst outrage you can inflict on a program. You are guaranteed to confuse anyone who isn’t intimately familiar with Perl if you use these variables. Luckily, a module called “english.rb” lets you use more meaningful names, but not everyone uses it.
  • The end keyword. I am a big fan of meaningful indentation, such as in Python. It bothers me when I read a source file and suddenly see five “end” keywords stacked in decreasing indentation. This is visually unpleasant and clutters the code. And if meaningful indentation is not an option, at least “}” is not as verbose as “end”, which I can’t help spelling out in my mind when I read it, even though it adds little to the semantics of the code.
  • No overloading. That’s right. If you want overloaded methods, you need to declare one method with a varargs signature and decide what code to invoke based on the types of the objects that were passed. This omission boggles my mind, but it is perfectly in line with the philosophy of Matz, who is a strong opponent of orthogonal features because they tend to “explode into complexity”.
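Transposed to Java, the difference is easy to see: without overloading, you end up writing the type dispatch yourself, which is exactly the work the compiler would otherwise do for you (a contrived sketch, all names mine):

```java
// Manual dispatch, Ruby-style: one entry point, types inspected by hand.
public class Formatter {
    public static String format(Object... args) {
        if (args.length == 1 && args[0] instanceof Integer) {
            return "number " + args[0];
        }
        if (args.length == 1 && args[0] instanceof String) {
            return "text " + args[0];
        }
        return "unknown";
    }
}
```

With overloading, format(int) and format(String) would simply be two methods and the instanceof ladder would disappear.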

Matz does have a point with the exploding complexity of orthogonal features. I believe it is one of the main reasons why C++ became so unbelievably complex, both in syntax and in semantics. For example, templates were initially introduced using the "<" and ">" characters. It didn’t take long before somebody realized that this notation conflicts with the ">>" operator, thereby forcing you to close nested templates with "> >" instead of ">>".

However, I believe that in the particular case of overloading, Matz is mistaken. This is one of the few features whose combination with other features is pretty well understood and still easy to read. The only problem I can think of is when you try to mix overloaded methods with default arguments. The ambiguity of this particular case led to the rule that default arguments can only be specified at the end of the signature (okay, there is another reason for this constraint that has to do with the way parameters are pushed to the stack, but I won’t go there).
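The interaction is easy to see in Java, which has no default arguments at all and uses "telescoping" overloads instead: each shorter variant fills in a trailing default. If defaults could appear in the middle of a signature, a call supplying only some arguments would be ambiguous (a hypothetical example, all names mine):

```java
// Telescoping overloads: each shorter variant supplies a trailing default.
public class Connection {
    public static String connect(String host, int port, int timeout) {
        return host + ":" + port + " (timeout " + timeout + ")";
    }

    public static String connect(String host, int port) {
        return connect(host, port, 30);   // default timeout
    }

    public static String connect(String host) {
        return connect(host, 80);         // default port
    }
}
```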

Matz himself is the first to say that “there is no perfect language”. Or rather, his perfect language is not my perfect language. Fair enough.

Ruby is still a joy to program.

A back-up story

I used to be very thorough with my back-ups, sometimes spending one hour inserting each of the thirty floppy disks to archive my whole hard drive. Of course, such an exercise is pointless these days, where the only way you could realistically do a full hard drive back-up would be with another hard drive.

I tend to do selective back-ups now. I have a few directories that contain unique information (Web site, MP3, calendar, address book, etc…) and once in a while, I burn these on a CD or I just replicate them on another machine (home, work, laptop, etc…).

This strategy has worked pretty well so far. Until recently. The interesting thing is that the unfortunate event was not a disk crash (that hasn’t happened to me since pretty much my Amiga days in the early nineties): it was my own doing. While I was cleaning up my registry (manually), I stupidly deleted one key too many.

It’s scary to see how deleting a single registry key can wipe an entire profile out of existence. Pretty much all my Outlook information (address book, contacts, calendar, etc…) was gone in less than a second. And that’s just the beginning.

It took about one full day for my machine to become functional again, and the selective back-up trick worked very decently. Except for one tiny detail.

One of my hobbies is to read German books and write summaries for them. These books are often scanned and I once needed a more automatic way to clean up the output of the OCR. So I wrote a Word VBScript macro to make my life easier. This is probably the only time in my life I have ever written VBScript, and while it turned out relatively painless, my absolute lack of knowledge of the syntax and libraries made the task take longer than I expected. Yesterday, I tried to apply my macro and realized that it wasn’t there any more.

Selective back-ups work fine as long as you know what to back up. But Word VBScript macros? I wouldn’t even know where to start to look for them.

Then I remembered that a few years ago, I had sent that macro to a mailing-list. I sent a call for help and a few hours later, I received my macro from a list member.

Who needs back-ups these days? The Internet is my back-up.

Workshop and EJBGen

If you are curious to see how Workshop supports EJBGen, take a look at
this article on dev2dev.

Also, I just noticed that the EJBGen mailing-list has passed the 600-subscriber mark. It looks like there are still a few people using EJB’s after all…

AOP talk tonight

Reminder:  I will be making a presentation on Aspect-Oriented
Programming tonight at Java by the Bay, downtown San Francisco.  Stop by and say hi.

JSR 175 and coupling

As you probably all know, JSR 175 will enable annotations for Java programs as
part of the JDK.  The current method, using Javadoc, will slowly be phased
out and replaced by this JSR which defines models for

  • The annotations.
  • The class file format.
  • An API to access the annotations.

The current Javadoc approach is very fragile and non-standard, but it does
have a fairly strong point in its favor:  it is totally non-intrusive.

Consider the following Stateless Session Bean defined with EJBGen:

/**
 * @ejbgen:session
 *    ejb-name = statelessSession
 */
public class TraderEJB implements SessionBean {

    /**
     * @ejbgen:remote-method
     *     transaction = Required
     */
    public void buy(String stockSymbol, int shares) {
        // buy shares
    }
}
This Java source file doesn’t have any dependency on EJBGen.  You can
compile it with any Java compiler as long as you have the J2EE libraries in your
classpath.  No need for ejbgen.jar.

Now consider the JSR 175 equivalent:

import com.bea.wls.ejbgen.annotations.*;

@Session (
    ejb-name = "statelessSession"
)
public class TraderEJB implements SessionBean {

    @Remote (
        transaction = Required
    )
    public void buy(String stockSymbol, int shares) {
        // buy shares
    }
}

This file will not compile unless you have the EJBGen annotations (com.bea.wls.ejbgen.annotations.*)
in your classpath.  You have just tied yourself to a tool.

Now imagine that this source file uses other tools as well, mixing several
kinds of annotations (Struts, AOP, etc…).  Suddenly, your
dependencies shoot through the roof, and a simple source file like the one above ends
up pulling in a big set of JAR files before you can even run javac on it.

Here is another example:  you send me an MBean source with annotations
from a tool that helps you create MBeans.  I want to be able to build that
MBean even if I’m not using your tool.

What about IDE’s such as Rational that save information as annotations in
their Java files?  With JSR 175, you will no longer be able to compile
these files unless you have the Rational classes in your classpath.

The common point in these examples is that the class can be useful
without the annotations, which are merely hints used by third-party tools to get
a certain job done faster.

What can we do about this?

The first idea that came to my mind was to make annotations optional to javac: 
if the compiler cannot locate an annotation type, it emits a warning but
keeps compiling.  This is similar to the behavior of Javadoc, which tries to
compute the full transitive closure of all the types in your source files, attempts
to locate the dependent source files, but simply issues a warning if it can’t
find them and keeps going.

After all, all this means is that no documentation will be available for these
dependent classes, but it shouldn’t stop the tool from producing documentation for the
supplied classes.

Another way to mitigate the problem would be for tools to be distributed in
two parts:  the annotation jar file (or even better:  the annotation
source files) and
the "real" (runtime) jar file.  Compiling the above source file would still make you
depend on EJBGen, but only the annotation part, not the whole runtime.
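The annotation part of such a split is tiny: it only needs the annotation type declarations. A sketch of what a stand-alone annotation type might look like (using the final JSR 175 syntax, which was not settled when this was written; all names are hypothetical):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Shipping just this declaration lets client code compile; the tool's
// runtime jar is only needed when the tool actually processes the class.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Session {
    String ejbName() default "";
}

// Client code depends only on the annotation type above,
// not on the tool's runtime.
@Session(ejbName = "statelessSession")
class TraderBean { }
```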

What do you think?

  • Should javac have an option -ignoreAnnotations?
  • Should this flag be turned on by default?


A domain registration mishap

You might have noticed that my Web site went down for a few days earlier this
week.  Well, I simply forgot to renew the domain registration.  Quite
silly of me.

That being said, Network Solutions is partly to blame for this as well. 
They have had outdated contact information for me for years, and the couple of
attempts I made to change these records (it has to be done by fax, with a copy of
your driver’s license) have failed.  I didn’t try very hard to fix this
problem, so it fell off my radar.  I assume they tried to notify me that
my domain was about to expire but, sure enough, they were using an old email
address from an ex-employer that obliterates your email account on the very day
you leave the company.  So no luck there.

I thought it would be a good opportunity to move to a registrar that doesn’t
charge $35 a year to register a domain, such as pair (I have been a pair customer
for years now and have nothing but praise for their service).  Unfortunately, they
can’t transfer an expired domain, so I am stuck with Network Solutions for
another year.  It serves me right.  But rest assured that I won’t make
this mistake again next year.

Any domain registrar that charges more than $20 a year deserves to go out of business.

Bill Joy on the future

Bill Joy recently gave an interview to Fortune, and some of his statements puzzle me.

Another reason spam is so bad is that so many
companies use Microsoft Outlook for reading e-mail. Again, because that program
is written in C, it’s quite easy to design a virus to go through your e-mail
address book and broadcast spam to all the people you know

Sometimes, I wonder if Joy really lives in the same world we live in. 
Or if he really knows what he’s talking about.  There are two things that
are totally wrong in the above quotation:

  • It’s not because Outlook is written in C that viruses can read your address book. 
    It’s because of COM, a language-neutral Windows technology.
  • Windows viruses propagate not because of Outlook but because people execute
    attachments.  You would suffer the same fate using Eudora or
    Mozilla Mail if you double-clicked on the virus.

I also note that Joy encourages writing programs in Java rather than C because C is
not as safe as Java (true enough), but he doesn’t mention C#, which
is just as safe.

So by using Outlook, you’re not practicing safe
e-mail. We need a "condomized" version of it.

Even though Joy is "cutting the cord" completely with Sun, he obviously can’t
resist spinning things according to the party line of his former employer.  This idea of running
programs in protected environments is very dear to Sun.  It sounds good in
theory but is a usability disaster in the real world.  Let’s
contrast the two approaches:

  • Sun went the "sandbox" way, denying Java programs any potentially
    harmful action by default and forcing users and developers to explicitly grant
    access to dangerous functionality.  The result?  A model that has
    totally failed to catch on, because using Java applets is extremely impractical
    both for users and developers.
  • Microsoft took the opposite approach:  start by (mistakenly)
    allowing everything to run everywhere without restrictions.  The result? 
    They created tremendous traction in the user space and generated millions of
    dollars in revenue both for themselves and the industry, but they also spawned
    the civilization of viruses that we all know and hate today.  Now that they
    have momentum, they are proceeding to patch the holes.

Who made the best choice? You decide.

So far, Bill Joy has been the "enfant chéri" of Sun.  Every year he
came up with a brand new idea, and regardless of whether it became a success or
not, Bill Joy kept being saluted as a visionary and a true hero of our time.

But are you really a visionary when you keep repeating that the world is
doomed unless we change our ways, that we should use more solar-powered energy
and design our power-grids so they never black out?  Last time I checked,
being a visionary was about proposing solutions for the future, not making trite
statements about events past and predicting extinction.

There is no safety net this time: he will have to prove himself.  The
coming years will show whether he really has what so many people see in him, or
whether he is simply a standard geek who happened to be in the right place at
the right time and ended up receiving much more exposure than he deserved.

I wish him well.  Honestly.  But the only way he can achieve the
ambitious goals he set for himself is to break away from the religion and start
looking at things objectively.

Otherwise, the future simply doesn’t need you.

Objects that should budge

A few additional thoughts on the "Half-Bean" technique.  Brian says:

If I add a final instance variable to Album and forget to set it in the
constructor, the code won’t compile.

Mmmh, yes, but you need to remember to use "final", and also not to
assign the field directly in its declaration.  If you don’t religiously do these
two things (I don’t know about you, but I pretty much only use "final" on
constants), my original complaint applies:  you have no way to remember that you
need to update the Builder class every time the main class changes.
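Brian's safety net, for reference: a blank final field must be assigned in every constructor, so adding one does force a compile error until the constructor is updated. The Album class follows the post; the field is my invention:

```java
public class Album {
    // Blank final: declared but not initialized here, so every
    // constructor must assign it or javac rejects the class.
    private final String title;

    public Album(String title) {
        this.title = title;   // delete this line and compilation fails
    }

    public String title() { return title; }
}
```

But write `private String title = "";` instead, or drop the `final`, and the net silently disappears.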

On a slightly different topic:

Making Strings immutable was a wonderful choice

I wonder about that.

One of the main reasons why String is immutable is performance.  While trying to make Java
faster, the designers pushed an optimization decision onto us, poor programmers,
who are now paying a dear price.  String is immutable, but the fact is that
we need mutable strings on a daily basis.  How did the Java designers solve
this problem?  By silently introducing StringBuffer objects whenever the
"+" operator is applied to a String.

The consequence of this choice is sadly obvious:  the #1 tip
you see in Java performance books is "Use StringBuffer to manipulate strings,
not String".
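The tip in code form, assuming nothing beyond the standard library: repeated "+" on a String builds and discards a buffer on every iteration, while an explicit StringBuffer appends in place (today you would reach for StringBuilder, which postdates this post):

```java
public class Concat {
    // Each iteration allocates a fresh buffer and a fresh String.
    public static String withPlus(String[] words) {
        String result = "";
        for (int i = 0; i < words.length; i++) {
            result = result + words[i];
        }
        return result;
    }

    // One buffer, grown as needed, converted to a String once.
    public static String withBuffer(String[] words) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < words.length; i++) {
            sb.append(words[i]);
        }
        return sb.toString();
    }
}
```

Both produce the same result; only the allocation behavior differs.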

Making String immutable was a very poor design decision, because in people’s
minds, a String object is mutable.  Always was, always will be.  It
would have been much more realistic to call the current String class "ConstString"
and StringBuffer, "String".  At least this naming obeys the principle of
least surprise.