Allen Holub is at it again. I already commented on his previous column about inheritance, where I pointed out his usual habit of coming up with a provocative title to attract readers and then trying to turn the idea into an article, usually very unconvincingly.

This one is no exception. Worse, it is actually a rehash of an article he wrote a few years ago.

This (in)famous article is an unconvincing attempt at proving that getters and setters are evil.

Can you make massive changes to a class definition—even throw out the whole thing and replace it with a completely different implementation—without impacting any of the code that uses that class’s objects?

And what does this have to do with accessors? Besides, getters and setters are part of the contract of the class; they are no different from business methods. If you modify anything that's part of the public interface of the class, you are going to break existing code.
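To make the point concrete, here is a minimal sketch (the Account class and its method names are mine, not Holub's): the accessor and the "business method" sit in exactly the same place in the class's public contract.

```java
// Hypothetical class: the accessor and the business method are
// equally part of the public contract.
class Account {
    private long balanceInCents;

    // Accessor: part of the public interface...
    long getBalanceInCents() {
        return balanceInCents;
    }

    // ...and so is this business method. Renaming or removing
    // either one breaks every caller in exactly the same way.
    void deposit(long amountInCents) {
        balanceInCents += amountInCents;
    }
}
```

Callers are coupled to both methods identically; there is nothing about the `get` prefix that makes the first one more fragile than the second.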

Holub's position is basically "a program is made of code only", which is a terrible oversimplification. Programs are made of code and data, and refusing to acknowledge this is a sure way to catastrophic designs.

Don’t ask for the information you need to do the work; ask the object that has the information to do the work for you

What if this work absolutely doesn't belong in the object in the first place? What if you need to gather data from different objects and then process it in a way that clearly doesn't belong anywhere else than in your own object? Of course, you could create an intermediary object to do that, but wait… that's exactly what your current object already is! Following Holub's advice in this particular case basically means turning everything into an object, even when it's not really needed. This screams "overdesign" to me.
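Here is a sketch of the situation I have in mind (all class names are hypothetical, invented for illustration): a report needs data from several unrelated objects, and the rendering logic belongs to the report itself, not to any of the objects it reads from.

```java
import java.util.List;

// Hypothetical domain classes; the names are mine, not Holub's.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    String getName() { return name; }
}

class Order {
    private final long amountInCents;
    Order(long amountInCents) { this.amountInCents = amountInCents; }
    long getAmountInCents() { return amountInCents; }
}

// The rendering logic needs data from several unrelated objects.
// Pushing it into Customer or Order would pollute them with report
// concerns; reading the data through accessors keeps it where it belongs.
class InvoiceReport {
    String render(Customer customer, List<Order> orders) {
        long total = 0;
        for (Order o : orders) {
            total += o.getAmountInCents();
        }
        return customer.getName() + ": " + total + " cents";
    }
}
```

Asking `Customer` or `Order` to "do the work" here would mean teaching domain objects how to format invoices, which is exactly the kind of misplaced responsibility the advice is supposed to prevent.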

Though getIdentity starts with "get," it’s not an accessor because it doesn’t just return a field. It returns a complex object that has reasonable behavior.

Oh, but wait… so it's okay to use accessors as long as you return objects instead of primitive types? Now that's a different story, but it's just as dumb to me. Sometimes you need an object, sometimes you need a primitive type.
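The distinction Holub is drawing can be sketched like this (my own hypothetical classes, loosely inspired by his `getIdentity` example): one accessor returns a rich object with behavior, the other a plain primitive. Both are accessors; which one you want depends on what the caller needs.

```java
// Hypothetical rich object returned by an accessor: it carries behavior.
class Identity {
    private final String id;
    Identity(String id) { this.id = id; }
    boolean matches(String other) { return id.equals(other); }
}

class Person {
    private final Identity identity = new Identity("p-42"); // sample data
    private final int age = 30;                             // sample data

    Identity getIdentity() { return identity; } // returns an object with behavior
    int getAge()           { return age; }      // returns a primitive
}
```

Declaring the first one acceptable and the second one "evil" is an arbitrary line: both expose state through the public interface, and both couple callers to that interface in the same way.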

Also, I notice that Allen has radically softened his position since his previous column on the same topic, where the mantra "never use accessors" didn't suffer a single exception. Maybe he realized after a few years that accessors do serve a purpose after all…

Bear in mind that I haven’t actually put any UI code into the business logic. I’ve written the UI layer in terms of AWT (Abstract Window Toolkit) or Swing, which are both abstraction layers.

Good one. What if you are writing your application on SWT? How "abstract" is AWT really in that case? Just face it: this advice simply leads you to write UI code in your business logic. What a great principle. After all, it's been at least ten years since we identified this practice as one of the worst design decisions you can make in a project.
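For the record, this is what "writing the UI layer in terms of Swing" looks like once it leaks into a business class (a hypothetical sketch, not code from Holub's article): the domain object now depends on a widget toolkit, and porting to SWT, or to no GUI at all, means touching business code.

```java
import javax.swing.JLabel;

// Hypothetical business class coupled to Swing -- the kind of
// dependency the author is warning against.
class Employee {
    private final String name;
    Employee(String name) { this.name = name; }

    // The business object builds a UI component itself. Swapping the
    // toolkit (e.g. for SWT) now forces changes to the domain layer.
    JLabel getNameLabel() {
        return new JLabel(name);
    }
}
```

Swing may be an abstraction over the windowing system, but it is not an abstraction over *which toolkit you use*, which is precisely the coupling that bites here.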

In 1989, Kent Beck and Ward Cunningham taught classes on OO design, and they had problems getting people to abandon the get/set mentality. They characterized the problem as follows:

The most difficult problem in teaching object-oriented programming is getting the learner to give up the global knowledge of control that is possible with procedural programs, and rely on the local knowledge of objects to accomplish their tasks. Novice designs are littered with regressions to global thinking: gratuitous global variables, unnecessary pointers, and inappropriate reliance on the implementation of other objects.

Ah, what a subtle way to twist a message to make it match your own. Back then, the concern was not about accessors. OO advocates were simply trying to change the minds of procedural developers who were used to thinking of everything in terms of function calls. The idea was to introduce the concept of an object, which encapsulates both behavior and data.

If there is one thing that survived the switch from procedural to OO techniques, it’s the fact that data is central to programming. The paradigm shift comes from the fact that we access this data differently, not that we stop accessing it altogether.

Allen keeps repeating that "calling accessors to get your data makes your code less maintainable", but I don't see an ounce of proof in this article. Tying maintainability to accessors is a naive and simplistic assumption. It's even misleading, in the sense that developers might now be tempted to write code without accessors and assume that it will automatically be more maintainable.

Maintainability of code is connected to several factors, and one of them is coupling. Coupling can happen in a variety of ways, and accessors are just a tiny fraction of them. Ignore the bigger picture at your own risk.