Programming by difference

I’m currently working my way through Michael Feathers’ Working Effectively With Legacy Code. Damn good book, BTW. Anyway, it was the section on “programming by difference” that got me thinking while reading it.

In this section, Michael describes adapting a MessageForwarder. The existing implementation has a method, getFromAddress(), which says who the forwarded message is from. Michael wants to adapt the class without changing the client (significantly), so he creates a subclass, AnonymousMessageForwarder, that overrides the getFromAddress() method.

All through this, my mind was screaming “No, don’t do it like that!”. What I wanted to see was this sequence of steps:

* Make a subclass of MessageForwarder, called something like SignedMessageForwarder.
* Move the implementation of getFromAddress() down to the SignedMessageForwarder, leaving the version in MessageForwarder abstract.
* Then create the AnonymousMessageForwarder.

(This slots in _after_ the tests are written, of course.) 😉
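The steps above can be sketched in Java. The class and method names (MessageForwarder, SignedMessageForwarder, AnonymousMessageForwarder, getFromAddress()) come from the discussion; the method bodies and the Message type are assumptions for illustration, not the book's actual code.

```java
// A tiny stand-in for whatever message type the forwarder handles.
class Message {
    private String from;
    Message(String from) { this.from = from; }
    String getFrom() { return from; }
    void setFrom(String from) { this.from = from; }
}

// Steps 1 & 2: the original implementation of getFromAddress() moves
// down into a subclass, leaving the superclass version abstract.
abstract class MessageForwarder {
    public void forward(Message message) {
        message.setFrom(getFromAddress(message));
        // ... send the message on ...
    }

    protected abstract String getFromAddress(Message message);
}

// The original behaviour, now isolated in its own subclass.
class SignedMessageForwarder extends MessageForwarder {
    @Override
    protected String getFromAddress(Message message) {
        return message.getFrom(); // preserve the original sender
    }
}

// Step 3: the new behaviour is a sibling, not a child, of the old one.
class AnonymousMessageForwarder extends MessageForwarder {
    private final String listAddress;

    AnonymousMessageForwarder(String listAddress) {
        this.listAddress = listAddress;
    }

    @Override
    protected String getFromAddress(Message message) {
        return listAddress; // hide who the message was from
    }
}
```

With this shape, the two behaviours sit side by side at the same level of the hierarchy, which is exactly what makes the difference between them easy to see.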

Then, about 6 pages later, Michael did exactly this (except that he called the other subclass an AddressPreservingForwarder, which proves that he comes up with better names than I do). *phew* I felt better reading that. However, I disagreed with why he did it.

Michael invoked the Liskov Substitution Principle, which I’ve rabbited on about before, so I’m not going to do it again. I don’t think the LSP is the most valid reason for doing this. IMO, he gave the best reason in the title of the section: Programming by Difference.

When you have a class hierarchy, it is easy to see differences between classes at the same level. You can easily scan the class signatures and see the methods that each has overridden from their respective parent. It’s not as simple to see the differences between a parent and a subclass. If you start from the parent class, you may not even realise that the subclass exists.

What I like to try and do when using implementation inheritance (as opposed to type inheritance) is to keep methods designed for inheritance extremely simple. In the superclass, I strive for these rules:

* If the method belongs in the super, make it final. Use the Template Method pattern to provide hooks for subclasses.
* If the intended subclass needs to define the method, make it abstract.
* If an implementation is optional (as is often the case with the Template Method pattern), provide an empty implementation.
* Override concrete methods only to limit behaviour, not to expand it.
* Above all else: avoid having complex methods that are completely overridden by a subclass. This just confuses things no end.
* Finally: don’t have protected variables!
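The rules above can be sketched in one small hierarchy. The class names here (ReportGenerator and friends) are hypothetical, invented for this example:

```java
// A sketch of the superclass rules: final template method, abstract
// required hook, empty optional hook, private state only.
abstract class ReportGenerator {
    // Rule 1: the method that belongs in the super is final; the
    // Template Method pattern supplies the hooks.
    public final String generate() {
        StringBuilder out = new StringBuilder();
        appendHeader(out); // optional hook
        appendBody(out);   // required hook
        return out.toString();
    }

    // Rule 2: the subclass must define this, so it is abstract.
    protected abstract void appendBody(StringBuilder out);

    // Rule 3: an optional hook gets an empty implementation,
    // not an abstract declaration.
    protected void appendHeader(StringBuilder out) {
    }

    // Rule 6: no protected variables -- any state stays private,
    // exposed through methods if subclasses genuinely need it.
}

class SummaryReportGenerator extends ReportGenerator {
    @Override
    protected void appendBody(StringBuilder out) {
        out.append("summary");
    }
}

class DetailedReportGenerator extends ReportGenerator {
    @Override
    protected void appendHeader(StringBuilder out) {
        out.append("== detail ==\n");
    }

    @Override
    protected void appendBody(StringBuilder out) {
        out.append("details");
    }
}
```

Note that each subclass only ever supplies behaviour the superclass declared a slot for; nothing concrete in the superclass gets replaced wholesale.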

These rules tend to give me inheritance trees of a fairly uniform depth. They also usually give me trees that are, in Michael Feathers’ terms, “normalised” – that is, class hierarchies where a class only has one implementation of any given method between itself and its parents. This is a very powerful technique for determining differences between classes.

It’s also a corollary of the “Single Responsibility Principle”, which Michael elaborates on as well – if a class has some behaviour that a subclass wants, but also has behaviour the subclass doesn’t want (and will remove by overriding), then the class probably hasn’t had its responsibilities broken down far enough.

I find people tend to either be abusers of inheritance, or have been so scarred by it they swing entirely the other way, practically banning it. Personally, I find it powerful when not abused, and the rules listed above are some of the ways I try to not abuse it. Another is never to have a class hierarchy that makes me want to draw it on a scroll instead of a piece of paper.🙂

Author: Robert Watkins

My name is Robert Watkins. I am a software developer and have been for over 18 years now. I currently work for people, but my opinions here are in no way endorsed by them (which is cool; their opinions aren’t endorsed by me either). My main professional interests are in Java development, using Agile methods, with a historical focus on building web based applications. I’m also a Mac-fan and love my iPhone, which I’m currently learning how to code for. I live and work in Brisbane, Australia, but I grew up in the Northern Territory, and still find Brisbane too cold (after 16 years here). I’m married, with two children and one cat. My politics are socialist in tendency, my religious affiliation is atheist (aka “none of the above”), my attitude is condescending and my moral standing is lying down.

4 thoughts on “Programming by difference”

  1. Perhaps “Greg’s” restriction is not such a bad idea – it removes the temptation.

To paraphrase The Hitchhiker’s Guide to the Galaxy:

    Developers must not use implementation inheritance ever unless one of the following is true:
    1) Their life depends on them using it
    2) Their life would otherwise depend on it if they weren’t able to use it
    3) They really really want to.

    VB (pre-.NET) had only interface inheritance (not sure about VB.NET) – I suppose that was a good thing; otherwise it would have been like a kid with a new toy, as it was a lot of people’s only experience with OO in corporate development.

  2. It depends, really…

    Implementation inheritance is not a core pillar of OO. Polymorphic behaviour depends on type inheritance, not implementation inheritance. Implementation inheritance provides one form of aggregation of behaviour; the other being aggregation via composition (e.g. delegation).

    When you have classes with mostly similar behaviour, but some differences, composition gets a bit painful. You end up delegating most of the class, but leaving a little bit. In these scenarios, implementation inheritance makes a lot of sense. In general, if you are overriding less than about half of the behaviour of the parent, stick with implementation.

    The real issue (and the cause of the abuse) is classes that have too much behaviour. Too much behaviour makes composition too painful (too many methods to delegate), but makes an inheritance hierarchy too convoluted.

  3. Yes, they can (at least Eclipse can). Unfortunately, for larger classes, excessive delegation significantly impairs readability. Again, the issue here is overly large classes.

    (In Eclipse, you can select a member variable of a class, and automatically generate delegates to it)
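The delegation approach discussed in the comments looks roughly like this. The MessageSender interface and its implementations are made up for illustration; in Eclipse the forwarding methods could be generated rather than hand-written:

```java
// Aggregation via composition: the wrapper changes one behaviour and
// forwards everything else to a delegate.
interface MessageSender {
    void send(String to, String body);
    boolean isConnected();
}

class SmtpSender implements MessageSender {
    @Override
    public void send(String to, String body) {
        // ... talk SMTP ...
    }

    @Override
    public boolean isConnected() {
        return true; // stub for illustration
    }
}

class CountingSender implements MessageSender {
    private final MessageSender delegate;
    private int sent = 0;

    CountingSender(MessageSender delegate) {
        this.delegate = delegate;
    }

    @Override
    public void send(String to, String body) {
        sent++;                   // the one real difference
        delegate.send(to, body);
    }

    @Override
    public boolean isConnected() { // pure forwarding boilerplate --
        return delegate.isConnected(); // this is the "pain" that grows
    }                                  // with the size of the interface

    int sentCount() {
        return sent;
    }
}
```

With a two-method interface the forwarding is trivial; on a class with dozens of methods, almost all of the wrapper becomes boilerplate like isConnected() above, which is why composition gets painful exactly when classes get too big.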
