The Five Minute Introduction to Using Smalltalk-Style Traits Instead of Inheritance

Smalltalk-style traits are beginning to gain more traction outside of Smalltalk, but there still seems to be a lot of misunderstanding about them. This is a ridiculously quick and language-agnostic description of traits and how they work. Much of the background is simply ignored and it's taken for granted that you understand the conceptual problems with inheritance (both single and multiple).

Cross-cutting concerns

When presenting alternatives to inheritance, there are a couple of things to keep in mind. First, the core problem with inheritance as commonly used is that it tends to tightly couple class responsibility with code reuse. A Person class is likely responsible for the person's name and birthdate. The fact that it has save() and update() methods is likely a convenience. For example (all examples are pseudo-code unless otherwise noted):

    ORM orm = new ORM(list, of, connection, parameters);
    orm.save(person);

    // versus

    person.save();

In the examples above, you can see that you don't need to have a save() method directly on the Person class. However, many find it convenient. In fact, it's convenient enough that code like the following is common:

    person.save();
    order.save();
    invoice.save();
    customer.save();

In that example, we have a bunch of objects which are saved. We don't know and don't care how they're saved, but it's a good bet that most of those items are not related by inheritance aside from whatever persistence layer may be used. Behaviours which unrelated classes might implement are often referred to as "cross-cutting concerns" because many classes, regardless of their relation to each other, might want to implement them.

Examples of cross-cutting concerns are:

  • Object persistence (ORMs)
  • Logging
  • Serialisation/deserialisation
  • Memory allocation
  • Synchronisation
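To make the persistence example concrete, here's a minimal Ruby sketch (the Persistable module and its behaviour are inventions for illustration, not a real ORM): one module gives save() to classes that share no ancestor.

```ruby
# A cross-cutting concern: unrelated classes share save() behaviour
# without being related by inheritance.
module Persistable
  def save
    # a real implementation would talk to a persistence layer here
    "saving #{self.class.name} via some persistence layer"
  end
end

class Person
  include Persistable
end

class Invoice
  include Persistable
end

Person.new.save    # both respond to save() despite being unrelated
Invoice.new.save
```

Person and Invoice have nothing in common except that both want to be persisted, which is exactly what makes persistence "cross-cutting".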

Declarative statements

The next problem which often comes up is code which looks declarative, but really isn't. For example, I used to be the administrative coordinator for a luxury furniture company. However, 25% of my time, I was also their only software developer (it was my first programming job). How might this be modelled?

class AdminCoordinator isa OfficeGrunt, Programmer { ... }

So far nothing looks too unusual. We've inherited from the two jobs I had. What happened, though, if I finished all of my office paperwork and had 50% of my time left over for programming? Simply put: I had to stand there and twiddle my thumbs, because I was paid a higher hourly rate when programming (I don't believe this was legal, but I needed the programming experience) and thus wasn't allowed to exceed 25% of my time working as a programmer. Thus, both OfficeGrunt and Programmer might have their own salary() methods. Which one gets called? That could be a compile-time error due to ambiguity (Eiffel does this and I recommend looking at how it works), a runtime error, or it might simply call the salary() method from OfficeGrunt because that's the one you've inherited from first.

Mixins don't solve the problem here (note that Scala traits are basically mixins and also fail to solve this) because they actually have a "last wins" rule and the Programmer.salary() method would be called instead. In Ruby, for example, we might have this:

    class AdminCoordinator 
        include OfficeGrunt
        include Programmer
        # more code here
    end

If you were to call the ancestors method on AdminCoordinator, you would see something like this:

    irb(main):026:0> AdminCoordinator.ancestors
    => [AdminCoordinator, Programmer, OfficeGrunt, Object, Kernel]

In short, you might think that mixins "mix in" the methods into your class, but they actually just diddle the inheritance hierarchy and this causes some frustrating issues.
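Here's a runnable Ruby sketch of that "last wins" rule, using the module names from the example above (the salary figures are made up):

```ruby
module OfficeGrunt
  def salary(hours)
    hours * 10
  end
end

module Programmer
  def salary(hours)
    hours * 15
  end
end

class AdminCoordinator
  include OfficeGrunt
  include Programmer   # included last, so it shadows OfficeGrunt#salary
end

AdminCoordinator.ancestors.first(3)
# => [AdminCoordinator, Programmer, OfficeGrunt]
AdminCoordinator.new.salary(40)   # Programmer's version wins, silently
```

Swap the two include lines and you silently get OfficeGrunt's salary() instead; nothing warns you that two modules were fighting over the same method name.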

Ultimately the problem is that mixins, like inheritance, rely on the order in which things are declared, but let's step back and think about my job at that furniture company.

  • Ovid worked at the furniture company as both an office grunt and a programmer.
  • Ovid worked at the furniture company as both a programmer and an office grunt.

You'll note that both of those sentences declare something about me while reordering some terms. The order of those terms does not matter. This is what "declarative" code is about; it makes a declaration and the order in which it declares things does not matter.

Solving this with Smalltalk-style traits

Smalltalk-style traits, also known as "roles" (particularly in the Perl world), are declarative and cleanly separate class responsibility and code reuse. We'll start calling them roles because the term "trait" is rather overloaded in the programming world.

A role does several things (a bit of hand-waving here):

  • It might provide behaviour (methods)
  • It might require behaviour (acting sort of like an interface)
  • Any conflicts (duplicate methods) must be resolved at compile-time
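Those three properties can be sketched in Ruby with a little metaprogramming. This is a hypothetical helper for illustration (RoleComposition and its does method are made up, not a real library): duplicate methods raise an error instead of silently overriding, and required methods are checked when the role is composed.

```ruby
module RoleComposition
  # Compose roles into the class: duplicate methods are an error rather
  # than a silent override, and required methods are checked up front.
  def does(*roles, requires: [])
    provided = {}
    roles.each do |role|
      role.instance_methods(false).each do |m|
        if provided[m]
          raise "conflict: #{m} provided by both #{provided[m]} and #{role}"
        end
        provided[m] = role
      end
      include role
    end
    missing = requires.reject { |m| method_defined?(m) }
    raise "missing required methods: #{missing.join(', ')}" unless missing.empty?
  end
end

module Programmer
  def salary(hours)
    hours * (12 + seniority / 10.0)
  end
end

class AdminCoordinator
  extend RoleComposition

  def seniority   # required by Programmer; defined before composing
    30
  end

  does Programmer, requires: [:seniority]
end
```

A class that composes Programmer without providing seniority fails as soon as does runs, rather than blowing up later at call time.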

A role might look like a Ruby mixin or a partial class. Here is how the "Programmer" role might look:

    role Programmer {
        // something, even another role, must implement this
        requires int seniority();

        // here's a method we provide
        method salary(float hours_worked) {
            return hours_worked * (12 + ( this.seniority() / 10 ) );
        }
    }

And when composed into a class:

    class AdminCoordinator 
      isa Employee 
      does OfficeGrunt, Programmer {
        int seniority; // we should fail at compile time without this 
                       // because Programmer requires it
    }

But that should fail at compile time because OfficeGrunt also has a salary method and roles do not allow you to get away with this ambiguity. Roles should allow you to exclude conflicting methods. Let's say that company policy requires that a person always get paid the highest salary amongst their several roles. The code might look like this:

    class AdminCoordinator 
      isa Employee
      does OfficeGrunt -salary,  // exclude the salary method
           Programmer {
        int seniority;
    }

However, in my case, my salary had to be distributed across the different tasks I was doing, so we need each of those methods. In this case, we can rename them.

    class AdminCoordinator 
      isa Employee
      does OfficeGrunt salary -> grunt_salary,
           Programmer  salary -> programmer_salary {
        int seniority;
        method salary(float hours) {
            return this.grunt_salary(.75 * hours) 
                 + this.programmer_salary(.25 * hours);
        }
    }
Note that absolutely nothing in the above is dependent on the order in which the roles are consumed. There is also no ambiguity present.
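Ruby can approximate this exclusion and renaming with a little metaprogramming. A hedged sketch (the excluding and renaming helpers are inventions for illustration, and the salary figures are made up): each helper duplicates the module and adjusts the copy, so the original role is untouched.

```ruby
# Compose a copy of a module with some methods dropped.
def excluding(role, *names)
  copy = role.dup
  copy.module_eval { names.each { |m| remove_method m } }
  copy
end

# Compose a copy of a module with one method renamed.
def renaming(role, from:, to:)
  copy = role.dup
  copy.module_eval do
    alias_method to, from
    remove_method from
  end
  copy
end

module OfficeGrunt
  def salary(hours)
    hours * 10
  end
end

module Programmer
  def salary(hours)
    hours * 15
  end
end

class AdminCoordinator
  include renaming(OfficeGrunt, from: :salary, to: :grunt_salary)
  include renaming(Programmer,  from: :salary, to: :programmer_salary)

  # distribute the hours across both roles, as in the pseudo-code above
  def salary(hours)
    grunt_salary(0.75 * hours) + programmer_salary(0.25 * hours)
  end
end
```

Because both salary methods are renamed out of the way, the two include lines can be swapped freely: the composition is declarative rather than order-dependent.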

What now?

There is, unfortunately, a bootstrapping problem with roles. On at least one Smalltalk project mailing list, I saw the lead developers veto using roles (traits) because other people were not using them. Surprisingly, it's the Perl community which has embraced them wholeheartedly. We use them extensively at the BBC on the PIPs project. PIPs is the central metadata repository for the BBC and as you might imagine for the world's largest broadcaster, there's a huge codebase managing the metadata. Almost all of this code is in Perl and we started using roles in an attempt to simplify our code.

It turned out to be a huge win. The codebase became simpler, much of the code was easier to read and new features became easier to implement. Far from having a theoretical benefit, roles provided concrete, measurable benefits.

Aside from the "nobody uses them" argument (one which will erode with time), others argue that the problems which roles address may not be real. I've heard this multiple times and I don't understand it. First, working on large-scale code bases tends to magnify problems which might not even be noticed in small projects. Second, even the briefest of internet searches reveals plenty of problems with people trying to shoehorn everything into some inheritance model and dealing with the resultant bugs. Of course, the fact that there's been forty years of arguing over how to implement inheritance properly (even single inheritance!) should indicate that something is amiss.

This has been a ridiculously short introduction to roles and I've omitted many things I would have liked to include. Though there's a strong formal background behind them, they are very easy to use and, once understood, map very cleanly to a developer's understanding of the task at hand. Unfortunately, encouraging wider adoption of them is very much a "chicken and egg" problem. If they're available in your programming language of choice, I encourage you to start using them on smaller, less critical projects to better understand how they can make your projects easier to understand and manage.

Scala
Note that Scala traits require the "override" annotation if they will be stacked. The rightmost trait in a composition can also call the same method of the trait to its left via "super".
Re: Scala
Where is the "override" annotation placed? Surely not in the trait itself? They shouldn't know if they're being "stacked" or if they're being consumed with other traits. I don't know enough Scala to just know the answer offhand.
Re: Scala
In the implementing class, e.g.

    trait A { def f(): Int = 1 }
    trait B { def f(): Int = 2 }
    class C extends A with B {
        // def f(): Int = 3   // compile-time error without "override"
        override def f(): Int = 3
    }

Great advice, thanks
Thanks for the article, some very good tips in here, and I appreciate the reference to the research papers. Note that the "Design Patterns" GoF book advises basically the same thing: they advocate code reuse via composition of classes rather than inheritance.

I've run into the same problems in my own work; I think anyone doing OOP for a length of time has done the same. Lately I've been using Ruby modules to compose classes instead, however it has the problem you mention with same-named methods stepping on each other, and no way to express "this final class *must* have [x] method."

Where I used to do stub methods to express a dependency--seemed like a good idea until I included modules in the wrong order--I now express dependencies in the initialize(), like so:

    raise "Class [foo] requires method bar" unless respond_to?(:bar)

This eliminates ordering problems but still lets you barf on creating an instance if the final class doesn't have all the methods it should have.
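That guard can be packaged in the module itself so it runs whenever an instance is created. A small runnable sketch of the commenter's pattern (the Payable module and its methods are made up for illustration):

```ruby
module Payable
  def initialize(*args)
    # the dependency is checked at instantiation, so include order
    # no longer matters
    raise "#{self.class} must implement seniority" unless respond_to?(:seniority)
    super
  end

  def salary(hours)
    hours * (12 + seniority / 10.0)
  end
end

class Programmer
  include Payable
  def seniority
    5
  end
end

class Mystery
  include Payable   # no seniority method: Mystery.new raises
end
```

Programmer.new works because seniority exists; Mystery.new barfs immediately with a clear message, regardless of where the include line sits.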

All that said, the pseudocode you show is much cleaner. This could probably be done in Ruby with some meta-programming foo, but *any* OOP language could benefit from it tremendously.