This article touches on one of the many, many reasons I don't like Robert Martin's approach to programming. If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.
I find the author's alternative much more useful in practice.
> If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.
Absolutely agree. Proponents of approaches like this tend to only worry about intra-object complexity, and ignore the fact that a vast, complicated object graph is also hard to reason about.
Basically it's advocating removing if/switch statements and replacing them with polymorphic method calls. I understand that polymorphism has its value, but I think it's only valuable when, for lack of a better phrase, the thing that's being polymorphosed is "big and varied" enough to justify the extra level of indirection. And the fact that it's hard to explain when that trade-off is worth it is itself a good reason not to turn it into a blanket principle.
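To make the trade-off concrete, here's a minimal sketch (all names hypothetical) of the same dispatch written both ways; the class version only starts paying off once the classes carry more than this one method:

```python
# An if/elif chain keyed on a type tag...
def area_if(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "square":
        return shape["side"] ** 2
    raise ValueError(shape["kind"])

# ...versus the polymorphic version: two tiny classes whose only
# job is to host one method each. Whether this indirection is an
# improvement depends on how "big and varied" the cases really are.
class Circle:
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2
```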
> However, if you already have a class type for each key
I think that's precisely what I was trying to say - polymorphism makes sense when you already have a bunch of classes to do it with, and classes which already contain lots of other fields and methods; it doesn't make sense to create a bunch of classes just to use polymorphism.
Often, though, a series of complicated if statements is hinting at a type system for your objects that hasn't yet materialized in your code. I find it's a good idea to always look at cascading if statements and switch statements and ask, "Would this be cleaner if I reified these concepts as types?"
This is the single most important trick for factoring out shitty code. I cannot believe how many times reification collapsed complexity in our code base, or how not using it was the source of bugs.
If you have cascading ifs, there is a good chance the same cascade is repeated in every place this type system is missing. Meaning, if you want to add another "case" to a feature, you end up modifying cascading ifs in 5-10 places, not just one.
Wrapping all of this code into an implementation of an interface that is "hooked in" at each contact point lets you add a cohesive new use case by writing a single new implementation, instead of "forgetting 2 places in the code" and causing massive bugs because of it.
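A minimal sketch of that idea, with a hypothetical `PaymentMethod` interface standing in for the missing type system: every contact point calls through the interface, so adding a new case is one new class rather than edits to scattered if-chains.

```python
from abc import ABC, abstractmethod

# The interface gathers every "contact point" behavior in one place.
class PaymentMethod(ABC):
    @abstractmethod
    def validate(self, amount):
        ...

    @abstractmethod
    def charge(self, amount):
        ...

class Card(PaymentMethod):
    def validate(self, amount):
        return amount > 0

    def charge(self, amount):
        return f"charged {amount} to card"

# Adding, say, a BankTransfer case later means one new class
# implementing PaymentMethod; checkout() and every other call
# site pick it up without modification.
def checkout(method: PaymentMethod, amount):
    if not method.validate(amount):
        raise ValueError("invalid amount")
    return method.charge(amount)
```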
If it is sensible to call the same function with them as the same parameter, the objects are not unrelated. There is a very real and relevant-to-the-application sense in which they share a common type. (Now, if they aren't sharing implementation -- which they clearly aren't in at least one relevant way if you are avoiding an "if" statement by putting them in a class hierarchy -- then it probably makes more sense for them to implement a common interface rather than being subclasses of the same class, in a language which distinguishes those rather than using multiple-inheritance class hierarchies plus method overloading to serve the role of interfaces.)
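In a language without separate interface declarations, a structural protocol can play the same role. A sketch with hypothetical names: `Invoice` and `LogLine` share no base class and no implementation, but both satisfy the contract the caller relies on.

```python
from typing import Protocol

# A structural "interface": anything with a render() -> str method
# conforms, no common ancestor required.
class Renderable(Protocol):
    def render(self) -> str:
        ...

class Invoice:
    def render(self) -> str:
        return "invoice"

class LogLine:
    def render(self) -> str:
        return "log line"

# The shared type exists because it is sensible to pass either
# object here -- exactly the sense described above.
def show(item: Renderable) -> str:
    return item.render()
```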
Agreed. I find the best advice comes from people who have built significant systems that are both complex and innovative for their time. Brian Kernighan and Rob Pike's The Practice of Programming is one of my favorites for this reason.
They tend to recommend solutions that are highly dependent on the problem to be solved.
Trying it the other way round, i.e. fitting the problem to an idealized solution, rarely works. That is what I often see from people who place an emphasis on being "object-oriented", as opposed to just solving the problem with minimum redundancy but without twisting their code to do it.
As an example, creating a function and calling it twice from two similar classes is much easier to read than inventing an intermediate class that is an ancestor of the two (something I see very often from the hardcore OO folks).
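For instance, something like this (hypothetical names): a plain shared helper called from two similar classes, with no invented common ancestor.

```python
# A free function removes the duplication directly...
def format_timestamp(ts: float) -> str:
    return f"[t={ts:.1f}]"

# ...so the two classes stay unrelated instead of being forced
# under an artificial ancestor that exists only to hold this code.
class AuditEvent:
    def __init__(self, ts: float):
        self.ts = ts

    def summary(self) -> str:
        return format_timestamp(self.ts) + " audit"

class MetricSample:
    def __init__(self, ts: float):
        self.ts = ts

    def summary(self) -> str:
        return format_timestamp(self.ts) + " metric"
```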
> As an example, creating a function and calling it twice from two similar classes is much easier to read than inventing an intermediate class that is an ancestor of the two (something I see very often from the hardcore OO folks).
You really see this a lot? As far as I can remember using inheritance over composition for de-duplicating code has been considered bad practice in OO circles for at least 15 years.
Wow, that's terrible, but I'd prefer if you referred to those people as "out of date OO people" as opposed to "hardcore OO people". Even though I don't consider myself a "hardcore OO" person now, when I did this would bother me.
> If some principle results in a handful of single-method classes that don't really do anything on their own, the principle is not a good basis for design.
I call those "half responsibility classes": classes that don't actually have any responsibility other than carrying the little extra information the parent needs to perform the actual job.
That's bad, because classes are a way to encapsulate data and logic; and those... _things_ don't have any logic.
That's terrible OOP right there. I don't think the SRP encourages such thinking.
I think the SRP is a subtle and often poorly explained thing that is very easy to misunderstand in a way which encourages such thinking, even though understood properly, it does not. The basic problem is getting the right "level" for the responsibility in context, and that's not an easy thing to explain how to do.