Good design separates things that change for independent reasons. This is often called “Separation of Concerns,” a term coined by Edsger Dijkstra, and it applies to many different aspects of design, process, and analysis.
Here are some examples of aspects of an entity that should ideally be handled separately from one another:
- The conceptual aspect: What it is
- The specification aspect: How to use it
- The implementation aspect: How it works
- The creation aspect: How it is made
- The selection aspect: How it is chosen
- The workflow view: How it collaborates
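As a minimal sketch of how these aspects can be kept apart (all names here are hypothetical, invented for illustration): the specification lives in an abstract type, the implementation in a concrete class, the creation in a factory function, and the workflow in a client that sees only the abstraction.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):                      # specification: how to use it
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):            # implementation: how it works
    def send(self, message: str) -> str:
        return f"email: {message}"

def make_notifier() -> Notifier:          # creation: how it is made
    return EmailNotifier()

def greet(notifier: Notifier) -> str:     # workflow: how it collaborates
    return notifier.send("hello")
```

Each aspect can now change independently: a new implementation or a different creation policy never disturbs the client.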
In 1996, Robert Martin postulated, “High level modules should not depend on low level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend upon abstractions.”
When objects interact, they do so through some kind of interface. An interface is always an abstraction of some kind. The first part of this principle is about making sure these abstractions are not tied to specific implementations.
But there is more to consider. How is an interface created? Based on what? What should the methods of a service look like, and what should the signatures of those methods be?
In both cases, we should avoid basing an interface on how the entity functions (its implementation). Rather, it should be based on how it will be used (the conceptual, or abstract, view of the behavior in question).
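A minimal sketch of this inversion (hypothetical names, not from any particular system): the high-level `ReportSender` depends only on a `MessageChannel` abstraction named for how it is used, and the SMTP detail depends on that same abstraction, not the other way around.

```python
from abc import ABC, abstractmethod

class MessageChannel(ABC):          # abstraction, named for its use, not its mechanism
    @abstractmethod
    def deliver(self, text: str) -> str: ...

class SmtpChannel(MessageChannel):  # detail: depends on the abstraction
    def deliver(self, text: str) -> str:
        return f"smtp:{text}"

class ReportSender:                 # high-level module: knows only the abstraction
    def __init__(self, channel: MessageChannel):
        self.channel = channel

    def send_report(self) -> str:
        return self.channel.deliver("report")
```

`ReportSender` never names SMTP; a different channel could be substituted without touching it.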
Barbara Liskov (1987) stated, “Clients that use base classes must be able to use objects of derived classes without changing” (paraphrased).
When a class is derived from a base class, traditionally we call this an “is-a” relationship. But Liskov suggests we should instead consider it a “behaves-like” relationship; when this is determined to be untrue, perhaps inheritance is being misused.
One place where I saw this in action was in scheduling. The system began with the concept of an “Event,” which had a start date, end date, start time, and end time.
Later, a “Day-Long Event” was added by sub-classing “Event” since a “Day-Long Event is an Event.” However, the Day-Long Event was altered such that the start and end times were locked at midnight-to-midnight since a “day is a 24-hour period.”
This caused problems when support for different time zones was added. Day-Long Events that ran from 12 AM to 12 AM in one zone ran from 9 PM to 9 PM in another, spanning two days. A 24-hour period is not always a day; the two types did not “behave” the same way and therefore were not substitutable.
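That failure can be sketched in code (a simplified, hypothetical API, not the actual system): a client written against `Event` expects to control the start time, but the subclass quietly refuses, so the two do not behave alike.

```python
from datetime import time

class Event:
    def __init__(self):
        self.start_time = time(9, 0)

    def set_start_time(self, t: time) -> None:
        self.start_time = t

class DayLongEvent(Event):
    def set_start_time(self, t: time) -> None:
        # locked to midnight: silently ignores the caller's request
        self.start_time = time(0, 0)

def shift_to(event: Event, t: time) -> time:
    event.set_start_time(t)      # client code written against Event
    return event.start_time      # an Event honors this; a DayLongEvent does not
```

Passing a `DayLongEvent` where an `Event` is expected changes the client’s observable behavior, which is exactly what Liskov’s principle forbids.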
The patterns rigorously avoid this kind of mistake.
The “Open-Closed Principle” was coined in 1988 by Bertrand Meyer, based on an idea put forth earlier by Ivar Jacobson.
It states, “Software entities (such as classes, modules, and functions) should be open for extension, but closed for modification.”
What does this mean? It means that one aspect of strong design is that it allows new functions, features, behaviors, etc. to be added to a system in such a way that the previously existing code does not have to be altered.
Most experienced developers will tell you they would much prefer to make something new rather than change something old. This is because they have experienced both things and have found that making new things is less difficult, less dangerous, less time-consuming, and in general is something they feel more confident about.
How can this principle be achieved? You can make a system open-closed in many different ways, depending on what you want to be able to add later by cleanly plugging in a new entity.
Each design pattern follows open-closed in a different way, about a different thing or set of things. Understanding this is an interesting way to distinguish each pattern from the others. I will examine this aspect of each pattern as I explore it.
Ideally, all software should be tested. That said, some designs are more easily tested than others. This “testability” factor can be very useful in determining the quality of a given design. Here are some reasons.
- When a design is excessively coupled, testing any class requires that many other parts of the system be created in the test. This makes tests complex to write and slow to run, and such a test can fail for reasons unrelated to the class it is meant to verify.
- When a class has multiple responsibilities (weak cohesion) then those responsibilities must be tested together. The tests become difficult to read, write, and maintain.
- When the system has redundancies, the tests will too because the same issues will have to be tested repeatedly.
- When encapsulation is weak many side effects are possible and, therefore, tests must be written to guard against them. The test suite becomes many times larger than the production package.
Testability is really the quality of all qualities because weakness in design always makes testing difficult and painful. And after all, pain is nature’s diagnostic tool. We feel pain in order to know that something is wrong.
The earlier testability is considered, the earlier design flaws can be discovered and corrected.
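As a small illustration of the flip side (hypothetical names): when a class depends on a narrow abstraction rather than the concrete system around it, a trivial stub stands in for everything else, and the test can fail for only one reason.

```python
class Clock:                      # the seam: a narrow abstraction
    def now_hour(self) -> int:
        raise NotImplementedError

class Greeter:                    # the class under test; depends only on Clock
    def __init__(self, clock: Clock):
        self.clock = clock

    def greeting(self) -> str:
        return "good morning" if self.clock.now_hour() < 12 else "good afternoon"

class FixedClock(Clock):          # stub: no other system parts are needed
    def __init__(self, hour: int):
        self.hour = hour

    def now_hour(self) -> int:
        return self.hour
```

A test such as `Greeter(FixedClock(9)).greeting()` exercises `Greeter` alone, in isolation from any real clock or scheduler.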
Redundancy can be a good thing if we are referring to a backup for safety, like the systems on a spacecraft. I don’t mean that. I am referring to an element of the system that is repeated needlessly in such a way that altering it will require the same change be performed repeatedly.
A good example of such a mistake is the Y2K bug. Remediating it was not expensive because changing a two-digit date to a four-digit date is inherently difficult; it is not. It was expensive because we had to make that same change millions of times. We knew it would be easy to miss some, so we had to proceed very slowly and methodically. Y2K remediation produced little or no business value, yet cost billions of dollars.
This bug was created at a time when the expensive part of automating something was the hardware. Memory, disk space, and computing cycles were all very costly and very limited. The human programmer was seen as a fairly trivial expense.
Today this equation is reversed. Developers are expensive, computer hardware is cheap and getting cheaper all the time. Redundancies cost developer time.
Any change should be able to be made in a single place. The patterns will help us to enforce this in various ways, as we shall see.
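A tiny sketch of the “single place” idea (hypothetical names): the date-format decision is made once, so the Y2K-style change from two-digit to four-digit years would touch one line instead of millions.

```python
from datetime import date

# The year-format policy lives in exactly one place.
DATE_FORMAT = "%Y-%m-%d"

def format_invoice_date(d: date) -> str:
    return d.strftime(DATE_FORMAT)

def format_report_date(d: date) -> str:
    return d.strftime(DATE_FORMAT)   # reuses, rather than repeats, the decision
```

If each function had spelled out its own format string, the same edit would have to be found and repeated everywhere, with every repetition a chance to miss one.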
Much of the literature on object-orientation defines encapsulation as “data hiding.” While this is true, it is far too limited as a definition. Data hiding is encapsulation but not all encapsulation is data hiding.
Encapsulation is the hiding of anything. Here are some examples.
- Interfaces, abstract classes, and concrete base classes can be used to hide the types of the classes that implement or derive from them, by casting.
- Factories can encapsulate the specific design of a subsystem; clients call the factory but do not couple to the specific details of what is built.
- The number of entities in a collaboration (cardinality) can be hidden. All clients see a single interaction when in fact there may be more.
- Workflows and the details of interactions that vary by circumstance can be hidden.
- Whether an instance is shared or not can be hidden.
Whenever something can be hidden, you gain advantages when you have to change it. You have much greater freedom when you can make a change without extensively investigating the system, and without the fear, or the reality, of introducing a defect.
What you hide you can freely change. Each pattern hides different things from the rest of the system.
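One of the examples above, a factory encapsulating a concrete design, can be sketched like this (hypothetical names): clients couple to the abstraction the factory returns, never to the concrete class behind it.

```python
from abc import ABC, abstractmethod

class Storage(ABC):                  # the only type clients ever see
    @abstractmethod
    def save(self, item: str) -> str: ...

class _MemoryStorage(Storage):       # concrete design, hidden behind the factory
    def save(self, item: str) -> str:
        return f"stored:{item}"

def storage_factory() -> Storage:    # encapsulates which class is built, and how
    return _MemoryStorage()

# The client never names the concrete class, so it can change freely.
result = storage_factory().save("x")
```

Swapping `_MemoryStorage` for a file-backed or shared implementation would change nothing in client code, because the concrete type was hidden from the start.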
Coupling exists when one part of the system is impacted by changes to another part. Where there is too much coupling, changes to a system can be difficult, time-consuming, and often destructive.
That said, coupling is also necessary. When objects collaborate, they must interact, and interaction always creates some form of coupling between them.
Since coupling is both necessary and potentially problematic, there is both good and bad coupling in a system.
“Loose” is the term most people use when they think the coupling is the way it should be. I prefer the term “intentional” because it means the coupling was created on purpose, and will therefore make sense and be expected to exist. Developers are smart; they never intend bad or excessive coupling.
“Tight” is the term people use to describe poor or excessive coupling, but I prefer the term “accidental.” The coupling we don’t want is the coupling we never intended in the first place; it’s a mistake. When we discover coupling that exists but serves no purpose, we find a way to eliminate it.
Here again, the patterns will help us. The coupling in each pattern is there for a defined reason, is logical and meaningful, and therefore intentional.
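To make the distinction concrete (a hypothetical example): the first client below couples intentionally, through the published interface; the second couples accidentally, to an internal representation that should have been free to change.

```python
class Account:
    def __init__(self, balance_cents: int):
        self._balance_cents = balance_cents   # internal representation

    def balance_dollars(self) -> float:       # the intended coupling point
        return self._balance_cents / 100

def report_good(account: Account) -> str:
    # intentional: couples only to the published interface
    return f"${account.balance_dollars():.2f}"

def report_bad(account: Account) -> str:
    # accidental: couples to internals; breaks if the representation changes
    return f"${account._balance_cents / 100:.2f}"
```

Both produce the same string today, but only `report_good` survives a decision to store the balance differently tomorrow.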
Cohesion is a quality indicating alignment. The best way to understand and remember this is to relate the root word “cohere” to the term “coherent.” Laser light is often called coherent because all of the light waves in the beam are aligned with one another.
What does this mean in software? It has to do with scoping, and we focus on two ways of creating scope: class scope and method scope.
I’m going to be using the terms quality, principles, and practices quite a bit, so it might be useful to explain how I am using them, just for the sake of clarity.
By quality, I am referring to an aspect of design that is desirable or, if missing, is a deficit. In general, the qualities I look for make it easier to change things, since maintenance is the major expense in most systems.
By principle, I mean general guidance about design, concepts that can inform our decisions in many different ways depending on circumstances. Principles are almost never perfectly achievable, but they are always important to keep in mind. The Golden Rule is a principle that we try to follow in polite society. That’s the sort of thing I mean.