TDD requires an expenditure of developer effort. All such effort is an investment, and thus should yield a return. TDD returns value in many ways, but here I will focus on one way in particular:
Tests prove their worth when they fail.
When a test fails, that is the moment we say “wow, we’re glad we wrote that test,” because otherwise a defect in the system would have gone undetected. But we can also ask how *much* value a test’s failure provides, and the answer is that its value is proportional to the quality of the information the failure conveys.
When a test fails for the reason it was intended to, this means several things: Continue reading “TDD “Good” Tests Part 1. The test must reliably fail for the reason intended”
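As a minimal sketch of what “fails for the reason intended” means, consider a granular test with a single assertion. The function and its name here are invented for illustration, not taken from the post:

```python
# Hypothetical example: a granular test that can fail for only one reason.
# parse_price is an invented function, used purely for illustration.

def parse_price(text: str) -> float:
    """Parse a price string like '$3.50' into a float."""
    return float(text.lstrip("$"))

def test_parse_price_strips_dollar_sign():
    # If this fails, we know exactly what broke: dollar-sign handling.
    # It cannot fail because of rounding, formatting, or anything else.
    assert parse_price("$3.50") == 3.50
```

Because the test exercises one behavior, a failure carries high-quality information: it points directly at the broken behavior rather than at a tangle of possibilities.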
As consultants, we are often asked to review the work of others. One of the things we review is the quality of the design of some part of the system. Is it cohesive, decoupled, non-redundant, encapsulated, open-closed, and so forth? Often the developers understand and agree that these qualities are important, but they are not certain they have achieved them adequately.
I often start like this, “I don’t know. Can you write a good test for it?”
I can ask this even before I look at their work because I know that bad designs are notoriously hard to test. It’s a great way to start an evaluation. Continue reading ““Good” Tests in TDD”
Here are some things I have learned from Scrum.
- Cross-functional teams are good. Simply having them can achieve a three- to tenfold improvement over a group of people working on several projects at once, and they improve the team’s capacity for innovation.
- Time-boxing increases discipline, visibility and the ability to pivot.
- Small batches are good and breaking work down into small pieces is essential.
- Smaller release cycles improve almost everything.
- It is useful to have a team coach.
- Do not expect people to figure out what they need to do just because you have put them in a framework.
- Focusing on learning the practices of a framework makes it harder to learn what you actually need to accomplish (flow).
- People like to be given a set of practices to use.
- Defining a simple set of practices to use can lead to rigid dogma.
- Take an approach that transitions you to the behaviors you need.
- Approaches that work well in one context may not work well in another, even though people apply them everywhere without noticing this.
- And just because you can put whatever you want into a framework, that doesn’t mean the framework is not prescriptive: the framework itself has things you must do.
Continue reading “What I have Learned from Scrum”
Most organizations have some type of reporting mechanism allowing customers to alert them to defects they have encountered. Typically, a “trouble ticket” or similar artifact is generated, and someone is assigned the task to 1) locate and then 2) fix the errant code.
TDD views this very differently.
In TDD, a “defect” is code that causes a test to fail after development was thought to have been completed. If buggy code makes it into production and is released to customers, this is not a defect. It is a missing test. Continue reading “TDD and Reported Defects”
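One way to picture “it is a missing test”: the first response to a reported defect is to write the test that should have existed, then fix the code so it passes. The function and the bug below are invented for illustration:

```python
# Hypothetical workflow: a reported defect becomes a test before any fix.
# days_in_month and its leap-year bug are invented for illustration.

def days_in_month(month: int, year: int) -> int:
    """Fixed after the missing test was added: handles leap-year February."""
    if month == 2 and year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return 29
    lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return lengths[month - 1]

def test_february_has_29_days_in_leap_years():
    # This is the test that was "missing" when the defect shipped.
    # It now pins the behavior down so the bug cannot silently return.
    assert days_in_month(2, 2024) == 29
```

The payoff is permanence: once the missing test exists, that defect can never quietly reappear in a future release.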
“Refactoring” refers to the discipline of improving the design of existing code without changing its behavior. It is usually thought of as a way to deal with old legacy code that is functional but poorly designed and thus hard to work with.
Since TDD focuses on driving new behavior from tests, how would refactoring play a role in a TDD team? In three ways: Continue reading “Refactoring Applied to TDD”
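A tiny sketch of what “improving the design without changing behavior” looks like in practice; the example and its names are invented, and the before/after pair is kept side by side only so the equivalence can be checked:

```python
# Invented example: extract a magic number without changing behavior.

# Before: the tax rate is buried in the expression.
def total_with_tax_before(price: float) -> float:
    return price + price * 0.08

# After: the rate is named and the expression simplified.
TAX_RATE = 0.08
def total_with_tax(price: float) -> float:
    return price * (1 + TAX_RATE)

# The refactoring-is-safe check: both versions agree.
assert abs(total_with_tax(100.0) - total_with_tax_before(100.0)) < 1e-9
```

In a TDD shop that safety check is not an ad hoc assertion but the existing test suite, which is what makes refactoring routine rather than risky.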
Some organizations that have adopted TDD as their development strategy have assumed that they no longer need SAT/QA testers, since the developers are now writing tests.
This is a mistake. Continue reading “TDD does not Replace Traditional Testing”
In TDD, we seek to create granular, unique tests: tests that fail for a single reason only. To achieve this, when testing an entity that has dependencies, a typical way to prevent the test from failing for the wrong reason is to create mocks of those dependencies. At its simplest, a mock is a replacement for a dependency that the test controls.
As with every part of TDD, mocking can tell you things about the design of your system. Continue reading “Mocking as a Design Smell”
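A minimal sketch of the idea using Python’s standard `unittest.mock`; the classes (`Alerter`, the weather service) are invented for illustration:

```python
# Invented example: the test controls the dependency via a Mock, so the
# test can only fail because of Alerter's own logic -- never because the
# real service is down, slow, or returning live data.
from unittest.mock import Mock

class Alerter:
    def __init__(self, weather_service):
        self.weather_service = weather_service

    def should_alert(self) -> bool:
        return self.weather_service.current_temp() > 100

service = Mock()
service.current_temp.return_value = 105  # the test dictates the answer
assert Alerter(service).should_alert() is True
```

The design signal comes from volume: when a test needs one simple mock like this, the entity is well decoupled; when it needs many mocks with elaborate scripted behavior, the tests are telling you the entity knows too much about its collaborators.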
One aspect of strong design is that separation is created between the various concerns of the system. This adds clarity, promotes re-use, improves cohesion, and in general adds value to the work.
It can be difficult to know, however, if one has separated things sufficiently, or perhaps has overdone it. This is one area where TDD can help.
Example: An object with asynchronous behavior has, at minimum, two categories of concerns.
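One plausible reading of those two categories, sketched with invented names: the logic itself, and the asynchronous machinery that schedules it. Separating them lets the logic be tested synchronously, with no event loop in the test:

```python
# Invented example: splitting an async object's two concerns.
import asyncio

def compute_total(items: list[float]) -> float:
    # Concern 1: the business logic -- plain, synchronous, easy to test.
    return sum(items)

async def compute_total_async(items: list[float]) -> float:
    # Concern 2: the asynchronous wrapper -- only scheduling, no logic.
    return await asyncio.to_thread(compute_total, items)

# The logic can be verified directly, without any async plumbing:
assert compute_total([1.0, 2.5]) == 3.5
```

If the only way to test the calculation were through the event loop, the test itself would be flagging that the two concerns are entangled.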
The Open-Closed Principle (Bertrand Meyer) states, “Software entities (such as classes, modules, and functions) should be open for extension, but closed for modification.”
This means that a “good” design will allow for a new behavior to be added to a system without having to change the existing code, or at least to minimize those changes.
Of course, one cannot perfectly achieve such a thing, but trying to get as close as possible leads to systems that are far more resilient and extensible in the face of new business challenges/opportunities.
TDD relates to this in the following way. Continue reading “TDD and the OCP”
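A hedged sketch of the principle itself, with invented names: new behavior arrives as a new class, while the existing code goes untouched.

```python
# Invented example: checkout() is closed for modification but open for
# extension -- a new discount type is a new class, not an edit.
from abc import ABC, abstractmethod

class Discount(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class PercentOff(Discount):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

def checkout(price: float, discount: Discount) -> float:
    # This function never changes when a new discount type is added.
    return discount.apply(price)

assert abs(checkout(100.0, PercentOff(10)) - 90.0) < 1e-9
```

Adding, say, a buy-one-get-one discount means writing one new `Discount` subclass and its tests; no existing test needs to be revisited, which is the resilience the excerpt describes.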
I mentioned earlier that TDD offers qualitative measurements about production code, namely that a large average fixture size can be used to measure relative coupling in a system. Similarly, tests can reveal whether, and to what extent, the Single Responsibility Principle has been adhered to.
The Single Responsibility Principle states that every class in a design should have a single responsibility. It is an aspect of cohesion in design. The reason that tests will reveal when this principle has been violated has to do with the number of tests needed for that class’ behavior. Continue reading “TDD and the Single Responsibility Principle”
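An illustrative sketch of that smell, with invented names: a class doing two unrelated jobs demands tests from two unrelated categories, and the clustering of those tests is what gives the violation away.

```python
# Invented example: ReportManager violates SRP -- it both formats data
# and persists it, so its tests split into two unrelated groups.

class ReportManager:
    def format(self, data: dict) -> str:
        return ", ".join(f"{k}={v}" for k, v in sorted(data.items()))

    def save(self, text: str, storage: list) -> None:
        storage.append(text)

# Group 1: formatting tests, which know nothing about persistence...
mgr = ReportManager()
line = mgr.format({"b": 2, "a": 1})
assert line == "a=1, b=2"

# Group 2: persistence tests, which know nothing about formatting.
storage = []
mgr.save(line, storage)
assert storage == ["a=1, b=2"]
```

When the test list for one class reads like two (or more) disjoint suites, that is the test count revealing that the class carries more than one responsibility.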