I mentioned earlier that TDD offers quantitative measurements of production code: for example, a large average fixture size can be used to measure relative coupling in a system. Similarly, tests can reveal whether, and to what extent, the Single Responsibility Principle has been adhered to.
The Single Responsibility Principle states that every class in a design should have a single responsibility. It is an aspect of cohesion in design. The reason that tests will reveal when this principle has been violated has to do with the number of tests needed to specify that class’s behavior. Continue reading “TDD and the Single Responsibility Principle”
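As a sketch of the idea, consider a hypothetical class that mixes two responsibilities (the class and method names below are illustrative, not from the original post). Because calculating a total and formatting it for display are two separate reasons to change, every test of this class must pin down both behaviors, and the test count grows accordingly:

```java
// Hypothetical example of an SRP violation: one class both computes
// and formats. Each responsibility is a separate axis of behavior
// that the tests must specify.
class InvoicePrinter {
    double total(double[] amounts) {
        double sum = 0;
        for (double a : amounts) sum += a;
        return sum;
    }

    String format(double total) {
        // Formatting is a second responsibility, with its own variations
        // (currency, precision, locale) that each need their own tests.
        return String.format(java.util.Locale.US, "Total: $%.2f", total);
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        InvoicePrinter p = new InvoicePrinter();
        // Two unrelated specifications living in one class's test suite:
        assert p.total(new double[]{1.50, 2.25}) == 3.75;
        assert p.format(3.75).equals("Total: $3.75");
        System.out.println("ok");
    }
}
```

Splitting the calculation from the formatting would let each be specified, and changed, independently.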
I was first introduced to XP in 1999. I remember getting into an avid discussion on the XP user group about TDD. I could never remember what it stood for, often calling it “Test-Driven Design” because I recognized that TDD helped people design their code when using it. This was consistent with the first mantra of design patterns, “Design to the behavior of objects.” Focusing on behavior informed design.
I also remember getting some flak for it. People said things like, “Tests couldn’t possibly inform design.” My opinion was, and is, that not only could they, but they should. Continue reading “Understanding the Concept of Testability: A Worthwhile Remembrance”
In TDD, there are always more potential scenarios to test beyond the “happy path” of desirable behavior. We need a way to decide how far to go.
This is often a question of risk assessment. Having a framework for thinking about risk can be useful. Consider one with two dimensions: likelihood and severity. Crossing them produces a four-quadrant matrix. Continue reading “A Framework for Thinking About Risk”
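The two-dimensional framework can be sketched as a tiny decision table. The quadrant names and the testing policy below are my own illustrative assumptions, not prescriptions from the post:

```java
// A minimal sketch of a likelihood x severity risk matrix.
// The policy strings are hypothetical: the point is only that
// crossing two dimensions yields four distinct quadrants.
enum Level { LOW, HIGH }

public class RiskMatrix {
    static String quadrant(Level likelihood, Level severity) {
        if (likelihood == Level.HIGH && severity == Level.HIGH) {
            return "test first";          // likely AND costly if it fails
        }
        if (likelihood == Level.HIGH || severity == Level.HIGH) {
            return "test early";          // one dimension is high
        }
        return "test if time permits";    // unlikely and low-impact
    }

    public static void main(String[] args) {
        assert quadrant(Level.HIGH, Level.HIGH).equals("test first");
        assert quadrant(Level.LOW, Level.LOW).equals("test if time permits");
        System.out.println("ok");
    }
}
```

A scenario's quadrant then suggests how much testing effort it deserves, rather than leaving that decision to gut feel.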
In many large organizations there is a kind of wall between development and testing. Developers do their work and “throw it over the wall” to be tested. This can create negative attitudes on both sides.
Developers see testers either as a source of no information (“It works? Yeah, we knew that”) or of bad news (“there are bugs to find and fix, ugh”). Testers become adversaries to overcome before developers can be said to have succeeded.
Testers see developers as a source of hard work and code that must be wrestled with. Testing often lags hopelessly behind development, which makes developers seem like the source of a never-ending avalanche of work.
Continue reading “TDD Makes Developers and Testers into Valued Colleagues”
Project Managers and Product Owners are sometimes dubious about the development team doing TDD. They are concerned that the team will slow down because they’ve been burdened with additional work, and that developers might “game” the system with bogus tests to satisfy the process. Also, it seems like a nonsensical idea to write a test for something before that thing exists.
All of these concerns are addressed by the observation that, despite its name, TDD is not really a testing activity. The “tests” that are written in TDD are actually the specification of the system.
The effects of this shift in thinking are profound. Continue reading “TDD Provides Value to Everyone in the Development Process”
Defects can either be prevented or detected.
Let’s say you write a method in C# that takes, as its parameter, one of the nine players on a baseball team.
If you decide to make that parameter an integer (1 is the pitcher, 2 is the catcher, and so forth), then the code will still compile if a number greater than 9, or less than 1, is passed into the method. You will have to take some action in the code if that happens: correct the data, throw an exception, something along those lines. This code would be written to “detect” the defect, and would be driven from a failing test in TDD.
On the other hand, we would not have to allow for the possibility that someone will pass a non-integer, like 1.5, into the method, because the compiler will not allow this. Anything the compiler, linker, IDE, etc. will not allow is considered a “prevented” defect. Continue reading “Figuring Out the Test You Missed is Job One”
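The same distinction can be sketched in Java (translated from the C# description above for illustration). With an `int` parameter the bad value can reach the method, so the defect must be detected at run time; with an enum, an out-of-range position cannot even be expressed, so the defect is prevented by the compiler:

```java
public class Positions {
    // Detection: an int permits 0, 10, or -3, so the code must check
    // and react; in TDD this check would be driven by a failing test.
    static String positionNameDetected(int position) {
        if (position < 1 || position > 9) {
            throw new IllegalArgumentException("No such position: " + position);
        }
        return position == 1 ? "pitcher" : position == 2 ? "catcher" : "fielder";
    }

    // Prevention: the type admits only the nine legal values, so no
    // range check (and no test for one) is needed.
    enum Position { PITCHER, CATCHER, FIRST_BASE, SECOND_BASE, THIRD_BASE,
                    SHORTSTOP, LEFT_FIELD, CENTER_FIELD, RIGHT_FIELD }

    static String positionNamePrevented(Position p) {
        return p.name().toLowerCase().replace('_', ' ');
    }

    public static void main(String[] args) {
        assert positionNameDetected(1).equals("pitcher");
        assert positionNamePrevented(Position.CATCHER).equals("catcher");
        // positionNamePrevented(10);  // would not compile: defect prevented
        System.out.println("ok");
    }
}
```

Every defect the type system prevents is one less test the suite has to carry.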
It’s not unusual these days for development organizations to adopt a code coverage requirement. This is usually expressed as a percentage: at least X% of all code developed must be covered by tests.
Measurement tools are then used as a process gate: the team must achieve this minimum coverage level before code can be checked in. This is pointless and may be dangerously misleading. Code coverage tools can only measure how many lines of code are executed by tests, not what the tests do with the results of that execution. Continue reading “TDD and Code Coverage”
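A contrived example makes the limitation concrete (the method and its bug are my own illustration). The “test” below executes every line of `add()`, so a coverage tool would report 100% for it, yet it asserts nothing about the result, so an obvious bug sails through:

```java
public class CoverageDemo {
    static int add(int a, int b) {
        return a - b;  // bug: subtracts instead of adds
    }

    public static void main(String[] args) {
        // Full line coverage of add(), zero verification of its result.
        add(2, 2);
        // The assertion a coverage tool cannot require:
        // assert add(2, 2) == 4;   // this would fail and expose the bug
        System.out.println("coverage achieved, bug undetected");
    }
}
```

Coverage tells you which lines ran; only the assertions tell you whether they ran correctly.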
You cannot meaningfully test that which you do not adequately understand. The time to find that out is before you start development. TDD tells us what we do not know. Sometimes, it tells us what our stakeholders don’t realize they don’t know either.
Imagine you are developing the software for a casino’s poker slot machine (loosely based on a real case). Part of the behavior needed is to shuffle the “cards”, mixing them up into a new order. That would be the stated requirement. If we try to write a test about this, we realize that this is not nearly detailed enough. What is meant by a “new order”? How new? How will we know when the shuffling is adequate? Are there any regulatory requirements about this? Industry standards? Not being casino experts, the developers probably don’t know and would ask the customer. The customer might realize that they aren’t clear themselves.
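Merely attempting to write the test surfaces those questions. In the sketch below (using `java.util.Collections.shuffle` as a stand-in for the real shuffling logic), the easy properties are assertable, but the comments mark exactly where the specification runs out:

```java
import java.util.*;

// A sketch of trying to specify "shuffle the cards".
// The assertions are assumptions about what "shuffled" might mean;
// writing them is what exposes the questions the customer must answer.
public class ShuffleSpec {
    public static void main(String[] args) {
        List<Integer> deck = new ArrayList<>();
        for (int i = 0; i < 52; i++) deck.add(i);

        List<Integer> shuffled = new ArrayList<>(deck);
        Collections.shuffle(shuffled, new Random(42));  // seeded for repeatability

        // Easy to specify: a shuffle is a permutation; no card gained or lost.
        assert new HashSet<>(shuffled).equals(new HashSet<>(deck));

        // Weakly specifiable: the order changed this time.
        assert !shuffled.equals(deck);

        // Not yet specifiable: How "new" must the order be? What randomness
        // standard would a gaming regulator accept? The test cannot be
        // finished until the customer answers.
        System.out.println("permutation verified; 'adequately random' still undefined");
    }
}
```

The two assertions that compile are not the requirement; they are the residue left after the unanswerable questions are set aside.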
Testing something requires far more rigor than most people apply to their businesses, and that means the development team that does TDD not only finds good questions to ask, but can also help the customer to more fully understand their own business domain. At times, this leads them to realize even more business value than they knew they wanted.
A typical question those adopting TDD ask is: How much testing is enough? Or, put another way, does everything really need to be tested? How do you decide what to test and what not to test?
It’s an interesting question, but I prefer to address it this way: everything will be tested. The real question is, by whom? Will it be you, or someone else? Continue reading “How Much Testing is Enough?”
In TDD, the test suite can serve as a tool for quantitatively analyzing the qualities present (or absent) in the production code.
One example: A test will need to access the production entity that it is testing, obviously. However, sometimes a test needs to access another entity or entities as well, even though they are not currently under test. We sometimes refer to these collectively as a “fixture” for the test.
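A hypothetical fixture makes this visible (the domain classes below are invented for illustration). To test `Order`, the test must also construct a `Customer` and a `PriceList`, neither of which is under test; those extra objects are the fixture, and how many of them a test needs hints at how coupled the entity under test is:

```java
// Collaborators that are not under test, but are required anyway.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

class PriceList {
    int priceOf(String sku) { return "WIDGET".equals(sku) ? 250 : 0; }
}

// The entity actually under test: it cannot exist without the two above.
class Order {
    private final Customer customer;
    private final PriceList prices;
    private int total = 0;

    Order(Customer customer, PriceList prices) {
        this.customer = customer;
        this.prices = prices;
    }

    void add(String sku) { total += prices.priceOf(sku); }
    int total() { return total; }
}

public class OrderTest {
    public static void main(String[] args) {
        // Fixture: two collaborators built before Order can be created.
        Customer customer = new Customer("Ann");
        PriceList prices = new PriceList();

        Order order = new Order(customer, prices);
        order.add("WIDGET");
        assert order.total() == 250;
        System.out.println("ok");
    }
}
```

If the average test in a suite needs many such collaborators, that is a measurable signal of coupling in the production design.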
Continue reading “TDD and Coupling”