TDD is typically part of an agile process, which means we embrace change: new requirements flow into the team’s work either on a time-boxed cadence or through some kind of pull system (like Kanban). In TDD, a new requirement always starts out as a new, failing test or “specification.” We write the test to express the requirement before it has been fulfilled.
Then, work is done in the system to make this new test pass. Over time, however, developers begin to experience a syndrome where making the new test pass makes older tests fail. Those tests must be maintained (you cannot leave tests “in the red”), which burdens the team. This problem tends to get worse over time.
Some interpret this as an inherent cost of TDD, but in fact it is an indication of coupling. If one test causes another test to fail, then it would appear the tests are coupled to each other. But testing frameworks are designed to stringently isolate tests from one another, so they cannot affect each other directly. What this means is that the effect observed in the tests is actually an indicator of excessive coupling in the system. The tests are coupled to each other through their shared connection to the production code.
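A minimal sketch may make this concrete. All names here (Invoice, TAX_RATE, the 10% and 12% rates) are hypothetical, invented for illustration; the point is only that neither test references the other, yet they are coupled through the production code they both exercise:

```python
# Hypothetical sketch of tests coupled through shared production code.

class Invoice:
    TAX_RATE = 0.10  # a policy baked directly into the production code

    def total(self, subtotal):
        # Any change to the tax policy ripples into every test
        # that asserts on a concrete total.
        return round(subtotal * (1 + self.TAX_RATE), 2)

def old_test():
    # An older test whose assertion hard-codes the 10% policy.
    assert Invoice().total(100.0) == 110.0

def new_test():
    # A new requirement arrives: tax is now 12%. The obvious "fix"
    # is to change TAX_RATE to 0.12 -- which makes old_test fail,
    # even though the two tests never touch each other.
    assert Invoice().total(100.0) == 112.0

old_test()  # passes today; making new_test pass would break it
```

In a sketch like this, the diagnosis the syndrome offers is that the tax policy is duplicated across tests and production code; the cure is to restructure so that each rule is owned in one place, not to abandon the tests.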
We don’t “live with this.” We use it as a diagnostic tool to improve the health of the system.
This is Scott Bain. Visit us at www.netobjectives.com.