Commonly Missing Tests in TDD

Because TDD stands for “test-driven development,” people tend to think of it as “writing tests first.” In fact, TDD is not a testing activity per se. It is the creation of an executable specification prior to the creation of each system element. Unit tests are a very useful by-product of this process.

Because of this distinction, TDD leads us to write different tests, or to write tests differently, than a QA point of view would. There are points of overlap, but there are distinct differences, which means that those new to TDD often miss certain important tests.
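To make “executable specification” concrete, here is a minimal sketch in JUnit 5. The Discount class and its rule are hypothetical, invented for illustration; the point is that the test states the required behavior before any implementation exists.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountSpecification {

    // The test is written first; it specifies the behavior before the code exists.
    @Test
    void ordersOfOneHundredDollarsOrMoreReceiveTenPercentDiscount() {
        Discount discount = new Discount();
        assertEquals(90.00, discount.apply(100.00), 0.001);
    }

    // Minimal implementation added afterward to make the specification pass.
    static class Discount {
        double apply(double orderTotal) {
            return orderTotal >= 100.00 ? orderTotal * 0.90 : orderTotal;
        }
    }
}
```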

TDD and Design Patterns

Design patterns in software came from the work of the Gang of Four in the mid-1990s. TDD was first promoted at around the same time, as part of eXtreme Programming. Some have suggested that these two points of view stand in opposition to each other, saying:

Design patterns are about up-front design, while TDD is about emerging design through the test-first process.

In truth, TDD and design patterns are highly synergistic. Understanding each of them contributes to your understanding of the other.
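As one illustration of the synergy (my example, not the post’s): a test that demands two interchangeable behaviors pushes the design toward the GoF Strategy pattern almost on its own. The shipping domain and numbers below are hypothetical; JUnit 5 is assumed.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShippingCostTest {

    // Asking for interchangeable behaviors drives the design toward
    // a Strategy interface rather than a conditional.
    interface ShippingStrategy {
        double costFor(double weightKg);
    }

    @Test
    void groundAndAirShippingAreInterchangeableStrategies() {
        ShippingStrategy ground = weightKg -> 5.00 + 1.00 * weightKg;
        ShippingStrategy air    = weightKg -> 12.00 + 2.50 * weightKg;

        assertEquals(7.00, ground.costFor(2.0), 0.001);
        assertEquals(17.00, air.costFor(2.0), 0.001);
    }
}
```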

Tests are Client Number 1

When a test precedes development, it essentially becomes the first “client” for the behavior being developed. This fact is helpful in several ways.

First, the interfaces TDD drives into the system are always client-focused. They are not implementation-focused because, at the moment they are created, there is no implementation yet. In their seminal book on design patterns, the Gang of Four recommended, among other things, that we “program to an interface, not an implementation.” TDD promotes this in a fundamental way.

Also, the tests themselves provide a glimpse into the qualities of future clients.
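A sketch of the idea, with hypothetical names: the test below is the first client of a ReportFormatter, so the interface it drives reflects what a client needs rather than how an implementation works.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportFormatterTest {

    // The interface a client actually wants: hand over data, get a report.
    interface ReportFormatter {
        String format(String title, double total);
    }

    @Test
    void formatsTitleAndTotalForDisplay() {
        // As the first client, the test chooses the calling convention;
        // no implementation detail has had a chance to leak in yet.
        ReportFormatter formatter = (title, total) ->
                title + ": $" + String.format("%.2f", total);

        assertEquals("Q1 Sales: $1250.00", formatter.format("Q1 Sales", 1250.0));
    }
}
```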

TDD “Good” Tests Part 3. There must be no other test that fails for this reason

When organizations adopt TDD as their development paradigm, early results can be quite good once the teams get over the initial learning curve. Code quality goes up, the defect rate goes down, and the team gains confidence, which allows them to be aggressive in pursuing business value.

But there is a negative trend that can emerge as the test suite grows in size over time.
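A hypothetical sketch of the redundancy this rule guards against: both tests below exercise the same discount rule, so a single defect fails both, and the suite reports one problem twice while costing twice as much to maintain.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRuleTests {

    static double discounted(double total) {
        return total >= 100.0 ? total * 0.90 : total;
    }

    @Test
    void largeOrdersAreDiscountedTenPercent() {
        assertEquals(90.0, discounted(100.0), 0.001);
    }

    // Redundant: the same rule with different numbers. A single defect in
    // the rule now produces two failures that carry the same information.
    @Test
    void largeOrdersAreDiscountedTenPercentAgain() {
        assertEquals(180.0, discounted(200.0), 0.001);
    }
}
```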

TDD “Good” Tests Part 2. The test must never fail for any other reason

When a test fails for a reason other than the one intended, the natural assumption upon investigating that failure will be that it is failing for the intended reason. Thus, the failure will mislead the team into investigating the wrong problem.

Anything that wastes developer time is to be avoided resolutely. Developer time is a critical resource in modern development in that a) you need it to get anything done, and b) you cannot make more of it than you have. There are only so many hours in the day, and only so much time, focus, and energy a given person can devote to a task. Wasting this resource is like burning money in the street.
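A sketch of the failure mode, using an invented example: the first test below is meant to verify a tax rule, but it can also fail because a parsing concern changed; the second can fail only for the reason intended.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TaxCalculationTest {

    static double tax(double amount, double rate) {
        return amount * rate;
    }

    static double parseAmount(String csvRow) {
        return Double.parseDouble(csvRow.split(",")[1]);
    }

    // Fragile: intended to verify the tax rule, but it can also fail because
    // the CSV format changed, a reason other than the one intended.
    @Test
    void taxOnParsedRow() {
        assertEquals(8.0, tax(parseAmount("order-1,100.00"), 0.08), 0.001);
    }

    // Isolated: the only way this fails is if the tax rule itself is wrong.
    @Test
    void taxIsAmountTimesRate() {
        assertEquals(8.0, tax(100.0, 0.08), 0.001);
    }
}
```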

TDD “Good” Tests Part 1. The test must reliably fail for the reason intended

TDD requires an expenditure of developer effort. All such effort is an investment, and thus should yield a return. TDD returns value in many ways, but here I will focus on one way in particular:

Tests prove their worth when they fail.

When a test fails, this is the point at which we say “wow, we’re glad we wrote that test,” because otherwise there would be an undetected defect in the system. But we can also ask how *much* value a test’s failure provides, and the answer is that the value is relative to the quality of the information the failure gives us.

When a test fails for the reason it was intended to, this tells us several things.
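One way to make the reason for failure unmistakable (a sketch under my own assumptions, not the post’s list): give the test a single intent and a single assertion, so a red bar points at exactly one behavior. The Account class here is hypothetical.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

class WithdrawalTest {

    static class Account {
        private double balance;
        Account(double opening) { balance = opening; }
        boolean withdraw(double amount) {
            if (amount > balance) return false;
            balance -= amount;
            return true;
        }
    }

    // One intent per test: if this fails, the overdraft rule is broken,
    // and nothing else needs to be investigated.
    @Test
    void withdrawalLargerThanBalanceIsRefused() {
        Account account = new Account(50.0);
        assertFalse(account.withdraw(75.0), "overdraft should be refused");
    }
}
```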

“Good” Tests in TDD

As consultants, we are often asked to review the work of others. One of the things we review is the quality of the design of some part of the system. Is it cohesive, decoupled, non-redundant, encapsulated, open-closed, and so forth? Often the developers understand and agree that these qualities are important, but they are not certain they have achieved them adequately.

I often start like this: “I don’t know. Can you write a good test for it?”

I can ask this even before I look at their work because I know that bad designs are notoriously hard to test. It’s a great way to start an evaluation.
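To see why the question is diagnostic, consider a hypothetical example: a late-fee policy that read the system clock internally would be hard to test, while the same behavior with the date passed in tests trivially. Testability and good dependency structure arrive together.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class LateFeeTest {

    // A version that called LocalDate.now() internally would need the real
    // clock to cooperate; injecting the date makes the design testable.
    static class LateFeePolicy {
        double feeFor(LocalDate due, LocalDate today) {
            long daysLate = ChronoUnit.DAYS.between(due, today);
            return daysLate > 0 ? daysLate * 0.50 : 0.0;
        }
    }

    @Test
    void chargesFiftyCentsPerDayLate() {
        LateFeePolicy policy = new LateFeePolicy();
        assertEquals(1.50, policy.feeFor(
                LocalDate.of(2024, 1, 1),
                LocalDate.of(2024, 1, 4)), 0.001);
    }
}
```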

TDD and Reported Defects

Most organizations have some type of reporting mechanism allowing customers to alert them to defects they have encountered. Typically, a “trouble ticket” or similar artifact is generated, and someone is assigned the task to 1) locate and then 2) fix the errant code.

TDD views this very differently.

In TDD, a “defect” is code that causes a test to fail after development was thought to have been completed. If buggy code makes it into production and is released to customers, this is not a defect. It is a missing test.
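Under this view, the first response to a trouble ticket is to write the missing test, watch it fail against the buggy code, and only then fix the code. A hypothetical sketch, shown here after the fix:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportedDefectTest {

    static int itemsPerPage(int totalItems, int pages) {
        // The fix: guard the division the original code did unconditionally.
        return pages == 0 ? 0 : totalItems / pages;
    }

    // Step 1: translate the trouble ticket into the test that was missing.
    // Against the buggy code it fails (ArithmeticException), which both
    // confirms the report and pins the behavior down permanently.
    @Test
    void zeroPagesMeansZeroItemsPerPage() {
        assertEquals(0, itemsPerPage(10, 0));
    }
}
```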

Refactoring Applied to TDD

“Refactoring” refers to the discipline of improving the design of existing code without changing its behavior. It is usually thought of as a way to deal with legacy code that is functional but poorly designed and thus hard to work with.

Since TDD focuses on driving new behavior from tests, how would refactoring play a role in a TDD team? In three ways.
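As a small illustration of improving design without changing behavior (my example, not one of the post’s three ways): a passing test holds the behavior still while the implementation is restructured underneath it.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GreetingTest {

    // Before refactoring this was a tangle of conditionals; the refactored
    // version expresses the same observable behavior more clearly.
    static String greet(String name) {
        String safeName = (name == null || name.isBlank()) ? "friend" : name.trim();
        return "Hello, " + safeName + "!";
    }

    // This test passed before the refactoring and must pass after it;
    // that is what "without changing its behavior" means in practice.
    @Test
    void greetsByNameAndDefaultsWhenMissing() {
        assertEquals("Hello, Ada!", greet(" Ada "));
        assertEquals("Hello, friend!", greet(null));
    }
}
```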