Specifying a workflow in TDD means writing a test that says, “When entity A is called upon to accomplish a task, it must interact with entity B in the following way.” This is done when the interaction in question is part of a required workflow, and not simply an implementation decision that should be changeable.
The best way to accomplish this is to create a mock of entity B. Such a mock would “log” how it is used by entity A at test time, and then allow the test to examine the log and compare it to what should have happened. There are many ways to accomplish this. Continue reading “Specifying Workflows in TDD, Part 2: How”
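As a minimal sketch of this idea (the entity names and methods here are hypothetical, not taken from the post), a hand-rolled mock of entity B can record each call it receives so the test can compare that log to the required workflow:

```python
import unittest

class MockB:
    """Hand-rolled mock of entity B that logs how it is used."""
    def __init__(self):
        self.log = []

    def prepare(self):
        self.log.append("prepare")

    def execute(self, task):
        self.log.append(f"execute:{task}")

class EntityA:
    """Hypothetical entity under test; it delegates its work to B."""
    def __init__(self, b):
        self.b = b

    def accomplish(self, task):
        # The required workflow: prepare B, then execute the task.
        self.b.prepare()
        self.b.execute(task)

class WorkflowSpec(unittest.TestCase):
    def test_a_drives_b_in_the_required_order(self):
        mock_b = MockB()
        EntityA(mock_b).accomplish("report")
        # Examine the log and compare it to what should have happened.
        self.assertEqual(["prepare", "execute:report"], mock_b.log)
```

A mocking library could record these calls for you, but the hand-rolled version makes the “log and compare” mechanism explicit.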
The term “workflow” in this sense is meant to indicate the way two system elements interact when performing a task. Often these interactions themselves are not exposed outside the system, and so only their resulting behavior should be tested. This is as true in TDD as it is in traditional testing.
Most workflows are implementation details that developers should be able to change so long as the right behavior is still achieved. If the developers have a better idea, or technology improves, these changes should not break tests. Continue reading “Specifying Workflows in TDD, Part 1: Why”
Recently I wrote about the necessity in TDD of specifying both sides of a behavioral boundary, and its epsilon. This is important because we want a complete, detailed record of the critical business rules and other values in the system.
But there is another reason that this is important. It has to do with the notion of “a specification.” Continue reading “Specifying Boundaries in TDD: Part 2”
Often a given behavior will change due to a certain condition: sometimes the system will behave one way, sometimes another. We call this kind of behavioral change a “boundary” and it should be specified as such in TDD.
For example, let’s say there is a business rule that dictates how a passed-in value should be handled. If the value is greater than 100, we decrease it by 10%, a kind of quantity discount. Continue reading “Specifying Boundaries in TDD: Part 1”
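A sketch of how such a boundary gets specified (the function name and the assumption of integer inputs are mine, not from the post): the tests pin down both sides of the boundary and its epsilon.

```python
def apply_discount(value):
    """Business rule: values greater than 100 get a 10% quantity discount."""
    if value > 100:
        return value * 0.9
    return value

# Specify both sides of the boundary, and its epsilon.
# Assuming integer quantities, the epsilon is 1, so 100 and 101
# together pin the boundary down exactly.
assert apply_discount(100) == 100          # at the boundary: no discount
assert apply_discount(101) == 101 * 0.9    # just past it: discounted
```

Testing only, say, 50 and 200 would leave the exact location of the boundary unrecorded; the pair 100/101 is what makes the specification complete.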
TDD is different from QA in many respects. Part of this involves the tests we choose to write. A case in point is the specification of public constants. Continue reading “Specifying Constants in TDD”
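One way such a specification might look (the constant and its value here are hypothetical): from a QA point of view a test that simply echoes a constant seems tautological, but from a specification point of view it records a published decision.

```python
# Hypothetical public constant that client code depends on.
MAX_RETRIES = 3

def test_max_retries_is_part_of_the_public_contract():
    # Echoing the value looks redundant, but it turns an unrecorded
    # decision into an executable specification: changing the constant
    # now requires a deliberate change to the spec as well.
    assert MAX_RETRIES == 3

test_max_retries_is_part_of_the_public_contract()
```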
Because TDD is “test” driven development, people tend to think of TDD as “writing tests first.” In fact, TDD is not a testing activity per se. It is the creation of an executable specification prior to the creation of each system element. Unit tests are a very useful by-product of this process.
Because of this point of view, TDD dictates that different tests be written, or written differently than the QA point of view would lead us to do. There are points of overlap, but there are distinct differences, and this means that those new to TDD often miss certain important tests. Continue reading “Commonly Missing Tests in TDD”
Design patterns in software came from the work of the Gang of Four in the mid-1990s. TDD was first promoted around the same time, as part of eXtreme Programming. Some have suggested that these two points of view stand in opposition to each other, saying:
Design patterns are about up-front design, while TDD is about emerging design through the test-first process.
In truth, TDD and design patterns are highly synergistic. Understanding each of them contributes to your understanding of the other. Continue reading “TDD and Design Patterns”
When a test precedes development, it essentially becomes the first “client” for the behavior being developed. This fact is helpful in several ways.
First, the interfaces TDD drives into the system are always client-focused. They are not implementation-focused because at the moment they are created there is no implementation yet. In their seminal book on Design Patterns, the Gang of Four recommended, among other things, that we “design to interfaces.” TDD promotes this in a fundamental way.
Also, the tests themselves provide a glimpse into the qualities of future clients. For example, Continue reading “Tests are Client Number 1”
When organizations adopt TDD as their development paradigm, early results can be quite good once the teams get over the initial learning curve. Code quality goes up, defect rate goes down, and the team gains confidence which allows them to be aggressive in pursuing business value.
But there is a negative trend that can emerge as the test suite grows in size over time. Continue reading “TDD “Good” Tests Part 3. There must be no other test that fails for this reason”
When a test fails for a reason other than the one intended, whoever investigates the failure will naturally assume it failed for the intended reason. The failure will thus mislead the team into investigating the wrong problem.
Anything that wastes developer time is to be avoided resolutely. Developer time is a crucial resource in modern development: a) you need it to get anything done, and b) you cannot make more of it than you have. There are only so many hours in the day, and only so much time, focus, and energy a given person can devote to a task. Wasting this resource is like burning money in the street. Continue reading “TDD “Good” Tests Part 2. The test must never fail for any other reason”