It’s not unusual these days for development organizations to adopt a code coverage requirement. This is usually expressed as a percentage: at least X% of all code developed must be covered by tests.
Measurement tools are used as a process gate: the team must achieve this minimum coverage level before code can be checked in. This is pointless and may be dangerously misleading. Code coverage tools can only measure how many lines of code are executed by tests, not what the tests do with the results of that execution. Continue reading “TDD and Code Coverage”
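As a hypothetical illustration (the function and test names below are invented), a test can execute every line of a function, satisfying a coverage tool completely, while asserting nothing about the results:

```python
def apply_discount(price, rate):
    """Compute a discounted price; rate must be between 0 and 1."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_covered_but_meaningless():
    # Executes every line of apply_discount, so a coverage tool
    # reports 100% -- yet there are no assertions, so this test
    # would still pass if the function returned the wrong amount.
    apply_discount(100.0, 0.2)
    try:
        apply_discount(100.0, 2.0)
    except ValueError:
        pass

def test_meaningful():
    # The same lines executed, plus an assertion about the result.
    assert apply_discount(100.0, 0.2) == 80.0
```

Both tests produce identical coverage numbers; only the second one tests anything.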
You cannot meaningfully test that which you do not adequately understand. The time to find that out is before you start development. TDD tells us what we do not know. Sometimes, it tells us what our stakeholders don’t realize they also don’t know.
Imagine you are developing the software for a casino’s poker slot machine (loosely based on a real case). Part of the behavior needed is to shuffle the “cards”, mixing them up into a new order. That would be the stated requirement. If we try to write a test about this, we realize that this is not nearly detailed enough. What is meant by a “new order”? How new? How will we know when the shuffling is adequate? Are there any regulatory requirements about this? Industry standards? Not being casino experts, the developers probably don’t know and would ask the customer. The customer might realize that they aren’t clear themselves.
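A sketch of where those questions surface, using hypothetical names and Python's standard `random` module, shows that some properties of a shuffle are easy to specify and others are exactly the gaps the customer must help close:

```python
import random

def shuffle_deck(deck, rng=random):
    """Hypothetical helper: return a new, shuffled copy of the deck."""
    shuffled = list(deck)
    rng.shuffle(shuffled)
    return shuffled

def test_shuffle_raises_the_right_questions():
    deck = list(range(52))
    shuffled = shuffle_deck(deck, random.Random(42))
    # Easy to specify: no cards gained, lost, or duplicated.
    assert sorted(shuffled) == deck
    # Harder: what does "a new order" mean? Merely "not identical"?
    assert shuffled != deck
    # Hardest: is the distribution of orderings uniform enough to
    # satisfy a regulator or an industry standard? This test cannot
    # say -- that is the question to take back to the customer.
```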
Testing something requires far more rigor than most people apply to their businesses, and that means the development team that does TDD not only finds good questions to ask, but can also help the customer to more fully understand their own business domain. At times, this leads them to realize even more business value than they knew they wanted.
A typical question those adopting TDD ask is: How much testing is enough? Or, put another way, does everything really need to be tested? How do you decide what to test and what not to test?
It’s an interesting question, but I prefer to address it this way: everything will be tested. The real question is, by whom? Will it be you, or someone else? Continue reading “How Much Testing is Enough?”
Tests pay you back for your effort:
- When you are writing them. They help you to understand the problem you are attempting to solve, they reveal gaps in your knowledge, and lead you to useful questions.
- When they fail. They inform you of a defect, and if written well, specifically where that defect is.
- When they pass. When you are enhancing or refactoring the system, passing tests confirm that you are making only the changes you intend to make.
- When you read them later. Tests capture knowledge that might otherwise be lost. And their accuracy can be instantly confirmed at any time in the future, by running them.
TDD does not cause extra work. It is just the opposite; it is one effort that provides value in multiple ways.
A test reacts to everything currently in scope that it does not control. Ideally, that should be only one thing. Everything else in scope must be controlled by the test, or it may react to the wrong thing and give misleading results.
For example, if a production entity uses a service call as part of its implementation, and the service being called is not what the test is testing, then that call must be controlled by the test because it is in scope.
This is a major reason to use a mock object.
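A minimal sketch of this, assuming a hypothetical `PriceQuoter` that depends on a rate service, uses Python's `unittest.mock` to control the in-scope dependency:

```python
from unittest.mock import Mock

class PriceQuoter:
    """Production entity whose implementation calls a rate service."""
    def __init__(self, rate_service):
        self.rate_service = rate_service

    def quote(self, amount):
        return amount * self.rate_service.current_rate()

def test_quote_controls_the_service():
    # The rate service is in scope but is not what we are testing,
    # so the test controls it with a mock rather than calling the
    # real service, whose answer could vary from run to run.
    service = Mock()
    service.current_rate.return_value = 1.25
    quoter = PriceQuoter(service)
    # Any failure now points at quote(), not at the service.
    assert quoter.quote(100) == 125.0
```

Because the mock pins the service's answer, the test reacts only to the one thing it does not control: the quoting logic itself.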
Automated tests pass by default. It is the red test turning green that proves everything.
The red test proves the validity of the test: it can fail. A test that cannot fail indicates an error in the way it was written.
The green test proves the code is accurate to the test. The code is written to pass the test, so we know it will remain covered by that test going forward.
The transition from red to green proves that the test and the code are connected to each other, because we make a failing test pass not by changing the test, but by changing the code.
TDD, therefore, creates meaningful test coverage. Nothing else can ensure this.
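These three proofs can be seen in a small hypothetical red-to-green sequence (the leap-year rule here is just an invented example):

```python
def is_leap_year(year):
    # Step 2 (green): just enough code to make the failing test
    # below pass, including the century rule.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_century_years_are_not_leap_years():
    # Step 1 (red): before the "% 100" clause existed, this test
    # failed -- proving it can fail, and that it is connected to
    # this code rather than passing vacuously.
    assert not is_leap_year(1900)
    assert is_leap_year(2000)
    assert is_leap_year(2024)
```

The red run validates the test, the green run validates the code, and the transition between them validates the connection.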
Test-first yields analysis: it helps us determine what is clear and what is unclear or missing, and it ferrets out misunderstandings. Unit tests are unforgiving; they don’t let you get away with anything.
But Test-Driven Development also creates better design. Bad design is hard to test, and so moving tests into a primary position reveals the pain of a bad design very early, before much commitment has been made to it.
Write your tests first, but learn how to listen to what they tell you about your product design.
Design patterns are often described as “solutions to recurring problems within a context.” But the real power of patterns is in seeing the forces that each pattern resolves. They should be used as a way to help analyze what’s needed to create a quality design. That is the goal.
Given a situation where, say, the Strategy Pattern was not quite present but its concepts could be used, no one who understood patterns would criticize the solution by saying, “Well, that’s not a Strategy Pattern!” So why do we hear these sorts of critiques in the process world? Let’s think about it. Continue reading “How Design Patterns Give Insights Into Process Patterns”
In previous posts, I discussed that the first leg of emergent design is TDD, which provides code quality and sustainability. The second leg is design patterns, which provides insights into handling variation. The third leg is ATDD, which provides us a way of discovering and clarifying the value we will get. Continue reading “The Third Leg of Emergent Design: Acceptance Test-Driven Development (ATDD)”
TDD is the first leg of emergent design, or what could be called Agile Design. Design patterns are the second. They’re often described as “solutions to recurring problems in a context.” In this way they can be thought of as recipes that have been learned. But patterns open the door to much more; they are examples of ways to think. Patterns hide variation: in behavior, in structure, or in which objects to use. Continue reading “Design Patterns: The Second Leg of Emergent Design”