It’s not unusual these days for development organizations to adopt a code coverage requirement. This is usually expressed as a percentage: at least X% of all code developed must be covered by tests.
Coverage measurement tools are then used as a process gate: the team must achieve the minimum coverage level before code can be checked in. This is pointless and may be dangerously misleading. Code coverage tools can only measure how many lines of code are executed by tests, not what the tests do with the results of that execution.
A test can call all the public methods of a class, assert that 1+1=2, and coverage will appear to have been achieved. Why would a developer do this? To get past the process gate.
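As a sketch of how this plays out (the function and test names here are hypothetical), a coverage-gaming test might look like this:

```python
def apply_discount(price, rate):
    """Hypothetical production code: reduce price by a fractional rate."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_apply_discount_for_coverage_only():
    # Executes every line of apply_discount, including the error branch,
    # so a coverage tool reports the function as fully covered...
    apply_discount(100, 0.2)
    try:
        apply_discount(100, 2)
    except ValueError:
        pass
    # ...but the only assertion has nothing to do with the code under test.
    # A bug in apply_discount would never fail this test.
    assert 1 + 1 == 2
```

The coverage report for this test is indistinguishable from one produced by a test that actually asserts `apply_discount(100, 0.2) == 80.0`, which is exactly why the metric alone tells you nothing about test quality.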
In TDD we don’t measure code coverage for this purpose. We don’t need to. All code is written to satisfy failing tests. These tests were written to express a requirement and provide needed guidance to developers, and are therefore meaningful. Developers write these tests to help make their work and careers successful, not to satisfy an externally imposed regulation.
Don’t trust semantic-free code coverage measurements. Trust the TDD process and its connection to self-interest.