TDD depends on a strong connection between the automated test suite and the system itself. The suite records the specification that the system implements, and the connection allows that to be confirmed at any point.
The problem is that automated tests pass by default. If errors creep into the test code and break its connection to the system, so that the tests are no longer really verifying anything, they will still pass under most circumstances.
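A minimal sketch of how this happens, using Python's `unittest` and a hypothetical `apply_discount` function standing in for the system. The test's fixture list has been accidentally emptied, so its assertion never executes, yet the suite still reports green:

```python
import unittest

def apply_discount(price, rate):
    """System under test: returns price reduced by rate."""
    return price * (1 - rate)

class TestDiscount(unittest.TestCase):
    def test_discounts(self):
        # Test bug: the fixture list is empty, so the loop body -- and
        # every assertion in it -- never runs. The test is vacuous,
        # yet it passes, and keeps passing no matter how the system breaks.
        cases = []  # should hold (price, rate, expected) tuples
        for price, rate, expected in cases:
            self.assertEqual(apply_discount(price, rate), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True -- a green bar that proves nothing
```

Nothing in the test runner can tell this vacuous pass apart from a meaningful one; only the process described below can.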
As with any code base, the larger the suite becomes, the more likely it is that such errors will be made. This would seem to imply a finite size beyond which a test suite becomes unreliable.
The solution is this: not only must every behavior of the system be driven from a test that fails initially, but so must every change to the system going forward.
When a behavior must change, in TDD the test is changed first and run to observe its failure. This proves the test is legitimate (it can fail). Then the change is made to the system, and the test is run again and observed to pass even though the test code was not touched. This confirms both that the right change was made and that the tests and the system are still strongly connected.
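The red/green sequence above can be sketched with plain asserts and a hypothetical `greet` function (both names are illustrative, not from the original post):

```python
# Current system behavior.
def greet(name):
    return "Hello, " + name

# Step 1: the requirement changes -- greetings must now be uppercase.
# The test is updated FIRST and run against the unchanged system.
def test_greet():
    assert greet("Ada") == "HELLO, ADA"

try:
    test_greet()
    print("green")  # would indicate a broken test: it cannot fail
except AssertionError:
    print("red")    # expected: the unchanged system fails the new test,
                    # proving the test is legitimate

# Step 2: change the system, leaving the test untouched.
def greet(name):
    return ("Hello, " + name).upper()

test_greet()        # passes now: the right change was made, and the
print("green")      # test/system connection is confirmed
```

Observing the failure in step 1 is the whole point: skipping it is how vacuous tests slip into a suite undetected.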
TDD is a process. It only works if you follow it.