TDD and Naming: Part 2


Tests often establish example values used to compare the actual behavior of the system with the expected behavior indicated in the requirements. For example, if we had a system behavior that converted a temperature expressed in Fahrenheit to one expressed in Celsius, then the test that specified this might have an assertion along these lines:

assertEquals(100, converter.fahrenheitToCelsius(212));

However, using these values directly in the assertion sidesteps an opportunity to express their meaning in the specification. Why did we choose 212 and 100, specifically? If there is a reason, then we would want to capture that information as well, in order to form a more complete specification.
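For context, here is a minimal sketch of such a converter. The class name and integer arithmetic are assumptions inferred from the assertion above; the post shows only the test, not the production code:

```java
// Hypothetical converter, sketched only to give the assertion context.
// Integer arithmetic matches the int-valued example in the post.
class TemperatureConverter {
    int fahrenheitToCelsius(int fahrenheit) {
        return (fahrenheit - 32) * 5 / 9;
    }
}
```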

Introducing temporary variables, sometimes called “instrumented values,” creates this opportunity. For example:

int boilingPointFahrenheit = 212;
int boilingPointCelsius = 100;
assertEquals(boilingPointCelsius, converter.fahrenheitToCelsius(boilingPointFahrenheit));

This not only captures the semantics of the test more completely, but also helps to separate the Given/Setup (the instrumented values are part of this) from the When/Trigger (the call to the method under test).

This also makes the test more readable as a specification.
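Assembled into a complete JUnit-style test, that separation might look like the following sketch. The converter is stubbed inline so the example compiles on its own; the class and method names are illustrative, not from the original post:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Minimal stand-in for the production class under test.
class Converter {
    int fahrenheitToCelsius(int f) {
        return (f - 32) * 5 / 9;
    }
}

public class ConverterTest {
    @Test
    void boilingPointConvertsFromFahrenheitToCelsius() {
        // Given / Setup: instrumented values that explain why 212 and 100 were chosen
        int boilingPointFahrenheit = 212;
        int boilingPointCelsius = 100;
        Converter converter = new Converter();

        // When / Trigger: the behavior under specification
        int actual = converter.fahrenheitToCelsius(boilingPointFahrenheit);

        // Then / Verify: compare against the named expectation
        assertEquals(boilingPointCelsius, actual);
    }
}
```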

2 thoughts on “TDD and Naming: Part 2”

  1. I appreciate this reminder about using good names to clarify intent. This is simply a basic practice of good software engineering. At a higher level, it seems to me that nearly all good software engineering practices should apply equally to both TDD tests and production code.
    Therefore my question: Are there any Generally Accepted Software Practices (GASP) that do NOT apply to TDD tests? Off hand, I can’t think of any.

    1. In my opinion there are. I’ll give two examples.

1) Verbose method names. In production code they are a smell of poor cohesion, and they invite errors. In TDD, however, the test methods we create should form a complete narrative view of behavior, as they are part of a specification. Also, nobody will ever type them into code; they are run by an automated test-runner.
2) Empty catch blocks. An empty catch block (“swallowing an exception”) is seen by most as a big no-no in production code, and for good reason. However, in TDD, when the system *should* throw an exception under a given circumstance, and it does, this should cause a test to pass. The test should fail if the exception does not appear. I like to specify it like this:

      try {
          // some code that should throw the exception
          fail("Exception should have been thrown and was not");
      } catch (SpecifiedException expected) {}

      There is nothing needed in the catch block here, because the behavior (the fail) is in the try, and unit tests pass by default. I would never do such a thing in production code.
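For comparison, JUnit 4.13+ and JUnit 5 offer assertThrows, which captures the same specification without an empty catch block. The service class and the exception chosen here are illustrative, not from the comment above:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Illustrative class whose specified behavior is to reject invalid input.
class AccountService {
    void withdraw(int amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
    }
}

public class AccountServiceTest {
    @Test
    void withdrawRejectsNegativeAmounts() {
        AccountService service = new AccountService();
        // Passes only if the expected exception is actually thrown.
        assertThrows(IllegalArgumentException.class,
                () -> service.withdraw(-1));
    }
}
```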
