It’s not GETTING Agile that takes the time, it’s the NOT getting it

I had a mentor a long time ago who would say, “It’s not getting it that takes the time, it’s the NOT getting it that takes the time.” I have seen this over and over again. When I was learning design patterns 20+ years ago, I spent 6 months NOT getting it. Then, in one 15-minute (imaginary) conversation with Christopher Alexander, I had the epiphany that led to my truly understanding what patterns are (they are well beyond “solutions to a recurring problem in a context”). That insight became the basis of Design Patterns Explained, the book Jim Trott and I wrote.

6 months NOT getting it.

15 minutes GETTING it.

Most of the 6 MONTHS of NOT getting it was studying and thinking about patterns. The 15 minutes of GETTING it was preceded by 6 HOURS of working on my problems.

The insight was not based on information but came from a slight mind-shift.

Maybe we need to learn by doing, not by sitting in a course where we are not working on our own tasks. This is the basis of scaled, flipped-classroom learning.

Refactoring Applied to TDD

“Refactoring” refers to the discipline of improving the design of existing code without changing its behavior. It is usually thought of as a way to deal with old legacy code that is functional but poorly designed and thus hard to work with.

Since TDD focuses on driving new behavior from tests, how would refactoring play a role in a TDD team? In three ways: Continue reading “Refactoring Applied to TDD”
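One of those ways can be sketched concretely. Below is a minimal, hypothetical illustration (the names total_price, apply_bulk_discount, and the discount rule are invented for this example, not taken from the article): because a test pins down the behavior, the design underneath it can be improved without fear of changing what the code does.

```python
def total_price(items):
    # Original version: pricing and discount policy tangled together.
    total = 0
    for price, quantity in items:
        total += price * quantity
    if total > 100:
        total = total * 0.9  # bulk discount buried in the loop's function
    return total

# Refactored: same behavior, discount policy separated into its own function.
def apply_bulk_discount(subtotal, threshold=100, rate=0.9):
    return subtotal * rate if subtotal > threshold else subtotal

def total_price_refactored(items):
    subtotal = sum(price * qty for price, qty in items)
    return apply_bulk_discount(subtotal)

# The same checks pass before and after the refactoring:
assert total_price([(10, 2)]) == total_price_refactored([(10, 2)]) == 20
assert total_price([(30, 2), (25, 2)]) == total_price_refactored([(30, 2), (25, 2)])
```

The tests act as a safety net: if the refactoring had accidentally changed behavior, they would fail.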

Mocking as a Design Smell

In TDD, we seek to create granular, unique tests: tests that fail for a single reason only. To achieve this, when testing an entity that has dependencies, a typical way to prevent the test from failing for the wrong reason is to create mocks of those dependencies. At its simplest, a mock is a replacement for a dependency that the test controls.

As with every part of TDD, mocking can tell you things about the design of your system. Continue reading “Mocking as a Design Smell”
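The idea above can be sketched in a few lines. This is a hypothetical example (WeatherReporter and its station dependency are invented names) using Python's standard unittest.mock: the mock stands in for the dependency, so the test can only fail because of the unit under test.

```python
from unittest.mock import Mock

class WeatherReporter:
    def __init__(self, station):
        # Dependency: a real station might hit hardware or a network.
        self.station = station

    def report(self):
        temp = self.station.current_temperature()
        return "cold" if temp < 10 else "warm"

# The mock replaces the dependency; the test controls what it returns.
station = Mock()
station.current_temperature.return_value = 4
reporter = WeatherReporter(station)
assert reporter.report() == "cold"
```

If this test fails, it can only be because WeatherReporter's logic is wrong, not because a thermometer was unplugged.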

TDD and the Separation of Concerns

One aspect of strong design is that separation is created between the various concerns of the system. This adds clarity, promotes re-use, improves cohesion, and in general adds value to the work.

It can be difficult to know, however, if one has separated things sufficiently, or perhaps has overdone it. This is one area where TDD can help.

Example: An object with asynchronous behavior has, at minimum, two categories of concerns.
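As a rough sketch of that example (the names compute_total and AsyncCalculator are invented for illustration), the two categories might be the logic itself and the asynchronous dispatch. Separating them lets each be tested on its own:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_total(values):
    # First concern: the logic itself, testable synchronously with no threads.
    return sum(values)

class AsyncCalculator:
    # Second concern: asynchronous scheduling only; it delegates the logic.
    def __init__(self, executor):
        self.executor = executor

    def total_async(self, values):
        return self.executor.submit(compute_total, values)

# The logic alone needs no threading machinery to test:
assert compute_total([1, 2, 3]) == 6

# The asynchronous concern is exercised separately:
with ThreadPoolExecutor(max_workers=1) as pool:
    future = AsyncCalculator(pool).total_async([1, 2, 3])
    assert future.result() == 6
```

When the two concerns are tangled together, every test of the logic must also deal with threads, which is a hint that the separation is missing.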

Testing vs. Testability

We should consider testing, which is an activity, versus testability, which is a quality of design.

If we start with the word “test” itself, we note that it is both a noun and a verb. I can say, “I have a test for that behavior,” or I can “test a behavior.” As a noun, it is an artifact that can express the nature of a desired behavior (if the test is written first) and capture that knowledge for the future. As a verb, it can drive that behavior into the code and then subsequently verify its correctness.

“Testability”, on the other hand, is a quality of the system being tested. What kinds of tests can you write about it? What kinds of tests do you have to write about it? What will tests tell you when they fail? How much effort is required to create these tests?
Continue reading “Testing vs. Testability”
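The noun/verb distinction can be made concrete. In this minimal sketch (leap_year and the test names are invented for illustration), the test class is the noun, an artifact specifying behavior, and running it is the verb:

```python
import unittest

def leap_year(year):
    # Hypothetical behavior under test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearSpec(unittest.TestCase):
    # As a noun: this artifact expresses and records the desired behavior.
    def test_century_is_not_a_leap_year(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_a_leap_year(self):
        self.assertTrue(leap_year(2000))

# As a verb: running the suite verifies the behavior's correctness.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearSpec)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```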

TDD and the OCP

The Open-Closed Principle (Bertrand Meyer, Ivar Jacobson) states, “Software entities (such as classes, modules, and functions) should be open for extension, but closed for modification.”

This means that a “good” design will allow for a new behavior to be added to a system without having to change the existing code, or at least to minimize those changes.

Of course, one cannot perfectly achieve such a thing, but trying to get as close as possible leads to systems that are far more resilient and extensible in the face of new business challenges/opportunities.

TDD relates to this in the following way. Continue reading “TDD and the OCP”
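A compact sketch of the principle itself (Shape, Rectangle, Circle, and total_area are invented names for illustration): new behavior arrives as a new class, and the existing code is untouched.

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self): ...

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

def total_area(shapes):
    # Closed for modification: works for any future Shape subclass.
    return sum(s.area() for s in shapes)

# Open for extension: new behavior added without touching the code above.
class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

assert total_area([Rectangle(2, 3)]) == 6
assert abs(total_area([Rectangle(2, 3), Circle(1)]) - (6 + math.pi)) < 1e-9
```

Note that the tests written for Rectangle and total_area did not have to change when Circle was added; that stability is one symptom of a design that honors the principle.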

TDD and Encapsulation

The type and nature of the tests that you write in TDD help you to understand how strongly your system is encapsulated.

Everything the system must do, and yet might not, needs a test. Here, “must do” comes from your stakeholders’ requirements, and is therefore connected to business value. The more your tests are about these issues, the clearer your specification will be.

Everything the system must not do, and yet might, needs a test. Here, “might” means that the unwanted behavior is possible, that it is not prevented in some way. Anything that the compiler, linker, etc., will catch and report is prevented and does not require a test. If such tests were written, it would be impossible for them to fail. Continue reading “TDD and Encapsulation”
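The two categories can be sketched with a hypothetical Account class (the name and its rules are invented for illustration): one test for what it must do, and one for what it must not do yet could, since nothing in the language prevents it.

```python
class Account:
    def __init__(self):
        self._balance = 0  # encapsulated: changed only through the methods below

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance

acct = Account()
acct.deposit(50)             # must do: accept a valid deposit
assert acct.balance == 50

try:                         # must not do, yet might: overdraw the account
    acct.withdraw(100)
    assert False, "overdraft should have been rejected"
except ValueError:
    pass                     # the unwanted behavior is prevented, and tested
```

No test is needed to show that, say, `acct.balance("hello")` with too many arguments fails: the language itself reports that, so such a test could never fail for a meaningful reason.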

TDD and the Single Responsibility Principle

I mentioned earlier that TDD offers qualitative measurements about production code, namely that a large average fixture size can be used to measure relative coupling in a system. Similarly, tests can reveal whether, and to what extent, the Single Responsibility Principle has been adhered to.

The Single Responsibility Principle states that every class in a design should have a single responsibility. It is an aspect of cohesion in design. The reason that tests will reveal when this principle has been violated has to do with the number of tests needed for that class’ behavior. Continue reading “TDD and the Single Responsibility Principle”
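As a rough illustration (Statistics and PlainTextReport are invented names): a class that both computes and formats needs tests for every combination of statistic and format, while splitting the responsibilities gives each class a small, focused test set.

```python
class Statistics:
    # Single responsibility: computing values.
    def __init__(self, values):
        self.values = values

    def mean(self):
        return sum(self.values) / len(self.values)

class PlainTextReport:
    # Single responsibility: presentation.
    def render(self, label, value):
        return f"{label}: {value:.1f}"

# Each class is tested for its one responsibility, independently:
stats = Statistics([2, 4, 6])
assert stats.mean() == 4.0
assert PlainTextReport().render("mean", stats.mean()) == "mean: 4.0"
```

Had the two responsibilities lived in one class, a change to the report format would force the computation tests to be revisited as well, and the test count would grow multiplicatively rather than additively.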

The Purpose of an Assessment

Assessments are not about where you are. They are about where you want to go. By seeing where you are and what challenges you are having, a roadmap for improvement can be created more effectively.

Assessing can be done in several ways. The most popular Agile method is to see how well the company is doing from the perspective of the approach it is taking. For example, a common assessment for Scrum is the Nokia test, which measures how well teams are doing Scrum. SAFe® has its own assessments. But consider these as assessments of how well a framework is being adopted, not of how well the company is delivering value. We have found that focusing on the work, not the framework, is a better approach.

Because FLEX is based on a model of flow, it can be used to see where an organization is having trouble achieving flow; that is, performing its work with few hand-offs, little turmoil, few delays, and little rework. Reducing these helps achieve business agility. It is more effective to attend to how work is being delayed, or how extra work is being created, than to how well a practice is being followed. Therefore, an assessment should focus on the value stream and what is impeding it.

For more, see Using FLEX to Perform an Assessment for Small-Scale Organizations