Focus on the work, not the framework

Talk to most any team that is having trouble with Scrum and you’ll see some common patterns of problems:

  • can’t write small stories
  • work doesn’t get finished within a sprint
  • requirements are not clear

Learning how to write small stories with agreed-upon acceptance criteria prior to writing code would be a good idea. Continue reading “Focus on the work, not the framework”

What We Can Learn From Mob Programming

Originally published Dec. 12, 2017

First, full disclosure: I have never used or even seen mob programming. When its creator, Woody Zuill, first mentioned it to me I was intrigued but wasn’t sure it would work efficiently. But, knowing Woody, I didn’t doubt that it worked; I just figured it worked only for small, independent teams. After talking to him recently I have changed my mind. This blog is my understanding of mob programming now, inspired by Woody’s insights. Attribute anything valuable about mob programming to Woody, anything incorrect to me.

The question for me about mob programming has always been: how big can you go with it? I have known that pair programming works (having done it). Of course, people ask the same thing about that – how can two people doing the same thing be effective? That’s a misunderstanding: they are not doing the same thing, they are working together. So we already know that micro-mob programming (a mob of 2) works. But what’s the upper limit? How would you discover it?

In the conversation I just had Woody told me:

With some questions it’s very useful to identify an “opposite” question; this can lead us to more meaningful questions. He said the first question he was ever asked while speaking about Mob Programming was “how can you be productive with five people sitting at one computer?”

The reverse question he came up with was “how can we be productive if we separate the people who should be working together?” The purpose of asking the reverse question is to show that there are more possibilities in the questions we could ask, particularly when the original question is not easily answerable. This led him to a slightly better question: “What are the things that destroy productivity?” And of course, productivity is probably not a good thing to strive for; he usually prefers to talk about effectiveness.

So what destroys productivity (or effectiveness)?

  1. Hand-offs
  2. Waiting
  3. Meetings
  4. Unfinished code that you’re not actively working on
  5. Unclear understanding between team members
  6. Delayed feedback on errors you’ve made
  7. Integration problems

There are probably other things not listed. Woody tells me these things don’t happen when you do mob programming. While I have not seen it, I believe this to be true because Woody is a very trusted source whose only agenda is helping people. It also makes total sense when I stop to think about it. Furthermore, when I look at teams whose individuals don’t work together, I see these things taking up about 80% of their capacity. So even if mob programming is a bit less efficient because more people are working together than strictly needed, eliminating the 80% of waste that teams normally incur probably more than compensates for that. It also produces higher quality code and a broader understanding of how it was built – meaning the team won’t run into the constraint of only one person knowing how it was written.

So how many is too many? I like the observation “In theory, theory and practice are the same, but in practice they are different,” so I’m not looking for a theoretical limit. The real question is: when are people not contributing? Contributing doesn’t mean just doing the work; it includes learning since that, as we mentioned, improves the people and the organization. Woody, of course, has thought this through. This is one of the great things about him – he’s not trying to promote or defend anything, he’s just looking for what works. So the solution is easy: just give people the option of self-selecting out when they feel they are not making a contribution. They will still be close by when needed.

There is another aspect to mob programming.  It sounds like fun.  So give it a try and pick a number you feel comfortable with.  Then try a little bigger (remember, people can self-select out).  Having fun has clear personal value, but also clear business value.

Al Shalloway
CEO, Net Objectives

Postscript: I talked to Woody at the Deliver Agile conference in Nashville this week. Here’s another point on mob programming: it’s all about flow and true value delivery. When I consider that most organizations are running at around 5% efficiency (not a typo – five percent), then mob programming, even if it were a slight waste to have five folks around at once, would be a massive improvement.

Step 1: Acknowledge the need to move from a team focus to systems thinking

“We cannot solve our problems with the same thinking we used when we created them” Einstein

The team focus of 2001 is no longer viable. Early adopters could adopt Scrum without regard to the bigger picture. The key was having the kind of cross-functional team that Scrum and XP were designed for.

But as soon as Agile spread beyond one team, it ran into challenges. Team dynamics are different from organizational dynamics. Scrum ran into problems because forming cross-functional teams required committing someone who was needed on several teams to one team. Agilists didn’t have methods that solved this problem.

Continue reading “Step 1: Acknowledge the need to move from a team focus to systems thinking”

The falsehoods in the truth

I think Scrum is a good framework when it is used in the context it was designed for – a cross-functional team creating a new product. This, of course, represents a very small fraction of the situations where it is used. The variation in where it is used requires a variation in the framework. This seems self-evident to me. But the response I get to this is “people need a well-defined, clear, set place to start.” I agree with this. But does it mean the place to start is the one people are promoting? I don’t think so.

From the Scrum guide, “Scrum’s roles, events, artifacts, and rules are immutable.” To me this means they are not as applicable as possible to most teams’ context since no one size fits all well. And Scrum is designed not to adapt – it works because of how it is defined.

When pressed on this, I get “we must keep it simple.” But this is the second falsehood. It implies that tailoring it to the need at hand will be more complicated. It doesn’t need to be. We must remember we need sufficiency as well – “as simple as possible but no simpler.”

My solution? Get a consultant who can quickly identify the ‘well-defined, clear, set place to start’ that works for your situation. Many consultants can do this, others can’t. Find one who can.

Putting Lean-Kanban practices into Scrum is not the same as being Lean

It’s nice to see the Scrum community finally accepting the importance of Lean and Kanban. But putting Lean/Kanban practices into Scrum does not make Scrum the same as Lean.

Perhaps the biggest difference between the two is where our attention is when we have to remove an impediment. In Scrum our attention is on “how do we remove the impediment while being faithful to Scrum’s practices?” In Lean, there is no attachment to any practice. Instead we ask “how do we remove the impediment by attending to Lean principles?”

In Scrum’s case, the assumption is that the team will figure out how to alleviate this impediment. But the best solution may not include having a stable team. For example, when multiple teams are working on a common codebase it may be best to dynamically form a feature team to work together (think of this as spontaneous mobbing).

This is a practice that I first used over a decade ago when test platforms were highly constrained. It immediately worked well and adoption was simple. The seven teams that worked together to do this were following Scrum (with difficulty) before shifting to this model. Afterwards, they were no longer doing Scrum. We had to think in a Lean way to get to a Lean solution.

The Real Reason the “Agile Wars” Are Destructive – It’s Not What You Think

This was originally published in June, 2014

“I am enthusiastic over humanity’s extraordinary and sometimes very timely ingenuity. If you are in a shipwreck and all the boats are gone, a piano top buoyant enough to keep you afloat that comes along makes a fortuitous life preserver. But this is not to say that the best way to design a life preserver is in the form of a piano top. I think that we are clinging to a great many piano tops in accepting yesterday’s fortuitous contrivings as constituting the only means for solving a given problem.” – Buckminster Fuller

Continue reading “The Real Reason the “Agile Wars” Are Destructive – It’s Not What You Think”

Why Shu Ha Ri and Scrum Can Make for a Dangerous Combination

This was originally published May, 2017

Note: This blog assumes the reader understands the basic roles and practices of Scrum.

Scrum suggests that the way to improve a team’s workflow and the organization within which it works is to remove impediments to its core roles (product owner, team, Scrum Master) and practices (cross-functional teams, daily standups, and using time-boxing for work, demos and building backlogs). It takes an inspect and adapt approach that requires little understanding of the underlying laws of software development other than an acknowledgement that reducing the time for feedback is essential and that small batches are better than large ones.

Continue reading “Why Shu Ha Ri and Scrum Can Make for a Dangerous Combination”

Smart People, XP and Scrum – Is there a pattern?

This blog was originally written January 2010. There is a division in the agile community about whether one should rely on people alone or on people supported by systemic thinking (no one I know of suggests systems alone are enough). This debate is often framed as people over process vs. people and process (or, as Don Reinertsen would say, people times process). I’ve been in the agile community for some time and have seen some interesting things that I think shed light on this debate. This long-time perspective has enabled me to see an interesting pattern. This blog will discuss the pattern of what happens when smart people do not have the proper understanding of what they are doing.

I’ll start with what I consider to be the most embarrassing moment of my career. It was in 1984, 14 years into my development career. I contracted to build the software system that would power the touch-controlled information kiosks at Vancouver Expo ’86. At the time, this was very avant-garde. I was essentially in charge of rewriting a Basic-language prototype in C for both improved performance and features. Since I was experienced in both languages, I remember thinking it’ll be easy – it’s just a rewrite.

There were two main components of the application. Mine was the user component that defined how the system should work: basically, you entered events on a timeline that the system would run when the screen was touched. The other was a run-time component that ran the pseudo-code mine compiled. I sub-contracted someone else to do the executable program because mine looked to be the more complex beast. At the time, I had a reputation for producing functioning code extremely quickly (and yes, I intentionally did not use the word maintainable).

It only took me a few weeks to get the basics of the system up and running – everyone was pleased. I was confident of success because, given this was a rewrite, I figured the customer would know what was needed and I would just be adding functionality. Unfortunately, after they started using the system for a while, bad things started to happen. It seemed every time they wanted a new input feature (e.g., specifying a new event, like touching the screen or starting audio), I would put it in quickly and it would work, but a couple of days later I would find out that I had broken something that had been functioning. The problem was that I had tightly coupled code and was not following Shalloway’s principle. Up to this time I had studied how I could code better (e.g., structured programming) but hadn’t studied what caused errors (e.g., tight coupling, lack of encapsulation). BTW – this is not the embarrassing part yet.

The next few weeks followed this pattern: 1) get a customer request, 2) get the request working, 3) be told by the customer a day or two later that something else was no longer working, 4) fix the new bug. This extra bug-fixing work was taking a considerable amount of time. It was clear that we were in serious trouble. Now, with what I know today, I would have concerned myself with writing better code (stopping errors instead of fixing them). But what I did back then was recognize that I was causing bugs because I just wasn’t finding all the coupled cases (I was unaware of Shalloway’s Law at the time – in fact, it was this experience that inspired Shalloway’s Law). I figured if there were just a way I could tell I was about to commit an error, I could continue programming fast. The problem of having to type something in several places didn’t bother me. At the time I could type about 100 wpm (not my highest speed, but still pretty fast).
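The coupling trap described above – the same knowledge restated in several places, so changing one copy silently breaks another – can be made concrete with a small, hypothetical sketch (illustrative modern Python, not the original Basic/C code; the event-record layout and names are mine):

```python
# Hypothetical sketch of the single-point-of-change idea behind
# Shalloway's Law: knowledge that lives in exactly one place
# cannot fall out of sync with itself.

EVENT_HEADER_FIELDS = 3  # the record layout is defined exactly once

def encode_event(kind, timestamp, payload):
    # Build a flat record: [kind, timestamp, payload length, *payload].
    header = [kind, timestamp, len(payload)]
    assert len(header) == EVENT_HEADER_FIELDS
    return header + list(payload)

def decode_event(record):
    # Reads the same constant, so changing the layout is a one-place
    # edit instead of a hunt through every coupled site.
    kind, timestamp, length = record[:EVENT_HEADER_FIELDS]
    payload = record[EVENT_HEADER_FIELDS:]
    assert len(payload) == length
    return kind, timestamp, tuple(payload)
```

If the layout rule were instead written out by hand inside both functions – which is effectively what my 1984 code did – adding a field to the header in one function but not the other produces exactly the silent breakage described above.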

I thought the answer to my problems was detecting errors quickly and (mostly) effortlessly. So here’s what I did. I spent a day essentially writing the equivalent of a UI test runner and sub-contracted someone to run the tests for me. While I could re-run the test cases automatically, I needed someone to set them up and check the results against good cases. I had basically instituted semi-automatic acceptance testing in 1984 (still not the embarrassing moment – this was actually pretty cool).

From this point on we zoomed along. My quick coding style was no longer holding us back. I’d make a change, give it to my tester, and within 15 minutes he’d tell me what I had unintentionally broken by forgetting to change something that was coupled to my fix. I fixed it almost immediately because I knew it was something I had just changed. Bottom line, we got our system out in very good time. We even became a real product, whereas we were originally only supposed to be a tactical solution for the Expo. The strategic product was being built in parallel with a longer timeframe and 30 people (compared to our 4). However, our product ended up being better, so they released both.
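The heart of that setup can be sketched in modern terms (a hypothetical Python illustration of the idea, not the 1984 tool): replay recorded inputs through the system and diff the actual results against saved known-good outputs, so anything a change breaks is reported within minutes rather than days.

```python
# Illustrative sketch of a record-and-replay regression checker.
# `system` is whatever function maps a recorded input to an output;
# `golden_cases` pairs each recorded input with its known-good output.

def run_regression(system, golden_cases):
    """Replay every recorded input and diff the actual output against
    the saved known-good one; return (input, expected, actual) for
    each case the latest change broke (empty list means all passed)."""
    failures = []
    for given, expected in golden_cases:
        actual = system(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

# A change that preserves old behavior produces no failures...
assert run_regression(lambda x: x * x, [(2, 4), (3, 9)]) == []
# ...while a change that breaks an old case is flagged immediately.
assert run_regression(lambda x: x + x, [(2, 4), (3, 9)]) == [(3, 9, 6)]
```

The 1984 version was only semi-automatic – a person set up the cases and checked the diffs – but the feedback-shortening effect was the same.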

So what’s embarrassing about “inventing” automated acceptance testing in 1984 and building a product for my client within budget and on time, while exceeding the functionality initially envisioned, at high quality? It was that I didn’t do automated acceptance testing again until 2000, when I read about XP.

This episode was one reason I knew XP would work the moment I heard about it. I had done an iterative, close-customer, automated-acceptance-test, continuous-build (there was only me!) project 16 years earlier. Only now I had 16 years of experience in considering what made for good programming.

This was why I immediately questioned why XP worked (not if, I was clear that it did). I remember this not being very well received. At the time, Kent Beck and Ron Jeffries (two of the originators of XP) pretty much insisted that you had to do all of the twelve practices of XP or you’d lose its power. There was also little in the way of explaining how to code.

Yes, I know about the four rules of writing simple code:

  1. The system (code plus tests) clearly communicates everything that needs to be communicated at the current instant in its development. This means that it runs every existing test, and that the source code clearly reveals the intention behind it to anyone who reads it.
  2. The system contains no duplicate code, unless that would violate (1).
  3. The system contains the minimum number of classes possible without violating (1) or (2).
  4. The system contains the minimum number of methods possible, consistent with (1) (2) and (3).
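As a hypothetical illustration of rules (1) and (2) – the names and the pricing rule below are mine, not from XP – the first pair of functions restates a discount rule twice, while the second states it once, under a name that reveals its intent:

```python
# Before: the "10% off orders over 100" rule is written out twice,
# so a policy change must be found and made in two places.

def price_with_duplication(subtotal):
    if subtotal > 100:
        return subtotal * 0.9
    return subtotal

def invoice_total_with_duplication(lines):
    subtotal = sum(lines)
    if subtotal > 100:  # the same rule, restated
        return subtotal * 0.9
    return subtotal

# After: the rule is stated exactly once (rule 2) and named so the
# code reveals its intent to a reader (rule 1).

def apply_discount(subtotal):
    """Apply the 10% discount given on orders over 100."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

def invoice_total(lines):
    return apply_discount(sum(lines))
```

Both versions compute the same totals; the difference only shows up when the rule changes, which is exactly the situation the four rules are protecting against.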

The problem with this definition is that it is practice-based. It is also stated in a way that is understandable to someone who already understands these practices (that is, has intuited the principles underneath them) but will cause great misunderstanding for those who don’t have this intuitive sense.

Of course, Kent, Ron and Ward (the third originator of XP) are all brilliant developers and had the necessary intuition. Unfortunately, most of the people getting excited about XP didn’t. I remember talking to several of my associates about XP and saying that without the proper understanding of what was underneath XP (something no one wanted to talk about at the time) there would be serious problems for anyone undertaking it. I even gave a time frame: 6 months. Now be clear, I thought XP was brilliant. I just said it was dangerous without a key understanding of it. Sure enough, while many people had great success, many others had great problems with poorly written code (ironically, mostly in the test code).

Those of you who know me know I’ve said pretty much the same thing about Scrum. I’ve written on why it works and why it doesn’t. Ironically, here, as in the XP case, my comments and concerns were pretty much ignored by the Scrum community. Today we have many (most?) Scrum teams practicing what the Scrum community calls “Scrum-but” (that is, “we do Scrum, but…”). I wrote a blog on this as well: The 5-whys of Lean as the answer to the but of Scrum. Even Ken Schwaber, Scrum’s co-creator and biggest evangelist, has said, “I estimate that 75% of those organizations using Scrum will not succeed in getting the benefits that they hope for from it.”

So what is the pattern of these three things?

  • My not doing automated acceptance testing for 16 years
  • XP teams running into code problems after a few months
  • The prevalence of Scrum-But and the general lack of success by many companies undertaking Scrum

I would suggest that counting on smart people to find the right thing to do is not always a winning strategy, and that giving people an understanding of the principles and rules underneath programming and development will make them much better. I admit this begs the question of whether I am a “smart” person. But I do think I qualify – summa cum laude, two master’s degrees (one from MIT), successful author, having run a successful business for 11 years (and still going)… I’m not trying to toot my own horn here. In fact, I’m asking: how could someone as smart as me do something as stupid as not using automated acceptance testing for 16 years (isn’t that embarrassing?).

Well, my answer is that relying on practices, even if you are smart, is insufficient. You must learn why those practices work. Of course, this makes sense only if you believe there are rules underneath what we do. Many in the agile community don’t believe this (I’ll be writing a blog on this next week). The bottom line for me is: get the best people you possibly can. Then make sure they study their methods, as explicitly as possible, so they can create solid support systems and a real understanding of what they do. You will get a much greater return from their efforts if you do so.

In my case, the understanding would have had me look to see where I could apply automated acceptance testing effectively. Years later, I now understand that one key aspect of automated acceptance testing is to eliminate the added work that comes from the delay between code and test. I clearly knew this at some level in 1984. But not at a deep enough, or consciously high enough, level to take advantage of it on a regular basis.

XP has been around long enough that people have finally gotten to why it works. In Scrum’s case, I believe we find people doing Scrum-but because their lack of understanding of the principles underneath Scrum prevents them from effectively changing the given practices. They often think they are doing the right thing when, in fact, it is not effective.

This is why, at Net Objectives, all of our training and consulting starts with why things work. If this makes sense to you, and you think you can use some help in doing this, please send me an email alshall AT netobjectives.com to see if we can help.

If you want more information on what we now consider to be useful principles and guidelines for coding better, check out these resources pages (you’ll have to register to get access to some of these):

Challenging Why (not if) Scrum Fails

This was originally published in May 2009

Virtually two years ago I wrote a blog called Challenging Why (not if) Scrum Works. Basically, I was looking to see why Scrum worked so I would be able to take best advantage of it. I believe Scrum works very much due to the structure of the team, the iterative nature of development, and the proper context within which the team works. In that prior blog, I compared teams that were co-located, had all team members working together, and worked on only one project with teams that were not co-located, whose members of different roles reported to different managers (so they were not always working together), and whose people were typically on 3–5 projects at once. The co-located teams were three times more productive than the others – even though the people, domain and customers were virtually the same. I thought this was a great insight for two reasons. First, it meant that if you couldn’t deliver (or even build) in increments, there were still things you could do to improve your development methods. Second, if you could do incremental development, these practices were some of the first things to implement.

Continue reading “Challenging Why (not if) Scrum Fails”

Challenging why (not if) Scrum works

This was originally published in May 2007. Minor edits made. I have left the # of years the same to keep context.

I have repeatedly heard that “Scrum succeeds largely because the people doing the work define how to do the work” (see From The Top by Ken Schwaber for his full text). However, I do not think that is even a third of why Scrum works – and be clear, I am certainly not questioning that Scrum works – I just want to get at why it works.

Continue reading “Challenging why (not if) Scrum works”