Agile with External Clients: Testing Is Not Optional

Today's installment in the Agile with External Clients series covers the topic of testing. A decade after The Agile Manifesto and over 16 years since Scrum and XP came on the scene, I still encounter a large number of teams where testing is lip service at best and non-existent all too often. In this context, testing means the use of frameworks like xUnit et al. to create a suite of unit, integration, and functional tests that exercise a body of code by executing it and making assertions about the outcome of that execution at multiple levels of focus and granularity. Of all the practices of Agile software development, both process and technical, testing is the one whose value people most readily acknowledge while at the same time avoiding it altogether. So, let me make this quite plain:

Testing is not optional.

Testing does not slow you down; it allows you to speed up. It is the fundamental feedback loop in the creation of software. Testing is one of the strongest means of minimizing and eliminating waste by virtue of how it allows you to catch defects as early as possible. This is true both for errors introduced directly in the code being created and indirectly through breakage of existing code that the new code relies upon or integrates with. Manually exercising your code by running it locally and following along in your debugger is not an adequate substitute for testing, but rather a companion to its judicious use.
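As an illustrative sketch of what this feedback loop looks like in practice, here is a minimal unit test using Python's built-in `unittest` framework; the `apply_discount` function and its rules are hypothetical, stand-ins for whatever behavior your code promises:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.00, 25), 150.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # The assertion documents the contract: bad input fails loudly.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Run on every change, a suite of tests like this catches a breaking edit within minutes of its introduction, rather than weeks later in a separate "testing phase".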

While this lesson applies to any Agile group, it is especially critical if you want to run projects for external clients using an Agile approach. Recall that for many clients, you may be their first encounter with Agile software development principles. Their historical context views "testing" as an afterthought, a separate process that does its best to verify that what was built approximates what was documented. That makes testing a separate value proposition and one to cut if cost or timeline become an issue.

Given the disappointment with phased approaches to software development, it should come as no surprise that the testing "phase" has a hard time living up to its potential. At that point, there's already an entire project of sunk cost in analysis and development. Feedback on things that aren't working as expected is taken as bad news, since some of the software was developed months if not over a year ago. If you read reports from testing groups that have gone over waterfall applications at the final testing phase, they all seem to fall under the theme of "Things That Would Have Been Great to Know at the Time I Could Have Done Something about Them".

An Agile approach that includes the technical practice of unit, integration, and functional testing restores what we have been missing for decades in classical approaches: we now have feedback and information at the time when we are best able to act upon it. It gives us the flexibility and confidence to move swiftly in implementing new features and responding to inevitable change without recklessness or irrational optimism.

This is particularly true in the case of working with external clients. If a team is effectively applying testing, the lack of mid-to-late-project "surprises" is refreshing. Functionality whose implementation can spin on a dime without tearing the ship apart astounds clients who've become accustomed to changing their mind or acting upon new information being a quite painful and costly activity. If User Stories are the most astounding process practice, then Testing is its counterpart as the most profoundly beneficial technical practice.

Having testing be an inseparable thread in the fabric of how you deliver software can separate you from the rest of the pack who claim to be the "Agile gurus" in your market. However, the absence of testing doesn't merely deprive you of that benefit. As defects are introduced to the system and allowed to go unnoticed for days if not weeks, your team invariably runs aground on one or more of them. You find yourselves in uncomfortable conference calls explaining how this iteration is going to have only half its stories delivered due to things that have surfaced on stories that had already been declared complete in previous iterations. Unchallenged assumptions arise late in a project to bite the team on the ankle, making the project less flexible should the client require nontrivial changes to the architecture.

As clients observe this behavior (and believe me, they notice), they come to the conclusion that maybe this Agile business is no different from everything they've seen before. Your group is just another one claiming to have "the secret", while their impression is that paying a premium for experts doesn't really deliver what it promises. And if that's the case, maybe they should have just gone with the lowball bid.

Ask an Agile Coach: How do I handle the effect of carryover on velocity?

Our previous installment of Ask an Agile Coach had a new question in the comments:

As for the case when some of the user stories didn’t get completed, what happens to the user stories which were partially completed–say, 80% finished–but didn’t quite make it? How do you keep your velocity metric from getting hosed?

Practitioners have asked me variations of these questions many times over the years. I'll paraphrase them into a single question:

How do I handle the effect of carryover on velocity?

When we gather data about something, there's an innate temptation to filter the data to effect a desired outcome. It is often subtle; sometimes we don't even realize we are doing it. This is a form of sampling bias, a term from the field of statistics. I love this sentence from the Wikipedia article on sampling bias:

A biased sample causes problems because any statistic computed from that sample has the potential to be consistently erroneous.

You handle carryover by letting it accurately affect velocity, whether that effect is positive or negative.

The purpose of tracking velocity is to provide feedback on how well a team can estimate, break down, and execute work within a fixed interval. Carryover implies a need to improve in one or more of these areas. When a team has a drop in velocity, be sure to talk about it in the retrospective. Are stories too big and bulky? Do tasks sit for days on the board waiting for the next handoff? Is the team consistently over-committing during Sprint Planning, hoping for unrealistic throughput?
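The honest accounting described above can be sketched in a few lines; the sprint names and point values here are hypothetical, and the one rule that matters is that a story earns its points only in the sprint where it is actually completed:

```python
# Hypothetical sprint data: (story_points, completed_in_this_sprint)
sprints = {
    "Sprint 11": [(5, True), (3, True), (8, False)],  # the 8-pointer carries over
    "Sprint 12": [(8, True), (5, True), (3, True)],   # carryover finishes here
}

def velocity(stories):
    """Count only stories completed within the sprint; partial work earns zero."""
    return sum(points for points, done in stories if done)

for name, stories in sprints.items():
    print(name, velocity(stories))
# Sprint 11 reports 8, not 13; Sprint 12 gets full credit for the 8-pointer.
```

Crediting the "80% done" story to Sprint 11 would inflate that sprint's number and give stakeholders a throughput figure the team cannot actually sustain.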

Allowing a skewed velocity sets a team up for disappointing its stakeholders. If velocity looks higher than reality (inflated velocity is far more common than deflated velocity), stakeholders are going to have expectations that cannot be met. Embrace the bad news, and use it to reinforce the message that our only hope is to get better at working together as people.

Barry Hawkins of All Things Computed provides coaching and mentoring in how to successfully apply the process and technical disciplines of Agile Software Development.

Ask an Agile Coach: What do I do with a sprint that ends with only incomplete stories?

Today's Ask an Agile Coach submission comes from Jake Gordon via Twitter:

Anyone (@barryhawkins)  have any good articles on reaching the end of an iteration with only partially completed user stories? #agile

What do you do with a sprint that ends with only incomplete stories?

When a sprint ends and every story is incomplete, it is typically a symptom of one or more of the following underlying causes:

  • The stories were all larger than the team had estimated due to lack of cross-functional participation in the story writing and estimation process.
  • Team members kept switching between stories instead of focusing on one story, completing it, then moving on to the next in priority order. Minimize work in process (WIP).
  • Core parts of the process are being left out, such as a highly-visible task board, a burndown chart, effective daily stand-up meetings, etc.; as a result, feedback and handoffs are unnecessarily delayed.
  • The team is working on a platform or problem domain that is new, and its estimates are commensurately less accurate, leading to over-commitment.

When a sprint like this happens, effective retrospectives are essential. Ensure that all parts of the process have transparency. Visibility into how work flows from concept to customer is necessary for inspection. Use the insights gained from inspection to guide an incremental, sustainable rate of adaptation. Strive to eliminate waste and improve communication.

A single sprint where nothing gets completed is a warning sign that should not be ignored. Multiple sprints where nothing gets completed call for a full-blown intervention. If you can't get out of that rut on your own, seek external assistance.
