
Agile with External Clients: Testing Is Not Optional

Today’s installment in the Agile with External Clients series covers the topic of testing. A decade after The Agile Manifesto and over 16 years since Scrum and XP came on the scene, I still encounter a large number of teams where testing is lip service at best and non-existent all too often. In this context, testing means the use of frameworks like xUnit et al. to create a suite of unit, integration, and functional tests that exercise a body of code by executing it and making assertions about the outcome of that execution at multiple levels of focus and granularity. Of all the practices of Agile software development, both process and technical, testing is the one whose value people most readily acknowledge while at the same time avoiding it altogether. So, let me make this quite plain:

Testing is not optional.

Testing does not slow you down; it allows you to speed up. It is the fundamental feedback loop in the creation of software. Testing is one of the strongest means of minimizing and eliminating waste by virtue of how it allows you to catch defects as early as possible. This is true both for errors introduced directly in the code being created and indirectly through breakage of existing code that the new code relies upon or integrates with. Manually exercising your code by running it locally and following along in your debugger is not an adequate substitute for testing, but rather a companion to its judicious use.
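To make the xUnit pattern above concrete, here is a minimal sketch using Python’s standard-library unittest module (itself a member of the xUnit family). The function under test, apply_discount, is a hypothetical example invented for illustration; the point is the shape of the feedback loop: execute the code, assert on the outcome, and a failing assertion surfaces a defect the moment it is introduced.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical business rule: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises the code and makes assertions about the
    # outcome of that execution, as described in the article.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)


if __name__ == "__main__":
    unittest.main()
```

Run under any CI trigger, a suite like this catches a breaking change to apply_discount within minutes of the commit rather than weeks later in a separate testing phase.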

While this lesson applies to any Agile group, it is especially critical if you want to run projects for external clients using an Agile approach. Recall that for many clients, you may be their first encounter with Agile software development principles. Their historical context views “testing” as an afterthought, a separate process that does its best to verify that what was built approximates what was documented. That makes testing a separate value proposition and one to cut if cost or timeline become an issue.

Given the disappointment with phased approaches to software development, it should come as no surprise that the testing “phase” has a hard time living up to its potential. At that point, there’s already an entire project of sunk cost in analysis and development. Feedback on things that aren’t working as expected is taken as bad news, since some of the software was developed months if not over a year ago. If you read reports from testing groups that have gone over waterfall applications at the final testing phase, they all seem to fall under the theme of “Things That Would Have Been Great to Know at the Time I Could Have Done Something about Them”.

An Agile approach that includes the technical practice of unit, integration, and functional testing restores what we have been missing for decades in classical approaches: we now have feedback and information at the time when we are best able to act upon it. It gives us the flexibility and confidence to move swiftly in implementing new features and responding to inevitable change without recklessness or irrational optimism.

This is particularly true in the case of working with external clients. If a team is effectively applying testing, the lack of mid-to-late-project “surprises” is refreshing. Functionality whose implementation can spin on a dime without tearing the ship apart astounds clients who’ve become accustomed to changing their mind, or acting upon new information, being a quite painful and costly activity. If User Stories are the most astounding process practice, then Testing is definitely its counterpart as the most profoundly beneficial technical practice.

Having testing be an inseparable thread in the fabric of how you deliver software can separate you from the rest of the pack who claim to be the “Agile gurus” in your market. However, the absence of testing doesn’t merely deprive you of that benefit. As changes are introduced to the system and allowed to pass for days if not weeks unnoticed, your team invariably runs aground on one or more of those issues. You find yourselves in uncomfortable conference calls explaining how this iteration is going to have only half its stories delivered due to things that have surfaced on stories that had already been declared complete in previous iterations. Unchallenged assumptions arise late in a project to bite the team on the ankle, making the project less flexible should the client require nontrivial changes to the architecture.

As clients observe this behavior (and believe me, they notice), they come to the conclusion that maybe this Agile business is no different from everything they’ve seen before. Your group is just another one claiming to have “the secret”, while their impression is that paying a premium for experts doesn’t really deliver what it promises. And if that’s the case, maybe they should have just gone with the lowball bid.

3 thoughts on “Agile with External Clients: Testing Is Not Optional”

  1. Todd Costella

    Great read Barry.

    The one bit I continue to struggle with wrt testing is that a well designed test suite for a feature/story can easily take as long as the feature itself and in many cases can take 2x to 3x as long to write. Now I have no problems with this but find that when this is brought up during estimation, everyone agrees, but it’s also the first thing that gets thrown on the floor if things start to take longer than expected. It takes a fair amount of “courage” (to quote a recent JPR recording) to go back after the fact and get the tests in place once the sprint is done. It can be done to be sure but it’s not an easy sell to the business folks.

    Yes one could argue that this wouldn’t be an issue with TDD however it’s been my experience that TDD only works really well in a subset of the kinds of stories we tend towards.

    I’ve had my bacon saved a number of times by our test suite. I still struggle with conveying the message to the business folks around the value of a well designed, well factored test suite. I say words like “improved quality” and “fewer production bugs” and while that’s true, we still have our fair share of “crappy code” and “OMFG how could this have ever worked”.

    It’s all part of the evolution of our process. I remember talking to you about this three years ago and it still seems like a hard problem 🙂

  2. First off, I agree with everything you are saying. As the development manager for one of those groups that is only doing some testing but at the same time says that we understand the value of testing, I can say that one of the challenges with testing is that it is just plain difficult to test. I often tell our new developers that once you learn how to write unit tests then writing other code is easy by comparison. It really is an advanced skill that takes many months if not years to master.

    That being said, I still don’t think it’s a valid excuse not to test, but you certainly need to have a skilled team in place to make the most of testing. Writing “bad” tests is oftentimes worse than writing no tests at all. You can end up with more code to maintain and a brittle suite of tests to update every time you tweak the functionality of existing code.

  3. I completely agree with Barry in saying that “Testing is not optional”. One definitely does not want to deliver only half the story. Having the experience of managing some agile projects, I have come across this attitude of “No Testing” every now and then. I have seen the following two reasons as explanations by development managers –

    (a) “It takes a long time to write those test cases and even longer to execute them.”

    My answer to them is to build a progressive test automation suite during the course of each sprint. There are solutions in the market (with built-in automation frameworks) that can help in creating those test suites to a large extent before the actual application is delivered to the test teams.

    (b) “No matter how much effort you spend, the tests are flawed and do not bring out the defects. Why take the pain then?”

    This is a problem that everyone speaks about. However, it cannot be addressed overnight. One of the disadvantages of the agile methodology is that Project Managers conveniently choose to ignore some of the basics of project management in favor of a lightweight process. This includes data collection and the generation of metrics.

    Most often, project managers end up in situations where the project has failed and they do not possess the right data points and metrics to detect the real issue. It is extremely important to conduct a detailed examination of the test execution. The examination needs to cover metrics on the quality of the test cases and the quality of the overall testing that was done. This data then needs to be fed back into the system to improve the overall quality of the tests that were designed.

