I recently spoke at a Project Management Institute (PMI) chapter meeting and discussed some experiences from "My Agile Journey." After my presentation, I was asked a question that made me pause; it is this kind of interaction, and the interesting questions, that motivate me to give such talks. While I provided a response, I realized afterwards that I have a lot more to share on the subject, and since I don't have the attendee's contact information, I'm hoping my answer will make its way back to her.
As I recall, the question went something like this: "We've been trying to adopt an Agile approach but we have a problem with one of our established metrics that people track. That metric being the days a bug report is open. With sprints, sometimes we don't get around to fixing a bug for weeks which is presenting an issue when it comes to our Agile adoption. Do you have any suggestions?"
My quick answer was that they should consider resolving the bug as part of sprint planning every 2-4 weeks. If the Product Owner and team decide that other work is more important, then that work should take priority. I also recommended that the team make sure all the necessary stakeholders participate in sprint planning.
As I reflect on this response, I would like to add the following to expand on the real issue at hand:
First and foremost, when a team begins the "Agile Journey," they tend to carry over habits or patterns from the waterfall world and treat Agile as just a series of small waterfalls. Similarly, they look at the work in a sprint as only the development tasks and don't establish any quality goals or metrics that help define what it means to be done with the work in the sprint. If teams are sprinting and bugs continue to build up without getting fixed, then someone needs to address that concern. A healthy team adopting Agile should not be accumulating an ever-growing stack of open bugs.
That said, Agile teams need to be better about identifying when something should be considered a bug. Let me describe two examples:
Case A - When someone clicks the OK button after a certain set of steps, the system responds with "Access violation" or "Invalid object reference." Messages like these are clear indications of an implementation error; this is a classic bug. I would advise the Product Owner not to accept the story with this type of behavior if it was exposed before story acceptance. Because the bug is found and fixed before the story is accepted, it never becomes a lingering open bug report, and the open-duration metric never comes into play.
Case B - Someone clicks the OK button and the system responds with a message indicating "no x selected". The screen then shows there is only one choice for things of type x. The person in this situation might think, "Why do I have to select black as a color when black is the only choice available?" As users, we expect the program to be smart enough to realize there is no choice to be made and to make the selection automatically. This may also be considered a bug.
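The Case B fix is usually a one-line defaulting rule: when the list of options collapses to a single entry, select it automatically rather than prompting the user. A minimal sketch in Python, where the function name and option list are purely illustrative, not from any real UI framework:

```python
# Hypothetical Case B fix: auto-select when there is no real choice to make.
# resolve_selection and its inputs are illustrative names, not a real API.

def resolve_selection(options, current=None):
    """Return the effective choice, auto-selecting when only one option exists."""
    if current is not None:
        return current             # user already picked something
    if len(options) == 1:
        return options[0]          # only one choice (e.g. "black"): pick it
    return None                    # genuinely ambiguous: require user input

# With a single option, the "no x selected" message never appears.
print(resolve_selection(["black"]))             # → black
print(resolve_selection(["black", "white"]))    # → None (user must choose)
```

Whether this behavior is a "bug fix" or a new requirement is exactly the emergent-requirement question discussed below; the code change itself is trivial either way.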
In Case B the problem is likely due to a flaw in the requirements of a User Story. It is sometimes referred to as an emergent requirement. This would be a difficult requirement to specify until the customer had some experience with the system and could actually recognize the problem.
The difference, however, is that in Case A the error affects the end result because the user is unable to move forward, whereas in Case B the problem is more of a frustration and does not prevent forward motion. While the bug in Case A should be resolved quickly, the team and Product Owner may prioritize other work before addressing the bug in Case B.
Maybe the frequency with which users encounter this case is very low. The bug in Case B may stay open for several sprints or even across multiple releases. The key point is that an explicit decision (or multiple decisions) was made not to address Case B in favor of something else with higher business value.
Assuming these trade-offs and decisions are being made, the standing of the traditional bug-open-duration metric comes into question. One would have to consider whether the metric truly supports governance of the development process or is simply a carryover from a different process.
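If a team still wants visibility into open-bug age, one option is a report that separates bugs the Product Owner has explicitly deferred from bugs no one has reviewed, rather than tracking one raw days-open number. Only the latter group signals a process problem. A minimal sketch, assuming a simple in-memory bug list with illustrative field names:

```python
from datetime import date

# Hypothetical bug records; the "deferred" flag records an explicit Product
# Owner decision to prioritize other work (as in Case B), not neglect.
bugs = [
    {"id": 101, "opened": date(2024, 1, 5),  "deferred": False},
    {"id": 102, "opened": date(2024, 1, 20), "deferred": True},
]

def open_bug_report(bugs, today):
    """Split open bugs into explicitly deferred vs. unreviewed, with age in days."""
    report = {"deferred": [], "unreviewed": []}
    for bug in bugs:
        age = (today - bug["opened"]).days
        bucket = "deferred" if bug["deferred"] else "unreviewed"
        report[bucket].append((bug["id"], age))
    return report

print(open_bug_report(bugs, date(2024, 2, 1)))
```

Under this framing, a long-open deferred bug reflects a deliberate trade-off, while a long-open unreviewed bug is the signal the original metric was trying to catch.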
Have you encountered a similar situation with your metrics? Don’t let situations like these prevent you from moving forward. We would be happy to answer any questions you might have. Send us a note and we will contact you to discuss: firstname.lastname@example.org