Shigeo Shingo, one of the founding fathers of Lean, once said, “It is more important to know-why than to know-how”. We feel this observation is just as relevant to the adoption of Agile today as it was to production-line assembly back then.
Sometimes Naimuri is called in to take over software delivery from another supplier. In one such case we uncovered an interesting phenomenon: something that looked like Agile on the surface but, when you dug a little deeper, clearly wasn’t. It is what we now refer to as ‘Scrum-by-numbers’.
In theory this team was practising Scrum: they were working in three-week Sprints, book-ended by Sprint Planning and Retrospective meetings, holding daily stand-ups, and so on. However, they only rarely produced potentially shippable software at the end of a Sprint (a central tenet of the Scrum approach). The clearest indicator of this was that every four or five ‘Development Sprints’ were followed by one or two ‘Hardening Sprints’, in which the software was tested and bugs were fixed before the release was considered ready for production (very much like what Forrester has described as water-scrum-fall). In reality, the work in each Sprint was only ‘done’ insofar as the code had been written. But of course code is not what any customer wants; they want working software. In other words, it isn’t done until it is tested.
When the team was first told to adopt Scrum, they were also told to start using User Stories instead of the Use Cases they had been using up to that point. However, since no one explained why the change was being made or how Stories should be used, the team produced their requirements in largely the same way as before and simply relabelled them as User Stories. These Stories were large pieces of functionality that could rarely be fully completed in a single Sprint. Often, very little was really ‘completed’ in a Sprint at all: there was usually little to show in the end-of-Sprint demo and, since those demos involved no one from outside the team (let alone an actual customer or end-user), it became culturally acceptable to talk through progress instead of actually demonstrating working functionality.
This way of working had two main negative consequences. Firstly, there was poor visibility of actual progress: no one really knew how development was going until the ‘Hardening Sprints’ began. Just as in waterfall development, much of the software delivery risk was not being addressed until late in the process (and close to the desired release date). Secondly, there were significant peaks and troughs in the demand for different skill-sets: the early Sprints placed a heavy burden on developers, with the balance tipping towards testers in the later Sprints.
We also saw that customer acceptance was driven by an ‘acceptable level of defects’. In other words, it was agreed that the software could be released once there were, for example, no more than one serious defect and no more than three minor defects. This is really a polite way of agreeing to compromise on quality (rather than scope) in order to hit a deadline. It also drove behaviours that were not always aligned with what the customer wanted: if there was only one serious defect, people would concentrate on getting the number of minor defects down to three, even if the serious defect mattered far more to the customer. Similarly, more energy would naturally be spent on fixing the quick-win serious defects – which might not be the ones that mattered most to the client.
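To make the perverse incentive concrete, here is a minimal sketch (our own illustration, not the supplier’s actual tooling) of that kind of count-based release gate. The function name and thresholds are assumptions chosen to match the example figures above.

```python
# Illustrative model of an 'acceptable level of defects' release gate.

def release_gate(serious_defects: int, minor_defects: int) -> bool:
    """Release is allowed once defect counts fall under agreed thresholds:
    no more than one serious defect and no more than three minor defects."""
    return serious_defects <= 1 and minor_defects <= 3

# The gate only counts defects; it says nothing about which defects matter.
# A build with one customer-critical serious defect still passes, so effort
# naturally drifts towards whatever reduces the counts fastest.
print(release_gate(serious_defects=1, minor_defects=3))  # True: releasable despite a serious defect
print(release_gate(serious_defects=0, minor_defects=4))  # False: blocked by minor defects alone
```

Note how the gate can block a release over minor defects while waving through a serious one – exactly the behaviour described above.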
When Naimuri took over this work we started by implementing Kanban, covering the full scope of work from initial requirements through to software ready to deploy. We also ensured that requirements were broken down into much smaller chunks. These changes immediately gave far better visibility of the real progress being made: each requirement flowed from the backlog right through to release, and was only considered ‘done’ when it was fully tested (not just when the code was written!). This approach also smoothed the workload, as developers were continuously making changes available for testing. And we found it improved quality. It is well established in the industry that limiting Work In Progress (a fundamental part of the Kanban approach) increases quality – by up to four times, in some reports. Our experience matches this, and we’re convinced that when a team can see clearly that the definition of done means tested code that is ready to deploy, the team’s behaviours change to support this.
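The mechanics of a WIP limit can be sketched in a few lines. This is a simplified illustration under assumed column names and limits, not the board we actually ran: a story can only be pulled into a column if that column has capacity, which forces the team to finish work in progress before starting more.

```python
# Minimal sketch of a Kanban board with Work In Progress (WIP) limits.
# Column names and limit values here are illustrative assumptions.

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits  # e.g. {"In Dev": 2, "In Test": 1}
        # Backlog and Done are unlimited; the limited columns sit in between.
        self.columns = {name: [] for name in ["Backlog", *wip_limits, "Done"]}

    def add(self, story):
        self.columns["Backlog"].append(story)

    def pull(self, story, src, dst):
        """Move a story onward only if the destination column has capacity."""
        limit = self.wip_limits.get(dst)
        if limit is not None and len(self.columns[dst]) >= limit:
            return False  # WIP limit reached: finish existing work first
        self.columns[src].remove(story)
        self.columns[dst].append(story)
        return True

board = KanbanBoard({"In Dev": 2, "In Test": 1})
for s in ["story-1", "story-2", "story-3"]:
    board.add(s)
board.pull("story-1", "Backlog", "In Dev")
board.pull("story-2", "Backlog", "In Dev")
print(board.pull("story-3", "Backlog", "In Dev"))  # False: 'In Dev' is at its WIP limit
```

The refusal to start a third story is the whole point: the blocked pull signals that the fastest way to make progress is to test and release what is already in flight.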
A concept that also emerged was the idea of MVP User Stories. For those unaware of it, MVP stands for Minimum Viable Product, a term coined by Frank Robinson and more recently popularised by the Lean Startup movement; it describes the minimum set of requirements that must be fulfilled for the software to deliver enough value to make it worth building. In the backlog we now flag every User Story as either MVP or non-MVP, and our priority becomes ensuring that all the MVP features are delivered. In line with this, the key acceptance criterion with our customer is whether all of the MVP Stories are complete. Rather than compromising quality to achieve a ship date, we work with the customer to make sure that high-quality software is delivered, and delivered when they need it.
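A simple way to picture this acceptance criterion is as a check over the flagged backlog. The sketch below is a hedged illustration – the `Story` structure and field names are our assumptions, not a real backlog tool – showing that the release question reduces to “is every MVP story done?”, where ‘done’ means tested and ready to deploy.

```python
# Illustrative backlog model: stories flagged MVP / non-MVP, with a single
# acceptance check. Field names here are assumptions for the example.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    mvp: bool           # is this part of the Minimum Viable Product?
    done: bool = False  # 'done' means fully tested and ready to deploy

def mvp_complete(backlog):
    """Acceptance criterion: every story flagged as MVP is fully done.
    Non-MVP stories do not block the release."""
    return all(story.done for story in backlog if story.mvp)

backlog = [
    Story("User login", mvp=True, done=True),
    Story("Password reset", mvp=True, done=False),
    Story("Custom themes", mvp=False),
]
print(mvp_complete(backlog))  # False: 'Password reset' is MVP but not yet done
```

Note that the non-MVP story never enters the decision: scope, not quality, is what flexes when the deadline approaches.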
This experience, and others like it, have shown us that you can’t make an organisation Agile by simply telling people to follow a set of rules. Agile isn’t something you do; it is something you are. And to become Agile, just like Shigeo Shingo said, it really is important not just to know-how, but also to know-why.