In 1900, 1901 and 1902, brothers Wilbur and Orville Wright tested a glider, then another glider, and then a third glider during three consecutive fall trips to North Carolina. They meticulously built and scrutinized each machine before flight. They tested each as a kite before piloting it themselves. The quality was impeccable, down to every last wire and bolt. But the features of a powered engine and propeller (crucial to any “airplane”) were not yet added. The brothers were still iterating on the wings — their riskiest feature.
I’ve gotten a lot of great questions since publishing my Shoddiest Viable Product article a few weeks ago. One question I’ve gotten is: does a shoddy product imply sub-par technical quality?
No! Shoddiness describes the number of features included, not the engineering practices behind them. On the contrary, a team practicing Lean product methodology embraces changing requirements, and that demands a standard of engineering excellence and flexibility. At Pivotal Labs, we talk a lot about how product requirements change, and how that requires a low cost of change. We don’t avoid refactoring. We see it as inevitable and embrace it.
Test Driven Development (TDD) is Pivotal’s choice for safeguarding this low cost of change. A full description and endorsement of TDD is outside the scope of this article; see one Pivotal engineer’s take on it here: https://goo.gl/EMbHcb.
In brief, TDD is the practice of writing an automated test to correspond with every new feature added to a product. This keeps pesky bugs from popping up in week 50 of a project because of an old assumption made in week 4. Whenever an engineer pair delivers a new feature, the suite automatically re-tests all previous features. If one test fails, we know exactly why; we know what broke and where to go to fix it. Like the Wrights, we build even our first iteration with impeccable quality.
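As a minimal sketch of what this loop looks like in practice (the function and test names here are invented for illustration, not taken from the insurance app):

```python
# Hypothetical sketch of the TDD loop. monthly_premium is an invented
# example feature, not code from the article's project.

def monthly_premium(annual_premium: float) -> float:
    """Split an annual premium into twelve equal monthly payments."""
    return round(annual_premium / 12, 2)

# Step 1 (red): this test is written before monthly_premium exists, and fails.
def test_splits_annual_premium_evenly():
    assert monthly_premium(1200.00) == 100.00

# Step 2 (green): the implementation above is written to make it pass.
# Step 3: every later feature adds its own test, and the whole suite runs
# on every delivery, so a week-4 assumption that breaks in week 50 fails loudly.
def test_rounds_to_whole_cents():
    assert monthly_premium(1000.00) == 83.33

test_splits_annual_premium_evenly()
test_rounds_to_whole_cents()
print("all tests pass")
```

The point is the ratchet: each delivered feature leaves behind a test, so the safety net grows with the product.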
The shoddiness, then, in SVP refers to the pared down product requirements and number of features. As I said in my previous post about an app my colleagues and I built with an insurance company client, “The product [an online insurance purchasing app] had no required form fields, no real error messaging, no ‘go back’ function, no ability to ‘save for later,’ used month-old data, only worked during business hours, and didn’t even quote a price.” What goes unsaid here is that the code which did exist was meticulously crafted and rigorously tested.
The product was “shoddy,” then, in the sense that many users/stakeholders might look at it and think, “this is missing key features!” If we design an app to sell insurance online, it should certainly display a price at some point, right?! And yet we stripped this away. Our app collected user information and then connected them with call center reps to do the purchasing. Our first goal was to learn how customers preferred to enter their personal data. Our airplane not only didn’t have seats, seat belts or a drink cart, it didn’t even have an engine.
We didn’t build a poorly-built airplane; we built a well-crafted glider. We implemented rigorous examination before we ever allowed our app to leave the ground, and we limited our first version to only those features most important to test first. The Wrights started by testing their wings; we started by testing how exactly customers wanted to be greeted and asked personal questions. Today, our app has a higher conversion rate (% of users who go on to buy insurance) than any other line of business at our client company.
There’s a broader point here about quality. It is the Product Manager (PM) — and not the engineer — who decides which error cases to handle when.
I have worked with companies in which, when a business person requests a feature (say, an email entry field), the engineers take that request, implement it, and then make sure it can withstand a direct nuclear strike. That is to say, they ensure that the field is never blank; they validate the email address format; they make sure that the field is resistant to any security risk they can imagine. What could have taken two hours ends up taking days (albeit well polished). To that engineer, this is what it means to write quality code.
At Pivotal we consider every error handling case to be its own user story. The Product Manager on a project will add them to the backlog, and prioritize or deprioritize them as needed. Perhaps the PM wants to test functionality in a controlled setting, and getting a working version quickly is far more important than extensive error handling. Or — as was the case with the insurance app from my previous post — we might discover that no users are entering improperly formed email addresses, and so we don’t prioritize error handling above other new features in need of testing.
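A sketch of what “each error case is its own story” can look like in code. All names here are hypothetical, invented for illustration; the point is the split in scope, not any particular API:

```python
import re

# Hypothetical sketch: the same email field at two levels of hardening.
# Which version ships first is a PM prioritization call, not an
# engineering default. All names are invented for illustration.

def save_email_story_1(email: str, store: list) -> None:
    """Story 1: capture whatever the user typed. Hours of work, shippable,
    and enough to start learning whether users even reach this field."""
    store.append(email)

def save_email_story_2(email: str, store: list) -> None:
    """Later, separately prioritized stories: require the field and
    validate its format. Each check could be its own backlog entry."""
    if not email:
        raise ValueError("email is required")          # story: required field
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError("email looks malformed")      # story: format check
    store.append(email)
```

Both versions are tested rigorously before they ship; the difference between them is scope, not craft.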
Certainly there are times when a team should prioritize edge cases/error cases, and the PM is free to do so. As the one in charge of the business interests and testing plan for the app, he or she will be the most informed on what is and isn’t needed. Error handling is prioritized in a context of business need and experimentation, not engineering thoroughness.
Shoddiness, then, is not about a lack of quality, but about making a conscious choice about what to include, where to focus your quality engineering work, and about allowing a team to be daring with what it chooses to build and test. Engineers build one tiny slice of value at a time, and do so with a commitment to rigorous testing. Engineers who spot edge cases bring them to the PM and can pair on writing a story to handle them (which the PM will then prioritize).
This approach saves us from spending months polishing a bad idea. It helps us answer early the question, “should we even build this?” It frees the team to focus on building a lean, strong product which — like the three unpowered Wright gliders — tests the core (riskiest) principles of the future product.
P.S. In case you didn’t know already, the Wright Brothers went on to invent the first powered aircraft.
Part 2: The SVP in Flight. Does “Shoddy” Mean Poor Quality? was originally published in Built to Adapt on Medium.