In recent years, short design cycles focused on developing simpler products have grown in popularity. From my perspective, this appeared to originate in the software development world under the umbrella of agile practices. My impression is that these practices were originally meant for developing web-based user interfaces for various services. The short sprints had the goal of producing a minimum viable product that could actually be put into production at the end of the sprint cycle.
The rationale for the minimum viable product was that the alternative of attempting to anticipate all user interface requirements for a final product was impractical, inefficient, and usually disappointing if not outright wrong. Delivering a viable product as early as possible makes it possible to start collecting data or suggestions for directing future development. The minimum viable product is meant to be something that can be developed quickly, perhaps in a single sprint of work, and still attract users who can start providing useful feedback on where the product should go next.
My introduction to the concept in the context of user interfaces biases me to assume that is where it started to become popular. I also think it is a reasonable approach for introducing a user interface for an existing system to a specific group of users. In that setting, I assume the back-end service already exists and provides value, having been built through a more traditional design and development approach that involved more up-front design and analysis before the service became available.
Agile development concepts are not limited to user interfaces of static services; they are promoted for all types of new product development, including back-end systems far removed from users. A wide range of products are being designed around the concept of a minimum viable product in order to get something operational as soon as possible, no matter how minimal it is.
We have made a shift in thinking about product design. We have discredited the earlier notion that humans can derive requirements for a design from first principles or from prior knowledge and wisdom. We found a replacement in collecting requirements through observations of something that is already being used. To introduce something new, we need to first build something in order to begin to collect requirements. Evolution through periodic sprints will eventually match the requirements.
This shift represents a major change in our expectations of capabilities of computing technologies.
Earlier practices, sometimes called the waterfall model, followed a sequence starting with analysis and design to produce all requirements. Implementation and deployment were allowed to proceed only after all requirements were complete and validated. This approach leveraged computing technologies for simulations and mathematical modeling to evaluate designs long before anything was implemented. Correspondingly, there was an emphasis on human intelligence to operate the tools and interpret the computed results in order to translate them into requirements specifications.
The newer agile approach sometimes explicitly dismisses the value of this anticipatory activity, which seems never to end satisfactorily and often produces requirements that do not work out well in practice. The agile approach assumes inherent human incompetence in anticipating requirements. Instead, the focus is to build something simple that can serve some minimal operational need so that we can begin to collect data about how well it is working and where there is a need for improvement. Through a cyclical process of selecting for development the new needs that can be accomplished quickly, we grow our understanding of the requirements and eventually arrive at a complete solution that will meet our needs. Although the final solution may lie in the future (and may never completely arrive), we are gaining some value from these early implementations. Not only are we collecting data for requirements, we are obtaining some return on investment.
The agile approach rededicates our modern computing capabilities away from simulation and mathematical modeling based on first principles and toward analysis of observed data. Both agile and waterfall approaches exploit computing resources, but in completely different ways: data crunching instead of number crunching.
Requirements identification in the agile approach becomes another market for data science. To make it work, we first need to collect data about something that is already working. The minimum viable product gives us the opportunity to begin collecting data.
The minimum viable product is essential for progress in the agile development world. The general user population is becoming aware that newly introduced products are minimum viable products, works in progress. Users understand that the products they are using will lack some capability. They embrace this concept by readily offering suggestions and complaints as feedback.
Experience with minimum viable products has changed user expectations. Whereas before users had high expectations for success and little or no tolerance for failures, now users expect failures and seek them out in order to offer suggestions for improvement, with the understanding that improvements will appear regularly at short intervals. Users of a new product expect it to disappoint them in some way.
Suggestions for future minimum-viable-product improvements get placed into a feature backlog. Although a particular feature may be delayed in the backlog, some new features will appear in the next iteration. The users have something new to evaluate while they wait for their specific request.
Beneath the new user expectations for minimum viable products lies an expectation of, and tolerance for, failure. In fact, users approach the minimum viable product as a failure from the very beginning. It is their task to find the failure. The success of a minimum viable product is the production of useful data for future improvements. To succeed in meeting this goal, the minimum viable product must fail.
We design for failure, but failure for a good cause. We do this because we have dismissed the ability of humans to anticipate requirements for success. Human anticipation of requirements takes too much time and often turns out to be unsatisfying. Minimum viable products, in contrast, make no promise of being satisfying and in fact promise to be unsatisfying. We embrace this approach because we anticipate frequent releases of new capabilities that would be impossible with the waterfall approach. The waterfall approach produces a new release on a scale of years, while agile approaches produce incremental releases on a scale of weeks.
We feel agile approaches are superior because we can experience a tangible product early. Even though that early experience is far more disappointing than the more completely designed product of the waterfall approach, we expect to be disappointed and we convince ourselves we are making progress. Evidence of this is the rapid introduction of new products across a wide range of applications. Being overwhelmed by the innovation distracts us from the fact that every one of the individual innovations is inherently a failure. Failure is a sign of progress. We even celebrate failure as an opportunity for new progress.
Progress becomes dependent on failures of minimum viable products. We have lost sight of the earlier notion that progress can occur through analysis and design far removed from the prospect of operational failure, with the very goal of avoiding operational failures.
In the past week, we experienced two catastrophic failures of commercially developed space-launch systems. The first was the failure of the Antares launch to resupply the International Space Station. The second was the in-flight failure of Virgin’s SpaceShipTwo, which resulted in the death of one pilot and serious injury to the other.
Despite the unfairness of comparing space-launch systems with Internet user interfaces, I see a parallel to the mentality of minimum viable products and agile approaches. Both commercial space ventures were progressing rapidly to meet business needs.
The first mishap was an actual case of a viable product, in that the vehicle was contracted to deliver supplies to the space station. I consider it a minimum viable product because it was delivering a non-critical payload (supposedly, the loss of the resupply does not impact space station operations). It succeeded by failing, so that now we can collect data to figure out what needs to be improved.
The second mishap involved a flight test, so it was not serving any customers; a minimum viable product would be one that is actually serving a customer. However, its payload was critical in that it included two humans. It, too, succeeded by failing so that we can collect data to figure out what needs to be improved.
Both failures came at a very high price. Antares resulted in the loss of expensive material and an operational setback for the space station. Virgin’s loss involved humans.
After every space launch failure, we hear the same excuse: space is hard. A launch vehicle must contain a huge amount of fuel, and that fuel must burn very quickly in order to reach the speeds necessary for orbit. With such a critical design, even the smallest flaw can be catastrophic. For access to space, we are told, we need to accept the risk of occasional failure.
In these two cases, I’m not convinced the failures are acceptable. Both failures occurred during the initial moments of powered flight. The engines failed shortly after they started. The concept of burning rocket fuel to quickly accelerate a payload has been around for nearly a century, and the technology for launching payloads to orbit has been around for well over half a century. Given our wealth of experience with rockets, the term rocket science is not as intimidating as it used to be. Fifty years ago we were reliably launching payloads that managed to land on the moon and then come back.
These particular failures give the impression that rocket science is actually harder today than it was 50 years ago. Perhaps it is harder because we are going about it the wrong way.
Certainly the newer rockets employ newer designs for cost savings. However, I believe we have a solid understanding of what happens in the initial moments of firing either liquid- or solid-fueled rockets. In 2014, we do not need launch failures to teach us lessons about how fuel is supplied to the engine and how its burning is controlled to meet objectives.
The first stage of the Antares rocket burns for 235 seconds, or 235 million microseconds. The stage itself occupies a volume of 330 million cubic centimeters. Unlike our predecessors of a half-century ago, we are no longer using slide rules to work through the equations. Instead we brag of our big-data prowess: handling trillions of data points, with fast analytics over petabytes of data. We also have a half century of experience burning fuel to get a first stage to do its job. We have a precise understanding of the composition, density, and wind patterns of every layer of the atmosphere that the vehicle has to travel through.
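To get a feel for the claim that this is within reach of modern computing, here is a back-of-envelope sketch using only the two figures above (235 seconds of burn, 330 million cubic centimeters of volume). The cubic-centimeter-per-microsecond resolution and the assumed cluster throughput are illustrative round numbers, not a real CFD discretization.

```python
# Scale of a first-stage simulation at one update per cubic
# centimeter per microsecond -- figures from the text above.
burn_time_us = 235 * 1_000_000       # 235 s expressed in microseconds
stage_volume_cm3 = 330 * 1_000_000   # 330 million cubic centimeters

cell_updates = burn_time_us * stage_volume_cm3
print(f"{cell_updates:.3e} cell-microsecond updates")  # 7.755e+16

# At an assumed 10^12 cell updates per second on a large cluster,
# one full-resolution run of the entire burn would take:
seconds = cell_updates / 1e12
print(f"about {seconds / 3600:.0f} hours per run")
```

Roughly 8 x 10^16 updates is large but not absurd for the petascale systems the big-data era likes to brag about, which is the point: the full burn can be rehearsed in silico in a day rather than demonstrated on a launch pad.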
Modern computing technologies should be able to compute precisely what will happen at every one of those cubic centimeters during every microsecond of operation through every conceivable variation of a successful launch. These simulations are doubtlessly complex and require a lot of work, but we have the computing technology to complete the simulations in a reasonable amount of time without any risk to any real payload.
The simulations can even be specific to the exact components that go into the engine. As the engine is constructed, we can measure critical properties of every component and record every operation required to assemble it. This data can also feed the simulation. The simulation can run after the system is assembled to verify that it will burn correctly. Simulation technologies and development techniques are mature. Our understanding of every aspect of the launch is mature. Our ability to precisely measure and model every component of the vehicle is mature. We have the computing technologies to evaluate launch readiness through a huge number of iterations to identify potential failure scenarios. Computer simulations should be able to identify problems without requiring an operational trial run of a test flight or a minimum viable product.
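The iteration idea above can be sketched as a Monte Carlo sweep: sample the measured component properties within their manufacturing tolerances and flag the combinations that a model predicts would fail. Everything specific here is an assumption for illustration: the property names, the tolerances, the pressure limit, and the one-line `chamber_pressure` function standing in for a real physics simulation.

```python
import random

def chamber_pressure(fuel_flow, nozzle_area):
    """Toy stand-in for a real simulation of the burn (assumed model)."""
    return fuel_flow / nozzle_area

def sample_build(rng):
    # Nominal values with manufacturing tolerances (illustrative numbers).
    return {
        "fuel_flow": rng.gauss(100.0, 2.0),    # kg/s
        "nozzle_area": rng.gauss(0.50, 0.01),  # m^2
    }

def failure_scenarios(n_runs, pressure_limit=215.0, seed=0):
    """Run n_runs sampled builds; return those exceeding the limit."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_runs):
        build = sample_build(rng)
        p = chamber_pressure(build["fuel_flow"], build["nozzle_area"])
        if p > pressure_limit:
            failures.append((build, p))
    return failures

bad = failure_scenarios(100_000)
print(f"{len(bad)} of 100000 sampled builds exceed the pressure limit")
```

With a real simulation in place of the toy model, each flagged build is a potential failure identified on a computer, at no risk to any payload, which is the alternative to learning the same lesson from a fireball.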
We don’t need launch failures to observe failures. We don’t need minimum viable products on the launch pad to collect failure information. We should be using our computing technologies to predict what will happen with the precise configuration we intend to launch. Instead, we seem to be ignoring this old-fashioned approach despite the luxury of abundant computing resources. We focus instead on one-time physical tests that succeed only if they fail, because the failure provides information about what needs to be fixed. We neglect our massive computing resources because our current expectation for computing is the trivial one of analyzing the minuscule and arguably irrelevant data collected during these one-time tests.
The two recent launch failures illustrate the downside of over-reliance on the new design approach of the minimum viable product. The modern design approach distrusts our ability to understand the physics and work the mathematics to generate hard requirements. Instead we focus on short design sprints to get something minimally capable into the field, so we can employ our supposedly superior design approach of learning from actual failures. The new approach conveniently sets our expectation that failures will occur, because they must occur in order for us to get a better design. Rocket science will perpetually be rocket science in its original sense of being hard. Computing technologies become useful only for tracking data about the inevitable failures.