In earlier posts (such as here), I discussed the need to automate decision-making based on recommendations from appropriately selected predictive algorithms working on data. If there remains a human decision maker, we must take away that decision maker’s autonomy to consider private fears or doubts and instead obligate him to follow the recommendations of algorithms working from trusted data.
Decisions typically have some kind of deadline. A deadline may be imposed externally, when an outside event demands a response. It may also be imposed internally, when the prediction is based on a specific starting time and a different time would involve different initial conditions, requiring a fresh analysis to prescribe recommended actions.
When the deadline arrives, we need to act on the best evidence available. In recent years and controversies we have described this collection of best evidence with the phrase “settled science.” Settled science is the best explanation at the time the decision is needed.
I want to tie decision making to my recent posts (such as here) that discussed the notion of a minimum viable product as an integral part of agile development practices. In those discussions, I emphasized the word minimum: we constrain the minimum viable product to what is practical to accomplish with present resources in a very short period of time. In agile practice the constraints are the current capabilities of the team and the need to finish the product in a single 2-4 week sprint.
I think the minimum viable product concept can also apply to decision making itself. The decision must consider the actual resources available at the time. I presume the analytic algorithms would treat these resource constraints as part of the recommendations. The assessment of available and accessible resources should be part of the evidence that supports a decision. The decision should commit only to something that has sufficient resources to accomplish its objective.
For example, in an earlier post, I described a long, drawn-out approach to send humans to Mars over a time-frame of centuries. The concept used an agile approach of first developing robotic miners and metal foundries to begin accumulating material in space from nearby asteroids. The actual event of landing on Mars would occur when the resources were available. Our current plans insist on putting humans on Mars in the next decade or so, primarily for entertainment purposes. My concept cannot work in this case because the resources cannot be available in time. Instead we must launch everything directly from Earth.
I bring up this Mars example to describe how the core decision (getting to Mars) must consider the resources available (factories only exist on Earth). The evidence in evidence-based decision making must include the available resources for what can be done right now.
For this post, I’m equating the concept of settled science invoked in many popular debates to the concept of evidence-based decision making. The evidence is settled science. I am also suggesting that settled science and evidence-based decision making recommend a minimum viable product to put into production in a reasonably short period of time. The agile interpretation of a policy may involve the following:
- A recommendation should be minimized to accommodate known and accessible resources.
- The settled science supports the viability.
- The product is a policy of some form.
I think it is instructive to conceive of policy-making as an agile process involving multiple iterations of minimum viable products. Compared to current policy thinking that attempts to make a single policy decision to be followed forever (such as most of the USA policies that are now dragging the country deep into debt), an agile approach recognizes that a policy is inherently short-term in nature. The agile concept of a policy as a minimum viable product introduces the notion that the policy is expected to fail. The success of a minimum viable product is the wealth of data we learn from its operational failure.
An agile approach to policy making will set up the expectation that every policy will fail. In my opinion, most of our policies have failed, but our expectations for success prevent us from acknowledging these failures. The Affordable Care Act, Medicare, Medicaid, Social Security, Unemployment Insurance, and Social Security Disability Insurance are all great examples of minimum viable products of their time that have since failed (if we allow ourselves to be honest about the accounting). I presume each of these was very good policy based on the best information at the time. It was legitimate evidence-based decision-making to put these policies into practice. But once they were put into practice, we learned some things we didn’t know earlier. The failure of the policies gave us new information for better policies.
What turns policies into minimum viable products is the expectation that their implementation will produce new data. In particular, the implementation produces data that surprises us. That surprise is a failure of the original settled science. I think it is inevitable that settled-science-based policies will fail. The new information will allow us to come up with new settled science. We need the opportunity to make new policies to match the new settled science. We need to retire the old policy to make room for the new one.
If we approached policy-making as an agile process involving sprints producing minimum viable products, we would start with the expectation that the policy has a limited life-span. The policy is the minimum viable product of a sprint, and it only needs to last long enough to get to the next sprint. In short, we would immediately expect the policy to ultimately fail and need to be replaced. We would not expect perfection or perfectibility of policies introduced many generations ago (by people who are no longer living). Instead, we would expect a new release of policies, just as mass consumers wait anxiously for the next release of the iPhone to replace the old one that was not much different except in a few key areas.
I also think it is useful to think of settled science as a motivation for a minimum viable product. The science is settled in part because we ran out of surprising data. We need to find a policy that is feasible with current resources to put the settled science into practice in order to give it an opportunity to generate new data from its inevitable failure. The experience of implementing the settled science will unsettle the science and start a new sprint to come up with something better.
If we enter the policy-making project with an agile mindset, we will already be expecting this failure. We will more realistically interpret the concept of settled science as the justification to give a new policy its chance to fail in order to collect new data for future policies. An agile mindset anticipates frequent new releases of policies. It also alerts the general population to look for failures instead of encouraging them to rationalize successes. The agile goal is to find the failures so we can obtain better information to produce a better policy for the next scheduled release.
An agile approach to decision making may still obligate the decision maker to follow the settled science. I described this in my last post, where I explored the rationale of disapproving quarantines for Ebola-exposed individuals because the settled science does not indicate a risk. Although many people back the need for quarantines for returning Ebola health workers, the science does not support a reason to be concerned. As a result, we should base the policy on science instead of popular opinion.
Here is where the perspective of the policy making matters. The current and historic perspective is that a policy, once made, is forever a good policy. The science guarantees success. When confronted with evidence of failure, we will attempt to preserve the original policy with minor adjustments without undoing the policy itself. This perspective will not permit the overturn of the no-quarantine policy but instead recommend modifications in the behaviors of the population to allow the no-quarantine policy to remain on the books.
The alternative agile perspective is that the policy making is temporary and will likely fail. From this perspective, we will look for evidence of failure and collect that evidence in time to influence the next release of the policy. The new data will convince us of the wisdom of the revised policy even if it completely contradicts the prior policy. Agile thinking prepares us to embrace the notion of quarantines for exposed individuals despite a prior policy that disapproved that notion.
In either perspective, a failure in the policy’s settled science will result in a period of risk for the public. The agile perspective will limit this period because we expect the policy will be replaced as soon as new data is available. Agile techniques involve short sprints of new releases. Every policy should be short-lived.
Combining these concepts suggests an agile governing strategy.
- The governing strategy identifies minimum viable products based on settled science and the available known-accessible resources at the time.
- The agile concept of a minimum viable product requires immediate implementation of the policy. We can not implement a policy that requires future negotiation to free up resources. The agile-based policy must be consistent with immediately committed resources.
- The agile policy making produces new policies with a frequent release schedule. This frequency of policy updates allows us to have replacement policies that best reflect the immediately current settled science.
- The frequent release cycle sets up the expectation that the current policy will fail. We will be alert to any observations of failures so that we can make those observations available for consideration in a future policy release.
- The immediate implementation of settled science permits us to obtain new data that may unsettle the science and this will advance our understanding for better policies in the future.
The previously discussed example of the disapproval of quarantines of Ebola-exposed health workers is an example of a policy that settled science obligates us to follow. Unfortunately, we do not conceive of this policy in agile terms, so the policy is expected to be perpetually valid until extraordinary evidence proves it wrong. An agile approach would expect the policy to be inherently short-term. The expected high frequency of new releases allows us to adapt to any evidence of failure.
All of our policies should be consistent with settled science and available resources. In an earlier post, I criticized recent models that predict a huge increase in Ebola cases in western Africa. I want to revisit that study here as an example of settled science. A few months ago, I heard many authoritative reports presenting scientific cases for an exponential growth of Ebola cases. A recent study claimed that by December 2014 we would see 171,000 cases and 90,000 deaths in a single county of one country. Given that December is only a few weeks away and the latest number of cases worldwide is still around 10,000, I don’t think this is going to happen. However, at the time of the analysis, this estimate reflected sound scientific analysis.
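The power of such exponential projections is easy to see with a short sketch. The starting count and doubling time below are hypothetical assumptions for illustration, not figures taken from the study in question:

```python
# Sketch of how exponential extrapolation produces alarming case projections.
# The starting count and doubling time are hypothetical assumptions for
# illustration only; they are not taken from the actual Ebola study.

def project_cases(initial_cases: float, doubling_days: float, horizon_days: float) -> float:
    """Project case counts under unchecked exponential growth."""
    return initial_cases * 2 ** (horizon_days / doubling_days)

# From 2,000 cases with an assumed 3-week doubling time, a 90-day horizon
# already yields tens of thousands of projected cases.
projected = project_cases(initial_cases=2000, doubling_days=21, horizon_days=90)
print(f"{projected:,.0f}")  # about 39,000 under these assumed parameters
```

The same formula with a slightly shorter doubling time or a longer horizon quickly reaches six-figure projections, which is why such models demand a policy response that is feasible with the resources available immediately.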
Modeling data is often a major part of the settled science considered in policies. In this case the settled science as of late October is that there will be an extremely large number of Ebola cases in a very small area. Also, by the time of the study, it should have been obvious that there is no feasible way to move sufficient healthcare resources into the right places to handle this load. This requires an immediate decision. At least 100,000 people currently healthy and disease-free will be sick by the end of the year, and many will die. There will be no significant increase in health workers or medical supplies to prevent this.
In this particular example, the recommendation was for immediate massive foreign aid for medical supplies. This recommendation was not a policy because the resources were not immediately available. The required resources would require further complex negotiations to identify precisely what is needed and how to get it there. It should have been settled science that this could not happen before the December catastrophe.
We needed a policy in place to protect the 100,000+ people who were not yet infected but who would be infected before the end of the year. That policy would have to consider available resources. No such policy was proposed.
This is where the obligated decision making comes into play. The reason no policy was proposed is that we still have human decision makers who weigh fears, doubts, and ethics that do not count as evidence. This is contrary to the evidence-based decision making that obligates us to enact policies based on the settled science at the time. The settled science (or at least the best science) at the time was that 100,000 people would get the disease unless something was done immediately with the resources currently available.
An obligation to follow the science would automate a decision with no room for doubts or fears. Evidence-based decision making obligates us to follow the best science at the time. The best science concludes that insufficient medical resources (at a minimum to provide care for effectively isolated patients) would be available to prevent this catastrophic increase in patients.
One possible policy is to do nothing. We have exhausted our options and now have no choice. This option also reflects the historic approach to decision-making that strives to be permanently valid. The permanently valid options are no longer available to us so we can only do nothing.
There are other possible non-medical policies. This disease is transmitted primarily to caregivers of patients who are suffering the later stages of the disease. This suggests a policy to forbid or to prevent this dangerous phase of care. In an earlier post, I observed that the last heroic attempts at lifesaving (with dialysis and resuscitation techniques) required considerable exertions by caregivers at the time when the patient was most infectious. These extraordinary efforts at that particular time may have greatly increased their risk of becoming infected. I suggested it might be better in these cases to have an implicit do-not-resuscitate order for Ebola: at some point the extra effort is riskier than the potential benefit. In addition, euthanasia may be a good policy once we recognize the disease is getting out of control and will likely end in death anyway. An early death will prevent the virus from multiplying to even more infectious levels, and it allows for a quick disposal of the body to minimize potential infections.
The scientific evidence as of mid-October was that there would soon be more than 100,000 new cases of the disease without any additional medical resources. An evidence-based decision could recommend immediate euthanasia or murder of any patient with advanced stages of the illness but before the illness becomes most infectious. Given the impossibility of any medical relief, this seems to be the only actionable decision available to us to prevent a multitude of new cases. An obligated or automated decision making would have to follow this recommendation.
As far as I know, no one has proposed this policy and I am pretty certain no one is following it. But I take this as evidence that we do not yet live in a world of authoritarianism by data. If we did, then this euthanasia policy would likely be activated, with the good result of saving so many people from contracting the disease and likely dying as a result. We instead have human decision makers who reject the ethics of murdering those who by unfortunate chance contracted the disease early. Human decision-makers (in public health policy) admirably reasoned that it is more ethical to accept a multitude of natural-cause sufferings and deaths than to impose premature death on the few who already have the disease.
The automated decision making of mandatory euthanasia of symptomatic Ebola patients may be more ethical. Not only does it prevent a huge number of new cases, but it also avoids a collapse of social order where all commerce would stop during a widespread epidemic.
However, December 15 is only a month away and the latest data suggests there are still just a few thousand cases of the illness spread across multiple areas. One possibility is that the problem is so bad we are not getting reliable data of a far worse situation, but another possibility is that we are indeed coping with the cases. My bias is that the local populations are becoming educated in good practices for dealing with this disease. The catastrophic predictions increasingly appear unlikely. If we had automated the decision-making on the settled science of just a few weeks ago, we may have ended up with more deaths than we are going to experience using the human decision to struggle on with usual health care and the available resources.
This example illustrates my concern about our increasing trust in big data or science to supply adequate data to automate evidence-based decision-making. Automated decisions will run contrary to human decision makers who consider doubts, fears, and ethics that are not permissible as evidence. The automated decisions may be wrong, but the analogy to the agile minimum viable product sets our expectation that they will fail. When they do fail (in this case by unnecessarily ending the lives of Ebola patients prematurely), there is no accountability. Instead we must accept the failure as a necessary cost of obtaining better data for better future automated evidence-based decisions.
In this example, human based decision making, the type of decision making we are currently using, appears to be working fine.