Predictive analytics mindlessly following protocols

The motivation for many big data projects is the use of analytic algorithms, such as machine learning, to quickly impose some action based on similarities between current events and earlier events that produced either favorable or undesired outcomes. These algorithms compare data about present events with data about historic events to produce recommendations that, acted on quickly enough, will produce beneficial outcomes. We anticipate that eventually we will have sufficient data to give us high confidence that these recommendations will succeed consistently. The payoff occurs when we automate the decision making based on the analytic results from big data.
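To make the idea concrete, here is a minimal sketch of this kind of similarity-based recommendation, assuming hypothetical feature vectors, actions, and outcomes that stand in for whatever a real project would record. It compares a current event to the most similar historical events and recommends the action that accompanied their favorable outcomes.

```python
# Minimal sketch of similarity-based recommendation: compare a current
# event's features to historical events with known outcomes, and suggest
# the action that accompanied the most similar favorable outcomes.
# All field names and data are hypothetical illustrations.
import math

historical_events = [
    {"features": [0.9, 0.1, 0.4], "action": "intervene", "outcome": "favorable"},
    {"features": [0.2, 0.8, 0.5], "action": "wait",      "outcome": "undesired"},
    {"features": [0.8, 0.2, 0.3], "action": "intervene", "outcome": "favorable"},
    {"features": [0.3, 0.7, 0.6], "action": "intervene", "outcome": "undesired"},
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(current_features, k=3):
    """Recommend the action most often tied to favorable outcomes
    among the k historical events most similar to the current one."""
    nearest = sorted(historical_events,
                     key=lambda e: distance(e["features"], current_features))[:k]
    favorable = [e["action"] for e in nearest if e["outcome"] == "favorable"]
    return max(set(favorable), key=favorable.count) if favorable else None

print(recommend([0.85, 0.15, 0.35]))   # prints "intervene" for this toy data
```

Any real system would use far richer features and models, but the underlying logic is the same: the recommendation rests entirely on what is recorded about the past.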

There is nothing new about the idea that, to respond to new events, we rely on prepared actions for conditions that appear similar to previous events. We have always employed methods of inquiry into disasters or missed opportunities in order to learn lessons. These lessons are often expressed as recommendations for better training and protocols for action so that we can respond to similar circumstances quickly enough to avoid past mistakes.

Historically, these were rules that required intensive human thinking and debate about the circumstances and about the recommended protocols to use when similar circumstances arise. The debate often includes skepticism that points out possible flaws in the reasoning. Such criticism may point out that other circumstances may look the same but not have the same consequences if left to proceed without interference. Or it may argue that the proposed protocols will cause other problems even when they solve the intended problem.

The counter-arguments were a key part of the process of deciding what protocols to demand of future practitioners. The counter-arguments often use counter-factual or hypothetical information: information that is not in the historical record. We accept this non-historic data as part of the debate because we recognize that the historical record is an incomplete record of only the one possibility that actually happened out of a much larger set of possibilities that could have occurred.

A recent news story concerned a person who died from a wasp sting. The fact is that the sting occurred and ultimately resulted in death. The implication is that a wasp sting can be deadly. However, death resulted because the sting happened to one particular person, someone who had a previously unknown allergic over-reaction to the sting of this particular type of wasp. The same wasp sting on anyone else would not have caused death, even though it may have caused varying degrees of discomfort. We conclude that although we need to be aware of the risk of death from a wasp sting, we don’t need to change our approach to dealing with wasps or their stings.

It is easy to dismiss this example because it is a one-time tragic outcome of a relatively common experience. We would not change protocols for responding to wasps or to their stings based on this one case. We understand that stings can be surprisingly deadly, but we also recognize that most of the time they are not.

In many other areas we do radically change protocols based on one-time events. Once there was a person who attempted to smuggle explosives onto a plane by hiding them in the soles of his shoes. After discovering that attempt, we now require scanning of everyone’s shoes. Certainly a shoe bomb is something that can be repeated and is worth preventing. But it is also an example of a single occurrence resulting in a drastic change in protocols, despite the huge variety of shoes and their capacity to hide explosives.

The decision to impose a shoe-inspection protocol involved the old process of human debate and decision making to balance the relative benefit of the inspection against the inconvenience to travelers or the slowing down of the inspection stations. The decision included at least some assessment that there is little downside to requiring people to walk a few steps without their shoes.

In many areas of protective services, whether for the public or for material assets, we readily revise protocols based on single occurrences. We ban and force a recall of a toy when one child, or a tiny fraction of children, chokes on some part that can disconnect from the toy. We impose new restrictions on access to certain parts of facilities that were previously open when there was one case of someone doing something that could or did cause damage.

Predictive analytic algorithms for quick or automated decision-making follow the same concept of identifying prescriptive actions that can improve the outcomes. When used with big data, we gain additional confidence from the sheer number of observations. Sometimes the data includes observations of what happens both with and without a certain condition. In these cases we can show positive results when the condition is present and negative consequences when it is absent. The volume of data and the breadth of possibilities give us more confidence to immediately impose decisions based on the data alone.
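The with-and-without comparison can be illustrated with another small sketch, again using hypothetical records and field names: tabulate how often the undesired outcome follows when the condition is present versus when it is absent.

```python
# Minimal sketch of the "with and without a condition" comparison described
# above: compare how often an undesired outcome follows when a condition is
# present versus absent. The records and field names are hypothetical.
records = [
    {"condition": True,  "undesired_outcome": True},
    {"condition": True,  "undesired_outcome": True},
    {"condition": True,  "undesired_outcome": False},
    {"condition": False, "undesired_outcome": False},
    {"condition": False, "undesired_outcome": False},
    {"condition": False, "undesired_outcome": True},
]

def undesired_rate(rows):
    """Fraction of rows that ended in the undesired outcome."""
    return sum(r["undesired_outcome"] for r in rows) / len(rows) if rows else 0.0

with_condition = [r for r in records if r["condition"]]
without_condition = [r for r in records if not r["condition"]]

print(f"undesired rate with condition:    {undesired_rate(with_condition):.2f}")
print(f"undesired rate without condition: {undesired_rate(without_condition):.2f}")
# A large gap between the two rates is the kind of evidence that tempts us
# to impose a decision automatically, without weighing counter-factuals.
```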

This kind of decision making implicitly excludes the counter-factual arguments. Sometimes we explicitly describe this type of decision making as evidence-based decision making: we will only consider actual evidence when making a decision. This explicitly rejects the validity or appropriateness of counter-factual arguments. It also reinforces the big data project, because big data is data about what actually happened. Even the data I criticized in earlier posts, where the data is derived from preconceived theories rather than direct observation, is still evidence because those theories are widely accepted as true.

Human history has a long tradition of respecting counter-arguments that look at what might have happened but did not happen. Often our decisions consider credible non-evidence in addition to the actual evidence. For example, in judicial proceedings we demand abundant evidence to overcome doubt. Doubt comes from non-evidence: credible alternative possibilities that failed to be recorded as evidence. For judicial proceedings, convictions are not evidence-based in the same sense as meant in evidence-based decision making. Instead, the evidence must overcome our doubts, and those doubts are based on non-evidence.

The danger of big data analytic approaches is that they make decisions based only on evidence. The algorithms can only consider what actually occurred. They cannot imagine alternative realities where circumstances might have been different. They cannot replicate human doubt.

In recent posts, I expressed my concern about the need for accountable decision makers. These are people we designate to make decisions that they can explain to us, defend against criticism, or modify to accommodate unanticipated needs. To be accountable for a decision, the decision-maker must consider multiple options. That consideration includes raising and addressing doubts, counterfactuals, or non-evidence. A powerful tool for decision making is simulation of what-if scenarios: things that did not happen but that we can imagine could have happened. The simulation can reinforce a decision (run actual data against a new protocol to show what the outcomes would be) or it can present doubt by showing other possibilities that might have happened.
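A rough sketch of that kind of what-if replay, assuming hypothetical events, a hypothetical protocol rule, and a crude stand-in for an outcome model, might look like this: replay the recorded events against the proposed protocol and count how many outcomes would have differed.

```python
# Minimal sketch of a what-if simulation: replay recorded events against a
# proposed protocol and count how the simulated outcomes differ from what
# actually happened. The events, the protocol rule, and the outcome model
# are all hypothetical assumptions for illustration.
recorded_events = [
    {"severity": 0.9, "actual_action": "wait",      "actual_outcome": "undesired"},
    {"severity": 0.2, "actual_action": "wait",      "actual_outcome": "favorable"},
    {"severity": 0.7, "actual_action": "intervene", "actual_outcome": "favorable"},
]

def proposed_protocol(event):
    """Hypothetical new rule: intervene whenever severity exceeds 0.5."""
    return "intervene" if event["severity"] > 0.5 else "wait"

def assumed_outcome(event, action):
    """Crude stand-in for a model of what would have happened under `action`."""
    if action == "intervene":
        return "favorable"
    return "favorable" if event["severity"] <= 0.5 else "undesired"

changed = 0
for event in recorded_events:
    simulated_action = proposed_protocol(event)
    simulated_outcome = assumed_outcome(event, simulated_action)
    if simulated_outcome != event["actual_outcome"]:
        changed += 1
        print(f"severity {event['severity']}: {event['actual_outcome']} -> {simulated_outcome}")

print(f"{changed} of {len(recorded_events)} outcomes change under the proposed protocol")
# The same replay can also surface doubt: the outcome model itself is a
# guess about possibilities that never made it into the historical record.
```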

Evidence-based decision making is decision making that considers only what happened in the past. In contrast, we hold human decision makers accountable for the consequences of their decisions in the events that occur after the decision is made.

I have been thinking a lot about the recent civil unrest in Ferguson, Missouri, in the context of accountable decision making. I do not know enough to come to any conclusion about the precise details. However, I have been thinking about some broader issues that appear to be consequences of decisions made long before the recent events.

For example, police are trained to respond in intense situations requiring immediate decision making. To assure rapid decision making, they follow their training in specific protocols for what to do when something specific happens. Police officers do not have the luxury of lengthy deliberation. They need to act decisively in a way that addresses the situation in a previously accepted manner. The trained protocols often are justified by prior events. In the past, there may have been some unambiguous event that turned out badly because police failed to respond aggressively enough. The investigation that followed that historical event recommended improved training and response protocols to prevent it from recurring in the future.

Based on the information I have seen, I can imagine prior scenarios that would justify the aggressive responses in the initial incident and the subsequent response to crowd control and rioting. Those long-past scenarios provide evidence that situations may be far more dangerous than they appear. At some point, for whatever reason, we convinced ourselves it was better to be prepared for the worst. According to those decisions, the situation called for following particular protocols, trained for quick and decisive action. Prior to the initial event in Ferguson, if the community had reviewed the procedures and the evidence behind those procedures, I would imagine that the community would have agreed about the appropriateness of the procedures and expected them to be carried out when the circumstances warranted them.

This evidence-based approach of preparing protocols for police response is analogous to the big data analytic-based decision making.   Based on actual evidence recorded in history, these recommendations make sense and should be acted upon when the circumstances recur.

The problem with the Ferguson episode was what happened after applying this previously approved protocol to a specific present-tense event.   Part of what is so confusing about the episode is that both sides appear to be right.   The response to the circumstances was at the same time appropriate and inappropriate.

The example of Ferguson illustrates our need for human accountability. Even though we might have earlier agreed to follow these protocols in hypothetical scenarios that appeared similar to what actually happened, we demand to be convinced those protocols were appropriate in this precise case. We demand human accountability even as we recognize that everyone was doing what we earlier agreed they should be trained to do. Someone has to explain this to us, or be in a position to modify the practices so this doesn’t happen again.

This demand for accountability appears to be happening.  The problem escalated in part because there was no one available to be accountable.   There was no one to explain the actions and to defend their appropriateness for the current circumstances.

I don’t see how that kind of accountability is possible with big data analytic decision making. Evidence-based decision making asserts that the evidence is both necessary and sufficient for a decision. We must accept the consequences of decisions that are purely based on evidence because evidence-based decisions are the only valid decisions. Accepting evidence-based decision making (which ultimately can be automated) means accepting that we have no grounds to demand accountability when we don’t like what happens after the decision is acted upon.

We demand accountability for recent events. We need to know that the decisions were appropriate for this particular circumstance, even though it is obvious this particular circumstance could not have been part of the evidence considered when the decision to establish specific protocols was made or approved.

We demand that decisions be justified for future evidence as well as past evidence. In order to be effectively accountable, decision makers must consider non-evidence (doubts, hypotheticals, what-if simulations) in their decision making. I think part of the problem of the escalating breakdown in public trust was the unpreparedness for accountability. The response seemed to take the form of asserting that everyone followed previously agreed-upon protocols. The subsequent escalation of events indicates that this appeal to past evidence to justify the protocols followed was not sufficient accountability for this particular case.

Now it appears that leaders at all levels of government are reviewing a wide range of issues about what kind of response is appropriate and what kind of resources should be available to different levels of law enforcement, crowd control, and riot response.    This reaction is a recognition that the decisions we previously accepted were wrong.

Consider the role of the decision makers who came up with the protocols that they trained police officers to follow. These decision makers need to be accountable. They should have stepped forward immediately to defend the protocols that were followed, especially in the later cases of the response to crowd control and rioting. They needed to speak persuasively to convince everyone that the protocols are justified based on the similarities of the current circumstances with historic circumstances that suggest things could get much worse. Not only were they unable to present a persuasive case, they seemed to be entirely silent when their voices were most needed. Lacking a clear explanation justifying the actions, people came up with their own contradictory and sometimes cynical conclusions.

The events spiraled out of control in part because there was no human accountability. Everyone from top to bottom was following rules that they could not explain, defend, or change. Those decision makers who are paying attention should recognize their ineffectiveness in accountability. As long as they are exposed to accountability, they should demand to be convinced that the evidence supporting a particular protocol will overcome doubts and counter-arguments. They will weigh evidence against these doubts. Their decision making will not be purely evidence-based.

Evidence-based decision making does allow for evolving protocols as new evidence becomes available. A present incident obviously could not be evidence for the earlier decision to adopt rapid-response protocols, but it will become evidence for changes to those protocols in the future. Accepting a purely evidence-based decision methodology means unquestioningly accepting all of the immediate consequences of those decisions. Socially, this requires us to think differently. We need to look at current responses as being inevitably the consequence of historic evidence. We should not rebel at the application of such evidence-based decisions in current events. Instead, we need to find consolation in the fact that the current events become evidence that may modify future decisions in a beneficial direction.

I don’t see this happening. The nature of human society is to demand human accountability to justify the consequences of decisions based on the immediate circumstances. We need to be persuaded that the decisions were appropriate for this specific case, or that someone will be accountable for why they were inappropriate. That demand for accountability includes the expectation that the decision maker considered the doubts in addition to the evidence.

The demand for human accountability for current consequences is incompatible with the ideals of evidence-based decision making. This is especially true of decision making driven by big data analytic algorithms, based purely on observed historical data. I don’t see us ever dismissing bad outcomes as an inevitable consequence of spurious similarities between the current events and some historic event. The hope that perhaps future decisions built on this new evidence will turn out more beneficial will not soothe our complaints about the immediate consequences.

Unfortunately, that attitude of dismissal and hope is necessary to accept automated evidence-based decision making based on big data predictive analytic algorithms. Adopting a purely evidence-based decision-making approach risks large-scale social disintegration because no one can be accountable for the specific consequences of decisions based only on evidence from history.
