A definition of what separates data from big data is the triple challenge of large volume, high velocity, and wide variety. The technologies of big data systems (data collection, storage, retrieval, analytics, and visualization) present new opportunities for making positive decisions that predict where to find benefits, or that prescribe actions that can beneficially influence the course of events. Much of the recent promotion of big data focuses on early success stories of realizing real benefits from specific isolated actions. This promotion encourages much wider adoption of the technologies and their broader application to solve whole classes of problems rather than isolated ones.
The trend is to build businesses around a core data capability that supports the majority of decisions for running that business. This presents a new kind of challenge that could be described as the decision-making counterpart to big data. For this post I’ll call this concept big decisions, reusing the definition of big from the term big data. Big decisions are decisions facing options that arrive in large volumes, at high velocities, and with wide variety.
We are tackling the technical challenges of big data to rapidly retrieve, analyze, and visualize data for all aspects of operating a business or mission. The new problem is that we now have too many recommendations to pursue, and they arrive too quickly. Because the big data project has comprehensive goals, the recommendations will necessarily span a wide variety of different parts of the business. In addition, because the decision maker must allocate limited resources by selecting which opportunities to pursue and then sequencing them in the right order, he encounters additional variety in terms of which recommendations to pursue and in what sequence.
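To make this triage concrete, here is a minimal sketch in Python. The Recommendation record, the benefit and cost figures, and the greedy benefit-per-cost ranking are all my own invented illustration, not a prescribed method; they only show the shape of the decision the human still has to make:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical, already-vetted recommendation from a big data pipeline."""
    name: str
    benefit: float  # estimated value if implemented
    cost: float     # resources it would consume

def select_and_sequence(recs, budget):
    """Greedy triage: take recommendations in order of benefit-per-cost until
    the resource budget runs out. This yields one defensible selection and
    sequence among many -- the choice the human must still justify."""
    ranked = sorted(recs, key=lambda r: r.benefit / r.cost, reverse=True)
    plan, spent = [], 0.0
    for rec in ranked:
        if spent + rec.cost <= budget:
            plan.append(rec)
            spent += rec.cost
    return plan

# Illustrative values only: each option is individually "solid".
incoming = [
    Recommendation("replace news-feed algorithm", benefit=9.0, cost=4.0),
    Recommendation("retire legacy data store", benefit=7.0, cost=5.0),
    Recommendation("expand fraud analytics", benefit=6.0, cost=2.0),
]
for step, rec in enumerate(select_and_sequence(incoming, budget=6.0), start=1):
    print(step, rec.name)
```

Note that the greedy rule silently discards one opportunity ("retire legacy data store"); a different but equally defensible rule would keep it and drop another. That is exactly the variety the decision maker must answer for.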
In recent posts (here, here, and here), I described a problem of the decision maker becoming ineffectual because he can no longer understand the rationale for recommendations from analytic algorithms and because he is powerless to change the algorithms. Even if the decision maker were brilliant enough to have that kind of deep understanding and flexibility, he may become ineffectual because he is overwhelmed with options.
One way to summarize a theme from the recent posts is to point to the effective decision maker or leader as a uniquely valuable resource. This is especially true in governments that depend on the voluntary participation and support of the governed. People will support effective leaders. The earlier posts focused on the need for effective leaders to explain their decisions well enough to persuade the super-majority, or to revise the decision to accommodate the needs of a vocal minority.
Big data analytic algorithms and visualizations may meet the need to explain the decision or modify the approach. However, they are likely to overwhelm the decision maker with too many options to pursue. I emphasize here that these options are individually solid recommendations, already proven to be the best of various alternatives. The problem now is that there are too many opportunities with solid recommendations and too many ways to sequence them within the constraints of resources.
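The sequencing problem alone grows explosively: n vetted recommendations admit n! possible orderings, so even a modest backlog outruns human deliberation. A quick calculation makes the point:

```python
import math

# Each recommendation is already justified on its own, yet the number of
# possible implementation orders grows factorially with the backlog size.
for n in (5, 10, 15):
    print(f"{n} recommendations -> {math.factorial(n):,} possible sequences")
# 5 recommendations -> 120 possible sequences
# 10 recommendations -> 3,628,800 possible sequences
# 15 recommendations -> 1,307,674,368,000 possible sequences
```

Resource and dependency constraints prune many of these orderings, but far more survive than any leader can explicitly compare and defend.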
In my last post, I described a way that a news-feed personalizing algorithm of a social media site could be exploited by third parties to get an unintended and potentially unfair private benefit. I assume there was appropriate diligence in testing the algorithm’s behavior and justifying its implementation into the service. But suppose this unintended third-party exploit causes so many complaints that the algorithm needs to be replaced. There may be a superior algorithm to use, but it needs to be implemented and phased into the production service. This is probably one of numerous recommended changes (each recommendation having already eliminated its competing options) that need to be implemented. I assume the decision maker is comfortable with his understanding of each of these recommendations and his ability to control them. Now he must decide which ones to implement and when to introduce them. This selection and sequencing decision itself needs justification, in terms of being able to persuade or to accommodate, in order to preserve community harmony either within the organization or among its customers.
Now consider the problem of the big decision, where the decision maker confronts lots of competing and equally justified recommendations. Assuming that big data succeeds in delivering high quality recommendations that span the scope of the mission, the decision maker must now select which recommendations to implement and in what order. The required active cooperation of the affected population depends on their being persuaded of the justification of this selection and sequencing decision. Without effective decision-making leadership, none of the recommendations may be implemented, or worse, the implementation halts after only the first recommendation when the full benefit for the community requires all recommendations to be implemented.
I return to the contrasting rescue events I described in an earlier post (here). Both cases involved passengers needing to be rescued from dangerous but currently stable situations. The first described a lengthy wait for emergency response followed by a very specific sequence of recommended best practices leading to the final recommendation of rescue. The second described a rescue that started well, with the assurance that the train would not move, but then devolved into an improvised mob action that just happened to have a happy ending. In that post I speculated that the spontaneous mob action arose from a sense of emergency due to the lack of any perceived alternative, such as waiting for an emergency response team.
In both of these scenarios the key implemented recommendation is the last step that rescues the passengers. To get to that step, the decision maker needs cooperation to allow the preceding steps to occur in the order he decided is best for this scenario. First, allow the emergency response team to arrive and assess the situation. Second, allow a deliberate sequence of actions that assures the safety of each operation. The final action is to complete the rescue. To execute the plan, the decision maker needs everyone’s cooperation. Cooperation is needed even within an intermediate step, such as accepting the order in which the individual cars are rescued.
In the stuck roller-coaster incident, it is pretty easy to understand the reasoning for each step and its place in the sequence, even though the rescue required several hours. It was also easy to understand the consequences of not properly performing each step.
In contrast, large scale big data solutions will present many individual recommendations that are individually difficult to comprehend. Also, the need for executing recommendations in a particular order may be based purely on resource limitations, so that multiple sequences are equally valid. The decision maker or leader needs everyone’s cooperation to follow through on one sequence. An uncooperative population may start implementing a later recommendation before resources are freed up, causing the entire project to stall.
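A toy model in Python (the step names, resource costs, and capacity are all invented for illustration) shows how jumping ahead stalls the project even when every individual step is sound:

```python
def try_plan(events, capacity):
    """Replay start/finish events against a fixed resource pool. Returns a
    description of where the project stalls, or None if it completes."""
    free = capacity
    held = {}
    for action, step, cost in events:
        if action == "start":
            if cost > free:
                return f"stalled: {step} needs {cost}, only {free} free"
            free -= cost
            held[step] = cost
        else:  # "finish" releases the step's resources
            free += held.pop(step)
    return None

CAPACITY = 10

# The agreed sequence: each step waits for the previous one to free resources.
cooperative = [
    ("start", "step1", 7), ("finish", "step1", 7),
    ("start", "step2", 6), ("finish", "step2", 6),
    ("start", "step3", 5), ("finish", "step3", 5),
]

# An impatient population begins step3 before step1 has released anything.
impatient = [
    ("start", "step1", 7),
    ("start", "step3", 5),  # jumps ahead of the plan
    ("finish", "step1", 7),
]

print(try_plan(cooperative, CAPACITY))  # None: the sequence completes
print(try_plan(impatient, CAPACITY))    # stalled: step3 needs 5, only 3 free
```

Note that swapping step2 and step3 in the cooperative plan would also complete, which is the point: several sequences are equally valid, so only the leader’s persuasion holds people to the one that was chosen.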
A recent good example of a big decision is the 2010 Affordable Care Act to reform health insurance practices in the USA. In order to accomplish the goals of universal access to health care, the legislation required implementation of multiple individual recommendations, each of which would provide some benefit in addition to allowing other recommendations to work. The legislation identified the major areas to implement and a basic schedule. I realize that many of the details for specific recommendations and sequences were delegated to the executive branch, but the basic plan identified many key recommendations and a basic sequence of milestones. For this discussion, I assume the plan represents well documented and supported recommendations.
I think the Affordable Care Act is an example of a big decision: a decision that involved a large volume of recommendations, a high velocity of decisions by the executive branch, and a variety of options for sequencing the implementation. At the current time, four years after the law was signed, the progress of implementing it is disappointing compared with the original goals. One reason for this disappointment is that some of the recommendations turned out to be more difficult to implement than expected; this may be blamed on a failure to come up with a well-justified recommendation in the first place. Another reason is the lack of cooperation of those who feel the recommendations were not well justified in the first place. Currently the implementation of the reforms is still making progress, but it is unclear how long it will take or how much it will cost.
The Affordable Care Act was a big decision, and its implementation is struggling. A major reason for the struggle is that it is overwhelming the decision-making groups with too much volume, velocity, and variety. The Affordable Care Act came out of older legislative practices, with committees preparing the individual recommendations.
In contrast, big-data-driven big decisions will confront comparably complex collections of recommendations at a far faster rate. As mentioned above, one of the problems slowing down the Affordable Care Act was inadequately researched recommendations. The big data solution may offer automated analytics that more fully justify a recommendation so that it is less likely to fail. However, the other problem of maintaining the full cooperation of the population will remain. That cooperation depends on the population’s being persuaded that the recommendations are executed in the right sequence and at the right speed.
The challenge of getting all of the affected parties to cooperate and contribute appropriately to the different recommendations of the Affordable Care Act has enjoyed only a modest amount of success. Some of that shortfall may be attributed to overwhelming the decision makers by attempting to accomplish too many things too quickly.
Big-data-driven big decisions will require leadership able to manage decision making of comparable scope at a much faster rate. Ultimately the project requires cooperation that comes from trust and faith in the decision maker’s skill in selecting the best sequence for the identified recommendations. Implementing these recommendations at the speeds implied by big data solutions can present a crisis of leadership. We may not be capable of granting that level of trust and faith for decisions occurring so quickly.
Without trust in the decision maker, the remaining option is to eliminate the accountable human from the loop and let the machine-generated recommendations stand on their own. For this option, we ask the population to trust in the benevolence of the machine algorithms working with extensive data, even when they cause short term disadvantages that can’t be explained. Given the high volume and velocity of recommendations that will come from big-data-driven systems, this option may be inevitable. The only way to achieve the goals may be to implement recommendations too quickly for any human interaction at all. This is analogous to current program trading in commodities, where trades occur within milliseconds in order to find a profit. Big data solutions applied to businesses or governments will result in a similar automation of recommendations with no human in the process.
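As a rough sketch of that end state (the names, threshold, and values here are my own invention, not an actual trading or big data API), an automated executor simply acts on each recommendation within a millisecond-scale budget, with no point at which a human could explain or negotiate:

```python
import time

def implement(rec):
    """Stand-in for the real-world side effect of acting on a recommendation."""
    print(f"executed {rec['id']} (confidence {rec['confidence']:.2f})")

def auto_execute(stream, confidence_threshold=0.9):
    """Act on each machine recommendation immediately, program-trading style.
    There is deliberately no human step: nothing explains the decision and
    nothing can negotiate an exception."""
    for rec in stream:
        started = time.monotonic()
        if rec["confidence"] >= confidence_threshold:
            implement(rec)
        # Turnaround is measured in milliseconds: far too fast for persuasion
        # or appeal by anyone affected.
        print(f"{rec['id']} handled in {(time.monotonic() - started) * 1e3:.3f} ms")

auto_execute([
    {"id": "rec-001", "confidence": 0.97},
    {"id": "rec-002", "confidence": 0.42},  # silently skipped, never explained
])
```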
Without a human held accountable to justify and negotiate the recommendations, we will have a more unstable society, where some people are frustrated by their inability to have their grievances addressed and the remaining population is ambivalent because it lacks sufficient understanding to defend the recommendations.
My observation of the recent civil unrest in Ferguson, Missouri is that it exemplifies how much we depend on civil cooperation and support to implement recommendations. Looking beyond the precise details of the triggering event and the subsequent actions, I see a pattern of automatic policy-driven actions, especially by the police force. To respond quickly to a sudden crisis with unknown risks, they needed to pull out a standard recommendation for preparing for such encounters. Such recommendations could easily be justified by historical data from cases where police suffered losses by arriving at a volatile situation without being fully protected and armed. I can even imagine that the population had earlier demanded that police be well equipped when responding to similar situations.
My point here is that it appears they made decisions automatically from some predefined action plan. This seems directly analogous to the big decision problem that will occur with big data: decisions will have to be automated to use some recommendation based on historical data. The aftermath of the police response is an illustration of what can go wrong when decisions are automated. The people were not persuaded of the appropriateness of the response. There was no effective leadership to persuade them that this was a standard procedure that had been democratically approved earlier. There was no effective leadership to negotiate with in order to change the procedure when the circumstances this time made it inappropriate.
To me this incident illustrates the fragility of the automated decision making that will become necessary in order to realize the supposed benefits of big data. The decisions will have to be implemented automatically and may be fully justified by historical data. But there is no effective human decision maker available to persuade the current population of the wisdom of the recommendation, and no effective decision maker to respond to the population’s well-reasoned demands to accommodate current conditions that differ from historical ones. The inevitable result is civil unrest and resistance.
Even when big data projects succeed in generating rapid recommendations based on extensive data collections and advanced algorithms, we have inadequate human-accountable decision-making capability to assure the cooperation needed to implement these recommendations. Automated decision making can lead to disaster, very quickly.