In recent posts, I discussed the decision maker's job of balancing doubts (for which there is no evidence) against the available evidence. I contrasted realistic decision making with evidence-based decision making, the latter meaning making decisions strictly based on the evidence. For realistic decision making, it is essential for the decision maker to be able to explain the consequences of his decision, especially when some group perceives harm in those consequences. I described this ability to justify a decision as accountability. We need accountable leadership to demonstrate that the decision-making process selected the best possible option at the time, including consideration of possibilities similar to what actually occurred. This accountability can assuage the aggrieved, and it can convince the bystander population to lend active support to the decision. The ability to demonstrate an awareness that such outcomes were possible, and to show how those doubts were answered by the available evidence, goes a long way toward maintaining peace and cooperation within the community.
Sometimes the concept of an evidence-based decision is meant to obligate us to make the decision the evidence suggests. In earlier posts, I alluded to this with the concept of automating decision making. If the evidence is sufficient to produce a unique best recommendation, there is no need for a human decision maker: only one option represents the best choice given the available evidence.
Although we can choose to automate decision making based purely on trusted analytic algorithms working on trusted evidence (data), there is still a desire to retain the choice of not automating that decision. The compromise approach presents the results of the algorithms and data to the decision maker but reserves for him the final decision, which may contradict the evidence-based recommendation. As I mentioned above, he may choose this because the recommendation and evidence did not satisfy his doubts about what might happen. I argue that this option to reject an evidence-based recommendation due to doubts can provide a future benefit: stronger accountability in case the decision causes some unanticipated harm. Even if the decision maker chooses to follow the recommendation, the demonstration that he had the option to reject it supports the conclusion that he was satisfied the recommendation overcame his doubts.
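The compromise approach can be reduced to a small sketch. This is a hypothetical illustration only (the names `Decision`, `decide`, and the sample options are my own invention, not from any real system): the algorithm's recommendation is presented, the human decision maker keeps the final say, and either way the outcome and his rationale are recorded for later accountability.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    chosen_option: str
    followed_recommendation: bool
    rationale: str  # recorded for later accountability

def decide(recommendation: str,
           human_override: Optional[str] = None,
           rationale: str = "") -> Decision:
    """The compromise approach: present the algorithm's recommendation,
    but reserve the final decision for the human decision maker.
    An override is recorded together with the rationale (the doubts
    the evidence failed to satisfy)."""
    if human_override is not None and human_override != recommendation:
        return Decision(human_override, False, rationale)
    # No override: the decision maker was satisfied that the
    # recommendation overcame his doubts - also worth recording.
    return Decision(recommendation, True,
                    rationale or "doubts satisfied by evidence")

# Accepting the recommendation
d1 = decide("expand to region B")
# Rejecting it on grounds the data did not address
d2 = decide("expand to region B",
            human_override="delay expansion",
            rationale="supplier risk not covered by the data")
```

The point of the recorded rationale is exactly the accountability argument above: when an unanticipated harm occurs later, the record shows which doubts were weighed against the evidence at the time.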
As an aside, I will mention that accountability is about obtaining a satisfying explanation for a decision that had unfavorable consequences. That explanation may satisfy us that the decision maker did address the relevant doubts and made a judgement that the recommendation was a worthy one. With that explanation, we may still decide that we don't agree with the judgement of the decision maker. This turns the debate away from the actual decision and toward the quality of judgement of the decision maker.
A recurring theme in recent posts may be expressed as an acknowledgement that our society will always involve some form of human debate. Especially when consequences cause harm or are perceived to cause harm to a subgroup, that subgroup will demand a satisfying explanation for why they must accept the result or when they can expect some relief. If left unsatisfied, this demand can escalate to protests, riots, or worse.
I’ve also noted that many of the successes of big data technologies involve examples that occurred without people’s knowledge. This ignorance meant that people with grievances did not have the opportunity to connect their grievance with the result.
The early years of big data solutions using high velocity analytic algorithms enjoyed a unique opportunity of operating outside the attention of the bulk of the population, who in many cases were the subjects of the big data projects. That era is ending as the concepts and capabilities of big data become popularized. In an earlier post, I suggested that this growing public awareness will result in a demand by the population (customers, citizens, or subjects of study) to include even more data and for that data to be more current: in other words, the customer will demand higher volumes and velocity of data. In more recent posts, I emphasized the risk of the population connecting their grievances to big data projects and demanding accountability for decisions that impact their lives in negative ways.
The motivation for this post comes from another blog that made a very simple observation that the investment in big data should be accompanied by a commitment to abide by its recommendations. The post is short and it expresses a gentler requirement for the decision maker to take into account the results of the analytic algorithms based on big data. The message is that people should pay attention to the results that come out of their investment in big data.
It is interesting that such a message needed to be expressed at all. My vision of the diligent decision maker is someone eager to use all objective data available to satisfy his doubts and arrive at the best judgement. I may be naive, but I believe most responsible decision makers do pay attention to the data results. I know many of them endure periodic (sometimes daily) reviews of data summarized in dashboards of key performance indicators. When a big decision is needed, they participate in the often lengthy review meetings that are heavily defended with data. The decision maker is surrounded by products of data, if for no other reason than that visualizations of real data produce distinctive graphics that capture everyone’s attention.
I suspect there are very few decision makers who manage to escape the data products or who ignore them when they are presented. Despite this, there remains the complaint that many recommendations derived from big data are ignored when it comes to making a decision to change. Apparently, too many companies invest in big data projects but do not show any evidence of changing as a result. If they are not going to change, then their investment in big data is as frivolous as the above blog’s analogy of Sisyphus’s fitness tracker.
The data is not being ignored. It may be true that decision makers are not making changes that big data analytic algorithms recommend. We should expect the decision maker to consider and reject recommendations from algorithms as readily as he would from humans if those recommendations do not come with sufficient support to overcome his doubts.
I have experienced firsthand the rejection of a compelling presentation supported by captivating visualizations of the results of elegant algorithms run on large volumes of good data. I was in the same audience as the decision maker. It was an impressive case in favor of making a certain decision, but in the end the decision maker rejected the recommendation. I was focused entirely on the data and the literal description of the problem to be solved. I also had the bias of being a data scientist: I wanted the data to win.
In contrast, the decision maker attends many other meetings that I do not attend and would not have been interested in attending even if I had the opportunity. These other meetings presented quandaries to the decision maker. He had to come up with a decision that made progress in the least disruptive way. The explanation for the rejection involved one of these external considerations. I thought the data did address the concern, but the case was not a strong one. I can see why this particular aspect of the argument was not sufficient to overcome his doubts.
We have decision makers because we need people to consider their doubts (lack of data) as well as the evidence-based recommendations. For critical decisions, we want decision makers to employ good and highly developed judgement. We require them to evaluate how well the facts satisfy their doubts. Putting someone in the role of a decision maker is to grant him permission to reject a recommendation for change if it does not satisfy all of his doubts.
In an earlier post, I described how the goal of automating the decision-making process necessarily requires eliminating the human decision maker. As long as we hold someone accountable for a decision, that person will demand evidence to overcome his doubts. Those doubts may be as basic as the fact that he doesn’t comprehend the reasoning for the recommendation and sees no trusted adviser who convincingly demonstrates that comprehension.
Big data projects deliberately strive to absorb data whose volume, variety, and velocity are beyond human comprehension. Recently, it has become popular to say that one of the traits of a good data scientist is being a good story teller. Because the recommendation is inherently incomprehensible, we need to sell it with a satisfying story. In this sense, the story telling is just after-the-fact rationalization.
The data scientist is no better off than anyone else in comprehending the root explanation of a recommendation. He can understand the operation of the algorithms and the nature of the data, but the final recommendation is one that he cannot independently verify. To offer a story about the recommendation is to offer a fiction in its place, one whose narrative the author can explain.
Experienced decision makers recognize story-telling rationalizations as distinct from comprehension of the actual recommendations. When they spot a story teller, they will ask pointed questions that distinguish the recommendation from the story. The recommendation again fails to win support because the story teller can’t answer these questions.
As an aside, this story-telling requirement may be part of what makes it difficult to find effective data scientists: their success depends in part on their ability to conceive of stories that will satisfy experienced decision makers. Fundamentally, the volume, variety, and velocity of the data are incomprehensible to humans, so the success of the project hinges on the ability to invent and present a comprehensible and credible story. This is a rare talent, especially among those with technical training.
The main point of this post is about the impression that too many big data projects are not resulting in changes in decision making. I think it is natural for a decision maker to be reluctant to accept a recommendation that he doesn’t understand well enough to defend it himself. The big data analytic algorithm recommendations are fundamentally incomprehensible. The incomprehensibility of the recommendation makes it unlikely to convince a decision maker to adopt the change. This limits the potential benefit of the incomprehensible but evidence-based recommendation.
In order to act on most recommendations based on big data analytic algorithms, we may need to automate the decision maker. In particular, we need to eliminate human accountability for the resulting decision. I don’t think this is possible. Peaceful societies based on consent depend on human accountability for decisions that can adversely impact some subgroup. If we hold a human accountable for a decision, then that human will demand a persuasive argument that someone comprehends the recommendation. He will demand it because he will be held accountable.
Another message in the above-linked article is the obligation to follow recommendations that come from big data. That article starts with this quote:
“Don’t measure anything unless the data helps you make a better decision or change your actions. If you’re not prepared to change your diet or your workouts, don’t get on the scale.”
Adopting a big data solution with analytic algorithms and visualization obliges us to make changes in accordance with the recommendations. At some point, the volume, velocity, and variety of data must be accepted as an article of faith, similar to what religious faiths ask of subjects in theocracies. The reasoning may be incomprehensible, but because it is based on trusted algorithms acting on trusted data, we must be prepared to accept the recommendations on faith in their beneficent nature and their self-evident truth.
Another angle of this obligation concerns what happens when a decision maker decides against a big-data-derived recommendation. He reasonably decides that the evidence does not satisfy his doubts sufficiently to make a change. The possibility of adverse consequences remains. When those consequences do occur, it will be argued that he ignored the evidence that could have avoided them. Even when his doubts concerned a different possible consequence, there will be a very strong case for incompetence or negligence for not following an evidence-based recommendation that would have prevented this consequence. Thinking this through, the decision maker may find himself again obligated to follow the recommendation despite his inability to comprehend it or his concerns about other scenarios that could result in unintended bad consequences.
Adopting big data solutions increasingly obligates the decision maker to follow the recommendations.
This obligation to follow a recommendation obviates the role of the decision maker. It is only fair that we remove the responsibility of accountability from such a role. This leaves us with automated decision making without accountability. We must then believe that this lack of accountability will not disturb peaceful consent after consequences that result from such decisions.
Addendum added on 9/11/2014: This post on the Harvard Business Review blog presents another argument for turning over trust in decision making to algorithms. My subsequent posts here and here further explore the ethical questions of automating decision making.
Addendum 9/22/2014: This post on Wired presents another optimistic recommendation to automate the decision maker.
We’ll get much better results, however, if we automate this feedback loop, thus accelerating our ability to respond to changes in the environment, as well as taking error-prone, lazy humans out of the equation. Now we’ve reached Level 3: feeding back our analysis to improve results automatically over time. In other words, we’re now more responsive to change in the business environment — an essential aspect to business agility.
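For concreteness, the "Level 3" loop the quote describes can be reduced to a toy sketch. Everything here is my own illustration, not from the Wired post (the function name, the proportional-adjustment rule, and the numbers are all hypothetical): measurements are analyzed and the resulting adjustment is applied automatically, with no human review between iterations.

```python
def automated_feedback_loop(initial_params, observe, analyze, steps=3):
    """Minimal sketch of an automated feedback loop: the analysis is
    fed back to adjust the parameters on every iteration, with no
    human decision maker in between."""
    params = initial_params
    for _ in range(steps):
        data = observe(params)        # measure the environment
        adjustment = analyze(data)    # analytic algorithm's recommendation
        params = params + adjustment  # applied automatically, no review
    return params

# Toy environment: the "error" is the gap to a target value,
# and the analysis recommends closing half of that gap each step.
target = 10.0
observe = lambda p: target - p
analyze = lambda err: 0.5 * err
result = automated_feedback_loop(0.0, observe, analyze, steps=5)
# result moves from 0 toward 10 without any human in the loop
```

Note what the sketch makes visible: nowhere in the loop is there a place for doubts to be raised or for an override to be recorded, which is exactly the accountability gap this post is concerned with.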