The problem of automating the Decision Maker

I am observing a common theme among many disruptive or heavily promoted technologies that are transforming businesses. The common theme is the elimination of the buck-stops-here decision maker. People are accepting decisions made by machines. When I started my career, there was much talk about technologies enhancing decision support for decision makers, but now it seems we are solving that problem by eliminating the decision makers. Do we need humans to make decisions?

In the context of big data promotion, fully automated decision making through high-velocity predictive analytics remains mostly a promise of a future reality, although that future is quickly approaching. However, we have already seen increasing elimination of decision making through the rise of social-media-based businesses.

The first example that comes to mind is the rise of the independent journalist: researched articles published through a private blog, or photo-journalism reporting of current events through Twitter feeds. The missing decision maker is the editor or publisher. From the journalist’s perspective, the decision making didn’t disappear but merely transferred to his own hands. From the consumer’s perspective, however, that journalist is but one of a vast number of competing viewpoints, some more diligent and trustworthy than others. How does the consumer decide what content to read and to trust?

Decades ago, the consumer relied on a trusted publisher with a trusted editorial team to present a consistent product. The consumer bought the product and held that editorial team (and the editor in chief in particular) accountable for all of the content. Today, the consumer selects specific individuals to follow and builds his own list of trusted sources. The consumer frequently sets up a list of followed providers that are generally consistent, so he is unlikely to find a reason to suspect the quality of any one of them; when that does happen, the consumer simply stops following that particular source.

The consumer decides to follow a particular blogger or Twitter account based on machine-generated data such as the number of followers of that blogger, the number of likes or links to that blog, or the number of posts. Frequently, the consumer makes the decision based on a machine-generated list of recommendations derived from the machine’s interpretation of the consumer’s interests in terms of his prior choices for liking, linking, or following.
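The kind of machine-generated ranking described above can be sketched in a few lines. This is a minimal hypothetical illustration, not any real platform’s algorithm; the account names, the scoring weights, and the idea of boosting by overlap with prior follows are all my own assumptions for the sake of the example.

```python
# Hypothetical sketch: ranking candidate accounts to recommend, using only
# machine-generated counts (followers, likes) plus overlap with the
# consumer's existing follows. Weights are arbitrary illustrations.

def recommend(accounts, my_follows, top_n=2):
    """Score each candidate by raw popularity plus similarity to
    what the consumer already follows; return the top names."""
    def score(acct):
        popularity = acct["followers"] + acct["likes"]
        # Boost accounts also followed by sources the consumer trusts.
        overlap = len(set(acct["followed_by"]) & set(my_follows))
        return popularity + 1000 * overlap
    ranked = sorted(accounts, key=score, reverse=True)
    return [a["name"] for a in ranked[:top_n] if a["name"] not in my_follows]

accounts = [
    {"name": "blogger_a", "followers": 5000, "likes": 200, "followed_by": ["x"]},
    {"name": "blogger_b", "followers": 300, "likes": 50, "followed_by": ["x", "y"]},
    {"name": "blogger_c", "followers": 9000, "likes": 100, "followed_by": []},
]
print(recommend(accounts, my_follows=["x", "y"]))  # ['blogger_c', 'blogger_a']
```

Note that no one is accountable for the weights: change the `1000` boost and the recommendations change, with no editor answerable for either version.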

The consumer may be making his own decision based on that data, but the data is machine generated and machine presented. While this is a form of decision making, it is not the older form of decision making in which a single person is accountable for decisions affecting an entire population. We’ve eliminated the accountable editor who decided what a published journal’s base of readers would be able to read within the covers of that journal.

In today’s social media, I may choose to follow a machine-generated recommendation. If that turns out to be disagreeable, at least in my personal experience it does not occur to me to write a letter of complaint to the machine intelligence that made the recommendation. It is not even clear that the algorithm still exists, because today’s recommendations never look the same as yesterday’s. Although there is no one to complain to, there is also little incentive to complain because the resolution is simple: stop following the objectionable content producer.

This model of consumers making their independent choices based on immediate query results for numbers, or based on machine-learned recommendations, is rapidly expanding beyond the publication markets. In recent news are the smart-phone-based applications that allow consumers to arrange for rides (bypassing managed taxi services) or accommodations (bypassing managed hospitality services). These services use a similar model: providing the technology that allows the consumer to browse through options based on numeric values (such as ratings) or machine-generated recommendations derived from the consumer’s past and current circumstances. I am not very familiar with these services in practice, but they seem optimal for making very immediate arrangements rather than scheduling distant future reservations. It does appear, though, that these models are putting pressure on established businesses characterized by being led by an accountable manager.

We are seeing increasing examples of successful and enduring models where a market operates without a population-scale decision maker. These markets operate on a model of individuals making their own decisions affecting only themselves. It appears that whenever this no-decision-maker model is attempted, it succeeds more often than not. We are becoming accustomed to the idea that we don’t need population-scale decision makers.

The technology trend is moving so fast that we are encouraged to replace even more decision makers in other markets. The implied argument is that because machines can provide successful recommendations for choices made at the individual level, machines are capable of providing successful recommendations in general. We are beginning to accept more broadly that we may not need an accountable human to make decisions for a population of individuals.

I wonder why we earlier decided we needed such accountable humans as decision makers in the first place. There is not really anything fundamentally new about having access to numbers about volume of content or sales, or anything new about having access to recommendations for what to choose. These concepts exist in barter economies and open-air marketplaces with individual stalls selling at the individual level. We can easily recognize who gets more business, or pick up recommendations by reading the surrounding crowds.

While these open-air marketplaces still have some presence, for the most part we chose to prefer doing business with more corporate entities: entities with identifiable decision makers accountable for the entire corporation.

A recent news story described one particular transaction using Airbnb (a social-media-based tool for arranging accommodations on private properties). This particular customer entered into a short-term lease through Airbnb and then refused to pay or to leave. The lede of the article sums up the problem nicely:

Here’s one big difference between a hotel and Airbnb: If someone rents a hotel room and refuses to leave, the desk calls security and has him thrown out. If someone rents out a place using Airbnb and the “guest” refuses to leave, there’s no desk, no security and sometimes not much recourse.

When there is no accountable decision maker other than the two individuals who agreed to the terms initially, it is left to those individuals to work out any disagreements. This is pretty much how it works in barter economies or open-air marketplaces, where the transaction is a personal contract between the vendor and the customer. In case of disagreement, the only recourse is to work out the differences between the two parties, sometimes escalating within the limits that the community tolerates (which could be quite violent).

In this Airbnb example, there is still a level of recourse to resolve the conflict through the local government’s laws, which happen to favor the tenant. The laws did not anticipate the possibility of such casual rental arrangements; they assume landlords are professional landlords. In addition, the new services often invest in lobbying to prevent new regulations that could impact their business. The general population accepts the notion that these businesses do not need specific regulation. The concept is one of enabling nice people to make a friendly transaction.

In an earlier post, I mentioned that my optimism about human capacity is tempered by my pessimism that human nature produces a sizable population of people who don’t play nice. A lot of the modern hype and enthusiasm about the disruptive possibilities of social media and big data rests on a core belief that humans are generally very nice people and that the not-nice group is inconsequential enough to be ignored.

The trouble is that a single malefactor can do a lot of harm. People who are capable of taking an unfair personal advantage of a single transaction are capable of taking similarly unfair advantage over a large group of transactions. A single well-publicized malefactor can set an example for others to attempt to copy, with some improvements.

I think we adopted a preference for a corporate model in order to have someone accountable for a population of customers or clients. We adopted this model so we would have some recourse other than violence to resolve a difference between individuals in a transaction. Without corporate accountability, we may have to return to thuggish behaviors or community lynchings to expel someone who is not playing nice in our social-media or big-data playground.

Nevertheless, these stories of bad outcomes of individual transactions appear to be easily ignored. So far the cases appear to affect just one individual at a time, and the specific details seem so unlike what anyone else would do (an observation that would probably apply to any such transaction).

Meanwhile, we continue to receive enthusiastic advocacy of the new age of commerce without a corporate intermediary. There are a large number of advocates, each with a large base of followers and content. Each of these advocates is very enthusiastic about the promise of the technologies to usher in a new world of commerce. Each presents a happy disposition and appears to be a genuinely nice person. There is very little evidence of any group of people who don’t play nice, or if they exist they do not appear numerous enough to be of concern.

We can trust the social media and big data technologies to make recommendations for our own private decisions. If something goes wrong, as in the above example, we reason that we have only ourselves to blame for missing some clue. Certainly by that time the machine will no longer be making that particular recommendation. We wouldn’t think of holding the machine accountable for making a bad recommendation. Even if we were inclined to complain, we wouldn’t get very far in convincing others to join a collective demand for accountability of the machine’s decision.

This thinking about eliminating the need for the decision maker is encroaching on other aspects of doing business. The successful experience with these lower-level markets is influencing our attitudes about how the technologies can similarly replace decision makers at higher levels of responsibility. This comes through in the advocacy of big data technologies for predictive analytics and compelling data visualizations. This advocacy promises a near-term reality of machine-generated recommendations so convincing that the need for a decision maker is superfluous. If there is only one optimal option, then there is no decision to be made. Also, if a decision maker chooses to endorse the one optimal option, he personally cannot be faulted if that option does not turn out well. He had no choice to make.

The enthusiasm for machine-generated decisions (or incontestable recommendations) disregards actual decision makers. The only way we will automate the jobs of accountable decision makers is to fire them entirely (or to omit accountable decision makers from new business plans, as in the case of the likes of Airbnb).

As long as a human sees himself as personally accountable for a decision affecting a business involving a large population, that one individual will take that accountability seriously if for no other reason than fear of being held accountable.   The only way to eliminate the need for decision making is to eliminate the role of the decision maker.

We select decision makers, and especially executive-level decision makers, for their strength in standing alone on some issue, because in fact the decision maker is alone in accepting accountability. Such a decision maker may demand to be personally convinced of the merits of a proposal despite the unanimous opinion of the leaders of the different parts of the organization. When presenting a case to such a decision maker, there is no safety in numbers. The decision maker may challenge each presenter to defend his case on its own merits. Often that presenter may need to go back and strengthen the case. Despite being so outnumbered, the decision maker stands apart from the rest because he alone has the accountability for the decision.

My observation of decision makers is that they have very little patience for machine-generated anything. Even today, a common depiction of an executive office is of nice furnishings for guests, where the only technology is a phone on the desk. Certainly an executive will attend meetings in well-equipped conference rooms with multiple screens for teleconferences or rich-media presentations, but he invites only people to his office.

A person with the burden of accountability wants confidence in making a decision.   Human nature is to want to get that confidence from other human beings.

I once heard an excellent description of this process when I wondered why vendors have so much trouble selling their solutions directly to decision makers. The answer was that the decision maker wants to hear a convincing case from his own staff, whom he can trust. At the time this argument annoyed me, because clearly a lot of the promoted technologies present the notion of putting tools directly in the decision maker’s hands so he doesn’t have to rely on staff.

In some cases, the technologies promise that the decision maker doesn’t need staff to answer certain kinds of questions. For example, that’s the idea behind executive dashboards showing key performance indicators (KPIs). These tools are designed to be fully automated and readily comprehended by the executive without anyone around to explain them to him. The tools are marketed to the executive as if the executive were the equivalent of a consumer subscriber to a social-media service like Airbnb.

Executives are not consumers.   They are accountable for their decisions that affect large populations.

Decision makers need to hear the case presented by humans. A typical question from the decision maker to the person presenting some case is a simple one: “do you understand what this is saying?” The decision maker needs to know that his trusted staff fully understands what is being presented.

The problem is that, increasingly, the presenter will not be able to answer in the affirmative. Even in the case of KPI dashboards, the KPIs are an extreme condensation of a very broad scope of the business. The algorithms produce a result, such as a green up-arrow that says today looks to be an improvement over yesterday. Such results involve tremendous amounts of data and complex algorithms. It is doubtful anyone can defend the conclusion of the KPI. One of the reasons the KPIs are automated is that the calculation is beyond the ability of any individual to produce manually. A decision maker may challenge the presenter to give some examples of why today looks better than yesterday, and the technology will obligingly give convincing examples. However, the decision maker will be disappointed if he expects the presenter to understand why the totality of that slice of the business is better than it was yesterday.
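The condensation problem can be made concrete with a toy sketch. This is a hypothetical illustration of my own, not any vendor’s KPI method; the averaging, the threshold, and the data are all assumptions chosen to show how much information the single arrow discards.

```python
# Hypothetical sketch of a KPI condensing many raw measurements into one
# green/red arrow. The aggregation (a simple mean) and the 1% threshold
# are illustrative assumptions only.

def kpi_arrow(today, yesterday, threshold=0.01):
    """Reduce two large sets of measurements to a single verdict:
    'up' if today's aggregate beats yesterday's by more than the
    threshold, 'down' otherwise."""
    t = sum(today) / len(today)
    y = sum(yesterday) / len(yesterday)
    change = (t - y) / y
    return "up" if change > threshold else "down"

# A thousand underlying numbers collapse to one symbol. Most measurements
# improved slightly, a few got much worse -- the arrow shows only "up",
# and no presenter can recover the reasons from the arrow alone.
yesterday = [100.0] * 1000
today = [103.0] * 990 + [40.0] * 10
print(kpi_arrow(today, yesterday))  # prints "up"
```

The presenter who is asked “do you understand what this is saying?” can explain the formula, but not why each of the thousand inputs moved the way it did.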

I recall the enthusiasm for decision support systems from the antique days of 1980s computing. During my undergraduate years, I encountered a lot of enthusiasm for artificial intelligence and a sub-specialty known as expert systems. We believed in the inherent virtue of having a machine instead of a human provide expert advice, based on knowledge capture from an entire community of experts. That didn’t happen (at least not to the extent envisioned at the time). Today, in place of expert systems, we have the big data technologies with predictive analytics and data visualizations. The promise is the same: the executive can consult a machine instead of a staff of humans.

My impression is that the executive who is accountable for his decisions will remain in an office furnished with comfortable chairs and a telephone.    He needs his trusted staff to inform him of the wisdom that supports a particular decision.   He needs to know that other humans fully understand the facts behind the decision.

Turning decisions over to machines will necessarily require us to eliminate the human accountable for the decision. As long as we hold a human accountable for a business decision, that human will demand that other humans explain the data to him. He is not going to stake his accountability on a machine result that no trusted human can explain to him.

Perhaps the time of accountable decision making is past. The business models exemplified by social media and big-data machine-learning recommendations will take over from the older models of human-accountable stewardship of businesses. There will be unfortunate occasions, such as the Airbnb example above, but these are issues for the individuals to work out between themselves. It is probably best if we don’t pay too much attention to how that plays out.


14 thoughts on “The problem of automating the Decision Maker”

  1. Pingback: Challenging the supremacy of evidence in driving decision making | kenneumeister

  2. Pingback: Obligation to act on big data analytics | kenneumeister

  3. Pingback: Media’s Ferguson Fable, a morality tale of dark data | kenneumeister

  4. Pingback: Useful Activity Tracking of government work requires flexible coding | kenneumeister

  5. Pingback: Fake but accurate: what we once called fiction has now become non-fiction | kenneumeister

  6. Pingback: Orphanocracy: a government by decisions accountable by noone | kenneumeister

  7. Pingback: Databases motivates philosophy with multi-valued logic anticipated by Buddhist thinkers | kenneumeister

  8. Pingback: Dedomenocratic party: restoring trust among people through data driven decision making | kenneumeister

  9. Pingback: Weather forecasters fail again in their mission to aid city planners with accurate warnings | kenneumeister

  10. Pingback: Spark data: distracting data deliberately introduced to influence analysis | kenneumeister

  11. Pingback: Dedomenocracy in action: forecast and response to DC snow event of 2/17 | kenneumeister

  12. This recent article provides another point of view on this same topic.

    There is a recognition of a problem of accepting automated decisions:

    “A decision is made about you, and you have no idea why it was done,” said Rajeev Date, an investor in data-science lenders and a former deputy director of Consumer Financial Protection Bureau. “That is disquieting.”

    So far it seems that only the technologists (not political scientists or sociologists) are addressing the problem.

    This means that the proposed solutions are technological instead of political or sociological:

    One solution, according to Gary King, director of Harvard’s Institute for Quantitative Social Science, may be for the human creators of the scoring algorithms to tweak them not so much for maximum efficiency or profit but to give somewhat greater weight to the individual, reducing the risk of getting it wrong.

    They recognize that there is a problem at the individual level of statistical decisions based on big data. A decision that works well for the aggregate over the population over time will inevitably produce adverse results for particular minority groups at particular times. These people are not going to be happy with these consequences. They will demand accountability from a human who can defend the decision in light of their specific circumstances or to make amends with better results in the future. They are going to be upset when they find no such human exists and that they have no choice but to accept their injuries.

    I don’t think this will remain peaceful. Minorities will revolt in protests, as we’ve seen in other parts of the world. If protests are sufficient in size and there is insufficient rallying around the automated-decision cause, the protests will dismantle the system as surely as we’ve seen collapses in foreign countries in the past several years.

    Solutions to this problem need to involve the social and political sciences. I tend to agree that big data and analytics can come up with generally better decisions than humans can make. But there will be inevitable injuries that will need a satisfying path to redress grievances. I think that involves preparing the population to become participants in the data-science project: to be able to scrutinize both the data and the algorithms themselves to convince themselves of the soundness of the decisions, or even to conduct their lives in ways such that future decisions will benefit them.

    Without popular participation in the data science of this project, we end up with an autocracy similar to a theocracy, where data replaces the religion. The autocracy’s only option to quiet the dissenters is coercive force.

  13. Pingback: Big data can re-identify de-identified data | kenneumeister

  14. Pingback: Do we need narratives | kenneumeister
