Data-driven technologies involving social media lead to new businesses that disrupt long-standing business models. The word disruptive has taken on a broad meaning to encompass a wide range of innovations that challenge old ways of doing business and running our lives.
At the core of these innovations is a wide-scale exploitation of large quantities of data collected through a huge number of sensors. Sensor technologies have become affordable, and the associated networking technologies have become robust enough to encourage collecting data from wide-scale deployments of sensors. Centralized data collection has been further encouraged by server-farm-hosted (or cloud-based) applications that offer something of value for free in exchange for access to data.
Another commonly discussed impact of the disruptive technology is the emergence of a sharing economy. Sharing may be an explicit economic activity such as the services exemplified by Uber (ride sharing) or Airbnb (accommodation sharing). However, most of the sharing takes the form of an implicit agreement to share data, allowing its aggregation across the population to find opportunities that compete very effectively against older business models.
We are turning over larger portions of governing our lives to the results of data analysis. To reach peak competitiveness, data scientists strive to come up with better algorithms that can quickly handle the three V-words of big data: volume, velocity, and variety. The competitive advantage lies in exploiting larger quantities of data across a larger number of varieties (or columns) with the goal of making decisions faster than competitors. In order to enjoy the competitive benefits of these analytically generated recommendations, the recommendations need immediate implementation.
In earlier posts, I described the problem of big data analytics producing recommendations with rationales that are beyond human comprehension. Even granting that we may understand the concept of the algorithm and trust its implementation, and that we may trust the quality of the data, the actual recommendation incorporates evidence (data) of far too large a volume and variety for humans to comprehend in the time required to make a decision. Ultimately, we must trust the consequences of applying trusted algorithms to trusted data even though we can not independently verify the results.
The consequence of demanding decisions too quickly for humans to comprehend the evidence is the elimination of the human decision maker role. We have to automate the decision making. We may retain the position of the decision maker for practical purposes (for example, he is a leader we can not fire), but that person has lost all accountability for the decision. He must quickly accept the recommendations. When he is later challenged over the consequences of the decision, his only response is that he had no choice but to follow the algorithm’s recommendations. He may request an investigation to check that the data was good and the algorithms were implemented correctly. But he can not defend the rationale of the decision because his personal judgment made no contribution to the decision to act on the recommendation. The decision is automatic.
I raised a concern that this lack of clear accountability in light of injurious consequences can lead to social instability. Our long history of a largely cohesive society relies in large part on the opportunity for injured parties to present their grievances to the decision maker. The social cohesion came from the successful defense of the judgement involved, either to satisfy the injured parties or to solidify super-majority support for the quality of the judgement. Social cohesion may also come from the replacement of a decision maker who was not able to defend his judgement. Automating the decision making process eliminates both options for addressing specific grievances.
Later, I realized that this dependence of social cohesion on human-accountable decision making assumes a freedom of society to demand accountability. Just as the big-data-driven automation of decision making requires the elimination of the human-accountable decision maker, it must also require the population to accept the decisions without complaint.
A recent post described the eventual necessity of participation in a decision even when a person recognizes it is not in his personal interest. In that post, I gave the example of the current option for patients to decline recommended treatment in order to enjoy a better quality of life for the remainder of a life shortened by a terminal illness. The example described diagnoses of likely fatal diseases that have brutal treatments with low assurance of meaningfully lengthened life. Even when life is extended, the quality of life after treatment may not be as good as the quality available for the remaining days if the patient declines treatment.
I argued that in order to collect good data about procedures, there will be a need to eliminate the selection bias introduced by patients declining the recommendation. The highest-quality data-driven recommendations for procedures need new data on the efficacy of those recommendations. To obtain this new data, the patient will have no choice but to accept the recommendations that apply to his own health care. If the recommendation is to proceed with a very uncomfortable treatment with considerable risk of not helping, then the patient must undergo this treatment in order to collect the data to refine the algorithms. This requirement is partially a consequence of our demand for other patients to accept without complaint the denial of treatment by that same algorithm.
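To make the selection-bias concern concrete, here is a minimal, purely illustrative simulation (every number in it is hypothetical and chosen only to demonstrate the effect, not drawn from any study). When frailer patients are both more likely to decline treatment and more likely to die sooner regardless of treatment, a naive comparison of those who accepted against those who declined overstates the treatment’s benefit:

```python
import random

random.seed(0)

# Hypothetical model: each patient has a frailty score in [0, 1].
# Frailer patients survive fewer months without treatment, and are
# also more likely to decline the recommended treatment.
N = 100_000
TRUE_BENEFIT = 6.0   # months added by treatment, identical for every patient

accepted, declined = [], []
for _ in range(N):
    frailty = random.random()                     # 0 = robust, 1 = frail
    untreated_survival = 24.0 * (1.0 - frailty)   # baseline months of survival
    if random.random() > frailty:                 # frailer patients decline more often
        accepted.append(untreated_survival + TRUE_BENEFIT)
    else:
        declined.append(untreated_survival)

# Naive efficacy estimate: compare survival of acceptors against decliners.
naive_benefit = sum(accepted) / len(accepted) - sum(declined) / len(declined)

print(f"true benefit of treatment:  {TRUE_BENEFIT:.1f} months")
print(f"naively measured benefit:   {naive_benefit:.1f} months")  # roughly double the truth
```

The naive comparison attributes the acceptors’ healthier baseline to the treatment itself, which is exactly the distortion that an obligation to follow the recommendation would remove.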
I think this recent news story illustrates the latter requirement, in which a patient was initially charged with a crime for attempting to seek a treatment that had been denied within his own country’s managed health care system. In this case, the demand for the patient’s cooperation in following the evidence-based managed health care recommendations escalated to a state-sponsored coercive remedy.
In either of the two cases of recommending or denying a procedure, we need the data to improve the algorithm for making better decisions in the future. That algorithm needs the negative results of the recommended procedure failing to extend life or restore a reasonable quality of life, and it needs the negative results of the non-recommended procedure failing to save someone who might otherwise have benefited.
The promise of data-driven (evidence-based) decision making requires eliminating human judgement from the process. The evidence itself is too overwhelming for any human to comprehend in the time allowed to make a decision. This is true for either the governing leader making decisions for a population or the individual making decisions concerning his own life. Data-driven governance demands universal obedience to the recommendations from data.
In the earlier posts, I described this loss of autonomy in terms of leaders losing their ability to be accountable for decisions or of individuals losing their ability to address grievances about the way the decisions impact their lives. In both cases, we are taking human judgement out of the equation.
The concept of strict evidence-based decision making is equivalent to making human judgment illegitimate. Human judgment is how we exercise ethics. The automation of decision making by data-driven algorithms requires us to claim that we have conquered ethics. The volume and variety of high-quality data and the availability of elegant statistical algorithms somehow assure us that the recommendations are ethical, or that ethical considerations are no longer relevant.
In other words, data-driven obligated decision making introduces a disruptive answer to the legacy problem of ethics. Classical concepts of ethics based on human judgment can no longer compete against their disruptive counterpart in data-driven, evidence-based decision making. Ethics is no longer a necessary consideration for an individual decision when that decision bypasses human judgment. There may still be some role for ethics in the choice and implementation of algorithms or the governance of data, but once we accept the general applicability of the algorithm and the data, each evidence-based decision becomes immune to ethical challenge.
I remain concerned. The recent promotion of data-driven and disruptive technologies scares me because of what seems to be universal enthusiasm for their inevitable ethical benevolence. I am further concerned by the lack of any discussion of the consequence of eliminating the human accountability and choice that come from human judgment. The rapid adoption of disruptive solutions that push us toward a sharing economy, where data-driven decisions must be obeyed, eliminates the concept of ethics from individual decisions.
I do not think the free society that provided us so many benefits over the past couple of centuries can exist without human-accountable decision making and the freedom to make individual choices. Our largely peaceful, non-coercive society was the result of society’s managing of ethical human judgment.
A news story that exemplifies this managing of ethical decision making is the recent verdict finding a former governor of the state of Virginia guilty on multiple counts of corruption. Although I personally do not agree with a result that seems to criminalize common political behaviors, for this post I assume that justice was served properly and the jury came to the right conclusion. We can hold the former governor accountable for poor judgment in making a series of personal decisions that suggested a form of corruption. The conclusion of the accountability process is that sending this man to prison for the rest of his life will preserve society’s trust in government.
I want to examine this process from a different angle. Once the charges were introduced into the judicial system, the process of justice is very methodical and evidence-based. Although the judicial system still involves human actors, for the most part the rigorous procedures minimize the opportunity for human discretion. The task of the justice system is to reach a recommendation based only on the accepted best evidence. When this process concludes with its recommendation in the form of a decision, or a verdict, we require everyone to accept the results. In the judicial system, the jury’s verdict must be accepted. The appeal process may address possible faults in the conduct of the judicial process, but it will not challenge the jury’s judgement directly.
The justice system is at least analogous to the big data analytic projects, except that the big data projects are even better at eliminating the need for human participation in the process. In my discussion above, I described the universal obligation of following individual evidence-based decisions where we may challenge the data or the algorithm, but we can not directly challenge the judgment of the individual decision itself. Analogously, a judicial verdict may be appealed on grounds of errors in evidence or in procedures, but not on grounds of challenging the judgement of the jury.
In this particular example, we have to accept the consequences of sending two people to prison for perhaps the remainder of their lives for their actions.
As I look at the case, I am uncomfortable with how disproportionate the penalty is to the crime committed. Although the transactions met the definition of corruption, the gift values involved did not seem extraordinarily exorbitant, nor did the exchanged favors seem extraordinary in value. The transactions involved exchanges between people who enjoyed some level of friendship and would not have been illegal if the parties were private individuals. The only reason this was a crime is that it involved privileges available to a public office.
Again, I assume the verdict was justly arrived at. The verdict is that there were actions that displayed unacceptably poor judgement for a person in public office. It just seems to me that an appropriate penalty would be to make it impossible for these individuals to ever again hold public office, and perhaps to pay restitution and a penalty for the value of the gifts. Otherwise, they should be allowed to live their private lives freely, where it is impossible for them to commit the same kind of crime. Instead, the required penalty will involve lengthy incarcerations comparable to penalties for far more damaging premeditated crimes.
At the beginning of this example, I described this case as an example of holding a decision maker accountable for his decisions. I then described an unfortunate consequence of a penalty that seems disproportionate to the actual crime. We are obligated to accept this consequence as the legitimate result of a judicial process acting on a legitimate law. Yet, at least personally, I feel uneasy about the ethics of this punishment.
The prosecutor is the human who is accountable for this particular consequence. He had the discretion of what charges to prosecute. He exhibited this discretion by offering a plea deal of a single charge of a lesser crime. When the governor exercised his right to a trial by peers, the prosecutor made the decision to add multiple counts of more serious crimes. The prosecutor made this judgement with the knowledge of the relatively petty nature of the crime and the relative severity of the penalties if the defendants were found guilty on all charges.
The prosecutor had the opportunity to select charges with a worst-case penalty that was still appropriate for the scale of corruption involved. In his judgement, the scale of corruption merited the possibility of incarceration for the rest of the defendants’ lives. The prosecutor’s position gave him the full right to make this judgement of what charges to bring to the court of justice.
Within our current system of government, the prosecutor remains accountable to defend his ethics in making this judgment to pursue high penalties for this particular case. Given the possibility that the jury could decide guilt on all charges, the prosecutor made the decision that the resulting worst case penalty would fit the crime.
If the public disagrees with this judgement call, the public can demand accountability in the form of an ethical justification for pursuing this particular possible penalty.
For this post, I am looking ahead to a future where we enter a data-driven process that obligates the decision. In this future, we will no longer have human accountability for prosecution. As the jury found, the evidence existed to prove multiple acts of corruption for which the law demands long prison terms as punishment. In the context of a government obligated to follow recommendations by data, the evidence was good evidence and the algorithm associating the evidence with the criminal statute was valid. The data-driven recommendation obligates the judicial system to prosecute all of these charges without regard to the accumulation of the statute-specified penalties. Eliminating the prosecutor’s discretion about which charges to bring to court, and eliminating his accountability for that decision, eliminates the opportunity to apply an ethical consideration about whether the penalties fit the crime in this particular case.
The future society must accept the inherent ethics of any decision made by a data-driven government. I doubt such a society can be a free one.