In this blog, I have written about an imaginary form of government I called dedomenocracy. This is a form of government that uses algorithms instead of humans to make enforceable policies, and where humans do not even have veto power over those policies. My specific concept requires democratic participation in every aspect of the process except for the final decision making of what policies to select and how those policies will be enforced. The democratic aspects include:
- Selection of the algorithm, and in particular, the identification of long-term objectives and how they relate to each other in terms of priority,
- Selection of the data permitted to the algorithm,
- Determination of the time when a policy is needed.
This concept of dedomenocracy presupposes a nearly exhaustive data collection covering every aspect of the world and of people’s lives. The concept places the highest priority on collecting reliable data about the natural and human worlds with as little bias from policies as possible. Ideally, there would be no enforceable policies at all, so as to collect the best data about what is really going on in the world and in the human condition. To accomplish this, the concept implements only temporary policies. The policies may be arbitrarily authoritarian, but they automatically expire within a relatively short time frame. After expiration, the public would need to demand a new policy before one is introduced.
I described the process of demanding a new policy as urgency. The public expresses urgency when a super-majority agrees that the situation needs policy intervention. Once triggered, however, the algorithm makes decisions based on its own assessment of all available data and on the prioritized objectives it encodes. The chosen policy may have nothing to do with the matter of urgency, and it is independent of any prior policy. This form of government is permitted to contradict itself even in immediately successive policies.
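To make this lifecycle concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption on my part: nothing above fixes a numeric threshold for the super-majority, a policy lifetime, or how the algorithm chooses its directives, so the names (`Policy`, `urgency_triggered`, `select_policy`) and the numbers are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

URGENCY_THRESHOLD = 2 / 3  # assumed super-majority; no number is fixed above

@dataclass
class Policy:
    directives: list        # opaque machine decisions; no human veto
    enacted: datetime
    lifetime: timedelta     # unalterable expiration, set at enactment

    def expired(self, now: datetime) -> bool:
        return now >= self.enacted + self.lifetime

def urgency_triggered(votes_for: int, voters: int) -> bool:
    """A new policy is considered only when a super-majority demands one."""
    return voters > 0 and votes_for / voters >= URGENCY_THRESHOLD

def select_policy(objectives: list, data: list) -> Policy:
    """Stand-in for the governing algorithm: it consults only the
    pre-agreed, prioritized objectives and the data available now,
    not the matter that triggered the urgency and not prior policies."""
    directives = ["<machine-chosen directive>"]  # opaque to humans
    return Policy(directives, datetime.utcnow(), timedelta(days=90))
```

Note that `expired` depends only on the clock: once enacted, neither the public nor the machine can extend or revoke a policy.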
Machine learning, such as neural networks, inspired this idea. In recent years, we have become very accepting of delegating previously human decision making to the products of machine learning. For example, we allow machines to drive our cars, a process that can put lives in danger: the lives of those within the car, those in other cars, and those outside of cars entirely. There is precedent for relinquishing our decision making to algorithms.
In addition, the inspiration came when I tried to understand how machine learning comes up with its decisions. It occurred to me that the machine was building a world view within its system. Through testing and demonstration, it proves to us that its world view is at least practical. Often the machine-learning world view rivals human world views in terms of practicality.
There is a fundamental divide between world views of humans and of machines. Human world views are challenged through human rhetoric. Machine world views are hidden and private to each instance of machine learning. There is no rhetoric within machines, and certainly not between machine and humans. In order to take advantage of machine learning, we have to accept that we cannot challenge the machine’s conception of reality. In practice, we just assume that the machine has no world view at all, that it is just following what it was trained to do. But, to take full advantage of the technology, we have to let it act with little or no intervention by us.
Dedomenocracy is the ultimate end point where we trust machines to govern us. As with any other form of government, some people may like the policies, and others will hate them, but overall I presuppose a super-majority consent to trust this form of government. And, in analogy to the machine learning examples, we forfeit our veto power over the machine’s decisions. This forfeiture may be limited at first, but I’m talking about a final state where humans are totally excluded from the policy making itself. We have to submit to whatever the machine decides is the appropriate policy at the moment.
The safeguard is that the policies come with automatic, unalterable expiration dates. Once the policies expire, we are free until we collectively demand a new policy. That new policy would consider the data available at the time of the request, and it is not constrained to be consistent either with the immediate matter of urgency or with prior policies.
Additional safeguards are public participation in data collection and in the selection and prioritization of the goals the algorithm is to achieve, long before the actual urgency arises for the algorithm to act upon. I imagine the algorithm and its prioritized goals would remain relatively unchanged over long periods of time, similar to how we currently treat a constitution. As a result, most human participation will be in the collection, cleaning, and selection of the data supplied to the algorithm.
I presuppose a very extensive and intrusive data collection from a multitude of types and instances of sensors. The sensors have different limitations as to what they can record, and they also have varying degrees of reliability or validity. Humans would be involved in scoring the data sources so that the data fed to the algorithm is acceptable. In analogy to the machine-learning discussion, humans have no rhetorical access to the world-view within the governing algorithm. However, humans can argue and defend the data supplied to that algorithm.
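What might that human scoring look like? A minimal sketch, assuming a publicly debated score in [0, 1] for each source; the scale, the `SensorSource` fields, and the `admissible` rule are my illustrative inventions:

```python
from dataclasses import dataclass

@dataclass
class SensorSource:
    name: str
    reliability: float  # how consistently the sensor reports; human-assigned
    validity: float     # how well it measures what it claims to measure

def admissible(source: SensorSource, threshold: float = 0.5) -> bool:
    """Humans have no rhetorical access to the algorithm's world view,
    but they can argue over which sources are fit to feed it."""
    return min(source.reliability, source.validity) >= threshold
```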
I imagine some future date (probably not that far into the future) where people’s trust in their government derives from their trust in the data available to the government, instead of the world-view of the governing body.
There is an argument for accepting an algorithm instead of a human for making decisions. The algorithm is fixed ahead of time, typically long before a particular crisis emerges that demands a response. Encoded within the algorithm is the public’s consensus on its long-term goals and priorities. As a result, the algorithm will decide what is best to do at the moment of the crisis, where best means how best to achieve those long-term goals and priorities. In contrast, a human governor typically responds to the specific crisis, often emotionally, without being consistent with previous promises or a previously advocated philosophy.
The above argument supposes that the technology has matured to the point where we can trust it.
In such a government, the primary role for the population is in the collection and grading of data. I discussed earlier a particular grading system using analogies of light. Data can be bright, dim, dark, unlit, sparky, colored (infrared to ultraviolet), or decorative. In past writings, I came up with examples for each of these categories, but I particularly focused on the difference between bright and dark data.
Bright data consists of direct observations from trusted sensors that are accurate, precise, and replicable. Bright data is very specific, such as my example of body-mass index (BMI): the ratio of weight to the square of height, which by itself has no meaning.
At the other end is dark data, named by analogy to dark energy and dark matter in cosmology: data that comes from calculations based on scientific theories rather than from observation. I also chose the term for its analogy to the allegoric darkness associated with evil. There is an inherent conflict between observations and scientific theories; science is the practice of reconciling the two. As a result, intermingling computations from theories with observations reduces the efficacy of the observations to challenge the theories.
In earlier posts, I gave the example of removing outlier observations when they do not agree with some scientific theory. Optimal decision-making needs to at least be aware of the outliers. Outliers can be hints of new opportunities, or of new hazards.
More broadly, the introduction of simulated data could negate observations that might inform us of something new and very important. Simulated data is dark in the sense that it robs observations of their brightness.
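One way to honor this point is to flag disagreeing observations rather than remove them. A minimal sketch, assuming a simple z-score rule that I am supplying only for illustration (nothing above prescribes a method):

```python
import statistics

def flag_outliers(values: list, z: float = 3.0) -> list:
    """Mark observations that disagree with expectation instead of
    deleting them: an outlier may hint at a new opportunity or hazard."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v, sd > 0 and abs(v - mean) / sd > z) for v in values]
```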
The idea of the different light-metaphors for categories of data is to segregate or mark the data for the algorithm to consider. The algorithm does not exclude any category of data, but it will consider the marking of the data. Bright data can suggest a new discovery that we do not want to miss. Dark data represents past-earned wisdom that we do not want to ignore.
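A minimal sketch of that marking, with the category names taken from the list above; the `DataLight` enum, the `mark` helper, and the dict-based record are assumptions for illustration:

```python
from enum import Enum, auto

class DataLight(Enum):
    BRIGHT = auto()       # direct, trusted observation
    DIM = auto()
    DARK = auto()         # computed from theory rather than observed
    UNLIT = auto()
    SPARKY = auto()
    COLORED = auto()      # the infrared-to-ultraviolet range
    DECORATIVE = auto()

def mark(record: dict, light: DataLight) -> dict:
    """Segregate by marking, never by exclusion: every category stays
    available to the algorithm, which weighs the marking as it sees fit."""
    return {**record, "light": light}
```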
In modern discussions, it is very popular to express a trust in science. Science represents truth, and so we should always act on truth. Truth is presumed to be eternal. As a result, we expect regulations based on science to be eternal. This summarizes how human governments operate: we have laws that remain in effect despite being enacted by generations no longer living.
Dedomenocracy dispenses with this concept of eternal truths, and consequently with eternal laws. Dedomenocracy manages the transients. More fundamentally, dedomenocracy recognizes that machines may come up with truths we may never be able to comprehend, in part because the machines have no form of rhetoric to communicate their discoveries to us. All they can do is act upon their discoveries. We may object that a decision is not consistent with our science, but those objections would be countered by the fact that the machine’s decisions work better than we expected. The decisions are based on extensive observations of great variety. The machine’s decisions can discover great opportunities or great hazards. Our science may never discover the opportunities, but it will explain the hazards after the catastrophe occurs.
Science-derived data darkens otherwise bright observations, but science does capture past wisdom we do not want to ignore.
Within data, there are observations that humans are reluctant to consider. In an earlier post, I described population statistics of IQ (intelligence quotient), or general intelligence, as ultraviolet data. As with ultraviolet light, ultraviolet data can cause harm if mishandled. Our preference would be to ignore it entirely in our policy making, and we often do that by disqualifying the observation as non-scientific. This rhetorical trick keeps the data out of consideration in human policy making. However, a machine algorithm working with observations has no rhetorical impediment to exploiting a statistical relationship it discovers but has no way to explain to us. When hidden in an abundance of data from a multitude of sources, we may never even suspect that a decision used ultraviolet information.
There is a similar problem within science. Science may come up with truths that are very unhelpful. An example of unhelpful science is evolutionary psychology. This science says that human behaviors are influenced by the living conditions of our prehistoric ancestors, or even of the non-human species we evolved from. It is safe to say that I personally dislike this theory even as I follow its logic. I can raise the meager objection that we do not really know the conditions of our ancestors during the periods the theory presumes to be most critical.
One particular area of evolutionary psychology is the study of psychological differences between men and women. We often hear of the evolutionary influences on mating behaviors. Women seek the best genes and the best provider (not necessarily from the same man). Men seek the most opportunities to pass along their genes. Evolutionarily, this makes sense: survival requires descendants.
My concept of a dedomenocracy prioritizes actual observations over conjectures about the past. Evolutionary psychology is not helpful for predicting what to expect from individuals in a current situation when we have access to directly observed data about those individuals. Evolutionary psychology makes us expect every man to be aggressive and motivated by sexual rewards, and it makes us expect every woman to be motivated by access to resources or by obtaining good genes for her children. It also tells us that any man or woman not conforming to these imperatives must be suppressing their frustrations about that fact.
In a survival scenario, the evolutionary explanation is not needed for us to recognize that men need to impregnate fertile women and that women need men to compete for the strongest stock. The theory is not helpful because the situation is obvious.
In modern times, overall human survival is not at stake (although the survival of some subpopulations might be). Observations give evidence that there are men and women who are much more motivated to pursue goals other than procreation, or even than procreative activity itself. In the modern world, the concept of evolutionary psychology is not helpful, at least in terms of government. Even within the area of sexual partnerships, the theory has limited relevance for decision making. Its primary benefit is as a possible psychological remedy for the sexually frustrated.
Evolutionary psychology is an example of an area of science that is unhelpful.
To better define unhelpfulness in science, we can consider the question of how we would live differently if a given scientific theory did not exist. The alternative to evolutionary psychology is the myth of the creation of man in the image of his creator. A good part of human history operated under that model, and obviously those populations produced descendants while also enduring their share of the frustrated. The creationist model gave more latitude as to what to expect from men and women because they were the product of a superior creator. Instead of being driven by genetic coding that recalls the hardships of primitive life, both men and women are driven by the design of the creator.
In terms of dedomenocracy, the creation explanation is just as unhelpful as evolutionary psychology. Neither can provide actual observations of the formative periods. Meanwhile, we have abundant observations of actual, named people who demonstrate who they are by how they live their own lives. This data gives us more clues about what to expect from each individual than either theory would.
The example can be extended to all of science. When considering whether to admit dark data to the data stores available to our algorithm, we can ask what would be different if we did not know this information, even if there is good reason to believe it to be true.
The current example is the pandemic caused by a virus. At the core of this pandemic is the scientific knowledge of the existence and behavior of viruses. Some have challenged this science by saying that the virus has never been isolated, or never proven to cause disease after being injected in its purified form. In any case, we can ask how we would act differently if we were entirely ignorant of the existence of viruses. As with the previous example, history provides examples.
Across history there were disease outbreaks that we may now presume to have been caused by viruses. The populations at the time did not know a virus was responsible, but they recognized the disease was infectious, and they developed methods of treatment and quarantine. History has examples of horrific outcomes from pandemics that the modern era has so far thankfully avoided. It is not entirely clear whether the present era’s good fortune is a matter of good luck, good medicine, or generally better health at both ends of the age scale. Historic epidemics might not have been so bad if people had enjoyed better nutrition and more affluent lifestyles.
There are a lot of details of the current pandemic that are inferred from science rather than from observations. To combat the pandemic, we enacted public policies of mask-wearing, social distancing, closing of non-essential businesses, occupancy limitations on essential businesses, and so on. These are based on the scientific assertion that the disease can be spread by asymptomatic (or pre-symptomatic) carriers.
The question is whether this science is helpful. If we did not know about asymptomatic or pre-symptomatic carriers, and if we did not have a test to apply to asymptomatic persons, would the resulting public policy be better or worse than we have now? Time will soon tell as disruptions in the global economy worsen, and as otherwise healthy and prime-of-life people get exposed to hazards of vaccine injuries.
The public policy for the entirety of the pandemic has been based almost entirely on dark data to the point of ignoring bright data of observations of what is actually happening. I suggest that we would have been better off if we recognized early on that the science was unhelpful.