In this blog, I refer to dedomenocracy as a government by data and urgency. A super-majority of the population expressing urgency triggers the creation of some new, but short-lived, policy. The policy would consider all available data that the public facilitates in collecting through deploying and validating the necessary sensors. The policy would consider all data about the entire human and earth situation, not just the specifics of the urgency itself. The public also provides, through a democratic process, a prioritized list of objectives and concerns that any future policy must attempt to optimize. The key distinction of the dedomenocracy is that humans have no contribution, approval, or override to the policy that some algorithm would select. The algorithm may select a policy that is incomprehensible in the context of the immediate urgency, and for future renewals of a similar urgent situation it may select a policy that contradicts the prior one. The dedomenocracy enforces the policy with complete authoritarianism, but only for the short time when the policy is valid. All such policies have an expiration date in the not too distant future.
I contrast this form of government with our current concept of democracy.
Our current concept of democracy selects individuals to serve terms during which they make decisions based on their personal emotions and aptitudes. Ideally, the population selects these individuals based on their demonstrated temperament and skill, but in practice the population selects them based on affiliation with a particular party and that party’s perceived temperament and skill. The primary consequence is that the actual response to some emergency is completely at the whim of the person holding power at the time. There is no obligation to abide by, or even to solicit, public sentiment about what objectives to pursue or what hazards to avoid.
My particular imagined dedomenocracy removes the human from policy making entirely. The policies are produced without human influence and without human override. Instead, an algorithm considers all available data against the public’s prior expression of the objectives they most want for the future and the hazards they most want to avoid. In my imagined implementation, there remains a strong democratic participation. The population selects and inspects the data available to the algorithm. The population also selects the algorithm for predicting and evaluating alternatives against the public’s list of goals and fears. This is a purer form of democracy because the population is expressing what it wants considered for any future crisis. The population settles on its preferences during less stressful and less emotional times before the crisis hits. When some later crisis does occur, the public has greater confidence that the algorithm will honor its goals, rather than having some individual act alone or through hidden influences.
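To make this concrete, here is a minimal sketch in Python of the selection step I have in mind. The data structures, the weights, the 60-day lifetime, and the function names are my own illustrative assumptions, not a specification of how a real dedomenocracy would be built.

```python
from dataclasses import dataclass
from datetime import date, timedelta

SUPERMAJORITY = 0.67  # assumed threshold for a public expression of urgency

@dataclass
class CandidatePolicy:
    name: str
    # Predicted change in each tracked quantity (an objective or a concern),
    # estimated from all available data.
    predicted_effects: dict

def urgency_triggered(votes_urgent: int, voters: int) -> bool:
    """A new policy is generated only when a supermajority expresses urgency."""
    return voters > 0 and votes_urgent / voters >= SUPERMAJORITY

def select_policy(candidates, priorities, today: date, lifetime_days: int = 60):
    """Score every candidate against the public's pre-declared, weighted
    priorities and pick the best; no human approves or overrides the choice,
    and the selected policy carries a hard expiration date."""
    def score(policy: CandidatePolicy) -> float:
        return sum(weight * policy.predicted_effects.get(item, 0.0)
                   for item, weight in priorities.items())
    best = max(candidates, key=score)
    return best, today + timedelta(days=lifetime_days)

# Signed weights fixed democratically in calmer times, before any crisis:
# positive for objectives to pursue, negative for hazards to avoid.
priorities = {"future_prosperity": 0.5, "excess_mortality": -0.3, "civil_restrictions": -0.2}
```

Even in this toy form, the essential features are visible: the priorities predate any crisis, the selection is mechanical, and the expiration date is part of the output.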
In the context of the current crisis, the public may have expressed its intentions prior to the crisis as prioritizing opportunities for future prosperity, and that priority inherently weights policy in favor of the younger generations and to the detriment of the older generations. The future prosperity belongs to the survivors who reach that future date.
I imagine that a dedomenocracy may have made very similar initial policies to the ones we actually experienced. The two forms of government would have departed dramatically after the first month or so. In our experience, the government by human leadership had to maintain consistency and display leadership by extending policies both in time and in severity in order to pursue the initial goal. In contrast, a dedomenocracy could have, and probably would have, made a dramatic change after the first couple of months when fresh data clarified the risks both in terms of the vulnerable and in terms of the future prospects of the less vulnerable. Unlike our democracies, a dedomenocracy has explicit permission to make subsequent policies that contradict prior policies, and it has permission to pursue some new objective unrelated to ameliorating a crisis that has become less of a concern than originally imagined. Also unlike our democracies, which are indefinitely obligated to pursue some objective, such as flattening the curve or eliminating the virus, a dedomenocracy produces policies that expire completely after a short period. In order to continue, a dedomenocracy must have a renewed expression of urgency, which in my opinion would not have had supermajority support after the first couple of months. If a renewed urgency demanded a policy, the resulting policy would consider the population’s expressed priorities in light of new data and might come up with a completely different policy that pursues some other objective, an objective that may be incomprehensible to humans due to the quantity of data considered.
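Continuing the sketch above, the expiration and renewal mechanics might look like the following; again, the threshold and the function names are assumptions of mine.

```python
from datetime import date

SUPERMAJORITY = 0.67  # same assumed threshold as in the earlier sketch

def policy_in_force(expires: date, today: date) -> bool:
    """Every policy lapses automatically at its expiration date."""
    return today <= expires

def renewal_permitted(votes_urgent: int, voters: int) -> bool:
    """Continuation is never the default: it requires a fresh supermajority
    expression of urgency, after which a policy is re-derived from the newest
    data and may contradict, or simply ignore, the one that just expired."""
    return voters > 0 and votes_urgent / voters >= SUPERMAJORITY
```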
Clearly, humans will want to exert some control over the governing policy, and they will compete with each other for the privilege of exerting that kind of control. In my fantasy government, I assume that the algorithm’s authority is absolute and unassailable by human actors. This is obviously a fantasy given the inevitable opportunities for exploits, but I fantasize that it may be possible to have an impenetrable algorithmic policy-making system.
I further fantasize that the public’s expression of its prioritized list of objectives and concerns comes from an ideal democratic process, where each person’s vote counts equally toward the final priorities.
This leaves the data itself as the primary means for individual leaders to influence or even control policy. The algorithm considers only the data that the population makes available to it, so people can influence the algorithm by controlling what data it sees.
In earlier posts, I described data in terms of categories through the metaphor of light. I described bright data as recent, direct observations from well-calibrated sensors. At the other end is dark data, which is computed from human theories. Dark data fills in for missing bright data because it is very cheap. In the middle are most data sources, which are indirect and need models to relate the observation to something meaningful for comparison. My imagined dedomenocracy prefers bright data over dark data, even though that means accepting correlations while rejecting causative theories.
I argued previously that, given sufficient bright data, the algorithm should be able to rederive causal relationships from the recent data alone. Allowing the algorithm to rediscover some previous theory based on recent information provides confidence that the prior theory remains true, but more importantly it allows the algorithm to discover something previously unknown, and perhaps something beyond what humans can comprehend or communicate.
While there is a preference for bright data, dark data will continue to play a role in a dedomenocracy. The point is the segregation of the data according to whether it comes from observation or from models. The algorithm will treat these types of data differently. This is very different from democracies, which tend to prefer dark data over bright data and also to treat the two as similar, as long as observations support the theories.
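One hypothetical way to express this segregation is to tag every record with its provenance and let the algorithm discount one category relative to the other. The tags and the particular weights below are illustrative assumptions only.

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    BRIGHT = "bright"  # recent, direct observation from a well-calibrated sensor
    DARK = "dark"      # value computed from a model or theory

@dataclass
class Record:
    variable: str
    value: float
    provenance: Provenance

# Assumed credibility weights: bright data dominates, dark data only fills gaps.
CREDIBILITY = {Provenance.BRIGHT: 1.0, Provenance.DARK: 0.2}

def weighted_value(records):
    """Combine records of the same variable, discounting model-derived (dark)
    values relative to direct (bright) measurements."""
    total = sum(CREDIBILITY[r.provenance] * r.value for r in records)
    weight = sum(CREDIBILITY[r.provenance] for r in records)
    return total / weight if weight else None
```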
In a dedomenocracy, the opportunity for individuals to influence policy lies in controlling the data made available to the algorithm. This will likely lead to the next stage in the evolution of human conflict, which previously advanced from purely kinetic warfare to cyber warfare. The new warfare will be in the data itself. Although we do not yet have a dedomenocracy, we are already experiencing battles on the data front.
Data warfare takes different forms. In the context of my data categorization, the warfare can attack bright data or attack dark data. Either one may be attacked by excluding data that does not support a desired outcome. Alternatively, an attack on one category of data may involve adding data of either category. The attack may introduce new bright data to counteract or negate existing bright data, or to raise doubts about existing dark data. Similarly, additional dark data can raise doubts about existing observations or counterbalance existing dark data.
One area of attack is to exclude inconvenient data. We see this in the current situation with active censorship of observations that conflict with the desired narrative. Suppressed information includes the overcounting of cases and deaths, and the counterexamples where poorer and less medically equipped nations are faring better than richer nations.
One may attack bright data by using different validation criteria for the observations. An example is that any death within a period of time of a confirmed positive test or a confirmed exposure counts as a COVID-19 death, but a death following a vaccination is presumed to be unrelated to the vaccine without direct evidence of a connection. Both sets of information are available, but associating COVID-19 with deaths is given higher confidence than associating the vaccine with deaths.
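The asymmetry becomes obvious when the two validation rules are written side by side. In the sketch below, the 28-day window and the field names are assumptions chosen only to illustrate the point; the same death records yield very different counts depending on which rule is applied.

```python
from datetime import date, timedelta
from typing import Optional

ATTRIBUTION_WINDOW = timedelta(days=28)  # assumed window; actual rules vary

def within_window(earlier: date, later: date) -> bool:
    return timedelta(0) <= later - earlier <= ATTRIBUTION_WINDOW

def counted_as_covid_death(death: date, positive_test: Optional[date]) -> bool:
    """Rule applied to the disease: any death within the window after a
    confirmed positive test is attributed to COVID-19."""
    return positive_test is not None and within_window(positive_test, death)

def counted_as_vaccine_death(death: date, vaccination: Optional[date],
                             direct_causal_evidence: bool) -> bool:
    """Rule applied to the vaccine: temporal proximity alone never counts;
    attribution requires direct evidence of a causal connection."""
    return vaccination is not None and direct_causal_evidence
```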
Another attack on bright data is to use dark data. One example is to use historic normal seasonal mortality rates to attack the narrative that the COVID-19 deaths present an emergency. Note that the historic record of daily mortality rates is bright data, but it becomes dark data when it is used to suggest a baseline of what the mortality would have been without the disease. Depending on the timescale over which mortality is aggregated, there are different levels of excess mortality; the worst-case excess appears at smaller timescales, such as particular weeks. The question is whether the excess is acceptable and below a threshold of emergency. I presume that threshold would have been decided in advance of the pandemic, and it would be compatible with prior seasons not being declared emergencies.
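A back-of-the-envelope version of that comparison might look like the following, with an assumed pre-declared threshold; note that the baseline series is exactly the kind of dark data just described, because it stands in for what mortality would have been without the disease.

```python
# Assumed, pre-declared emergency threshold: chosen so that prior bad seasons,
# which were never declared emergencies, fall below it.
EMERGENCY_THRESHOLD = 0.30

def worst_excess_mortality(observed, baseline, window):
    """Return the worst-case fractional excess over the baseline across all
    aggregation windows of the given length (in periods, e.g. weeks)."""
    worst = 0.0
    for start in range(len(observed) - window + 1):
        obs = sum(observed[start:start + window])
        base = sum(baseline[start:start + window])
        if base > 0:
            worst = max(worst, (obs - base) / base)
    return worst

def is_emergency(observed, baseline, window=4):
    """Smaller windows surface sharper peaks; the same data can cross the
    threshold at a weekly scale while staying under it at an annual scale."""
    return worst_excess_mortality(observed, baseline, window) > EMERGENCY_THRESHOLD
```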
Attacking dark data is difficult in a democracy. Dark data is computation based on theories that have some scientific testing for validity. Ours is a culture that explicitly expresses its trust in science, and this means in particular scientific theories supported by rigorous testing, reviewed and replicated by peers. The dark data example for the current situation is the epidemiological modeling that continuously warns of exponential rises in case counts and predicts dire outcomes under the assumption that the entire population is susceptible. We permit tweaking of the model parameters to account for varying degrees of infection rates, incubation periods, and case mortality rates, but we select one model over competing models that do not have as widespread peer support in the scientific community.
Recently, there was an example of using dark data to counter the epidemiological-spread dark data models. This new example introduced a new model that presumes the epidemiological model is correct, but adds a prediction of what can happen when a new vaccine is deployed while the infection is at pandemic levels. This model recognizes the science that a vaccine takes several weeks before the body is fully prepared to ward off new infections. It poses the problem of what happens when a person contracts the virus in the period after the initial vaccination but before the body has fully benefited from the vaccine. It is in this window of time that certain variants of the virus may survive the suboptimal response. Such survivor viruses will learn how to evade the vaccine, and perhaps even evade the entire technology used by the vaccine. The result would be a virus resistant not only to the current vaccine but to any future vaccine using the same strategy.
This introduction of new dark data directly challenges the policies in our current system because it does not refute the existing dark data of how diseases spread and what proportion of the population will end up suffering. The new dark data uses science showing that an infectious agent will likely select for resistant variants when exposed to a mitigation that cannot completely eliminate it. A suboptimal immune response provides the opportunity for selection of a more dangerous pathogen, one that is resistant to vaccines. The new theory goes further in suggesting that the specific antibody response can suppress the general immune system’s ability to handle novel variants. This predicts that future variants will find more compromised immune systems and thus have higher mortality.
The attack on the current policy of universal vaccination counters the dark data of the epidemiology of this virus’s spread with a separate theory of what will happen to the virus when it encounters a vaccinated person whose immune system has not fully prepared itself for the virus. The policy is placed into doubt not by denying the epidemiology but by raising concerns about a much more dangerous situation: fostering the development of a deadlier strain that will not offer a similar vaccine opportunity.
Another avenue of attack is to attack dark data with bright data. This is very difficult, to the point of being forbidden, in our current system of government. In our current government, in order for humans to lead, they need some foundation that cannot be changed by evidence. Similar to other governments that base leadership on the foundation of some religion, our government bases leadership on a foundation of science. When an emergency arises, we have to freeze the science. For the purposes of the current pandemic, we have to assume that the science is settled. In other words, a crisis is not the appropriate time to challenge the science.
The consequence is that we cannot properly assess new observations that challenge the science. This pandemic offers plentiful examples of contradictory observations. The virus did not spread as thoroughly as expected. A larger share of the population than expected turned out not to be susceptible. There were cases with no convincing prior exposure to another infected person. The mortality rates varied in counter-intuitive ways, with mortality highest in exactly the places where it should have been lowest: areas with access to the best funded and best organized medical systems.
For the policies concerning social distancing and lockdowns, there were the contrary observations of no excess cases among workers deemed essential, most of whom had jobs that involved entire workdays in direct exposure to the public. In the early months, these workers did not wear masks or otherwise adjust their work patterns to increase separation from the public and from each other.
In a democracy, the declaration of an emergency is a declaration to freeze science, particularly in those areas that tend to predict the most pessimistic results if nothing is done. I suspect this is inevitable because a democracy selects specific individuals to be leaders, and human leadership demands steadfast determination to see a policy through to completion and to instill confidence in the population. Given the recent experience, this particular property of democracy raises doubts about a democracy’s ability to handle a new emergency that is inconsistent with established theories and the operational plans based on those theories.
All of the above attacks on data will be available in a dedomenocracy. In particular, a dedomenocracy would open up more opportunities for directly attacking dark data with bright data as well as with competing dark data. There is no settled science in a dedomenocracy because it places its preference on new observations. This is the opposite of a democracy, which prefers settled science to the point of denying the legitimacy or relevance of contrary observations.
Given that attacking the data would be the most accessible human approach to influencing policy, the attention on the data will intensify as competing groups try to obtain data that supports their preferences or undermines their opponents’ preferences. The casualty in this battle is the data itself. Ultimately, the winning policy will be the result of the data that we permit the algorithms to consider. This will be a battle in the human realm with stakes as high as the earlier kinetic warfare, as demonstrated by the current situation of risks of death by disease or death by medical intervention or prophylaxis. It will also have the added stress of ambiguity and uncertainty that we experience with cyber warfare.
Kinetic warfare primarily affected warriors and populations unfortunate enough to be in the arena of battle. Cyber warfare extended its combatants to information technology professionals who at some earlier time faced less deliberately hostile adversaries. Moving toward data warfare will likely incorporate the greater part of the population because it involves the data they make available to algorithms. The stakes both for privacy and for openness can have immediate implications for a person’s life experience. Too much privacy may prevent an adequate response to control the pandemic, while too much openness about contacts and about cooperation with vaccinations can lead to severe adverse consequences. The data war will make everyone a combatant, or at least anyone who has specific expectations from his government.