Data deception is a concern for automated decision making based on data analytics (such as in my hypothetical dedomenocracy). I think it is already a concern with our current democracy. I fear the current enthusiasm for data technologies because I see little appreciation for the possibility of deception. There is huge confidence in the combined power of large amounts of data and sophisticated statistical tools (such as machine learning). Missing from our consideration is how well the data actually captures the real world. The data is not necessarily an honest representation of what is happening in the real world, and it may very well include deliberate deception.
My point is that the unexpected category is never a topic of analysis itself. There is no value in making policy based on the volume of unexpected results. Instead, the unexpected category justifies and directs more in-depth investigation into explaining the members within that category. The same response applies to all of the negative categories: if a negative category draws attention, the appropriate reaction is to dive in and find out how to divide it into new positive categories so that they may support analysis of specific policies or decisions. We reserve negative categories for the uninteresting elements.
When we identify a population with a label of low income, we imply that their lives would be better if they had higher incomes. This meaning is similar to the above syllogistic fallacy of the illicit major. While there is no doubt that many poor people would desire higher incomes, many others choose lower incomes because of some other benefit they get from their jobs. The jobs may be less demanding, or may involve the kind of work they find more enjoyable.
Data should meet tests against data fallacies in the same way that arguments must avoid errors in grammar, logic, or reasoning. The above example of a medical health record of a birth with same-sex parents and the mother identifying as male is analogous to a grammatical error, even though the data itself meets the business rules for the form. We should be able to object to using this data for some purposes, such as determining eligibility or medical necessity for health services, just as we would reject a grammatically incorrect sentence in a formal argument.
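The distinction between form-level business rules and purpose-specific validity can be sketched in code. This is a hypothetical illustration only: the field names (mother_sex, parents) and the purpose label are invented for the example, not drawn from any real health-record schema.

```python
# Hypothetical sketch: a record can satisfy form-level business rules
# yet still fail a purpose-specific "data fallacy" check.

def passes_business_rules(record):
    """Form-level validation: required fields present, allowed values used."""
    allowed = {"male", "female"}
    return record.get("mother_sex") in allowed and len(record.get("parents", [])) == 2

def valid_for_purpose(record, purpose):
    """Purpose-specific semantic check, analogous to rejecting a
    grammatically flawed sentence from a formal argument."""
    if purpose == "obstetric_eligibility":
        # For this purpose, a birth record whose mother is recorded as
        # male is semantically suspect even though the form is well-formed.
        return record.get("mother_sex") == "female"
    return True  # other purposes may tolerate the same record

record = {"mother_sex": "male", "parents": ["male", "male"]}
print(passes_business_rules(record))                       # True: the form is valid
print(valid_for_purpose(record, "obstetric_eligibility"))  # False: rejected for this use
```

The point of the sketch is that validity is relative to purpose: the same record clears the form check but fails the semantic check for one specific use.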
Democracy also cannot afford to be distracted by spark data (stray voltage) for the same reason. The urgent issues need solutions that require hard and painful choices. Unfortunately, the modern practice of democracy demands obedience to daily public opinion polls that are easily manipulated by stray voltage or spark data. Instead of governing by the people, modern democracy wastes time on arguments over sparks.
There may be many other fallacies in handling data that affect a dedomenocracy's automated rule-making in the same way classical rhetorical fallacies affect persuasive arguments. In order to defend against malicious or unfair manipulation of a dedomenocracy, we need to develop ways to identify data fallacies that we can use to govern the quality of data for automated rule making.
Today we are seeing a response to entrenched businesses, where young people are finding success with disruptive new business plans. To be sure, these disruptive businesses take advantage of new opportunities made possible by new technologies. However, such innovations could have happened within established businesses. As implied by the word disruptive, the established businesses surely regret not taking advantage of these opportunities themselves. I think this missed opportunity is a direct consequence of the established businesses failing to hire sufficient numbers of young people and, more importantly, failing to let younger people displace older people in positions of authority where innovation can happen.
Dedomenocracy is a scaled-up version of modern data science practice, using big data predictive analytics to automate decision making. As a data science project, there is a need to evaluate the data in terms of how closely it represents a fresh, unambiguous observation of the real world at a specific time instead of a reproduction of a past observation through model-generated dark data. Darker data involves some level of contamination with historic observations or with our interpretation of past observations. The problem with darker data is that its use of old and potentially outdated information can discount more recent observations that could tell us something new and unexpected about the current circumstances of the world.
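The idea of grading data from bright (fresh, direct) to dark (model-generated) can be made concrete with a small sketch. This is illustrative only: the provenance labels, the darkness ordering, and the weighting scheme are assumptions invented for the example, not part of any established dedomenocracy design.

```python
# Illustrative sketch, assuming a three-level provenance scale from
# "bright" (fresh, direct) to "dark" (model-generated). All labels and
# weights are hypothetical.
from dataclasses import dataclass

DARKNESS = {"direct_observation": 0, "reproduced_observation": 1, "model_generated": 2}

@dataclass
class Observation:
    value: float
    provenance: str  # one of the DARKNESS keys

def analysis_weight(obs, max_darkness=1):
    """Discount or exclude darker data so that fresh observations are
    not drowned out by reproductions of past observations."""
    d = DARKNESS[obs.provenance]
    if d > max_darkness:
        return 0.0          # exclude model-generated dark data entirely
    return 1.0 / (1 + d)    # mildly discount reproduced observations

observations = [Observation(3.2, "direct_observation"),
                Observation(3.0, "reproduced_observation"),
                Observation(9.9, "model_generated")]
weights = [analysis_weight(o) for o in observations]
print(weights)  # [1.0, 0.5, 0.0]
```

The weighting itself is a placeholder; the point is that provenance travels with each observation so the analysis can prefer fresh data over reproductions of the past.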
In present-day government, we often set absolutes in policy positions. These absolutes are claims of truth or natural law. When we set an absolute policy position, we categorically reject any policy that contradicts it. I think this is a bad way to govern: by rejecting as absolutely wrong certain things that people will do anyway, we lose any ability to manage that activity. Our only response to a violation of an absolute policy position is to demand that everyone stop doing it. It would be more productive to tolerate everything so that we can offer prescriptions for more acceptable or less offensive ways of doing the same thing.