Dedomenocracy: unsupervised government

In my discussions of government or governing by data, I have been trying to imagine its implications for the relationship between society and the government.   On the side of government, I imagine that government will become more tyrannical and authoritarian, resembling a theocracy more than a democracy.  The difference is that the source of authority comes from data and algorithms.   While this authoritarianism can be tyrannical, such an approach will require brutal suppression to keep the population cooperating, and the result will be the emergence of a culture of poverty.

My somewhat more optimistic view is that democracy will evolve to grant super-majority consent, where the population participates in the data science leading up to the decisions instead of participating directly in the decisions themselves.  While it is true that technology will eventually permit pervasive data collection and analytics, I imagine that the resulting rules will generally be short-lived and few in number.   Overall, the government will leave most people alone most of the time, so the result is a mostly libertarian government with brief periods of authoritarianism for a small subset of the population.

In this view, I suggested that the government would use data to define the urgency and focus of new rules as well as to define the rules themselves through prescriptive analytics.   In a dedomenocracy, humans will have no role in defining and selecting the rules that will be in effect at any time.   This is an extreme view that allows for no human accountability, not even from an autocratic tyrant.   This fantasy is my thought experiment to help imagine how data science technologies will evolve in the future.

Often, researchers in machine learning will describe their own optimistic views of how machines will eventually be superior to humans at solving the really hard problems such as poverty, economic equality with growth, environmental stewardship, etc.   While I agree that humans have not done a great job on these hard problems, I am not convinced that machines can do a better job.   The potential is there, but I have a huge respect for the informal reasoning of humans through debate and persuasion.   My view is that hard problems will never be solved except through some optimal compromise reached through informal human reasoning rather than machine reasoning.

The trend is definitely in favor of giving machines a chance to take over just to see what might happen.   I don’t think the experiment will succeed, but I’m resigned to the fact that we’ll have to live through the experiment anyway.

There are multiple approaches to how we may turn government over to machines.  I mentioned above the distinct choices of making the rules, choosing the most urgent rules to implement, and determining the duration of each rule.   We could allow machines to make the rules but then turn over the remaining choices to some form of human government.   My thought experiment demands that all of these decisions be machine decisions.  The human role is only in the stewardship of the data and the algorithms.

Machines will make all decisions.  These decisions include deciding on the scope of the problem to solve.  We should not expect the machine intelligence to answer the arbitrary problems that concern us most.   For example, we may want the machines to solve the current economic problems that ultimately stem from over-regulation of labor and over-extension of debt.   When forced to solve the broad problem with a demand to maximize overall productivity, the machine may find uncomfortable solutions such as lower minimum wages, longer working hours, and forced debt forgiveness.  It is easy to imagine a machine solution that comes up with very uncomfortable results and yet meets the goal of a more robust economy.    But our insistence that the machines come up with an answer to a specific problem may miss better opportunities.

In my thought-experiment dedomenocracy, I would allow the machines to tackle problems of the machines' choosing.  Using data, the machines will find something that can be done to improve some aspect of our lives, but this something will surprise us by advancing many different problems while not solving any one of them.

Many of the successes in big data analytics may have started off to solve some specific problem but ended up solving a different problem that happened to be useful in a certain context.   One example is the use of machine learning (such as neural networks) to provide modern language translation engines.   These engines are able to match patterns in different languages to produce effective translations without any explicit modeling of the individual languages' rules of grammar and semantics.   Paired with similarly developed speech-to-text algorithms, these are becoming very useful for automated translation services in real-time voice communications between people who share no common language.   This technology appears to solve the long-standing problem of automating translation services where an interpreter is fluent in both languages.  An interpreter uses thorough knowledge of the two languages' grammars, vocabularies, and idioms to provide an accurate transfer of information between the two speakers.   The analytic solution does not do interpretation because it does not emulate the interpreter's skills.  But it works well for cooperative speakers who are able to rephrase their statements or correct misunderstandings when they detect them.   This is a case of machine learning finding a problem that it can solve and then solving it so well that we declare it to be the problem we wanted solved, after the fact.
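
A minimal sketch of the chaining described above (speech-to-text feeding machine translation) could look like the following.  The library choice (Hugging Face transformers) and the audio file name are my own assumptions for illustration; the post does not name a specific toolkit, and a real-time service would need far more engineering.

```python
# Sketch: transcribe a recorded utterance, then translate the text.
# The file name "utterance.wav" is hypothetical; model quality and latency
# would have to be evaluated before any live conversational use.
from transformers import pipeline

speech_to_text = pipeline("automatic-speech-recognition")
translate = pipeline("translation_en_to_fr")

transcript = speech_to_text("utterance.wav")["text"]
translation = translate(transcript)[0]["translation_text"]
print(translation)
```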

Recently I encountered another, more basic example in a simple demonstration of using a machine learning module in the R language.  This is just meant as a simple demo and not a finished product, but it illustrates the concept of a machine learning a problem to solve.  This simple example trains a neural network on the concept of square roots using a training set of the squares of the first ten integers.  Once trained, it can estimate square roots to within a couple percent error.  Assuming prior ignorance of the actual relationship in the data, the machine comes up with a reasonable model to predict other data points in the range.  Playing with this model, I found that it fails to work for numbers far outside of the training set.  The training set was the squares of integers between 1 and 10, but the learned engine is unable to give a reasonable square root estimate of 13 squared or any larger number.   The model would work better if it had a more complex neural network or more training examples, but it will never be able to generalize the concept of a square root to arbitrary scales the way the mathematical concept does, working equally well for microscopic sizes as for cosmic sizes.   Despite its inability to learn what we know as the mathematical concept of a square root, it is able to learn something useful that works within the range of the training set.  If the training set included the most frequently requested square roots, then it would be reasonably accurate most of the time.  It can provide a utility that is comparable to the previous translation service.
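
The original demo was in R; a rough Python analogue shows the same behavior.  The network size, iteration count, and test points below are my own guesses, not a reproduction of that demo, and the exact numbers will vary from run to run.

```python
# Train a small neural network on the squares of the first ten integers,
# then ask it for square roots inside and outside the training range.
import numpy as np
from sklearn.neural_network import MLPRegressor

roots = np.arange(1, 11, dtype=float)      # 1, 2, ..., 10
squares = (roots ** 2).reshape(-1, 1)      # 1, 4, ..., 100

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=20000, random_state=0)
model.fit(squares, roots)

# Within the training range the estimate is usually reasonable...
print(model.predict([[50.0]]))     # near sqrt(50) ~ 7.07

# ...but the network learned a curve over 1..100, not the general concept
# of a square root, so estimates degrade as we move past the training set.
print(model.predict([[169.0]]))    # 13 squared, just beyond the range
print(model.predict([[10000.0]]))  # true root is 100; typically far off
```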

Yet another recent example is a demonstration of machine learning that estimates age and sex from a face in a photograph.  This prototype became instantly popular with people wanting to see whether the machine could guess their age.   I tried my photo (on this blog) and it guessed well enough to impress me.   However, there are many cases of the same algorithm getting it very wrong.  Using other images from the web, I found examples where the estimated age and sex were very wrong.  From my limited investigation of reactions to this tool, I'm guessing that it works for the majority of people who tried to use it, but it failed for a large number of others.   Like the square root example, the training set was relatively small.  A far larger training set might improve the results.   This particular example was meant for entertainment purposes, but I can see that it may be useful even though it is not able to accurately generalize the concepts of age and gender.   The tool is able to classify individuals along two dimensions, and this classification may be useful in multidimensional analysis to understand something about a population.   The machine learned something different from true age and sex recognition from facial features, but what it learned could be useful.  In this case, the utility was in its entertainment value.  There may be other areas where such a classification could be useful even if the terms are not perfectly accurate.

For these simplified examples, the machine learning's success comes from the latitude we provided for it to define the problem it is attempting to solve.   The learning consists of learning the question as well as the answer to that question.

When I named this blog site “hypothesis discovery”, I was thinking of my own experiences where the most prized discoveries from data were discoveries of new questions that the data could answer.   Often the motivating problems we encountered were vague requests to do something to help out when it wasn't even clear what exactly needed to be solved.   For my project, the only tool I had available was historical data collected for some seemingly irrelevant purpose.   The aphorism “when all you have is a hammer, everything looks like a nail” probably applies to our case, but there are times when progress can be made by hitting the problem with the available hammer.  Data lakes and analytic tools collectively are as narrow-purposed as a hammer, but they can be a source of feasible solutions when we allow their use.   By hypothesis discovery, I imagine a concept of discovering the question that might be answerable by the data instead of discovering a way to answer a preconceived question.

Every day I see new claims of big data solving identifiable problems such as curing cancer, fighting epidemics, improving the economy, solving global warming, etc.   For example, in the case of curing cancer, we have hopes for an actual cure that will finally make some cancer a treatable and survivable condition.   In these discussions, we seek to discover a hypothesis in the sense of finding a new way to solve that prior question.  The question is known; what we need is a new approach to solving it.    This is different from my idea of hypothesis discovery.  Ultimately, we will be disappointed in big data if we demand that it answer pre-defined questions.

The real opportunity for success and for progress using data-driven decision making is when we allow the analytics to pick questions that they can in fact solve.   In health care, for instance, a feasible cure of some dangerous cancer is one of many possible innovations that could improve health care.    While we are waiting for big data to discover answers to cancer cures, we could be using the data technologies to innovate elsewhere in health care, such as extending the sharing economy to health care to permit individualized house calls for infectious or routine medical care.

Another example was a recent innovation to detect anemia using a smart phone camera instead of a blood test, promising a much broader reach of health care into poorer communities.  This example illustrates the value of allowing the technology to redefine the question to one that available technology can immediately help with.   We already have diagnostics to test for anemia, but those require medically trained staff to safely draw blood and a nearby laboratory to examine the blood.   The innovation is a different approach that uses readily available camera phones with a color sample and then uses analytics to determine whether the eyelid color indicates anemia.    Absent this innovation, we would expect big data analytics to answer the question of how to get blood-draw and laboratory capability to rural and poor areas.   The innovation is the discovered hypothesis because it is something that we can do now with existing tools instead of waiting for technology to answer a specific question of how to get blood tests to the populations that need them.
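
I do not know the details of the actual anemia-screening method, but the general idea of reducing an eyelid photo to a color feature and classifying it can be sketched in a few lines.  The file names, labels, and the choice of a simple logistic regression below are hypothetical, for illustration only.

```python
# Toy sketch: average color of a cropped inner-eyelid photo as the feature
# for a simple anemia classifier.  The real innovation surely uses a more
# careful method; everything named here is an assumption for illustration.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def mean_rgb(path):
    """Average RGB color of an already-cropped eyelid photo."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    return pixels.reshape(-1, 3).mean(axis=0)

# Hypothetical training data: eyelid photos with lab-confirmed labels.
X = np.array([mean_rgb(p) for p in ["eyelid_001.jpg", "eyelid_002.jpg"]])
y = np.array([0, 1])  # 0 = not anemic, 1 = anemic (from blood tests)

classifier = LogisticRegression().fit(X, y)
print(classifier.predict([mean_rgb("new_patient.jpg")]))
```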

I think the goal of curing cancer presents a similar missed opportunity.  While there may be promise in combining big data on genomics, wet-lab results, and detailed patient health data, there is no immediate prospect of these technologies starting to cure the big cancers any time soon.   Instead of demanding a cure for cancer, we could ask ourselves what questions the current data and technology can solve right now.   There may be something we can do to improve the delivery of care for cancer patients without actually identifying a cure.   More importantly, there may be health issues unrelated to cancer for which data analytics can provide an immediate benefit.  An example of that was the possibility of using data technologies to better understand the dynamics of epidemics such as last year's Ebola outbreak in West Africa.

When we look to data technology to solve problems, we should permit the technologies to identify the problems that can be solved with the current capabilities instead of demanding that the technologies evolve to solve the hard problems we have been working on.   There are many opportunities to make progress even if we don’t touch the hard problems.   Allowing technology to solve what it can solve now may transform the hard problems to be narrower, or possibly even less visible.   For example, there are other ways we can improve overall life expectancy without curing any cancers, perhaps with investments in areas unrelated to health care.   It is our nature to focus on objectives that catch our attention.  This focus can blind us to immediate opportunities that are realistic given our current situation.

Much of the current work in artificial intelligence and machine learning treats humans as the standard against which to measure progress.   The Turing test (whether we can tell the difference between a machine and a human) still looms large for evaluating machine intelligence.   With this approach, we seek technologies that can out-human humans.   While there has been significant progress with positive contributions from human-beating machines, I think this is limited in value.   With adequate training and discipline, humans can perform the same tasks in more places, with far more flexibility, and with far lower energy requirements.  The population, diversity, and distribution of humans will outnumber and out-maneuver even superior machine intelligence.   Human intelligence is simply far more energy efficient.

I imagine that improving computer intelligence will set a new standard for evaluating humans.  In effect we will train our future generations to match their skills and discipline against computers instead of more fallible teachers.  This is similar to the use of data analytics to improve the training of athletes.  In the future, we will use advanced computer intelligence to develop future human students to be smarter and more reliable.

The real value of computer intelligence is to uncover the world that all of humanity is blind to.   In particular, in terms of government policy, computer intelligence can identify policies that attract no political advocacy or debate.   The policies identified by computer intelligence may be beneficial and more readily achievable than the multitude of human policies that seem to get nowhere despite our good intentions and healthy investments.   Examples of human failings include our so-called wars on poverty, drugs, inequalities, cancers (and other diseases), etc.    These continue to get our attention despite slow progress, if there is any progress at all.

We could direct machine intelligence to find the policies we are missing or ignoring.   We need algorithms that can find new objectives to seek.

Machine learning algorithms are currently divided into supervised and unsupervised learning.   Both involve some human-provided objective.  The difference is whether we supply training examples of the answers, as we do for supervised learning.
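
That distinction can be made concrete with a small sketch (scikit-learn is my choice here; the post names no library).  In both cases a human supplies the objective; the only difference is whether example answers come with the data.

```python
# Both styles of learning below have a human-supplied objective; the
# difference is only whether example answers (labels) are provided.
import numpy as np
from sklearn.linear_model import LogisticRegression   # supervised
from sklearn.cluster import KMeans                    # unsupervised

X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])

# Supervised: we hand the algorithm the answers for the training examples.
y = np.array([0, 0, 1, 1])
supervised = LogisticRegression().fit(X, y)

# Unsupervised: no answers, but we still impose the objective
# (here, "group the data into two clusters").
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)

print(supervised.predict([[5.1, 5.0]]), unsupervised.labels_)
```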

In the above discussion, I'm suggesting a different approach to machine learning by calling the existing learning approaches “supervised objectives”.   Humans provide or accept the objectives of either supervised or unsupervised learning.    In other words, we supervise the objectives of the machine learning.   In my earlier discussions of dedomenocracy, I described the ideal of excluding humans from identifying and selecting decisions.   Another way to describe this is that dedomenocracy involves unsupervised objectives.   The machine intelligence decides what objectives are feasible and which of these to pursue.    I further described a human role in the scrutiny of the data and the algorithms.   This coordination of human and machine intelligence takes advantage of both strengths.  The humans supervise the data (including the algorithms) but leave the identification and selection of alternative policies as the exclusive responsibility of machines.  The machines pick the most promising policies to pursue, and the machines train humans to efficiently carry out these policies.
