Current debates about artificial intelligence automating jobs usually assume that the jobs at risk are low-skilled jobs. Advances in AI simply raise the threshold of jobs that can be performed more economically by machines.
For example, there is now talk of autonomously driven trucks that will put truck drivers out of work. Even if a human operator is still needed, the trucks can travel in closely spaced convoys with just one driver in the lead vehicle. Eventually, automation will make trucker jobs resemble those of drone pilots: working shifts in an office where each trucker controls dozens of vehicles in transit. The overall cost benefit is that rigs can operate continuously without stopping for the driver to rest; the truck drives uninterrupted as remote operators rotate their shifts in the office.
Another example is the automation of order-taking in the fast food industry.
In both cases, I appreciate the added value of automation: better productivity, reliability, and consistent service delivery. These are worthwhile goals that weigh against the consequences of lost jobs. I would argue, however, that the economic calculation is incomplete.
When we automate a job, we are really defining its boundaries. In other words, if we can define a job's boundaries precisely, then we can automate it. I would go further and say that it is imperative that we automate any job defined precisely enough to be automated. My argument is that low-end jobs deceive us into over-simplifying what the worker actually delivers.
In the autonomous truck example, truckers offer more value than just the directly compensated task of moving the rig between its waypoints. In economic terms, the trucker is paid by the mile: if the wheels aren't turning, the trucker isn't earning. Reducing the job to keeping the wheels turning makes automation conceivable. The problem is that truckers provide far more value than the mere operation of the vehicle. For example, their presence on the roads provides a patrolling service, reporting road hazards and distress situations. Also, truckers can live anywhere in the country, bringing their earnings back to lower-income and more rural areas.
Similarly, the person behind the counter in a fast food restaurant offers services beyond taking orders and accepting payment. The counter person can respond to emergencies in ways a kiosk never will, and can provide assistance or witness testimony in unusual and unexpected events.
Responding to the unexpected is the true value of workers in lower-paid positions. We benefit from their actions when something unusual happens. I'm reminded of the reason airlines continue to employ pilots even though most of the flight is automated: rarely, but frequently enough, the pilot's improvised response resolves an unexpected situation. Comparable value exists in lower-end jobs. These industries routinely benefit from the responses of humans who have immediate access to the scene. We need humans in low-paid positions (related topics here, here, and here).
The real opportunity for automation lies in jobs requiring specialized skills, particularly those that involve the application of technology. These are often among the highest-paid non-celebrity positions, and they are the jobs automation is most likely to challenge in the near future. They are usually defined well enough to be reduced to automation, and there is often little or no side benefit from the presence of the human performing the job.
An example in medical care is diagnostic lab testing and interpretation. Historically, these tests required specialized training for very precise and repeatable operations. The remoteness of the lab makes the human workers irrelevant to any surprising events. More typically, if a surprising event does occur at a lab (such as a building fire), the occupants are more a hindrance than a help. Emergencies in a lab would be easier to handle if no humans were inside.
Closer to my own expertise is data science. Currently, data science is a hot field with highly paid jobs. The work of the data scientist involves the application of technical skills. The human performing the technical tasks offers no value outside the boundary of the tasks assigned to him, and the tasks are defined well enough for automation. Personally, I have been reviewing recent technology offerings, and I'm very impressed (and even excited) by how much is now automated that in the past required custom solutions. Although a lot of money continues to be spent on data scientists to write custom solutions, I think most of this money is wasted out of a misconception of how advanced the automated tools have become at replacing humans. Most of the work we now call data science is available in products that anyone can use. There are no hidden contributions from data scientists beyond the clear definition of their jobs. Most businesses will be better off without the data scientists who spend most of their time implementing custom solutions (the computer science aspects of data science).
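To give a sense of what I mean by off-the-shelf automation, here is a minimal sketch using scikit-learn and its bundled sample dataset. The dataset and estimator choices are illustrative assumptions, not a reference to any particular vendor's product; the point is that preprocessing, model selection, and evaluation that once required custom code now fit in a handful of library calls.

```python
# A minimal sketch: off-the-shelf tooling covering a typical modeling workflow.
# The dataset and estimator here are illustrative stand-ins, not a product endorsement.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model wrapped in one pipeline; hyperparameters tuned automatically.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=0)),
])
search = GridSearchCV(
    pipeline,
    param_grid={"model__n_estimators": [100, 300], "model__max_depth": [None, 10]},
    cv=5,
)
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```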
Highly paid specialists, especially in data science, are too confident that automation will primarily affect lower-paid jobs. We will see the opposite: we will continue to employ lower-paid staff as we recognize more fully the breadth of value they provide in front-line services. The real threat of AI is to high-end knowledge workers, and that includes most data scientists.
Humans have a distorted view of the difficulty of intelligence. We identify as intelligent the capacity for extensive learning or for mental achievement, especially in science, mathematics, or technology (the STEM fields). A relatively small minority of humans make such achievements, leading us to conclude that such intelligence is hard to obtain. When these achievements result in something recognized as beneficial, we reward the thinkers with (among other compensations) a recognition of high intelligence. We assume this kind of intelligence is the hardest to automate.
Modern technologies that make possible the systematic processing of data of large volume, variety, and velocity are quickly automating the specialized fields we previously assumed only highly educated people could handle. From my own experience, I see this as especially true in computer science, where new tools automate tasks that just a few years ago would have employed teams of highly trained specialists. Alternatively, the underlying compute technologies improve to better accommodate less sophisticated implementations, which also reduces the gainful employment opportunities of the skilled computer scientist.
In some cases, these specialists have moved on to newer challenges, but overall there is a decline in demand for humans to perform these tasks. While there will remain a need for specialists in the field, the openings are fewer in number and have briefer periods of relevance to sustain employment. Maintaining a long career in the field demands extensive continuing education on top of full-time employment. A few will luck out with comfortable jobs, but many others will struggle to keep their skills relevant to the current market. That struggle comes from increasing competition. We often assume the competition is among our peers, but in reality the competition is primarily with machines. It turns out that the tasks that distinguish computer science are definable enough to permit technological advances, either through automation of tasks or through adaptation of hardware that lessens the need for labor-intensive effort.
The risk of automation taking over jobs is higher among the better-compensated jobs of specialized knowledge workers. Ironically, the near-term opportunities for computer and data scientists lie specifically in automating their own jobs out of existence. Some practitioners may be aware that this is the goal and even embrace it. I am in that group, and probably always have been throughout my career. I never appreciated the idea that there need to be teams of STEM specialists standing between a real-world task and the technology. I am especially annoyed when such teams show no interest in, or even appreciation for, the ultimate goal of the real-world task. To the extent that STEM specialists acknowledge these goals at all, they assume the tasks are uninteresting and decidedly unintelligent.
There are often reports of STEM specialists imagining that they are contributing to society by automating the boring, uninteresting jobs. To STEM specialists, society would be a better place if we didn't have the apparently low-skilled (and thus low-paid) jobs. Yet, despite their efforts, these jobs remain. Low-skilled, low-paid jobs are increasing in number. For example, Amazon's Mechanical Turk bills itself as "artificial artificial intelligence," paying small fees for small tasks to obtain the mundane intelligence that continues to evade automation.
I wrote before that a data-automated economy will provide employment primarily in collecting the opinions or common sense that machines will probably never be good at automating. Since these jobs involve common, everyday knowledge, the entire population can compete as peers, and this will keep compensation low. The future is more low-paid jobs and fewer highly paid jobs. My impression of human history tells me that human society demands a way to differentiate people.
Eventually, people will find new definitions of human intelligence that differentiate the population into different levels of wealth or compensation. Such differentiation will need to retain the ill-defined character of common knowledge. Any well-defined differentiation will immediately populate training data for machine learning to automate the task. Sustainable differentiation of value among the population must involve concepts that do not generate training data for machine learning.
An example of this kind of differentiation is the concept of celebrity, which seems inherently immune to automation. Indeed, much of today's income inequality comes from compensating celebrities, both in culture and within corporations. Celebrities distinguish value through brand recognition. Such recognition commands higher compensation without leaving measurable traits that can train machine learning algorithms. In fact, fashions change so that each period presents celebrities who have little in common with prior celebrities.
Machine learning is unable to learn how to predict celebrities, and this makes celebrity status immune to automation by artificial intelligence. Celebrity status is already artificial, but in a way that cannot be automated through artificial intelligence. One of the tasks hired out to low-paid knowledge workers such as Amazon's Mechanical Turk workers is differentiating celebrities from non-celebrities. I would go further and generalize the concept of celebrity to include all of the tasks hired out to these knowledge workers. Recognizing a cultural reference is the same kind of mental achievement as recognizing a celebrity.
There will always be an economy of recognizing celebrities. Perhaps the basis of all human economy involves identifying celebrities. The celebrities will enjoy high compensation for being widely recognized; the remainder will enjoy some residual compensation for providing the recognition. When economists distinguish different economies, they are really distinguishing different concepts of celebrity within those economies. Some economies simply permit a wider variety of celebrity status.
Recently, I have been encountering many articles (such as here) that brag or warn about the rising capabilities of machine intelligence relative to human intelligence. I don't find this surprising. The entire basis of IQ testing is to measure the rarity of the intelligence of the person taking the test. Modern recognition of intelligence, such as IQ testing, has defined the word intelligence as something uncommon. For example, a high IQ test result claims that most people score lower on the same test. Defined this way, intelligence is whatever is hard for humans to achieve. That difficulty of achieving high intelligence suggests that this kind of intelligence is unnatural to the human brain.
In the past, we prized individuals who exhibited high intelligence through a combination of talent and lengthy, dedicated learning because there were no alternatives for the services they offered. Entirely coincidental is the drive to distinguish humans from other lifeforms now that we accept that humans are just another form of life on the planet. We need the existence of highly intelligent individuals to prove some special status of humans over other life forms. As an aside, when comparing human intelligence to non-human biological intelligence, we don't give the same advantage to the competing species: we inevitably compare high-IQ humans to the average intelligence of the competing species. In practical terms, we have not found a way to compare high-IQ humans with high-IQ non-humans. This gives us the comfort of knowing that humans are different from other life forms, because only humans can take IQ tests.
I also think this rarity-based notion of intelligence is a modern (post-Enlightenment) concept, arising from the increased demand for such capabilities to drive modern advancements like the industrial revolution and the current data revolution. The true marvel of biological intelligence is the common intelligence of all life forms, or common sense in human terms. Common sense in humans (and at many levels common to all living things) is very hard to create in machines. Correspondingly, I am not surprised that it is easy to create in machines the quality that is hard for humans to accomplish. Most highly intelligent humans achieve their intelligence through diligent training and education, and training and education appear to me to be inherently scriptable.
Machine learning follows a script of training algorithms on a model similar to training humans. Recent advances in capability from deep learning come not just from large numbers of layers with large numbers of computing nodes, but also from more in-depth training examples that approach the higher-education preparation of humans.
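To make the "script" analogy concrete, here is a minimal sketch of a deep learning training loop in PyTorch. The tiny network and random data are placeholders standing in for a real curriculum; the point is that the procedure is a fixed recipe of repeated exposure to examples, grading, and correction.

```python
# A minimal sketch of the training "script": repeated exposure to examples,
# scored against the right answer, with corrections applied each pass.
# The tiny network and random data are illustrative stand-ins.
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(256, 20)              # stand-in training examples
targets = torch.randint(0, 3, (256,))      # stand-in labels

for epoch in range(20):                    # each pass over the "lesson"
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets) # grade the answers
    loss.backward()                        # work out what to correct
    optimizer.step()                       # apply the correction
```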
Machines can excel at higher education because they are simply better than humans at being trained this way. Automation of higher-educated capacities is inevitable and necessary, given the continued rarity of human intelligence. This will be especially true in the STEM fields, which emphasize the consistent application of established knowledge and techniques.
I experienced this personally in computer science, where much of what once required teams of specialists is now integrated into integrated development environments (IDEs). Students learning these IDEs often hear from their teachers to just use some module or tool because it just works. In the past, we needed STEM careers to make work what now comes for free out of the IDE. The IDE can automate the intelligence because there are few best practices, and even fewer combinations of best practices that work well together. It is easier to capture best practices in software than to capture them in humans.
The concept of best practices itself only makes sense in terms of human (or biological) intelligence. We can distinguish better and worse practices because we encounter the latter in practice. In contrast, machines that capture the best practices will never experience worse practices on their own. In the machine-intelligence ecosystem there is no "best" practice, because the not-so-good alternative doesn't exist; machines simply have practices that happen to be optimal. We should value the automation of high intelligence because it eliminates the possibility of worse practices.
Ultimately, we will value machine intelligence over human intelligence (as measured by IQ, for instance) because we need lots of that kind of intelligence and cannot afford the consequences of inevitable human failings, where the assigned human fails to follow the best practice.
The rise of machine intelligence will devalue IQ-measured human intelligence. This will ultimately be a good thing, at least in avoiding the disappointments of humans who fail to apply best practices due to ignorance, fatigue, emotional distress, or malevolence.
A future in which machines automate the high-intelligence tasks will leave the population leveled at common intelligence. This will limit the potential for celebrity status based on intellectual achievement alone. The commonness of non-automated intelligence may mean fewer opportunities for individuals to distinguish their capabilities from others', at least in terms of higher-education-style intelligence. I imagine that in a future where intelligent endeavors belong to machines, there is a risk of the economy degenerating into a culture of poverty. However, common intelligence will still be essential and will remain inaccessible to machines. The future human economy will involve compensation for common sense.
I include in the notion of common sense the ability to engage in argumentation or rhetoric. Part of what we mean by common sense is our ability to argue about ideas. Certainly argumentation is also part of higher education, but it is more prevalent in the classical liberal arts than in the STEM fields. In STEM fields the emphasis is on the fundamental truths of best knowledge and best practices, and STEM conceives of these truths as excluded from rhetorical argumentation. STEM knowledge advances through experimentation and statistical testing, not through persuasion by rhetorical argument. In contrast, common sense emphasizes argumentation for persuasion. Common intelligence is the intelligence to persuade or to be persuaded.
Standardized intelligence testing is not appropriate for distinguishing humans from machines. The fact that the tests are standardized strongly suggests that algorithms will inevitably outperform humans on them. What distinguishes human from machine intelligence is rhetorical persuasion. A machine intelligence trained to know the best knowledge and follow best practices has no need for the capacity to be persuaded to act contrary to its knowledge. In today's terms, there is an entire cyber-security industry whose general purpose is to lock out the possibility of humans persuading machines to act differently than designed. Machine intelligence necessarily must prevent the possibility of being persuaded away from what it knows.
Similarly, there is a limit to machines' ability to persuade humans. That ability is limited to delivering correct inferences from verifiable evidence. Many times this is sufficient to persuade a majority or even a super-majority. Often, however, people demand more than what machines can present in order to be persuaded. People need rhetorical persuasion, the rhetoric grounded in common sense rather than in best data and practices. However good the algorithms and data, they are often incomplete. Meanwhile, people have learned through experience that the future can surprise us. Common sense includes fears and doubts about whether the future will resemble the past.
Fundamental to human intelligence is the phenomenon of fears and doubts. Biological intelligence (for all life forms, I would argue) emerges from a foundation of fear and doubt. Without fear and doubt, biological intelligence would never have arisen, and probably life itself would never have succeeded. Fear and doubt represent the absence of data. Human common sense appreciates the importance of absent data because fears and doubts are the foundations of common sense.
In contrast, machine intelligence succeeds on an abundance of data. Much of its recent success comes from the ready availability of vast amounts of data. Studies show that previously ineffective algorithms can become competitive when they have access to more information than their competitors. Machine intelligence is about historical evidence and best practices. Machine intelligence is fearless and doubt-free.
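The point that more data, rather than a cleverer algorithm, often drives the improvement can be illustrated with a simple learning curve. Here is a minimal sketch using scikit-learn's learning_curve on a synthetic dataset; the data and the choice of a plain logistic regression are assumptions made purely to show the general pattern of accuracy rising with training-set size.

```python
# A minimal sketch: the same simple model usually scores better as the amount
# of training data grows. Synthetic data is used purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           random_state=0)
sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:5d} training examples -> mean cv accuracy {score:.3f}")
```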
Human intelligence is about doubts and fears. The fact that humans are living beings justifies the importance of fears and doubts, the importance of considering the missing data.
This article exhibits the absence of fears and doubts in machine learning. The deep learning model fails to recognize images nearly identical to those in the training set, where the new images differ only in ways imperceptible to humans. Humans recognize not only that the images represent the same concepts but that the images themselves are, for practical purposes, identical. In contrast, the machine fails to recognize the minor variant even as it succeeds in recognizing the subject in far more dramatically different images. One explanation is that deep learning does not see images the way humans do, and the article is an appeal for more research to make machine vision more like human vision.
When I saw the article, my first reaction was to be reminded of the human phenomenon known as the uncanny valley. As artificial images progress toward realism, people become more comfortable recognizing them, but only until the images become too close to reality. At that point there is a sudden drop in comfort, or rise in suspicion, despite the images being closer to reality than cruder ones. Humans treat minor differences as cause for caution, fear, or doubt. A machine, in contrast, approaches a similar scenario and reaches a wrong conclusion with high confidence.
My explanation of this difference in behavior is that machine learning never has the opportunity to appreciate fears and doubts. Machines can never experience fears and doubts the way living beings can. Machines are like the classical polytheistic gods who walk among the world but are immune to injury or death. This immunity is inherent in their foundation of access to best practices and best data. Machine intelligence is based on a notion of immortal truth. It has no reason to fear or to doubt. Instead it will decide with confidence that a slightly modified image of a dog is in fact not a dog but a kind of flower. More extensive training of the deep learning model can even justify this confidence, because there are flowers that mimic the appearance of animal shapes.
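The phenomenon described above is what the literature calls an adversarial example. As a minimal sketch of the general idea (not the specific method of the article I cite), here is an FGSM-style perturbation in PyTorch; the toy untrained classifier and random "image" are stand-ins for a real trained image model, where such a tiny, imperceptible nudge is what famously flips a confident prediction.

```python
# A minimal sketch of an adversarial perturbation (FGSM-style).
# The toy untrained model and random "image" are illustrative stand-ins.
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in "dog" image
original_class = model(image).argmax(dim=1)

# Nudge the image a tiny step in the direction that increases the loss
# for the original prediction.
loss = nn.functional.cross_entropy(model(image), original_class)
loss.backward()
epsilon = 0.05                                          # a small, near-invisible step
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1)

new_class = model(perturbed).argmax(dim=1)
print("original:", original_class.item(), "-> perturbed:", new_class.item())
```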
This leads to the title of this post. The problem with machine intelligence is that it lacks the fears and doubts that open it up to persuasion. If it makes a decision based on its best practices and best learning, it will be impervious to persuasion by humans that its decision is wrong. Machines have intelligence without the rhetorical skill of engaging in arguments. This also hampers machines in the reverse direction: they are unable to persuade humans of their decisions when humans have fears and doubts. In this sense, machine intelligence has limited utility in the human economy. Machine intelligence may be perfect for replacing highly specialized skills that demand close obedience to data and best practices, but it will not be very useful in everyday, more mundane activities, because machines lack the ability to argue with humans, to engage in the rhetoric of persuading or being persuaded. This is impossible for machines (or philosophical zombies) because the foundation of their intelligence is immortal.
Machines cannot engage in human arguments because they have no reason for fears or doubts. In the end, perhaps machines will take over human governance, but that can only occur through the suppression of dissent. Rule by superior-intelligence machines must be authoritarian, because such machines will never be able to engage in the two-way arguments that are central to democracy.
This article presents visualizations of the image models inside trained neural networks, for classifiers that achieve high recognition scores yet obviously see the recognized objects differently than humans do. A good example is the reconstructed imagery a trained network seeks when identifying dumbbells: it is looking for a muscular arm attached.
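The visualization technique behind that example is often called activation maximization: start from a blank or noisy input and adjust it by gradient ascent until a chosen output unit responds strongly. Here is a minimal sketch in PyTorch; the toy untrained model and the "dumbbell" class index are assumptions standing in for the trained classifier described in the article.

```python
# A minimal sketch of activation maximization: adjust an image by gradient
# ascent so a chosen output unit ("dumbbell", say) responds strongly.
# The toy untrained model and class index are illustrative stand-ins.
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()
target_class = 4                         # hypothetical "dumbbell" output unit

image = torch.zeros(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()                  # ascend the class score
    optimizer.step()

# 'image' now shows what the model responds to for this class; in a real
# trained network, the reconstructed "dumbbell" famously includes an arm.
```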