I have some thoughts about the fate of the future human workforce as we continue to improve automation. Articles such as this one present the debate: one side points to history showing that automation generally increases the number of jobs, while the other side notes that the increasingly concentrated economic benefits of automation may leave most people “benefiting from the dramatically decreased costs of goods to eke out a subsistence lifestyle”. My response, at the end of this post, follows a lengthy background.
In the posts on this blog, I’ve been exploring thoughts in several areas that seem unrelated. The recurring themes are: the nature of data, the under-appreciated need for routine human labor in data science, the potential faults of big data and algorithms, the acknowledgment of intelligence, workforce participation, and social cohesion. I think the themes are best seen as interrelated when the list is read in reverse order. Social cohesion, or at least its stability, depends on participation, which depends on acknowledging intelligence in humans (and elsewhere). Social cohesion also depends on continued trust in how we make decisions that affect everyone’s lives. That trust depends in part on a mutual respect for intelligence, but it also depends on accountability and responsibility.
A key part of the cooperation that makes social systems work is the assurance that there are responsible parties who will be held accountable when things do not go well. No matter how well we plan, there will always be cases that do not work out. A plan may accomplish its goal of increasing the population that benefits while still failing to benefit others, or even injuring them. To maintain a stable social system, the injured or neglected parties need to hear explanations and justifications from someone who was responsible for the particular action.
When the injured parties find no one who can be held accountable, they will withdraw their participation in society. In very early posts, I talked about the need for governments to enjoy super-majority support in order to withstand uprisings or protests from a small subgroup. For example, a democratic government that rules by simple majority depends on the cooperation of the bulk of the group that does not hold power. This cooperation goes beyond the out-of-power minority’s agreement to be governed: the ruling majority needs the out-of-power minority to actively support it when facing protests or uprisings.
In those earlier posts, I described protests that presented impressive images of filled streets and public areas. When the protests succeeded, it became apparent that the participants did not represent a majority or even a coherent minority. The images of large crowds gave a mistaken impression of unanimity when in fact the groups were a small minority that managed to inflate the crowd size for some unrelated or temporary reason (such as curiosity about what the crowd was all about). A protest movement that overthrows the government may not have the resources to govern. More relevant to this discussion, the protest succeeded because a larger group failed to show support for a government they may in fact have preferred over the protest’s alternative.
I titled this blog post around workforce participation, but I have a wider view of what it means to participate in the workforce. Reports and discussions of the macro-economic concept of workforce participation give the impression that a sizable part of the workforce is abstaining from more than just work. Lacking income, this group may be restraining expenses, spending less money, and being less active in the community than they otherwise would be if they had an income.
Social cohesion depends on maximizing the participation of the population to provide that super-majority protection when it is needed.
One explanation for depressed workforce participation, compared to historical rates, is that there are fewer jobs, or that the jobs are unsuitable for the workforce. Certainly employment prospects vary geographically, and that could account for a mismatch of workers and jobs. Locations where jobs are booming are not convenient places to keep a home. For example, a city may have many jobs available while the cost of comfortable living within a reasonable commuting distance of the city is prohibitive.
There may be more to the explanation of depressed workforce participation than just geographic or skill mismatch of labor and job. In the past, we achieved very high workforce participation by people who agreed to very long commutes including weekend commutes from temporary mid-week lodging in order to participate in the workforce. The decline in workforce participation may be a decline in motivation to participate.
Macro-economically, motivation is typically measured in terms of compensation. Real incomes in general are not rising as quickly as they did in the past. Many persistently unfilled job openings do not increase the salary range to make the job more attractive. There may be less monetary motivation to justify the effort to participate.
Motivation is not entirely monetary. Money may not even be the dominant form of motivation. All other things being equal, more money may provide more motivation, but more money may not compensate when those other things do not remain equal.
In the workforce, we have to comply with various demands and limitations on how we do our work. In a preceding paragraph, I mentioned that unfilled jobs do not raise their offered compensation to attract more interest. Often, the employer faces a salary cap on the position due to constraints outside of his control. For example, he may need the position to fill a particular contract that is tied to a particular salary range for the worker. Alternatively, his contracts with others may need to be renegotiated if this position is compensated outside of its limits. The employer may wish to make an exception for this one case, but he is powerless to do so.
This is just one example of the many constraints imposed in the workplace. Very capable people see problems they can solve, but they are not allowed to do what their solution requires.
There is nothing new about rules in the workplace. We have always had constraints on what is permitted or who is permitted to do it. What is new is the increasing sense that no one is responsible or accountable for the rules. There is no one with whom to discuss a rule, understand its justification, or at least begin a negotiation to change it. The rules exist without a human who is accountable for them.
My earlier posts often defended the importance of recurring data-science labor in production data systems. Part of the reason for this need is to have a person accountable when things are not going well. As I mentioned earlier in this post, even the most beneficial system will leave some potential beneficiaries neglected or even injured. Those injured parties need someone with whom to discuss their concerns, so they can either understand why they must accept their outcome or open a negotiation to better accommodate their needs. Currently, the trend is to eliminate the recurring labor costs of production data science. The result is that no one is accountable for the current operation of the system. In fact, a frequent response to complaints takes the form of “it is all controlled by algorithms outside of our control”.
This is a huge problem. A foundation of the agreements that build social structures is the concept that someone is able to control the outcomes. Even when someone does not like the consequences, he agrees to cooperate and actively participate because he knows someone is in control and he has at least some level of trust in that person’s intelligence and good will (even when they disagree).
The admission that some aspect of our lives is outside of anyone’s control is an admission that social structure is inoperable for that particular aspect of our lives. The reason we build social structures is to put someone in control of something that was previously out of control. We build governments or social coalitions to address something that no human previously controlled. Even when we have at best a feeble ability to control the outcome, we prefer to have someone in charge who will be accountable for what is occurring. That person can be approached to defend the justification for the chosen course of action, or approached to negotiate a revision that better accommodates other needs.
Most of my posts tend to return to the topic of data science. I have mentioned that the growing investment in big data projects is making data central to all aspects of our lives. I suggested that in the near future, participation in government or work will require everyone to have data science skills, just as we currently expect people to have the ability to commute to work. We will increasingly demand accountability for how data is used.
I think the issue of accountability is broader than data. Accountability for controlling things that matter to our lives is a key justification for social arrangements such as governments. Democratic governments depend on active super-majority support, where most out-of-power groups agree to be governed and agree to defend the government. Without super-majority support, a small minority can topple the government. That small minority is one demanding accountability (they want to change something). The larger minority that participates in the super-majority supporting the government also needs to be convinced there is accountability, in the form of someone able to convincingly justify a particular course of action.
The response that a situation is the result of algorithms outside of any human’s control is not going to satisfy either the group demanding accountability in order to negotiate a change, or the group expecting someone to give a convincing justification of why the algorithm is best for everyone.
The benefits of a set of optimized data-driven algorithms will not mean much if there is no society to deliver those benefits to. Societies can fall apart with horrifying consequences. Optimization must be accompanied by preservation of the human needs to continue to participate cooperatively in society. That cooperation depends on someone being accountable for any decision that affects another person.
When we are asked to follow rules that frustrate our personal goals, we want to know who is responsible for the rules and that he is available to justify the rule given our particular circumstances, or to negotiate to change the rule. We may accept the fact that such a person may be very remote and require a lengthy chain of communication to get the message through. Ultimately we need assurance that our grievances are answerable by someone who is in a position to justify or change the rule.
Increasingly, we find rules that are divorced from accountability. The rules must be followed even though no one can explain why, and no one is in a position to change the rules when they do not work for a particular individual or sub-group.
An example comes from a recent post’s discussion of a big data visualization of people’s movement in a large city. Such a system may optimize the allocation of policing resources based on where the algorithms identify the most need. This may in fact save costs and improve overall crime statistics. However, consider a property owner who experiences a burglary and then finds out that his property lies outside of what the algorithm determined as needing a fast police response. This property owner will naturally want to talk to someone about the incident, to learn the justification for the slow response or to negotiate an accommodation for the particular attractiveness of his property to burglars. The least satisfying response to this property owner is that no one is in control because everything is in the control of algorithms. He may resign himself to being unable to address his grievance, but he may also choose to decline to cooperate when the government needs his cooperation. For him, the government has lost some of its relevance.
As similar situations compound over time, with others likewise unable to get anyone to competently address their grievances, the essential super-majority support for preserving the government against relatively minor protests will inevitably erode.
With this background, I present my observation about the consequences of increasing automation on the workforce, as illustrated by articles such as the one I mentioned at the beginning of this post. Inevitably, automation will increasingly do jobs we previously expected from people. The video of the human-like Baxter robot acting as an assembly assistant to a human coworker exemplifies a work relationship like that of a master craftsman (human) and his apprentice or assistant. The robot gives a convincing demonstration that it can take over the duties of that assistant at far less annual cost (the robot costs less than a single year of an assistant’s wages and benefits). Assuming the demonstration is representative of a realistic workstation, the economic case is very clearly in favor of the robot over a human assistant.
My observation is that this demonstration acknowledges the continued need for a human expert to perform the task. The expert merely needs an assistant whom he supervises to complete the task to his satisfaction.
The scenario is curious because the robot interface is designed to respond to the expert’s training and activity in ways comparable to how he would interact with a human assistant. This makes sense, because we are asking the expert to replace his current human assistants.
Eventually the expert will retire or move to other work, and there will be a new job opening for this particular workstation expertise: the responsible work that we still need from an expert specialist. By eliminating the assistant job, we have eliminated the preparation of a replacement expert. We will need to find someone who was trained prior to the introduction of robotic assistants, and increasingly those people will become unavailable.
The replacement expert, whom we assume is still needed to supervise the robotic assistant, will have to be specifically trained to perform the specialized task while supervising the robotic assistant. This training is specific to a particular scenario: working with a particular robotic assistant whose mimicry of a familiar human assistant is now irrelevant. The new expert has no prior experience with a human assistant, and may be encumbered by the pretense of interacting with a machine as if it were human-like.
This human-likeness is inefficient to implement and inefficient for training future experts. New generations of robotic assistants will be optimized with more efficient interfaces for training and cooperation, which will require more specialized training. The expert will need to be trained or certified for each particular implementation of a robotic assistant, on top of being trained in the core competency his job demands be done by a human.
As I discussed in the introduction above, there are tasks for which we still demand human accountability. We need someone who is directly responsible for a particular product, who can justify why it turned out the way it did, or who can make a change to better address our concerns (such as a recurrence of a particular kind of defect). I don’t see how we will ever be able to automate that kind of accountability.
We will need humans to fill that role: the human specialist who needs an assistant. Future expert specialists will still have to be trained. The current expert was trained by starting as an assistant, which is why the robot mimicked that assistant. By eliminating that path, the robot eliminated the possibility of training a future expert specialist, so we will need a new method of training experts, one that may involve more efficient robotic interfaces requiring specialized certification training on top of the specialty itself.
If it is true that automated assistant technology creates more jobs than it replaces, there remains the question of how to obtain the people to fill those jobs. Jobs must be filled in order to be counted as jobs. We need to train future experts, but they cannot get this training in a non-expert role because all the non-expert roles are automated.
These experts will need to obtain training at their own time and expense. Without the possibility of an income, they will either need to draw from their own (or their relatives’) wealth or take out a loan.
For the last several decades, a sizable portion of the population needed to change careers or expertise multiple times in order to reach a retirement age that keeps getting extended to older years. Without access to assistant type entry-level jobs, the only option to qualify for a new job is to obtain training using savings and loans and involving free time outside of any employment. How many times in a work-life can a person repeat this kind of self-financed training?
The worker facing a career change has a choice to make. He could use his free time and available savings (or credit) to obtain qualifying certification for a job he can no longer obtain as an assistant. Alternatively, he may decide (perhaps out of necessity) to drop out entirely and no longer participate.
The first problem, of the assistant robot eliminating the paid training opportunity for future responsible supervisory experts, leads to a second problem: the workforce becomes increasingly younger as the older generation can no longer afford the time or expense to certify for a new career. We valued the expert in part because his lengthy experience provides a higher level of accountability, but the future will have no option but to increasingly rely on less experienced expertise with a lower quality of accountability.
The third problem is that we will have an increasing population of disenfranchised older workers whose prior expertise is no longer marketable and who are unable to afford a new certification without a paid assistant-type job. A large portion of these disenfranchised workers will not be very motivated to contribute to the super-majority support of the government when it is needed.
In recent events of governments facing major demonstrations, we often hear of two groups of people. The people actively protesting are students or at least student age. They are very visible but are probably a minority. Equally damaging to the government is the large population often described as pensioners who passively observe the demonstrations but do not do much to support the government.
Increasing automation may create more jobs, but it will probably also create more people who are effectively pensioners, at ever younger retirement ages.
Increasing automation makes good short-term economic sense, in exchange for a much more unstable future.