Of course, big data projects are essentially driverless. The whole reason big data exists at all lies in our ability to automate the collection and ingestion of data, and to automate the analysis. The role of humans is just to sit back and enjoy consuming the finished product.
It is interesting to watch the rapid recent development of completely autonomous vehicles such as Google’s driverless car.
Autonomous vehicles have been a subject of research and progress for as long as I can remember. In particular, they have received a lot of support from military research, where such vehicles would be very valuable, especially for improving logistics transport. Long-distance and dangerous routes can be navigated more easily if there is no need to accommodate a human driver and protective support. There are numerous roles where an autonomous vehicle would greatly improve the effectiveness of the warfighter. However, progress has been relatively slow, due in part to the challenges involved.
Navigating a vehicle through uncertain routes, especially when such routes must detour around hazards, requires a tremendous amount of intelligence. A good recent example is when Google’s car encountered a typical urban street construction project. Humans, even novice drivers, quickly navigate through these scenarios as if they knew what to do all along.
There is something similar going on with aircraft. Today there is a lot of attention on drones. These are not autonomous but remotely piloted. Originally developed for the military, they have quickly become accessible to consumers through consumer-grade portable computing and radio devices. Removing the pilot from the aircraft opens many opportunities that would be impossible if the aircraft had to accommodate a human’s weight, life-support needs, and general fragility. Remotely piloted planes can be smaller, faster (and slower), stay aloft longer, fly closer to terrain, etc. The close-to-terrain capability is especially popular for consumer applications, where public events can now expect to have a few flying just a few feet overhead taking pictures.
I’m sure there is a similar motivation to automate these craft as well. I don’t know much about military drones, but I suspect they are largely automated when doing routine navigation or station-keeping. I don’t think a remote pilot is involved for most of the boring part of the flight; cruise missiles have been doing this for decades. For drones, the remote pilot may take over only during the critical moments.
There should be a huge commercial market for such autonomous aircraft. I’m thinking in particular of logistics, such as the overnight delivery companies that rely on a huge fleet of piloted aircraft to fly exactly the same routes every day. A commercial company that could perform this logistics without on-board human pilots would have a huge financial advantage.
These commercial airplanes are already highly automated. The pilot’s role is primarily to cover situations where regulations prohibit automation. However, even if those regulations were removed, there would still be a need for a pilot to handle the unexpected.
Humans have a uniquely powerful ability to figure out how to handle unexpected circumstances. I understand that pilots get a lot of training on various failure scenarios, but that training in part exercises the innate ability to figure out a problem, and builds confidence in doing so. Just because we can automate the training does not mean we know how to automate the pilot. We don’t.
I say humans, but I don’t really think this ability is unique to humans; humans are unique only in their use of technologies. The intelligence to figure out how to get through unexpected detours is something that most animals exercise routinely. Survival requires figuring out how to adapt to a world that is completely indifferent to the animal’s goals. The intelligence that humans tap as pilots or drivers is the same intelligence that guides a predator to its prey, or guides the prey into unfamiliar territory to escape the predator. It is what allows a herd to find another route when its well-worn path is suddenly cut off by flood or avalanche.
It is a uniquely animal instinct to be able to figure out how to negotiate with the indifferent world. This is why we need pilots and drivers. Or at least this is what makes automating pilots and drivers so challenging.
Back to the point that Google’s autonomous car project is getting so much attention. There is excitement that they may actually be able to make a breakthrough. This excitement comes in part from their software successes in other areas previously assumed to require human intelligence. Google’s original technologies organized data in a way that is very compatible with the way humans search for information. They then applied these data technologies to other human activities with great success.
For example, they used data-mining techniques to tackle human-language translation, which had previously progressed slowly under a different approach to artificial intelligence. Google’s innovation was to shift the focus away from emulating what a brain does. Instead, the focus is on finding matched data: a phrase in one language paired with a phrase of the same meaning in a different language. This does not require emulating a mind that comprehends and translates the concept.
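To make the data-matching idea concrete, here is a minimal sketch of phrase-based lookup translation. The phrase table and scores are invented for illustration; a real system would derive them from counts over large aligned bilingual corpora, and this is in no way Google’s actual implementation.

```python
# Toy phrase table: source phrase -> list of (translation, score).
# In a real statistical system, scores would come from co-occurrence
# counts in aligned bilingual text; these values are made up.
PHRASE_TABLE = {
    "good morning": [("buenos días", 0.9), ("buen día", 0.1)],
    "thank you": [("gracias", 0.95), ("muchas gracias", 0.05)],
}

def translate(phrase: str) -> str:
    """Return the highest-scoring matched phrase, or the input unchanged."""
    candidates = PHRASE_TABLE.get(phrase.lower())
    if not candidates:
        return phrase  # no match observed in the (toy) data
    best, _score = max(candidates, key=lambda pair: pair[1])
    return best

print(translate("good morning"))  # -> buenos días
```

The point of the sketch is that nothing in it models meaning: translation quality depends entirely on how much matched data was observed, which is exactly the shift in focus described above.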
I suppose similar software approaches are used with autonomous vehicles. In any case, we seem confident that they are making progress at a much faster rate than previously expected.
When I started my career, I thought the problem of automating a pilot or driver was a matter of transferring to silicon some form of emulation of the workings of the human brain. This was a subset of artificial intelligence. Initially, at least, vehicle navigation was considered a near-term goal for artificial intelligence because locomotion skills are very basic intelligence skills that we share with most other animals. At the time, we thought the only limitation was the lack of silicon, or more precisely the lack of a sufficient number of transistors. By the time I graduated from college in the early 1980s, it was already clear we had it backwards. What we consider to be uniquely human intelligence (such as being able to play a game of chess) is actually fairly easy to automate. What is hard to automate is what we humans share with every other animal: the ability to figure out how to live in an indifferent world.
Automation for piloting or driving vehicles is welcome for the parts of a trip that present no challenges. A human is not needed to make minor steering, accelerating, or braking adjustments while driving on a well-maintained highway in smooth traffic. We may spend hours in those conditions, and for them we will welcome automation. The need for a human driver arises when we leave that predictable environment: for example, on the local roads at the start or destination of a trip, or when an accident shuts down the highway and forces a detour onto side roads that then become congested, requiring further detours or simply a change of plans to find a place to wait until the roads are clear again.
Will this kind of adaptability be automated? Will we get to the point where, when a car encounters a shut-down highway, it makes human-like (and humanly acceptable) choices to deal with the scenario? I immediately think that even among human drivers and passengers there can be considerable disagreement about which course of action is best.
Consider the alternative of the same road conditions encountered by an autonomous truck carrying some lifeless cargo. Will we be satisfied that it makes the right choice in this condition? What if the cargo is perishable, or urgently needed at its destination? What if the trip started with an expectation of low urgency but a need for increased urgency was communicated while the cargo was en route? Will we trust the autonomous driver to do the right thing? What is our recourse if we disagree with the choices the software makes?
Even if we could be reasonably sure that an autonomous driver or pilot will make the right decisions nearly all of the time, perhaps to the point of having the same chance of regret as we can expect from expert humans, we will still be inclined to want a human on board to make the final decision. Humans offer something more to navigation than intelligence or experience: a consciousness that understands the entire situation and makes a good judgment call about what makes the most sense at the current moment. The statistics may be the same, but the outcomes of a specific mistake still differ.
We need human operators to weigh the relative importance of the trip against alternative actions.
I am reminded of a movie called Sorcerer. In it, four characters needed to deliver some unstable explosives to an oil-rig fire. They took two trucks and followed two different, rarely used routes. At some points the roads were impassable and required some improvised engineering to get through the obstacles. In the end, the surviving truck broke down a short distance from the oil-rig site, and the sole remaining driver attempted to carry the crate by hand for the remaining distance. In theory, the entire transport could have used autonomous vehicles. Perhaps in some near future, machines could even do what the humans had to do to keep making progress. But if all of that were technically possible, would the ending of that trip have been possible: a completely improvised mode of transportation, hand-carrying something that was clearly a bad idea to carry by hand?
This is a long discussion for someone who is not really all that interested in logistics. Instead, I’m using Google’s role in autonomous vehicles as an analogy to their role in building big data technologies into the success they have become. A good part of the achievement of big data is the automation of data science. The reason we expect progress from Google’s investment in autonomous vehicles is that they succeeded in creating autonomous data science.
A theme running through my blog posts is that we have not really automated data science. There is still a need for human analysts to do data science for the same reasons we need humans to be pilots or drivers. We need a human to figure out what to do when a huge tree has fallen on a remote rarely used road, a road too narrow even to turn around. Navigating through data is pretty much the same thing as navigating vehicles. Our expertise is needed when something unexpected happens that blocks our progress.
It just happens that we conveniently ignore this requirement for big data projects. We get away with it because the problems occur infrequently and often require a trained eye even to spot. Many problems go unspotted because we removed that possibility along with the human data scientist. Autonomous vehicles may eventually succeed for the same reason autonomous data science succeeds: a willful neglect of the possibility that a human could have prevented a catastrophic failure.