One of the underlying themes of this blog is musing on the nature of intelligence. I initially named the blog hypothesis-discovery based on my attempt to characterize how I approach data. Over time, however, this idea raised more questions that quickly turned metaphysical. What is the motivation for discovering new hypotheses? Superficially, the motivation is to obtain recognition of some sort for discovering something new. I don’t think that describes my motivation, though, since my nature is the opposite of ambitious. I’m not lazy, but when things start to come together and success comes within reach, I tend to want out. I don’t want to succeed, but I do want to discover. That leads to the question: what is it that I want to discover?
I don’t really want to discover something that will make me famous or rich. When I do start to make progress in that direction, I try to get someone else to take over and finish the task of succeeding. I’m not some pure philosopher either; I have very mixed feelings about failing to follow through on the progress I had made. That conflict is what holds me back from either goal: reputation-type success, or some breakthrough that only I will ever know.
Some of the questions I’ve entertained concern the nature of data itself. What is data? More specifically, what makes information possible, and where does information itself come from? Data comes from measurements, but why is this data informative? The material world in many cases acts according to very stable rules. Seeking new hypotheses is about adding to the known rules that govern the world, material or anthropological.
Even if rules exist, how is it possible for us to observe them? I am wondering how information emerges from the material world, and how this information can be intelligible. I’m very impressed by the details of the human brain, but I’m not convinced the material brain is capable of converting the stimulation the world presents to the body into intelligible information.
This questioning is a questioning of common sense. For example, I know I can see because light illuminates objects and the reflected light hits the lens of my eye, focusing an image on the retina, which converts it into signals for the brain to process. Perhaps I am wasting my time wondering whether the real world is actually very different from what I see, whether what I think I see is just an illusion of something very different. The world is more than what I can sense. Or maybe the illusion is as unrepresentative of the world as a hallucination would be.
To be clear, I trust acting on what I perceive in the world. When I follow my interpretation of what I perceive, I generally avoid regretting my actions. In fact, I probably err in the opposite direction, trusting my interpretation of my perception so much that it limits my potential. In that respect, entertaining the question of the accuracy of my perception may be a form of self-therapy to get to a better place in my life.
Setting aside the personal aspects of the quest, I still think it is worthwhile to question the mechanics that make information accessible and intelligible.
My suspicions are raised by the successes coming out of big data technologies and techniques that have matured over the past several years. Something more is going on than the technology merely permitting retrieval and analysis of data of great volume, variety, and velocity. The recent achievements (some successes, some failures) appear to be coming from a different kind of intelligence than the one we recognize as human.
A part of me envies the machine. Its command of big data, too big for the human mind to process directly, gives it a different kind of intelligence. Certainly, it is revealing lessons that we have not been able to learn on our own. If big data is teaching us, then I am inclined to say it must have its own intelligence to play the part of a teacher.
I envy the machine because we as a society grant machines more latitude for their thinking than we permit our human peers. Because the data is so vast, so various, and so fast, we can’t argue with what the machines come up with. We can’t engage in rhetorical arguments with the propositions coming out of big data.
I have no claim to being a good arguer with humans, so it is not really a personal loss that I can’t argue with the data products. What I regret is that I can’t imagine any human being able to argue in a way that I would feel is representative of my understanding.
I can do the work to make new data available, to make more data retrievable more quickly, or to make new algorithms to derive some interpretation of the data. Even if I were far more adept at these skills, I would still feel a loss in being unable to argue with the result. If I am convinced that the data is correct and that the algorithms are reliable, then the conclusion is beyond argument.
If something is observing the world and coming up with its own interpretation of reality, I feel a loss if I am disqualified from questioning the conclusion.
The root of the problem is that my intelligence is not a peer to the intelligence coming from analytics over quickly retrieved massive data. I regret losing the ability to question the results. It feels like a reversion to an earlier theocratic way of being led, or to an age of superstition.
Modern data-driven approaches feel like we are going backwards, devolving into a more primitive intellectual life. Maybe this is what bothers me most.
But this thinking accepts that a new kind of intelligence is emerging. Although enabled by human technology, the intelligence is not human. It is coming up with its own belief systems that are inaccessible to humans, and humans are unable to challenge the beliefs of this new intelligence.
This is an imaginary, paranoid kind of fear. The problem is not that there is machine intelligence. The real problem is that the machines may be coming up with their own belief systems, their own views about how the world operates and how it should operate.
The question I’m posing is how we can detect that such non-human belief systems are emerging in our machines. This was partly triggered by my playing with machine learning algorithms, such as those that learn to recognize hand-written numbers. Applying the algorithms backwards can expose the patterns that the successful algorithms are looking for. What bothered me is that the machine recognizes patterns very different from the ones I rely on when I recognize numbers. What bothered me more is that the machine could be equally good at interpreting sloppily written numbers using what seems to me an obviously incorrect approach to matching the patterns. In this trivial example, the machine is coming up with its own belief system for what distinguishes the written numbers.
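The experiment I have in mind can be sketched in a few lines. This is only an illustration, with my own assumptions filled in: I use scikit-learn’s small 8×8 digits dataset and a plain logistic regression, not whatever setup produced the original results. The “backwards” step is simply reshaping each class’s learned weights into an image, which shows the pattern the machine matches for that digit.

```python
# Sketch: train a simple digit recognizer, then view its learned
# weights as images to see what patterns it actually looks for.
# Dataset and model here are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# "Applying the algorithm backwards": each row of coef_ holds the
# weights for one digit class; reshaped to 8x8, it is the pattern the
# machine matches for that digit. It rarely resembles how a human
# pictures the digit.
patterns = clf.coef_.reshape(10, 8, 8)
print("one learned pattern per digit:", patterns.shape)
```

The model scores well despite its learned patterns looking nothing like handwritten digits to a human eye, which is exactly the disconnect described above.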
For deeper learning algorithms, the patterns are likely to be even more incomprehensible, even as we acknowledge the machine’s skill. The machine’s success may be due to the belief system of a non-human intelligence. How can we know whether this is true?
I imagine an ecosystem of information where different levels of intelligence work within specific intellectual niches while feeding upon, or feeding, other levels of intelligence. In the natural world, we don’t perceive other intelligences because we have no reason to believe they exist. The modern era of machine learning, though, presents an example where we can begin to suspect a separate level of intelligence, one that feeds on ours. As we sense this happening, we realize that we’ll get no sympathy from the machine, for the exact same reason we don’t recognize naturally occurring intelligence in non-humans. The machine has no reason to recognize our intelligence in the sense of being qualified to engage in dialog and argument with the machine’s belief system.