In my last post, I described a theory of everything in the real world from the perspective of data instead of physics. If civilized man 10,000 years ago had had access to big data as we do today, he might have conceived of an origin myth that placed the beginning of the universe not at some distant time in the past, but at some infinitesimal time in the future. The origin of the universe is the source of the observations that provide the data our intelligence can contemplate.
The initial question I asked in that post is what explains the availability of the data we use to make sense of the world. This question is the component of epistemology that asks how observations are possible. I take as a given that we have intelligent and creative minds, but such a mind could create fiction and fantasy as easily as it can a model of the real world. The primary difference between these products of the intelligent mind is that non-fiction connects factual observations about the real world. I am asking how it is possible for factual observations of the real world to exist in a form that an intelligent mind can access.
In some modern theories of physics, all of reality consists of vibrating strings that exist in dimensions that are impossible for humans ever to observe. If this were true, then all observations should be observations of strings and nothing else. I grant that the human mind is impressively intelligent, but I am fairly sure it is not making sense of observations of individual strings. The anatomy of the human brain tells us it processes observations from a few sense organs that respond to physical stimulation. The ultimate source of the stimulation may be strings or larger-scale quantum physics, but our sense organs detect chemical changes on the surfaces of cells (for smell, taste, and sight) or mechanical deformations of cell surfaces (for touch and hearing). These sensations are far removed from the fundamental reality of quantum physics or whatever underlies that explanation.
I recall my earlier experiences with computer simulation projects when computing resources were much more limited than today. I recall the promise at that time that if only we had adequate computing power, we could simulate systems at the basic physics level and gain a perfect understanding of larger-scale problems. We could supply the simulation with an accounting of all the atoms in some system and their arrangement, and the simulation would apply physical equations alone to predict behaviors such as when cracks may appear in a structural element or when one human may decide to rob another. Even today, with vastly more computing resources, we don’t attempt this kind of simulation from first principles. We do employ vastly more data, but that data is already at some aggregate level: material crystal structure for the first example, or human psychology for the second.
I admit to not being fully aware of all areas of simulation research, but my impression is that there has been little progress in predicting macro phenomena by simulation from first principles about the nature of fundamental particles. We instead work from empirical models of systems (materials, structures, organisms, etc.) to derive behaviors for the next higher level of scale. We may investigate the behavior of an individual biological organ by modeling the properties and arrangement of its component tissues, not the individual cells or the molecules within the cells.
Nature provides us intelligible information about macro forms. We are certain that these macro forms must consist of fundamental physical building blocks, whether molecules, atoms, quantum particles, or strings. But our intelligence contemplates the larger forms that nature conveniently provides us to observe.
It seems remarkable to me that the nature of fundamental particles can produce large-scale structures that are intelligible. A reality consisting of a collection of quarks and electrons could just as easily be analogous to what comes out of a food blender as to what we put into the blender. I can imagine a quark-based reality that is completely chaotic and without any repeatable structures, and yet still gives rise to some intelligent mind. That mind, like ours, would collect observations and construct stories. Unlike our reality, that mind would not have the convenience of recurring forms and predictable behaviors. Even its own peers may be so dissimilar that they would not recognize each other. A mind operating with chaotic information that is nonetheless consistent with quark physics would probably be unable to construct useful stories to predict its surroundings.
At least that is my biased assumption: expecting a world that is compatible with my sensory capabilities, which sense large-scale forms instead of arbitrary aggregations of quarks and electrons. It is possible, I suppose, that our minds do in fact exist in a reality that is a formless collection of quarks and electrons. For example, I base my assumption of how the mind works with sights, smells, tastes, etc. on observing the structures in our anatomy for these senses. It is possible that the observations of our anatomy are inventions of the creative mind to go along with all of the other observations. Reality may be nothing but a chaotic mixture or plasma of quarks and electrons with no larger structure at all. The tidy world view we have is merely an illusion created by a mind that lives in what is really a world lacking any consistent, repeatable forms. I do not accept this notion, but it is hard to dismiss its possibility completely because whatever I know and understand must come from the creative faculties of the mind.
I perceive things with recurring forms, and these forms have predictable behaviors. What matters is that I am able to make practical use of these forms by performing actions guided by my predictive theories about them. The consequences of my actions are usually consistent with my predictions.
I imagine an explanation of how a universe with a scattering of fundamental particles interacting at the quantum level can be so effectively intelligible to a human mind. This explanation comes from the perspective of data science, and in particular from my experience working with data.
For context, in an earlier post I described my approach to tackling data problems in a succession of steps, each with persistent intermediate results and specialized algorithms that successively prepare data for the ultimate goal of analysis to support human decision making. Initially I described this as a data life-cycle, but later I preferred to think of it as an information supply chain. Processing data in a manner similar to manufacturing supply chains offers advantages over the more popular single-leap approaches that drive decisions directly from raw data (though cleansed and governed). A supply chain approach offers automatic compartmentalization of sensitive data and an opportunity to protect the reputation of source data.
The concept of the information supply chain is that each stage is like a factory that receives raw data from its supplier (another link in the chain) and enhances that data to meet the needs of its customers (the next link in the chain). The intermediate stage establishes governance with its suppliers and then performs internal quality checks. The intermediate stage transforms the data in various ways to meet the governance agreement it has with its customers. When the intermediate data is ready, it is delivered to the customer, who may accept or reject the package. If the package is rejected, the intermediate stage needs to repeat the process, either to fix the data or to negotiate with its suppliers for more data stock to work with.
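The stage mechanics described above can be sketched in code. This is a minimal illustration of my own construction, not an implementation from any of my projects; the record fields, the quality check, and the accept/reject callback are all hypothetical.

```python
# Hypothetical sketch of one stage in an information supply chain:
# receive from the supplier, quality-check, transform, deliver,
# and handle rejection by signaling the need for rework.

def quality_check(records):
    """Internal quality check: drop records missing required fields."""
    return [r for r in records if "id" in r and "value" in r]

def transform(records):
    """Enhance the data to meet the governance agreement with the customer."""
    return [{"id": r["id"], "value": r["value"], "stage": "enriched"}
            for r in records]

def run_stage(raw_records, customer_accepts):
    """One pass of the stage: check, transform, and attempt delivery.

    Returns the delivered package, or None when the customer rejects it
    and the stage must rework the data or renegotiate with its supplier.
    """
    package = transform(quality_check(raw_records))
    return package if customer_accepts(package) else None

supplier_data = [{"id": 1, "value": 10}, {"value": 99}, {"id": 2, "value": 20}]
delivered = run_stage(supplier_data, customer_accepts=lambda pkg: len(pkg) > 0)
print(len(delivered))  # the malformed record was filtered out: 2
```

The point of the sketch is the compartmentalization: the stage owns its checks and transforms, and the customer only sees the finished package.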
This information supply chain is a progression of increasingly intelligible data. The raw data from operational systems has many problems for immediate analysis for human decision making. The first problem is that the operational data is unrecognizable in terms of the issues of larger-scale decision making. We need to enhance the operational data to bring it into the context of business needs at the decision-maker level. The other problem is that the operational data has omissions, conflicts, errors, conformity issues, and so on. Although we generally call this a data cleansing requirement, I think the issues requiring resolution go beyond basic cleaning of data. Data may be cleaned to match business rules and still contain conceptual-level misunderstandings or ambiguities. My concept of cleaning data addresses conceptual cleaning as well as the normal business-rule constraints.
Each stage of the information supply chain makes its source data more intelligible to its customer stages. Each stage supplies new intelligence. In practice, this involves human-supervised automation. Not only do humans design the algorithms, there is an ongoing investment in data science scrutiny of the data in supervising the operation of the automation. As time progresses, the nature of the input data will gradually change, or the customer processes will demand changes in the data they receive. The human activity within a stage makes the appropriate changes to maintain the intelligibility of the delivered data.
Intermediate stages make information more intelligible. In data projects, this is possible because humans supply the intelligence. The final data available for decision-making analytics is data that is intelligible to the decision maker because we added the intelligence to that data.
The rawest operational data is machine data that is not humanly intelligible. It is analogous to the quarks and electrons of physical reality. The quantum particles are content to go about following their (presumed) fixed behaviors, but we cannot base human-scale decisions on extensive data about quantum particles.
In terms of our own brains, nature provides data that is intelligible to the senses. This intelligible data ultimately traces back to quantum reality, but our senses can only detect stresses or chemical reactions on cellular walls. These sensations happen to have one-to-one relationships with macro, human-comprehensible phenomena.
As I mentioned in my previous post, thinking about nature from a data science perspective may lead to a different view of reality than what comes from modern scientific thinking about mechanistic explanations. From my data science perspective, my decision-making mind somehow acquires intelligible data. Various information supply chains supply this intelligible data to my senses. The data is intelligible because the earlier stages added intelligence to the less intelligible data.
I am able to make sense of the world because I have access to information that I can make sense of. Using the data science experience, I suspect that intelligible information must come from other forms of intelligence. We are surrounded by intelligence. If we were not, then the world would be unintelligible to a structure as simple as the human brain. This is a conclusion from thinking about data instead of mechanisms.
Early in my life, and certainly by the time I entered undergraduate study as a freshman, I reasoned as follows. I started with the observation that I think I am intelligent, a variant of René Descartes’ cogito ergo sum (I think, therefore I am). I also recognized that I am a product of nature. It seems reasonable to conclude that nature is capable of creating intelligence. Given this possibility, I presume that everything in nature is intelligent. The burden of proof is to show that something is unintelligent.
A much earlier post attempted an allegory to describe the problem of observing intelligence. The story itself needs more work to be entertaining, but I was most interested in describing our human bias in observing intelligence. Our bias says that we will recognize intelligence only when it looks like human intelligence. In the story, I described an example of a college professor (a credentialed intellect) evaluating the intelligence of his students. Part of the story describes an exam where two students turn in the exam extraordinarily fast compared to the rest of the class: one exam is 100% correct (perhaps the only perfect grade in the class) and the other exam is 100% wrong. The first student must have cheated because there was not enough time to intelligently read through the exam so quickly and get the right answers. Perhaps the second student can immediately be dismissed as ineligible for further study, unless it is recognized that it would have been equally improbable to answer every question and get every one wrong. More probably, the second student knew all of the right answers in order to deliberately avoid them, turning the test around to see whether the teacher was smart enough to detect this.
The story went on to describe one student who completed only a small portion of the test and thus got a failing grade. However, every question answered was in the sequence of the exam (he didn’t skip any questions), and each of the answers was correct. If he had had more time, it is possible he could have completed the exam with a perfect score.
The professor’s conclusion is that the second and third students can be dismissed immediately, while the first one deserves special scrutiny to catch how he cheated.
Similarly, we seek intelligence that we recognize as human intelligence. Our definition of intelligence is that it cannot be too perfect, nor too fast, nor too slow. In our definition, the pinnacle achievement of intelligence is the ability to engage in conversation with humans. This concept of intelligence effectively equates to humanity. We will only see intelligence if it comes in human form.
I think of intelligence differently. Intelligence is a fundamental property of nature. Everything in nature has some form of intelligence. We cannot comprehend this intelligence for two reasons. The first reason is that the other forms of intelligence operate at incompatible time scales: too fast or too slow for humans to comprehend. The second reason is that even if the time scales become compatible, the other intelligence has no motivation to enter into dialog with us. Humans are quick to take insult when ignored by other intelligent beings.
While I started with a presumption of intelligence in nature where the burden of proof is on proving its absence, I now appreciate from a data science perspective that intelligence is essential for nature to present intelligible information for my brain to process.
The answer to how we are able to comprehend information about the universe with our relatively simple brains is that our brains have access to data made intelligible by other forms of intelligence. Although those forms of intelligence leave evidence in the form of data we can interpret, they operate at time scales too fast or too slow for humans to comprehend. When intelligence occurs too fast, the very large number of decisions averages out to some central tendency of decisions plus what we think of as noise. When intelligence occurs too slowly, we attribute the unexpected events to randomness or to precursor data that we had missed. We have a long and successful history of rationalizing away all but human intelligence, and at times we even doubt the existence of human intelligence.
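The averaging claim above can be illustrated with a toy simulation of my own making (the "bias" of the fast decisions and the sampling scheme are invented for the illustration, not taken from any model in the post): when a slow observer aggregates a very large number of fast decisions, only their central tendency survives, and the rest looks like noise.

```python
# Toy illustration: fast "decisions" scattered around a central tendency,
# aggregated into a single slow observation. The individual decisions are
# invisible to the slow observer; only their mean remains.
import random

random.seed(0)
bias = 0.3  # the central tendency of the hypothetical fast intelligence

# One slow observation aggregates a very large number of fast decisions.
fast_decisions = [bias + random.uniform(-1, 1) for _ in range(100_000)]
slow_observation = sum(fast_decisions) / len(fast_decisions)

# The slow observer recovers the central tendency; the spread of the
# individual decisions is indistinguishable from noise at this scale.
print(abs(slow_observation - bias) < 0.05)  # True
```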
Intelligence is a fact of nature. I suspect intelligence occurs in the empty space between materials. In the brain, the intelligence resides in the gap between neurons. Neurons signal each other through chemicals that they secrete and detect at their cell walls. All other cells in nature have similar chemical emitters and receptors. Intelligence is probably common throughout all life, but operating at different time scales and different scopes of information. Similarly, intelligence may reside in the gaps of non-biological materials. Gaps of emptiness are everywhere, and I think this is significant, as I discussed earlier (here, here, here, and here). Emptiness is the realm of both information and intelligence.
I see a way to link these ideas together with the concept that places the origin of the universe in the infinitesimally distant future. The origin of the universe is a zero-dimensional point that emits a pulse at an incredibly high rate of 10^43 times per second. The pulse is like a radar beacon seeking a health-and-status message from whatever it is that can detect that signal. Perhaps those receivers are what we today describe as vibrating, tightly wound strings, or perhaps they are something even more exotic that we may never conceive of.
The beacon signal triggers a response from whatever receives it. That transponder-object will report its own local health and status as it exists, because it knows this immediately. It will also report about its surroundings (what is nearby) based on saved information about what it discovered earlier. The transponder-object will need to periodically refresh this stored information about its surroundings. For its peer transponder-objects, it may be able to detect their responding signals and infer distance from time lag or signal strength. But the transponder-object may also become its own beacon, emitting a signal at a slower rate to learn about the larger structures that it and its peers participate in.
This concept leads to a chain of faster-beacon transponders that become slower beacons for larger structures that need more time to react. At each step in the chain, the transponder responds with its own internal health and status, the prior information it learned by listening to the responses of its peers, and the even more historic information it learned from its own beacons.
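The chain described above can be sketched as a recursive structure. This is purely an analogy in code, matching the speculation rather than any physics; the class name, the report fields, and the three-quark example are my own illustrative choices.

```python
# Hedged sketch of the beacon/transponder chain: each transponder answers
# a beacon with its own health and status plus the responses it gathered
# from its faster-beacon components.

class Transponder:
    def __init__(self, name, components=None):
        self.name = name
        self.components = components or []  # faster-beacon children

    def respond(self):
        """Answer a beacon: own status plus the components' reports."""
        return {
            "name": self.name,
            "status": "intact",
            "components": [c.respond() for c in self.components],
        }

# Each level acts as a slower beacon for the larger structure it joins:
quarks = [Transponder(f"quark-{i}") for i in range(3)]
proton = Transponder("proton", components=quarks)
atom = Transponder("carbon-14", components=[proton])

report = atom.respond()
print(report["name"], len(report["components"][0]["components"]))
# prints: carbon-14 3
```

The nesting is the point: a slow query to the atom implicitly aggregates the faster reports of its protons, which in turn aggregate the still faster reports of their quarks.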
To get from the 10^43 pulses per second of the origin of the universe to the roughly 100 times per second of human intelligence, there are probably a large number of stages we have not yet even imagined. But at some point, there will be a stage that we can recognize as the atomic particles of neutrons and protons that are aggregates of quarks. The larger structure of the neutron or proton receives beacons from its component quarks asking for health and status information. The neutron or proton responds with immediately available information, such as the fact that it is still intact and may have current energy in the form of rotation or vibration of its surface. It would also respond with its most recent encounter with its neighboring peers (for example, a proton in a carbon-14 atom may report its awareness of 5 peer protons and 8 peer neutrons). Finally, it will emit a beacon to the carbon-14 atom to request its health and status, repeating the process to discover the carbon-14 atom’s relationships in molecules or its status as a free atom.
Eventually, the information supply chain builds a picture of the world that the human brain can comprehend. The lengthy information supply chain produces a meaningful collection of mechanical deformations or chemical reactions on the sensory cells. Intelligence within the brain is possible because it has intelligible data prepared by earlier stages closer to the original universal beacon. Almost certainly, the information chain does not stop at the human brain. Our sciences involve a conscious effort to collect information about our surroundings. We are probably also collecting information unconsciously. Forms of intelligence operating at even slower beacon rates may be exploiting the information we collect for purposes beyond our comprehension.
The above speculation may be fictional, but I find it a useful analogy to the practical lessons I learned working with data in an information supply chain model. Instead of leaping directly from operational data to decision-support analytics, I found that it made sense to break the process into several steps of successive refinement, where each step involves autonomous and focused intelligence that works on the information it obtains from earlier stages to prepare data suitable for later stages. Decision-support analytics needs decision-level intelligible data. In my experience, this comes from intelligence in prior steps of an information supply chain.
With this data bias, I wonder whether this is identical to how the universe is made intelligible. Intelligence requires intelligible data. Intelligent information supply chains may exist to prepare intelligible data.