In relativity theory, the experience of time within one’s own frame of reference is always the same, but a comparison of clocks between different frames can show a difference. The difference is most noticeable when relative speeds approach the speed of light or when gravitational fields differ greatly. Such frames of reference are necessarily separated by vast distances, but the physics at least allows for experiencing time in different frames, where the other frame appears to have a clock running faster or slower than the local clock.
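As a rough illustration of the size of this effect, here is a minimal sketch (in Python, with observer speeds chosen only for the example) of the standard Lorentz time-dilation factor:

```python
import math

def dilation_factor(v, c=299_792_458.0):
    """Lorentz factor: how much slower a clock moving at speed v appears
    to tick, as seen from the local (stationary) frame."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A clock moving at 99.99% of light speed appears to run roughly 70 times
# slower than the local clock; at everyday speeds the factor is nearly 1.
print(dilation_factor(0.9999 * 299_792_458.0))  # ~70.7
print(dilation_factor(30.0))                    # ~1.0 (a car on a highway)
```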
I once imagined what it would be like if this effect could occur in a shared local space. Two people are in conversation in a room, and one of them turns on a field that allows him to experience time at a different rate from the other. Imagine an extreme example where this person experiences one second of life while the other person experiences 100 years. These are two humans in close proximity, yet they would be unable to communicate. They would not even have an incentive to communicate. It would take 100 years for the slower person merely to acknowledge that a question was being asked, and still he would not produce a meaningful reply. The person experiencing normal time would not bother continuing the conversation.
Now I imagine two kinds of intelligence: one that exists on nanosecond scales and another that exists on millisecond scales. One would accumulate a million experiences in the time it takes the other to have a single experience. This is analogous to two people where one lives for 12 days in the time it takes the other to live one second. These intelligent beings would have no incentive to communicate with each other, or even to acknowledge that the other is intelligent at all.
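The arithmetic behind that comparison, sketched below with assumed representative tick lengths for the two scales, confirms the rough figure of 12 days per second:

```python
# Rough arithmetic for the nanosecond-vs-millisecond comparison (assumed scales).
fast_tick = 1e-9                  # nanosecond-scale experience
slow_tick = 1e-3                  # millisecond-scale experience
ratio = slow_tick / fast_tick     # one million fast experiences per slow one

seconds_equivalent = 1 * ratio    # one slow-scale second, seen from the fast scale
days_equivalent = seconds_equivalent / (60 * 60 * 24)
print(days_equivalent)            # ~11.6 days, i.e. roughly the 12 days above
```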
Over the past year, I wrote several posts that explored some ideas about the structure of the universe in terms of intelligence instead of atoms. The intelligible universe is made that way by forms of intelligence operating within shorter time intervals. In terms of data science, I imagined a timeline that places the universe’s origin just an infinitesimal period of time in the future. This resolves the singularity that is proposed to have happened just once, some 13 billion years in the past, by placing the origin-singularity perpetually just a moment away. From this zero-dimensional point representing all physical space in the universe, time emerges in a steady stream. On this stream of time, there are many levels of ecosystems operating at different time scales. Defined by their time-scale location with respect to the big bang, these ecosystems build an intelligence chain of increasingly comprehensible descriptions of the world. All of this intelligence occurs offset from the origin of the universe, and because this offset is in the past relative to physical reality, intelligence is a property of empty space. This empty space has different scales corresponding to time scales: smaller empty spaces have intelligence operating on shorter time scales. My thinking is that the vast emptiness of the universe is significant, because empty space provides the intelligence that makes the universe intelligible.
Recently I encountered a quote by the poet Muriel Rukeyser: “The universe is made of stories, not of atoms.” That is the extent of my knowledge of her work, so I don’t know what she meant by it. But the quote captures the above thinking: because physical reality is permanently condensed just a moment into the future, everything we know about the universe consists of stories about the universe told by others.
We can make sense of the universe because of the stories told to our senses. We benefit from the stories even though we do not acknowledge the story-tellers. Similarly, the story-tellers are probably not even aware that we are eavesdropping on their tales. Each scale of time or space (really the same thing) has its own community of story-tellers.
The different time scales have different clocks that prevent or severely impede communication between levels. For example, my senses operate on a scale of milliseconds, so it is impossible for me to talk with the story-tellers that experience the universe at nanosecond intervals. Their stories are available to me just as ancient artifacts are available to archaeology. I can make sense of their stories because I have time to consider what their stories mean. In contrast, they cannot make sense of my stories because they no longer exist by the time I get around to making my stories.
I imagine this hierarchy of story-telling as similar to a food chain. As in a food chain, the predator preys upon food sources without considering their commonality. The predator appreciates only the well-formed meal the prey provides. Instead of food, this is a chain of intelligible stories: stories of quarks informing stories of atoms informing stories of molecules, and so on.
In our culture we have developed a physics that describes a mechanical world. This comes from our species’ earlier experiences of surviving in nature and later figuring out how to manipulate nature, as in our initial production of hand-axes. Mechanistic physics leads us to explain the universe as some type of causal machine that can be traced back to its origin in the distant past. Our recent understanding is that there was an original event that we call the big bang. From that point, the universe organized itself into its present form where intelligent beings can exist.
I imagine a different world view that we might instead have adopted if our first technologies had been data science instead of hand axes. Data inverts the timeline because the origin is the observation. It is only in our contemplation of the historic observation that we can model the world. With this inversion, the future determines what we can comprehend. The origin of the universe is just ahead of us, in the very near future.
This alternative view of the origin suggests that the physical universe does not evolve causally toward greater organization. Instead, the evolution is in the stories about the universe that is just a moment away in the future. The organization comes from different levels of intelligence telling stories that become available to longer time-scale intelligence levels. In one of the earlier posts, I explained that the reason our relatively simple brains can contemplate the universe is that the brains consider information that has already been worked out by shorter time-scale intelligence. The universe is intelligible only because our senses collect observations that have already been prepared to be intelligible to our simple brains.
Evidence of different levels of intelligence comes from our scientific observations. For example, the mechanics we observe with objects we can handle works very differently from mechanics at sub-atomic levels. Also, the physics of galaxies appears to work differently, so we have to hypothesize the existence of dark matter. The physics of the cosmos appears to work differently still, so we have to hypothesize the existence of dark energy. All of these differences may instead be explained by different levels of story-tellers. The nature of story-telling at sub-atomic levels is different from the nature we experience every day, and both of these are different from the scales of the galaxies and the cosmos. We capture these differences by using different forms of mathematics.
This data perspective of reality holds that everything we observe is a historical record of stories told from a singularity forever inaccessible in the very near future. We observe stories about reality instead of observing reality itself. The true reality is incomprehensible to our very simplistic brains. We observe a reality offset into the past from the true reality, which allows time for others to prepare the stories we can observe through our simple senses.
In contrast, modern science accepts observations as reality. This view supposes that the brain is sufficiently complex to comprehend reality based on the raw observations of the human senses. Modern science also presumes causality: past events determine future events. In particular, modern science distinguishes causality from correlation. All causal relationships have strong statistical correlations, but not all correlations are causal. Causal explanations are ones that reflect immutable facts about reality. Causal explanations have strong predictive power.
The data perspective of reality does not rely on causal explanation. It is sufficient to observe a correlation that happens to have a sufficient record of predictive power. Our senses observe stories that happen to be consistent over time because the story-tellers happen to constrain their stories to their peculiar cultures. Like regular science, data-perspective science values observations, but I think data science respects observations more than regular science does. In particular, while regular science accepts and embraces something I call dark data (model-generated data that fills in gaps in observations), data-perspective science strives to postpone any model-generated story-telling to the very last stage of making predictions.
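To make the dark-data contrast concrete, here is a minimal sketch, assuming a hypothetical series of readings with one gap, of the difference between a model-generated fill-in and keeping the raw observation record intact:

```python
import pandas as pd

# Hypothetical observations with a gap (one missing reading).
obs = pd.Series([2.0, 2.1, None, 2.4, 2.5], name="reading")

# "Dark data": a model-generated value fills the gap, so downstream analysis
# never sees that the observation was missing.
filled = obs.interpolate()

# The data-perspective alternative: keep the gap visible and defer any
# model-generated story-telling to the final prediction step.
raw = obs  # the missing value stays missing
print(filled.tolist())                          # [2.0, 2.1, 2.25, 2.4, 2.5]
print(raw.isna().sum(), "missing observation(s) preserved")
```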
On this blog, I emphasize the concept of discovering new hypotheses from observations. The goal is to discover something new and surprising about the real world. To reach this goal, we need to be open to new and surprising observations. In particular, this goal motivates us to seek out more observations and to emphasize bright data (fresh observations) even if we perceive the data as dirty. In the context of big data about human culture obtained from social media, we accept first-person narratives as they exist and promote seeking out stories from reluctant story-tellers. We rely on judgment by trusted and accountable decision makers to interpret and challenge the data in order to come to some competent conclusion about the trustworthiness of the predictions.
This data perspective of discovering nature places a priority on observations over theories. In a pure form of data analytics, decisions are based on inferences from actual observed data. The ideal of automating data-driven decision-making would rely entirely on observations instead of theories. This concept relies on iteration to collect new data that results from earlier decisions based entirely on data.
This data perspective tends to take a purely inductive approach to understanding the world. Patterns or correlations in data can inform decision making without establishing firm proof of causality. We benefit as long as the recommendations have more frequent favorable outcomes than unfavorable ones. In the majority of cases, the observed patterns will persist to confirm the predictions even if we cannot establish a cognitive explanation for the pattern.
Occasionally, the pattern will not continue and thus will invalidate the decision. We can ignore or dismiss failures as long as they occur infrequently and their outcomes are not intolerable. Data-driven decision making can be based on pure induction from observed data, without the pretense of understanding some underlying truth of a causal relationship.
The argument against pure induction is that it can never eliminate the possibility that the next observation will surprise us. The project of data analytics is explicitly based on induction, using old observations in our attempts to predict the future. For example, machine learning algorithms automatically group data and establish relationships between these arbitrary groups. The groups depend on the available data: if the data is different, the algorithms may derive new groups or memberships.
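As a small illustration of how the derived groups depend on the data at hand, here is a sketch using k-means clustering on synthetic data; the data and parameters are assumptions chosen only for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two batches of synthetic observations.
batch_a = rng.normal(loc=[0, 0], scale=1.0, size=(100, 2))
batch_b = rng.normal(loc=[5, 5], scale=1.0, size=(100, 2))

# Groups derived from the first batch alone...
model_a = KMeans(n_clusters=2, n_init=10, random_state=0).fit(batch_a)

# ...differ from the groups derived once more data becomes available.
model_ab = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.vstack([batch_a, batch_b])
)

print(model_a.cluster_centers_)
print(model_ab.cluster_centers_)  # new data, new groupings
```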
In my earlier posts about the obligation to accept predictions from analytics, I argued that the process is beneficial over time. Faithful cooperation with a given recommendation provides the opportunity to collect new data that reflects the changed behavior. This new data will inform future results in predictive analytics that can either correct past mistakes or improve upon past successes.
No matter how reliable a pattern is in past observations, the next observation may surprise us, either because the observation was missing all along (the black swan problem), because the historic pattern was an accident, or because some form of magic or divine intervention occurred (something modern science dismisses). In science, we attempt to eliminate this chance of surprise by adding criteria such as sound cognitive processes that justify the relationship.
For modern 3-V (volume, velocity, and variety) big data analytics, there is no time for cognitive processes. We need to accept and act on the discovered patterns without understanding their underlying reliability. In several earlier posts I objected to the concept of eliminating the cognitive role in production data systems. I reasoned that there is still a need for human cognitive processes (and in particular rhetorical scrutiny) before concluding the truth of some observation. Yet we continue to embrace big data systems, and we increasingly automate decision-making to immediately implement the recommendations from predictive or prescriptive analytics.
This demand for cognitive justification (such as proof of causality) comes from the theory of time in which the present is a consequence of a distant past event such as the cosmological big bang. In contrast, our modern approaches of using abundant observations to derive patterns dismiss the need for a cognitive justification. Implicitly, this dismissal accepts a different theory of time. Instead of time being measured from a distant past original event that causally determined the entire universe, time can be measured in reverse, as the time from the immediate future. Causality is irrelevant when we observe current events as coming from that future event.
We are presented instead with a steady stream of observations that suggest patterns that may be only temporary. Given our access to abundant data, we have the opportunity to make a multitude of decisions based on observations without any cognitive justification such as causal explanations. Experience tells us that our inductions from observations succeed more often than they fail, and when they do fail, they tend to fail gracefully, in a way that lets us tolerate or even ignore the consequences. Examples include how financial firms use big data to trade on the stock markets, and how marketing firms use big data to develop new strategies. They succeed because the analytics of observations succeed more often than they fail, and the returns from the successes outweigh the losses from the failures.
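A back-of-the-envelope sketch of that claim, using purely hypothetical success rates and payoffs, shows how frequent modest wins can outweigh occasional graceful failures:

```python
# Hypothetical illustration: many small decisions made purely from observed
# patterns, with no causal explanation behind any of them.
win_rate = 0.55        # assumed: patterns hold slightly more often than not
gain_per_win = 1.0     # assumed payoff when the pattern persists
loss_per_miss = 0.8    # assumed cost of a graceful failure

expected_value = win_rate * gain_per_win - (1 - win_rate) * loss_per_miss
print(expected_value)  # 0.19 > 0: the successes outweigh the failures
```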
There is no need for a theory of the reality of the world. We only need to follow the data.
The poet’s quote about the universe being made of stories instead of atoms seems to be right. We don’t need atoms to understand the universe. We know the world from the stories. Our observations are the stories told to us.
With my concept of a zero-dimensional “big bang” perpetually offset a moment into the future, I describe different story-tellers operating at different time-interval offsets from this future event. Human conscious thought is stuck several milliseconds behind this future event. Between our conscious thought and the originating event are numerous shorter time intervals where story-tellers are building stories in a supply-chain fashion until their stories become intelligible to our brains. It is at least conceivable that one of the story-tellers in the chain could suddenly start telling different stories.
If there were some way that we could communicate with these earlier story-tellers and provide them new instructions for how to tell their stories, we could perform magic instead of technology. To make anything possible, one would only need to convince the right story-tellers to tell different stories.
By switching the origin of time to a moment in the future instead of some distant past event, we eliminate the need for causality to constrain future outcomes. To get what we want, we only need to communicate our wishes to the upstream story-tellers operating in shorter time intervals closer to the origin of time in our future. Unfortunately, this kind of upstream communication is impossible.
In an earlier post I made an analogy of time as a motor-boat towing us as water-skiers at the end of some length of rope. We ride in the wake a constant distance from the source. In this analogy we have to negotiate the waves that reach us. We do not have the option of influencing the creation of the waves we encounter, because those are created out of our reach, in front of us but still behind the boat.
The intelligibility supply chain works on stories from shorter time intervals to create meaningful stories of longer time intervals. These intervals are measured from the source of time itself. The shorter intervals are always in front of the longer intervals. The shorter time intervals will never have access to the stories created within the longer time intervals. By the time the longer time intervals tell their stories, the shorter intervals have moved on ahead. The shorter time intervals have no clue of the consequences or intentions of the longer intervals.
The problem of communication between intervals is compounded by the fact that the story tellers occupy only a small portion of the time interval they inhabit. As I described in an earlier post, the intelligence is associated with different manifestations of matter: atoms, molecules, materials, and so on up to the entire cosmos. The physical dimensions of these entities correspond to the time interval they occupy. (Time can measure physical distances, such as the distance traveled by light). There are vast intervals where no entity resides. Communication between intervals, even if it were possible, would require major leaps in interval-distance.
There is a single direction of flow of stories from shorter intervals to longer intervals. Those entities that exist in longer intervals have to live with the consequences of the stories told within shorter intervals. In contrast, the shorter intervals have no means of learning the stories of the longer intervals.
In human terms, our stories include our wishes. We can pursue our wishes on the assumption that the shorter intervals will continue to tell their stories, and we can call this technology. Alternatively, we may pursue our wishes through attempts to convince the shorter intervals to tell new stories that are more to our advantage, and we can call this magic. Such magic involves a form of time travel, but one of trying to insert our information between the origin of time and the interval where the targeted story-tellers reside. This form of time travel is different from leaping into the distant future or past; instead it is an attempt to get closer to the beginning of time that lies in our immediate future. This type of time travel requires getting inside much smaller time intervals to get their attention.
The communication problem is like the example at the beginning of this post. That example involves two humans, where one experiences a second of life while the other experiences 100 years of life. The one who lives for 100 years will live those years without any influence at all from the one who lives out just one second. The effect is similar to an imagined time-traveler traveling into the future: the subsequent activities of the future proceed without his contribution. As long as this travel persists, the one who experiences one second over 100 years will have to accept the consequences of a world that is ignorant of his existence. Eventually, after a few seconds, that person will need to take a breath, and the breath he takes will be of the air available hundreds of years into the future. That air may be unpleasant, but he had no opportunity to influence the sequence of events that led to that consequence.
This concept of time travel is what usually occurs in science fiction. It is also consistent with time-dilation that occurs in relativity theory. The time-traveler into the future remains human while he is traveling. Relativity claims the frame of reference does not matter to the experiences of the traveler.
In contrast, the time-travel required to perform magic (to influence story-tellers of shorter time-intervals) is not relativistic. The experience of life in shorter time intervals requires the traveler to become one of the denizens of that interval. For example, if we need to communicate with atoms, we need to become atoms.
Similarly, in my time-traveling human scenario, by living one second while the world progresses ahead by 100 years, that person will cease to be human. Perhaps from his perspective the world of humans would cease to be human, but he would have to admit he is the only human that exists. This type of time-travel, living in different intervals, separates the traveler from his former peers. In particular, he is isolated from the story-telling community. The 100 years will progress without his opportunity to contribute to the culture of stories, but he will have to live with the consequences of the stories told in that interim.
Imagining again the magician who travels to a smaller interval, that traveler will lose his humanity and join the culture of that dimension. For example, someone desiring to cure a cancer may want to get into the intervals where proteins and DNA operate in order to negotiate with those entities to stop doing something that is having drastic effects on the being living in longer intervals. A path to curing cancer could be to convince the bio-chemical processes to reform their activities. However, to enter this conversation, the curer would have to live life at the far faster pace of proteins and DNA. That conversation would have to proceed amid the distractions and concerns of those proteins and DNA. The curer would become another protein or DNA. He may have the additional distraction of proving himself not to be foreign material that normally should be destroyed. Entering that world would present the curer with the need to survive at that level instead of at the level of human experience. The curer would more likely be convinced by the story-tellers to continue telling the same story in order to survive, than to convince that population to reform in order to save some distant concept of the body as a whole.