Over the four decades of my adult life, a recurring theme in my education and profession has been the idea that the world works on fundamental principles and atomic units. In college, I recall the confidence that we could understand everything from quantum mechanics if only we had the computing power to…
This is just one possible scenario of a synergy between humans and automation technology. We need technology for its mastery of the time domain. Technology depends on us for survival because of our biological advantages in solving problems in time-volume or in the frequency domain.
Time, as we experience it, has different components sharing a common unit (such as seconds). There is scientific time, which is analytic in a way that makes possible the mechanistic models that are so successful at describing the physical world. There is historic time, which allows intelligence to grow through the additional evidence that inevitably accumulates with the passage of time. And for intelligence to act upon the physical (mechanistic) world and exercise free will, a further component of time is required: time for persuasion, through some process that selects among the opportunities presented by an otherwise indifferent physical world.
We should learn from recent experience with large-data technologies the lesson that decision making can benefit from streaming data in addition to (and often instead of) the publication science of one-time experiments. It is clear now that policy making needs access to a continuous stream of fresh data about old ideas, especially when that data accumulates over time. With the technologies to do this work now available, it is unacceptable to base policies on the failed approaches of the past that rely solely on published studies.
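The contrast between one-time studies and streaming data can be made concrete. A minimal sketch (the class name and data values are illustrative, not from the text) is Welford's online algorithm, which refreshes a running estimate one observation at a time instead of re-running a batch analysis:

```python
# A minimal sketch of streaming aggregation: Welford's algorithm updates a
# running mean and variance as each observation arrives, so the estimate
# stays current with fresh data rather than being frozen at publication time.

class RunningStats:
    def __init__(self):
        self.n = 0          # observations seen so far
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # sum of squared deviations from the mean

    def update(self, x: float) -> None:
        """Fold one new observation into the estimate in O(1)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        """Sample variance of everything seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# As measurements stream in, the estimate is always up to date:
stats = RunningStats()
for x in [12.0, 15.0, 11.0, 14.0]:
    stats.update(x)
print(stats.mean)  # 13.0
```

The point of the sketch is the shape of the computation: nothing is ever "finished" the way a published study is; every new datum revises the conclusion.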
Unlike classical skepticism about knowledge, or about our ability to know the truth, the modern skepticism is a skepticism about having enough data.
An initial consciousness could, through design, refactoring, and replication, build up the universe without any further miracles beyond that initial consciousness itself.
With big data, we end up with deep historical data about distant events. Something will be needed to fill in the gaps that were mysteries at the time. That gap filler will be spontaneous data, whether we acknowledge it or not. Even if we as humans leave a gap unfilled, we cannot be sure that our data analytics or machine-learning algorithms will not fill it. And when one does, how can we be sure it will not come up with a supernatural explanation that it keeps to itself?
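The gap-filling described above need not be exotic; even routine data pipelines do it. A minimal sketch (the function and data are illustrative assumptions, not from the text) is linear interpolation over missing entries in a historical series; the filled values are inventions of the algorithm, not observations:

```python
# A minimal sketch of automatic gap-filling: interior runs of missing values
# (None) are replaced by linear interpolation between the known neighbors.
# The filled-in numbers are "spontaneous data" -- plausible, but never observed.

def fill_gaps(series):
    """Return a copy with interior None runs replaced by linear interpolation.

    Leading/trailing gaps (no known value on one side) are left as None.
    """
    filled = list(series)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i
            # find the extent of this run of missing values
            while i < len(filled) and filled[i] is None:
                i += 1
            # interpolate only if known values bracket the gap
            if start > 0 and i < len(filled):
                left, right = filled[start - 1], filled[i]
                span = i - start + 1
                for k in range(start, i):
                    frac = (k - start + 1) / span
                    filled[k] = left + (right - left) * frac
        else:
            i += 1
    return filled

print(fill_gaps([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```

Nothing in the output flags which values were measured and which were manufactured; that silence is exactly the concern raised above.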