This is just one possible scenario for a synergy between humans and automation technology. We need technology for its mastery of the time domain. Technology, in turn, depends on us for survival because of our biological advantage in solving problems in the time-volume or frequency domains.
Time, as we experience it, has different components that share a common unit (such as seconds). There is scientific time, analytic in a way that makes possible the mechanistic models that are so successful at describing the physical world. There is historic time, which allows intelligence to grow on the additional evidence that inevitably accumulates with the passage of time. And for intelligence to act upon the physical (mechanistic) world and exercise free will, there is a component of time devoted to persuasion: some process for selecting among the opportunities presented by an otherwise indifferent physical world.
We should take from our recent experience with big-data technologies the lesson that decision making can benefit from streaming data in addition to (and often instead of) the publication science of one-time experiments. It is now clear that policy making needs access to a continuous stream of fresh data about old ideas, especially as that data accumulates over time. With the technology to do this work now available, it is unacceptable to base policies on the failed approaches of the past that rely solely on published studies.
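As a minimal sketch of the contrast, consider an estimate that keeps revising itself as fresh observations arrive, instead of being frozen at publication time. The use of Welford's online algorithm here, and the toy readings, are my own illustrative choices rather than anything prescribed above:

```python
# A one-time study publishes a fixed estimate; a streaming estimate
# folds in each new observation. Welford's online algorithm gives a
# numerically stable running mean and variance.

class RunningEstimate:
    def __init__(self):
        self.n = 0          # observations seen so far
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # sum of squared deviations from the mean

    def update(self, x: float) -> None:
        """Fold one new observation into the estimate."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


# Hypothetical usage: every reading updates the estimate, so the
# "finding" is never frozen the way a published study is.
estimate = RunningEstimate()
for reading in [2.1, 1.9, 2.4, 2.0, 2.2]:
    estimate.update(reading)
    print(f"n={estimate.n} mean={estimate.mean:.3f} var={estimate.variance:.3f}")
```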
Unlike skepticism about knowledge, or about our ability to know the truth, modern skepticism is a skepticism about having enough data.
An initial consciousness could, through design, refactoring, and replication, build up the universe without any miracle beyond that initial consciousness itself.
With big data, we end up with deep historical data about distant events. Something will be needed to fill in the gaps that were mysteries at the time, and that gap filler will be spontaneous data whether we acknowledge it or not. Even if we as humans leave a gap unfilled, we can't be sure that our data analytics or machine-learning algorithms won't fill it. And when they do, how can we be sure they won't come up with a supernatural explanation that they keep to themselves?
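To make the gap filling concrete: standard analytics pipelines will quietly invent values for missing observations unless told otherwise. The historical series below is hypothetical; the point is only that a routine interpolation step manufactures plausible data where none was ever recorded:

```python
import numpy as np
import pandas as pd

# A hypothetical historical series with gaps: years in which no
# observation was ever made.
records = pd.Series(
    [14.0, np.nan, np.nan, 15.2, 14.8, np.nan, 16.1],
    index=[1890, 1891, 1892, 1893, 1894, 1895, 1896],
)

# A routine pipeline step: linear interpolation replaces the
# mysteries with smooth, plausible-looking values.
filled = records.interpolate(method="linear")

print(filled)
# The values for 1891, 1892, and 1895 are spontaneous data: never
# observed, yet downstream analysis treats them like the rest.
```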
The popular dark-matter hypothesis takes for granted the existence of fundamental particles that lie outside human capacity to observe. The hypothesis in the first article is that these hidden particles are as-yet-undetected peers of the subatomic particles we already know. But the lack of perturbation in the post-collision dark matter implies that if such subatomic dark-matter particles exist, they do not collide individually the way familiar particles do. My conjecture is that the entire blob depicted in ghostly blue in the visualization is a single particle, or an agglomeration of galaxy-sized fundamental particles. The collisions didn't affect these particles because, at that scale, the collisions are trivial.