Time, as we experience it, has different components that share a common unit (such as seconds). There is scientific time, analytic in a way that makes possible the mechanistic models so successful at describing the physical world. There is historic time, which allows intelligence to grow through the additional evidence that inevitably accumulates as time passes. And for intelligence to act upon the physical (mechanistic) world and exercise free will, there is a further component of time required for persuasion: some process that allows for selecting among the opportunities presented by an otherwise indifferent physical world.
In the actual case, humans facilitated the fast recovery after the false alarm because, even though each person was specialized in what they would have been doing that day, they could in large part recognize the situation by observing the people around them going about their own restoration of normal life. The specialization among humans is fundamentally different from the specialization of machines: despite their specialized duties or goals, humans share a common human identity. They will adjust their expectations and demands in part to accommodate the difficulties they can see others going through to meet those demands. People will also volunteer their services in areas where they do not normally work. In contrast, the various systems within an automated economy are less equipped to cooperate with unrelated systems, either in perceiving the need to cooperate or in having the capacity to help.
Unlike skepticism about knowledge or about our ability to know the truth, the modern skepticism is a skepticism about having enough data.
With big data, we end up with deep historical data from distant events. Something will be needed to fill in the gaps that were mysteries at the time. That gap filler will be spontaneous data, whether we acknowledge it or not. Even if we as humans leave a gap unfilled, we cannot be sure that our data analytics or machine learning algorithms will not fill it. And when they do, how can we be sure they will not come up with a supernatural explanation that they keep to themselves?
Dedomenology is about distinguishing data in terms of how well it captures an actual event in the real world, with documentation and controls solid enough that we know exactly what is being reported. Data can range from very bright (it really did happen), to very dim (we are not sure what happened), to dark (we guess this might have happened), to forbidden (this could not have happened), to unlit (something found nearby), to accessory (irrelevant observations). All of this variety of data can occupy a spot in a data store.
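The spectrum above can be pictured as a labeling scheme applied to records before they enter a data store. The following is only a minimal sketch: the class name `DataBrightness`, the helper `label`, and the sample record are hypothetical illustrations, with the category names and glosses taken directly from the text.

```python
from enum import Enum

# Hypothetical labels for the brightness spectrum described above.
# The names and glosses come from the text; the structure is illustrative.
class DataBrightness(Enum):
    BRIGHT = "it really did happen"
    DIM = "we are not sure what happened"
    DARK = "we guess this might have happened"
    FORBIDDEN = "this could not have happened"
    UNLIT = "something found nearby"
    ACCESSORY = "irrelevant observations"

def label(observation: str, brightness: DataBrightness) -> dict:
    """Tag an observation with its brightness before it occupies a
    spot in the data store."""
    return {"observation": observation, "brightness": brightness.name}

# A dim record: the observation exists, but we are unsure what happened.
record = label("sensor reading near the event", DataBrightness.DIM)
```

The point of the sketch is that every record, from the best documented to the merely nearby, carries an explicit marker of how well it captures an actual event, rather than all entries sitting in the store as if equally bright.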