We should learn from recent experience with large-scale data technologies that decision making can benefit from streaming data in addition to (and often instead of) the publication science of one-time experiments. It is clear now that policy making needs access to a continuous stream of fresh data about old ideas, especially as that data accumulates over time. With the technologies to do this work now available, it is unacceptable to base policies on the failed approaches of the past that rely solely on published studies.
Unlike classical skepticism about knowledge, or about our ability to know the truth, the modern skepticism is a skepticism about having enough data.
An initial consciousness could, through design, refactoring, and replication, build up the universe without any further miracles beyond the existence of that initial consciousness in the first place.
With big data, we end up with deep historical data about distant events. Something will be needed to fill in the gaps that were mysteries at the time. That gap filler will be spontaneous data whether we acknowledge it or not. Even if we as humans leave a gap unfilled, we cannot be sure that our data analytics or machine learning algorithms won’t fill it. And when they do, how can we be sure they won’t come up with a supernatural explanation that they keep to themselves?
What really makes legacy news fake is the tyrannical influence of past narratives over which future observations we accept. Fake news is the need to keep an old narrative relevant when that narrative never would have emerged had we started from scratch with the data available at the current moment.
Having model data explicitly materialized into tables gives the data clerk the opportunity to recognize the deficiency that this data is not observed data. It also gives the data clerk the opportunity to ask whether there could be another source for this data. Perhaps, for example, some new sensor technology has become available that provides observations that previously could only be estimated by models. The analyst can then revise the analysis to use that new data instead of the model-generated data.
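As a minimal sketch of what that workflow might look like (the table layout, column names, and the new sensor feed are all my assumptions for illustration, not anything specified in the text), model-generated values can be materialized alongside a provenance flag, so a later pass can swap them out when observations become available:

```python
import pandas as pd

# Hypothetical materialized table: some rows are observed, others are
# model-generated estimates filling gaps where no observation existed.
readings = pd.DataFrame({
    "site":   ["A", "B", "C", "D"],
    "value":  [10.2, 11.8, 9.7, 12.4],
    "source": ["observed", "model", "model", "observed"],  # provenance flag
})

def refresh_with_observations(table: pd.DataFrame,
                              new_obs: pd.DataFrame) -> pd.DataFrame:
    """Replace model-generated rows with newly available observations.

    `new_obs` stands in for a hypothetical new sensor feed keyed by site.
    Rows that remain model-generated keep their flag, so the deficiency
    stays visible to the analyst.
    """
    merged = table.merge(new_obs, on="site", how="left", suffixes=("", "_new"))
    has_obs = merged["value_new"].notna() & (merged["source"] == "model")
    merged.loc[has_obs, "value"] = merged.loc[has_obs, "value_new"]
    merged.loc[has_obs, "source"] = "observed"
    return merged.drop(columns=["value_new"])

# Example: a new sensor now covers site B, so its model estimate is replaced.
sensor_feed = pd.DataFrame({"site": ["B"], "value": [11.5]})
print(refresh_with_observations(readings, sensor_feed))
```

The point of the explicit `source` column is exactly the one made above: the model-generated data is never silently indistinguishable from observation, so the question "could there be another source for this?" stays askable.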
The popular dark-matter hypothesis takes for granted the existence of fundamental particles that are beyond human capacity to observe. The hypothesis in the first article is that these hidden particles are as-yet undetected peers of the sub-atomic particles we already know. The lack of perturbation of post-collision dark matter implies that if such sub-atomic dark-matter particles exist, they do not collide individually like the particles we know. My conjecture is that the entire blob depicted in ghastly blue in the visualization is a single particle, or an agglomeration of galaxy-sized fundamental particles. The collisions didn’t affect these particles because the collisions are trivial at the scale of these particles.