Unlike classical skepticism about knowledge, or about our ability to know the truth, the modern skepticism is a skepticism about having enough data.
With big data, we end up with deep historical data about distant events. Something will be needed to fill in the gaps that were mysteries at the time, and that gap filler will be spontaneous data whether we acknowledge it or not. Even if we as humans leave a gap unfilled, we can’t be sure that our data analytics or machine learning algorithms won’t fill it. And when they do, how can we be sure they won’t come up with a supernatural explanation that they keep to themselves?
Dedomenology is about distinguishing data in terms of how well it captures an actual event in the real world, with documentation and control solid enough that we know exactly what is being reported. On this view, data ranges from very bright (it really did happen), to very dim (we’re not sure what happened), to dark (we guess this might have happened), to forbidden (this could not have happened), to unlit (something found nearby), to accessory (irrelevant observations). Every one of these varieties of data can occupy a spot in a data store.
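The spectrum just described could be treated as metadata attached to each record in a data store. A minimal sketch in Python, where the `DataBrightness` enum, its member names, and the example record are my own illustrative naming rather than any established scheme:

```python
from enum import Enum

class DataBrightness(Enum):
    """Hypothetical labels for how well a datum captures a real event,
    following the spectrum named in the text."""
    BRIGHT = "it really did happen"
    DIM = "we're not sure what happened"
    DARK = "we guess this might have happened"
    FORBIDDEN = "this could not have happened"
    UNLIT = "something found nearby"
    ACCESSORY = "irrelevant observations"

# Any variety can occupy a spot in a data store: here a record carries
# its brightness as metadata alongside the observation itself.
record = {
    "observation": "a reported sighting, documentation unclear",
    "brightness": DataBrightness.DIM,
}
print(record["brightness"].value)
```

The point of the sketch is only that brightness is a property of the record, not of the store: bright and dark data sit side by side unless something labels them.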