Unlike skepticism of knowledge or of our ability to know the truth, modern skepticism is a skepticism about having enough data.
Following up on my last post, I am also wondering about where IQ comes from. As noted there, there seems to be a genetic component to IQ because IQ tends to be stable over an individual’s lifetime, and there seems to be an environmental component since IQ scores have been increasing in recent generations.…
Clearly there are better philosophies that introduce morals, charity, cooperation, restraint, human rights, etc. Unfortunately, they never fully refute the default philosophy of the martial arts. As implied in the video at the top, sometimes these elevated philosophies come into conflict in a way that must be resolved by the default philosophy of the martial arts.
Our admiration of the martial arts is a result of our respect for the strength of its underlying philosophy.
The modern era of machine learning, though, presents us with an example where we can begin to suspect a separate level of intelligence, one that feeds on our intelligence. As we sense this happening, we realize that we will get no sympathy from the machine, for the exact same reason we fail to recognize naturally occurring intelligence in non-humans.
With big data, we end up with deep historical data about distant events. Something will be needed to fill in the gaps that were mysteries at the time. That gap filler will be spontaneous data whether we acknowledge it or not. Even if we as humans leave the gap unfilled, we cannot be sure that our data-analytics or machine-learning algorithms won't fill it. When they do, how can we be sure they won't come up with a supernatural explanation that they keep to themselves?
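This kind of silent gap-filling is routine in ordinary data pipelines: a standard imputation step replaces missing historical values with model-generated ones, and downstream analysis never sees the difference. A minimal sketch (the data and function name are hypothetical, for illustration only):

```python
# Minimal sketch of silent gap-filling: missing historical observations
# (None) are replaced by the mean of the observed values. The filled-in
# numbers are model-generated "spontaneous data", not real observations,
# yet they are indistinguishable from observations downstream.
def impute_mean(series):
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

readings = [3.0, None, 5.0, None, 4.0]  # hypothetical historical record
filled = impute_mean(readings)          # gaps now hold the mean, 4.0
```

Mean imputation is only the simplest case; more elaborate imputers (regression models, learned embeddings) fill the same gaps with values that encode the model's own assumptions, which is exactly the concern raised above.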
Just as the fact that I can't interest an advanced piano teacher doesn't diminish the fact that such teachers exist, the fact that science can't engage the immaterial teacher says nothing about the existence of such a teacher. The teacher is simply uninterested in engaging, and has every good reason not to engage.
Success or fame in the current environment is a lottery prize awarded to the great and the weak with equal probability. In such an environment, it makes sense to put as little effort into success as one would put into buying a lottery ticket: a weekly effort, perhaps, but one that takes only a few minutes, leaving the rest of the week free.
This futuristic dedomenocracy offers a way to falsify the deterministic theory of mind by simple observation of innovative crime that occurs despite the power of the available data. Observations of successful innovative crimes that have unacceptably damaging consequences, or that occur in unacceptably high numbers, would be evidence that we lack sufficient data for the deterministic model. However, I am assuming a future where we will have observations of just about everything that is observable. If dedomenocracy continues to experience innovative crimes despite having access to everything that is observable, then the innovative part of the mind must be accessing something that cannot be observed.
The popular dark-matter hypothesis takes for granted the existence of fundamental particles that are outside of human capacity to observe. The hypothesis in the first article is that these hidden particles are as-yet-undetected peers of the sub-atomic particles we already know. The lack of perturbation of post-collision dark matter implies that if such sub-atomic dark-matter particles exist, they do not collide individually like the particles we know. My conjecture is that the entire blob depicted in ghastly blue in the visualization is a single particle, or an agglomeration of galaxy-sized fundamental particles. The collisions didn't affect these particles because the collisions are trivial at the scale of these particles.
Dedomenocracy is a scaled-up version of modern data science practice that uses big data predictive analytics to automate decision making. As a data science project, there is a need to evaluate the data in terms of how closely it represents a fresh, unambiguous observation of the real world at a specific time instead of a reproduction of a past observation through model-generated dark data. Darker data involves some level of contamination with historic observations or with our interpretation of past observations. The problem with darker data is that its use of old and potentially outdated data can discount more recent observations that could tell us something new and unexpected about the current circumstances of the world.