This article describes a research result aimed at understanding the nature of dark matter so that we can find clues about how to finally observe it directly. The analysis is of colliding galaxy clusters, and it shows that the colliding volumes of dark matter do not appear to interact with each other. Although the visible matter is disturbed by the collisions, the conjectured dark matter shows no disturbance. The implication is that dark matter does not consist of invisible particles having familiar properties such as the ability to collide with each other. Another article claims the same observation appears to rule out a colliding parallel universe (or a 3D intersection of a multidimensional universe). These inconclusive claims argue against dark matter consisting of tiny particles or of intersections with other universes.
Dark matter is called dark because its only known effect on visible matter is gravity, making it undetectable by electromagnetic observation. All we know for sure is that there is more to the Universe than what we can detect in laboratories or in astronomy. (This has been common knowledge for all human history.)
In earlier posts, I have argued that model-generated data does not belong in data stores. I called this type of data “dark data” expressly as an analogy to cosmology’s dark matter. Dark data is a substitute for missing observational data. That substitution can be based on highly trusted scientific theories, but it should still be segregated from observational data. Scientific analysis should be analysis of observations only. Application of scientific theories should occur only at the end of the scientific effort, when the observational data is prepared for publication or persuasion.
The study of model-generated data is metaphysics instead of physics. There is a useful role for metaphysics, specifically to challenge the logic of our theories, but it is distinct from the direct understanding of nature. Dark matter is dark data. The study of dark matter is metaphysics instead of physics.
In this particular study, model-generated data is treated as observed data. Models compute where dark matter should be found. Modern technology permits compelling visualization of the conjectured matter, making it appear as a peer to electromagnetic observations. The visualization deceives us into seeing this data as if it were actually observed.
With this visualization of conjectured stuff, we observe model-generated data behaving in a surprising way. Model-generated dark matter does not collide with other dark matter the way known matter would. The visualization clearly shows that the dark matter organizes into clumps. This presents a contradiction: dark matter must be able to interact with itself in order to coalesce into clumps, yet it does not interact with its peer clumps when those clumps should have collided.
The contradiction is a fault of asserting model-generated data (the proposed distribution of dark matter) as observed data. Eliminating the proposed matter as valid observation data solves the problem. Dark matter by definition resists observation. From a data perspective, the existence of dark matter has no place in the data store.
There are still two different useful conclusions. The first conclusion is that colliding visible matter interacts with its kind: for example, by changing where star-forming regions occur. The second conclusion is that colliding dark matter does not interact with its kind. The first conclusion is from natural physics. The second conclusion is from metaphysics. The fault of this study, or at least of the article, is to discuss these conclusions simultaneously as part of the same intellectual endeavor of natural physics. Dark matter resists direct observation. Any discussion of dark matter is metaphysics, an investigation of our theories, not of nature.
We conjectured the existence of dark matter to explain motions and lensing by galaxies or galaxy clusters that cannot be explained by the mass of the observable matter. We should leave the missing data explicit in the data store. It is sufficiently satisfying to accept that we have no observation that can explain these behaviors. Derived dark matter is not an observation. Adding dark data to the data store misdirects research investments meant for physics and spends them instead on metaphysics. To the extent that this is government funded, we are paying for science and getting metaphysics instead.
Ethically, the study of dark matter should come from funds specifically allocated for metaphysics instead of natural physics. This same ethics applies to all forms of dark data. A presentation of a data analysis should be completely honest about whether it is a conclusion about the natural world based on direct observations, or a metaphysical conclusion about our theories about the natural world. I realize that my views are in the minority. Most scientists have no problem substituting model-generated data for missing observations as long as there is broad consensus about the reliability of the model.
I don’t object to the project of studying theories, but I place this effort in the realm of metaphysics, by the term’s original definition of what comes after physics. In the data life-cycle, the metaphysics occurs at the end, as part of the persuasion of decision makers (or sponsors).
We must enforce governance to keep model-generated data out of the data store of observations. In the case of cosmological dark matter, we will need to confront the fact that the observational record is incomplete. There is more to the cosmos than what we can observe electromagnetically (light, X-rays, radio, etc.). I grant the legitimacy of proposing dark matter to explain the observations. I object to using this conjecture as a source of observations, as happened in the article that concluded that colliding dark-matter blobs do not interact with each other.
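As a minimal sketch of what that governance rule could look like in practice (the record and store names here are hypothetical, an illustration of the principle rather than any real pipeline), a store of observations can require a provenance tag on every record and refuse anything whose values came from a model:

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    OBSERVED = "observed"        # value came from an instrument reading
    MODEL_GENERATED = "model"    # value was computed from a theory or model


@dataclass
class Record:
    quantity: str                # e.g. "x_ray_luminosity"
    value: float
    provenance: Provenance


class ObservationStore:
    """A store that accepts only directly observed records."""

    def __init__(self):
        self._records = []

    def add(self, record: Record) -> None:
        # Governance rule: model-generated (dark) data never enters the
        # observation store; it belongs in a separate store that is used
        # only at the publication or persuasion stage.
        if record.provenance is not Provenance.OBSERVED:
            raise ValueError(f"rejected model-generated record: {record.quantity}")
        self._records.append(record)


store = ObservationStore()
store.add(Record("x_ray_luminosity", 3.2e44, Provenance.OBSERVED))   # accepted
try:
    store.add(Record("dark_matter_density", 0.27, Provenance.MODEL_GENERATED))
except ValueError as err:
    print(err)   # rejected model-generated record: dark_matter_density
```

The point of the sketch is only that the segregation can be enforced mechanically at ingestion time, rather than relying on analysts to remember which numbers were derived.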
Observations of galaxy motions inform us of our ignorance. The stuff we are familiar with on Earth can only account for about 5% of what makes up the universe. Dark energy and dark matter are placeholders for that missing 95%.
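For concreteness, the standard reasoning that produces those placeholders goes roughly as follows (textbook Newtonian dynamics, not taken from the article). A star orbiting at radius \(r\), outside most of a galaxy's visible mass \(M(r)\), should have circular speed

\[ v(r) = \sqrt{\frac{G\,M(r)}{r}}, \]

which falls off as \(1/\sqrt{r}\) once \(r\) is beyond the visible matter. Measured rotation curves instead stay roughly flat at large \(r\), so the inferred \(M(r)\) must keep growing far beyond the matter we can see. Dark matter is the name given to that inferred surplus.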
Personally, I am not surprised by this ignorance. On the contrary, I think we should be proud that humanity’s brief existence has allowed us to achieve a 5% understanding based only on our experiences on Earth. Over the past several decades, we have rapidly decreased this percentage of our understanding, from an initial near-100% about a century ago to the current 5%. I expect our real understanding is proportional to humanity’s experience in time and place relative to the cosmos: minuscule.
There is a human limitation of being stuck on Earth and its immediate environment. In particular, our access to observing the emptiness of space is limited to the approximations we create in laboratories or what we experience in interplanetary space. We have no access to the empty space within and between galaxies. That empty space may be of a different quality than the space we can observe locally.
One particular distinction is scale. On the scale of light-years or parsecs, there may be a different kind of empty space than what exists within star systems. This is not about the sparse material content of the empty space. Instead, there may be different types of emptiness at scales larger than humans can possibly experience through experiments.
I think we should learn a lesson from the last century’s discovery of how nature operates at sub-atomic scales. One way to look at the weirdness of quantum mechanics is to conclude that empty space behaves differently at that scale. At quantum scales, empty space does not distinguish particles from waves. The empty space at sub-atomic levels is different from the empty space that humans can experience directly.
There may be similar differences in the nature of the empty space at galactic and inter-galactic scales. Perhaps the empty space that surrounds a galaxy is different from the empty space that separates the galaxies. The model I have in mind is of oil droplets in water. The galaxies are trapped in the droplets and travel through the water of intergalactic space. The droplet containing the galaxy can have motion that carries the galaxy along with it. This spinning droplet of galactic empty space with an entrapped galaxy can explain the fact that the outermost arms of galaxies do not slow down as they should given the mass concentrated at the galaxy’s center. This droplet of empty space may have different properties than the empty space between galaxies. These properties may involve slowing down light to produce refraction. The differences in empty space at galactic and inter-galactic scales could explain the illusion that the universe’s expansion is not slowing down as it should based on the calculation of gravitational mass. Intergalactic empty space being of a different nature than intragalactic space may be an alternative explanation to dark energy.
Assume for a moment that my dark-nothing hypothesis of multiple types of nothingness at different scales is the ultimate truth. This is a truth that can never be scientific, because humans will never be able to sample the nothingness outside of our stellar neighborhood to observe the nothingness-stuff that spins the galaxies, nor to observe the intergalactic nothingness-stuff that presents low resistance to the intragalactic nothingness-stuff.
The dark nothing is not a scientific theory. It is a metaphysical one, similar to the conjecture of dark matter, extrapolated from our experience of stuff we can observe. Dark matter extrapolates from tangible stuff. Dark nothing extrapolates from the differing theories of the empty space that separates sub-atomic particles and the empty space that separates human-observable objects. In contrast to dark matter, dark nothing receives little theoretical consideration. We assume all empty space above the scale of subatomic particles is the same nothingness-stuff.
Recently I observed this video that attempts to explain (entertainingly) the current challenges of research to explain the cosmos. The bulk of the video is about trying to understand dark matter, but it describes the context of the ideas of the original big bang and the inflationary period that started the universe.
My observation is that the big bang and inflation were universal in size, where the entire context of time and space was instantly available. Within this instant, there was no measuring standard except the entire extent of the universe. Clearly, during this period there would not be any possibility of any kind of particles or of any kind of physics we can observe.
There would be no distinction between sub-atomic scales (where quantum mechanics tells us empty space behaves differently) and other scales ranging from stellar to galactic and inter-galactic. Within this brief period, many fundamentally different concepts of empty space may have emerged. One of those concepts provided the prototypes that define space at quantum scales. Another provided prototypes at stellar scales. There may be other prototypes for galactic and intergalactic scales. These prototypes would eventually define different types of emptiness.
Working from a big bang (all starting in one 0-dimensional point) with inflation when dimensions emerge, my hypothesis is that this would create fundamental particles of a variety of scales. There may be macro-sized particles where a single particle can encapsulate a galaxy. These particles can join (like quarks combine to make protons, or nucleons combine to make nuclei) to create composite particles of galaxy clusters. I think the observation of no splash-like effect from colliding stuff presumed to be dark matter can be explained by analogy to our experience with colliding atoms or particles. In particular, unless there is sufficient violence involved, colliding atoms or particles do not splash. It would take a huge amount of violence to get galaxy-sized particles to splash.
I suppose this could be a different theory of dark matter, but one where the missing particles are so large that they embed galaxies. The missing mass of a galaxy may be like the missing mass of an atomic nucleus after subtracting the known masses of its component protons and neutrons. The missing mass exists as a distributed property of the whole instead of a specific missing particle.
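For reference, the nuclear side of that analogy is the standard mass-defect calculation (textbook values, not from the article). For helium-4,

\[ \Delta m = 2m_p + 2m_n - m_{^4\mathrm{He}} \approx 2(1.00728\,\mathrm{u}) + 2(1.00866\,\mathrm{u}) - 4.00151\,\mathrm{u} \approx 0.0304\,\mathrm{u}, \]

about 0.75% of the total, corresponding to a binding energy \( E_b = \Delta m\,c^2 \approx 28\ \mathrm{MeV} \) that belongs to the nucleus as a whole rather than to any one constituent particle.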
I prefer to think of the galaxy-sized particle as a different type of empty space. I like this interpretation because it identifies an area of research that we are neglecting: research into empty space itself. We define empty space as the absence of stuff we can observe. It may instead be a medium that defines the stuff we can observe. Empty space determines our observations. Similar to our thinking about the specific case of the empty space inside an atom, we can think about the specific case of the empty space inside of galaxies.
The galactic-sized volumes of empty space are distinct. Here it is useful to think of this distinctiveness as a particle. A galaxy is a particle of the kind of nothingness that embeds galaxies.
At some point in space there is a surface to this particle. The volume within this surface contains the gravity-bound matter of star systems, galaxies, or galaxy clusters. I suspect there may be an intermediate stellar-neighborhood level of particle that governs our macro (bigger-than-quantum-level) observations. Our local experience is inside a sub-particle of nothingness that is distinct from the nothingness that comprises the galaxy’s particle. As a result, we may have no conceivable access to directly sample the nothingness that lies outside our stellar neighborhood but inside the galaxy.
I can imagine that light may behave differently in the different media of emptiness. In particular, there may be a difference in the speed of light inside a galaxy and outside of it. This difference in speed results in different refractive indexes that cause the refraction we observe. Light may travel faster between galaxies than it does within a galaxy. It travels faster because the nothingness it travels through is of a different quality.
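To make the refraction claim concrete (standard optics applied to my speculation; the symbols are mine, not an observation): if light travels at speed \(v_{\mathrm{gal}}\) inside a galaxy's droplet of emptiness and at \(c_{\mathrm{inter}}\) between galaxies, the galactic region has an effective index of refraction

\[ n = \frac{c_{\mathrm{inter}}}{v_{\mathrm{gal}}} > 1, \]

and rays crossing the droplet's boundary would bend according to Snell's law, \( n_1 \sin\theta_1 = n_2 \sin\theta_2 \), deflecting light much as a lens does.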
The universe is vast.
The popular dark-matter hypothesis takes for granted the existence of fundamental particles that are outside of human capacity to observe. The hypothesis in the first article is that these hidden particles are as-yet undetected peers of the sub-atomic particles we already know. The lack of perturbation of post-collision dark matter implies that if such sub-atomic dark-matter particles exist, they do not collide individually like the particles we know. My conjecture is that the entire blob depicted in ghastly blue in the visualization is a single particle, or an agglomeration of galaxy-sized fundamental particles. The collisions didn’t affect these particles because the collisions are trivial at the scale of these particles.
An analogy is a container filled with a mixture of pure hydrogen and oxygen. Such a mixture is stable at room temperature despite the multitude of collisions occurring between the molecules. It takes more than a room-temperature collision to make these particles splash.
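The chemistry behind that stability, stated with textbook numbers (my illustration, not from the article): the fraction of collisions energetic enough to react scales roughly as the Boltzmann factor

\[ f \sim e^{-E_a / k_B T}, \]

and with \(k_B T \approx 0.026\ \mathrm{eV}\) at room temperature against an activation energy \(E_a\) on the order of an electron-volt or more, that fraction is vanishingly small. The molecules collide constantly, but almost never hard enough to splash.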
Observable collisions of galaxy-scale particles, or of galaxy-cluster molecules made of those particles, occur at the equivalent of chemistry’s room temperature. In this model, we can expect that nothing will happen to the particles as a result.