Recently, I was intrigued by a story about a new character in Disney Star Wars: a sentient stone that happened to be an excellent navigator of hyperspace. The discussion reached me through videos ridiculing a character that is a lump of rock. I didn’t mind the idea of a sentient stone. Instead, I was reminded of something that always bothered me about Star Wars: the idea that someone can navigate through hyperspace. Full disclosure, the last time I saw anything Star Wars was back in 1979, so I’m a bit uninformed about the concepts. I accept that hyperspace is something you can jump into and something you can get out of. What I object to is the idea of having any kind of intelligent control in the middle of hyperspace. Once in hyperspace, what landmarks or beacons can be followed? To the extent that they exist, what kind of directional control is available to change course? And how would any civilization learn these things for the first time, short of a mindlessly expensive mission that had to succeed on the first try?
The Star Wars hyperspace may be a convenient shortcut between two parts of the galaxy, but the rules of the shortcut are very different from those of normal space.
In this blog, I make a distinction between bright data and dark data. Bright data are data from well-calibrated sensors providing recent observations that are directly relevant to situational awareness. In contrast, dark data come entirely from computations based on theories and on boundary conditions to extrapolate from.
In the Star Wars context, I imagine the jump to hyperspace to be a jump from bright data to dark data.
Normal space is navigable in the usual sense: checking sensors for conditions around the ship, checking for observable beacons or landmarks, checking for weather conditions that should be navigated around. With these immediately acquired direct observations of the real world, the navigator can plot a course for steering and speed. This is navigating with bright data.
Hyperspace is a very different kind of space. In hyperspace, I doubt there can be any kind of relevant observations. To observe anything about normal space, those observations must themselves jump into hyperspace. Having information jump into hyperspace for the convenience of a hyperspace navigator would result in an imbalance in the conservation of mass and energy in normal space. For information to have the ability to cross into hyperspace on its own, it would also need the ability to get out of hyperspace on its own. In the logic of the Star Wars universe, the complementary direction does not seem possible. In that universe, people in hyperspace can observe normal space, but normal space cannot see them.
The way I imagine this working is that the spacecraft would bring with it a snapshot of all the relevant data about the galaxy as observed at the point of jumping into hyperspace. That way, the information is already onboard when the ship enters hyperspace. From that point, the navigation is dead reckoning: calculations extrapolating from those initial observations and from any controls applied mid-flight. Within hyperspace itself, the travel would be blind. There would be no point in having any sensors, and certainly no point in having any windows.
The purpose of a navigator is to adjust a course based on new information, such as an obstacle (say, a storm) that needs to be avoided. Of course, in science fiction it is common to assume a new class of sensors capable of supplying this kind of updated information. But the fiction should at least try to explain how this would work in a concept like hyperspace. Again, I am willing to accept the possibility of entering and leaving hyperspace. The problem is that the concept of hyperspace seems to preclude any possibility of obtaining new information from outside the ship.
It seems to me that you would set a course at the start and then accept whatever happens when you pop out the other end. There may be calculated adjustments based on what can be observed inside the ship, such as fuel consumption, ship vibrations, or the experience of positive or negative accelerations. Those calculations would rely purely on pre-discovered theories and whatever observations were available before jumping into hyperspace.
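A minimal sketch can make the dead-reckoning idea concrete. Everything here is illustrative and invented for this post: the state is reduced to one dimension, and the only "observations" are onboard accelerometer readings, with no external fix ever arriving.

```python
# Minimal dead-reckoning sketch: position is extrapolated purely from
# the state known at the moment of entry plus onboard measurements,
# with no external fixes. All numbers are illustrative.

def dead_reckon(position, velocity, accelerations, dt):
    """Integrate onboard acceleration readings from an initial state.

    position, velocity: the state at the moment of entering the
    shortcut -- the last bright data the ship will ever see.
    accelerations: per-step onboard accelerometer readings; nothing
    outside the hull ever confirms where they have taken the ship.
    """
    for a in accelerations:
        velocity += a * dt
        position += velocity * dt
    return position, velocity

# Example: a constant 1-unit acceleration held for 10 unit time steps.
pos, vel = dead_reckon(0.0, 0.0, [1.0] * 10, 1.0)
```

Any error in the initial snapshot or in the accelerometer readings compounds silently with every step; the navigator only learns the true position on exit.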
The hyperspace navigator would sit in some closed-off room deep inside the ship, fully engaged in studying computer models of where things should be, given the extrapolations from the information known before entering hyperspace.
This long introduction is an analogy for my impressions of what happened onboard the Ever Given before it ended up stuck in the Suez Canal.
The Suez Canal is in many ways like hyperspace. It is a shortcut between two bodies of deep water, each with wide separation from any obstacles or anomalous conditions.
Within the shortcut, there is no longer a lot of room on either side of or below the ship. There are new considerations affecting the travel of the ship. These include shore effects that can push and pull the ship to either side, and floor effects that can push and pull the ship vertically and change its center of rotation. These effects depend on the speed, the direction, and the precise topography of what is mostly underwater.
The ship had the additional challenge of a wind storm with shifting wind fields.
For all of these new parameters, the information available was mostly from models. The winds the ship would soon encounter were computed from weather models and from relatively distant onshore weather stations. The actual underwater topography was estimated from long-past surveys that may not be precise for every foot and may be outdated due to shifting sands left by previous ships passing through.
One of the first pieces of information I learned about this ship was its size, and in particular its height. It was described as essentially a sailboat with an immovable sail. The normal procedure would be to travel with a yaw, balancing the wind against the thrust to move in the desired direction, but the ship was too long to travel this way in the narrow passage. Its other option was to speed up, using rudder authority to counter the wind forces, but this exacerbated the shore and floor effects.
Prior to grounding, the ship was trapped between the counteracting demands of wind and ship speed. The best way to control the ship against shore and floor effects was to travel slowly, but the best way to counter the wind was to travel fast.
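The bind can be sketched as a toy tradeoff. This is not real hydrodynamics; the two hazard terms and their coefficients are invented for illustration, standing in for the qualitative claims above: wind-induced drift worsens at low speed (less rudder authority, longer exposure), while bank and floor suction grows roughly with the square of speed.

```python
# Toy model of the speed dilemma. Coefficients are illustrative,
# not derived from any real ship or canal data.

def hazard(speed, wind=8.0, bank_coeff=0.05):
    wind_term = wind / speed             # drift risk worsens at low speed
    bank_term = bank_coeff * speed ** 2  # suction risk worsens at high speed
    return wind_term + bank_term

# Scan candidate speeds: no speed drives both terms low at once,
# so even the "best" speed carries residual risk from both effects.
speeds = [2, 4, 6, 8, 10, 12]
risks = {v: round(hazard(v), 2) for v in speeds}
best = min(risks, key=risks.get)
```

The point of the sketch is that the minimum of the combined curve is a compromise, not an escape: at the least-bad speed, both the wind term and the bank term remain well above zero.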
I imagine the navigators were very busy with their calculations to get through the situation. I also imagine that this is relatively routine: all previous times it has worked, and there was probably reasonable confidence that it would work this time.
Reports now say the focus is on human errors at various levels, along with second-guessing that the better choice was to wait until weather forecasts were more favorable. These assessments come from seasoned mariners. Yet I believe these same experts might have made the same choice to enter the canal when the ship did, and they probably made similar choices with similar risks in their own pasts. The Ever Given just got unlucky.
It seems to me that, except for the very smallest of ships, transiting the canal always involves a degree of luck. The concept of skill in navigation implies that the navigator has accurate and relevant observations about the conditions around and ahead of the ship. Instead, they had topographic and weather maps that were not generated recently. They could calculate what might be around them with some degree of confidence, but they had no direct measurements to confirm these guesses.
A more competent navigation would gather real information of a kind not typically available to a ship that normally spends its time in open, deep water. The sensors that would be helpful include terrain-mapping sonar, not only at various points along the hull but also on escort boats to either side and in front.
Additionally, there would be an array of wind sensors at a variety of positions and heights along the ship and its cargo. This would provide information not only about the overall forces but also about the gradients that could soon change the ship’s attitude with respect to its rudder. As with the sonars, there would be wind measurements coming from escorts along the sides and in front.
Even with all this information, there remains the unknown of how the overall ship would handle in these conditions. There could be sensors to measure water flow around the hull and at the rudder, and sensors for tilt and 3D acceleration at different parts of the ship. These would help provide updates about what the ship is actually doing.
Such extensive sensors on a ship would be prohibitive in themselves. It would be even more costly to procure and maintain the computing power needed to ingest all this information and calculate some optimal path.
Dark data is much cheaper than bright data. The crew has access to information about how the ship should behave in these situations, about what the below-surface topography should be like, and about what the above-surface winds should be like. The navigator uses these databases of what should be happening to make course decisions.
It is in this sense that this navigation reminds me of the hyperspace shortcuts in science fiction. In both cases, the ship is in a shortcut where spatial properties differ from the routine conditions of open seas and deep waters. In both cases, the navigator must rely on the information he had when he entered the shortcut. The navigator has few, if any, relevant measurements of what will really matter to the outcome of the journey.
Given all I have heard about the Ever Given grounding, I think it was possible, and perhaps even very likely, that it could have gotten through without incident even with the challenges it faced. I think many similar situations have occurred in that canal without incident. Each trip through the canal involves a lot of dark data, and that brings a certain degree of risk from relying on outdated information. The risks might have been lessened with an abundance of recent sensor information of all varieties, and with an abundance of computing resources to ingest and process that sensor information in time to provide relevant steering directions.
The sensors necessary to provide this kind of information are impracticable within the canal, whether in the form of a flotilla of escort ships measuring conditions around the ship, or in the form of the ship itself installing and maintaining the necessary coverage of sensors and computing capacity to perform this feat. Out of necessity, the ship must enter the canal somewhat blind, in the sense of relying on dark data that tells the pilot what should be occurring instead of what is actually occurring.