Hawaii’s false alarm of a missile attack: what if it occurred in a dedomenocracy?

On this site, I use the term dedomenocracy for a form of government in which the sole decision makers are machine intelligences applied to automated data collection. Of the many ways this might work, I stick to a model I describe as government by a combination of data and urgency. The data element demands that the government impose as few rules as possible in order to gather unbiased observations of the behavior of the individuals in the current population. As a result, another description of this model is punctuated libertarianism, or punctuated anarchy. The only rules are those that address urgent responses to either threats or opportunities, and modern data technologies make such urgent policy making possible. This concept of government has full authoritarian, even tyrannical, force to enforce the rules of urgency.
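To make the model concrete, here is a minimal sketch of the decision loop I have in mind. Everything in it, the names, the threshold, the 24-hour expiry, is a hypothetical illustration, not a specification of any real system.

```python
# Minimal sketch of the urgency-gated decision loop described above.
# Every name, the threshold, and the 24-hour expiry are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Rule:
    description: str
    expires_after_hours: float  # rules of urgency are temporary by design

@dataclass
class Dedomenocracy:
    urgency_threshold: float = 0.95      # act only on high-confidence urgency
    active_rules: list = field(default_factory=list)

    def assess_urgency(self, observations) -> float:
        # Placeholder: a real system would score threats and opportunities
        # from the collected data; here the default is "no urgency".
        return 0.0

    def step(self, observations) -> list:
        """One governing cycle: observe freely, legislate only under urgency."""
        if self.assess_urgency(observations) >= self.urgency_threshold:
            # Punctuation: an authoritative rule interrupts the default liberty.
            self.active_rules.append(Rule("urgent response", expires_after_hours=24))
        # Otherwise impose nothing, leaving behavior unbiased for observation.
        return self.active_rules
```

The essential design choice is that the default branch does nothing: absent urgency, the system only observes.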

The January 13 false alarm of an incoming ballistic missile attack on Hawaii provides an interesting case for considering how events would have played out in a dedomenocracy. I presume that a dedomenocracy is equally subject to the kind of human error that triggers a mass communication of a false alert of an imminent public danger. The reason is that my concept of dedomenocracy has automated analytics responding to data collected about human actions, along with complementary information about the role the individual plays.

A dedomenocracy will respond to a human input that triggers a false alarm. Again, the concept is that a dedomenocracy would interfere with human behavior only if there is an emergency. Certainly, a determination of an incoming missile attack qualifies for an urgent response.

One of the biggest complaints about the actual false alarm is that nearly 40 minutes passed before officials began informing everyone that the alarm was in error, even though the error had been recognized much earlier.

The good thing is that things returned to normal very quickly after the alarm was declared to be false. Many commentators have speculated that the situation could have turned out much worse even under the current form of government. In particular, there could have been many deaths caused by panic-driven stampedes, overcrowding of perceived shelters, or even widespread suicides (some of them murder-suicides), or at least overdoses of alcohol or drugs. I’ve seen many commenters marvel that this did not happen, given the current paranoid state of public debate.

With that in mind, a similarly benign outcome could have occurred under a dedomenocracy. In both systems, the composition of the society’s individuals (and their states of mind) determines what consequences will occur.

Even given the initial well-behaved response to the alarm, the subsequent results could have changed dramatically if the initial alarm had been followed by conflicting and incomplete news reports of some calamity resulting in multiple deaths. Given the initial priming that something was wrong, the population might have responded to the later news with even more alarm, and that could have triggered further disasters or even armed violence between groups mistakenly led to believe they oppose each other.

The consequences of the population’s response to a false alarm could have approached the destruction of an actual attack. The destructive result of an actual attack depends entirely on the engineering and proper functioning of a single warhead. Avoiding the destructive result of a false alarm depends entirely on the good behavior of every single individual in the population.

Fundamentally, there is a trade-off over which we trust more. Our civil defense alerting processes assume two things: that the missile will succeed if the alarm is valid, and that the population will be well behaved if the alarm is invalid or if the missile fails. We base the first assumption on experience with the technologies and on testing results of missiles and explosives. We base the second assumption on faith in the population to respond well and even to intervene to counteract someone who is not responding well. The second assumption rests on faith in humanity.

Some have proposed that the alarm was deliberate, with the intention of gathering information about how a modern society would respond to a credible alert of a civil emergency. From a data-driven, machine-learning perspective, this is a reasonable speculation. It would be valuable to collect data about how an entire population responds to such an event. In addition, keeping this intentional experiment a secret would preserve the value of the collected data, in the same way double-blind studies optimize pharmaceutical testing.

In this case, I don’t know what happened. It could have been an accident, or it could have been deliberate. My point here is that the same ambiguity would be present had this happened under a dedomenocracy. The difference is that our current presumption leans toward human error, while the presumption in a dedomenocracy leans toward a deliberate test.

At this time in history, we live in a combination of both worlds. I think the two explanations, accidental or deliberate, are equally likely. In any event, we did collect data about how a population responds; in particular, the event did not result in a disastrous chain reaction of panic and responses to panic.

In my taxonomy of data, the observations about the population’s response to a civil defense alarm are fairly bright data. These are direct observations, reported by the population on their social media, recorded by surveillance cameras, and logged as activity in both the physical and virtual worlds. There is also the bright data of evidence of what did and did not happen. From the perspective of data collection, the event was successful in providing abundant direct observations. On the other hand, there is also dark data in the form of explanations for why things occurred the way they did. My ongoing complaint is that reporting and discussion of current events intermingle dark and bright data without making this distinction of quality explicit. A dedomenocracy would segregate the bright data of directly observed or recorded events from the dark data of attempts to explain what happened.
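As an illustration of that segregation (with field names I am inventing here, not drawn from any real system), the data store might tag every record with its quality so that observations and explanations can never mix silently:

```python
# Sketch: segregating bright data (direct observation) from dark data
# (explanation or inference). Field names are my invention.

from dataclasses import dataclass
from enum import Enum

class DataQuality(Enum):
    BRIGHT = "bright"   # directly observed or recorded event
    DARK = "dark"       # human explanation or inference about an event

@dataclass
class Record:
    source: str         # e.g. "siren_log", "social_media", "interview"
    content: str
    quality: DataQuality

records = [
    Record("alert_system_log", "operator selected the live-alert option",
           DataQuality.BRIGHT),
    Record("press_interview", "operator believed the drill was real",
           DataQuality.DARK),
]

# Automated policy making would draw on bright data first and treat
# dark data as labeled, lower-grade evidence rather than mixing the two.
bright_only = [r for r in records if r.quality is DataQuality.BRIGHT]
```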

This event does illustrate a type of data that I have not previously distinguished. We have the event of the operator’s action, in this case the triggering of the initial alarm. Clearly, the triggering of the alarm was bright data: we know the person who did it, and what action he took. There is also the dark data of knowing why he did it, something he is reluctant to discuss. In addition to these forms of data, there is the special category of the alarm itself.

A dedomenocracy would of course collect the information about the authority’s actions that set off the alarm. Meanwhile, it would also collect pervasive observations of the sirens, public announcements, text messages, television interruptions, and so on. The systems would collect this data in parallel with the data of the initial event that started everything.

This leads to multiple versions of truth. The initial action by the operator to start the alarm is an observation with the ambiguity of whether the threat was real or false, and if false, whether the action was deliberate or accidental. Meanwhile, the sirens, announcements, and other alerts were unambiguous in their truth value: there was a state of alarm. The actual alarms have a different quality than the operator actions that started everything.

The dedomenocracy model distinguishes rules from observations. My concept is to have rules active only during periods of urgency, in order to allow the unbiased observations that optimize the non-urgent periods representing most of the time. The state of warning of a missile attack is a rule introduced in response to an urgent situation.

Rules themselves are a different category of data, distinct from bright data, dark data, unlit data, spark data, etc. The observations of the rules (such as recordings of sirens or announcements) are bright data. The rules themselves are different, in the sense that a dedomenocracy should segregate the rule itself from the other types of data. Rules are what punctuate the default libertarianism or anarchy that permits the population to behave as it wishes. In a dedomenocracy, rules must be followed; however, the fact of the rule’s existence is itself data that needs to be recorded in order to properly interpret subsequent observations of population behavior as being influenced by that rule. I’ll tentatively characterize a rule as punctuation data: data that interrupts the normal behaviors of the population.
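Extending the earlier sketch (still with invented names, and with approximate times), punctuation data could be stored as a record of when each rule was in force, so later analysis can ask whether any rule was punctuating behavior at a given moment:

```python
# Sketch: rules recorded as "punctuation data" so later analysis can tell
# which observations were made under an active rule. Names are hypothetical
# and the times are approximate.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PunctuationRecord:
    rule: str
    start: datetime
    end: Optional[datetime]   # None while the rule is still in force

punctuations = [
    PunctuationRecord("ballistic-missile shelter order",
                      start=datetime(2018, 1, 13, 8, 7),
                      end=datetime(2018, 1, 13, 8, 45)),
]

def under_rule(observed_at: datetime) -> bool:
    """True if any rule was punctuating normal behavior at that moment."""
    return any(p.start <= observed_at and (p.end is None or observed_at <= p.end)
               for p in punctuations)
```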

I assume that the algorithms in a dedomenocracy would properly recognize that the events during an emergency like this are directly influenced by a known rule introduced to address some immediate urgency. In particular, subsequent rule making would take into account that the current observations of human behavior are not comparable to observations of behavior without the rule in place.
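In practice that might amount to nothing more than a partition of the observation stream, as in this continuation of the sketch above (it reuses the hypothetical under_rule() helper):

```python
# Continuation of the sketch above (assumes under_rule() is already defined):
# keep rule-influenced behavior out of the unbiased baseline.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    observed_at: datetime
    behavior: str

observations = [
    Observation(datetime(2018, 1, 13, 8, 15), "sheltering in a parking garage"),
    Observation(datetime(2018, 1, 13, 10, 0), "resuming a morning commute"),
]

baseline = [o for o in observations if not under_rule(o.observed_at)]
rule_influenced = [o for o in observations if under_rule(o.observed_at)]
```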

Despite that, the subsequent events following a similar false alarm may be quite different under a dedomenocracy. In particular, the dedomenocracy would likely be more unstable, and in this context could have resulted in much more damage and harm than what actually happened.

One cause of instability in a purer dedomenocracy is the established expectation that machines will make most of the routine decisions, replacing the humans who currently make comparable decisions. Assuming that dedomenocracy has been the norm for generations, there would be very few people prepared to take leadership roles, and very few people who would acknowledge human authority in contrast to the authority of the machines.

In the punctuated libertarian model, the machines have absolute authority for rule making over routine life choices, but people also expect such rules to be rare and to be created only in situations demanding urgent action.

In this scenario of a very unusual civic emergency, the efficacy of the response by the dedomenocracy is uncertain, and probably not helpful. One reason is the multiple truth values of the alarm in the first place. From the data collection perspective, there is immediately some doubt as to the need to raise the alarm. Meanwhile, there is no doubt that the alarm is present and meant to be heeded. The presence of this condition provokes population behaviors that are unprecedented in prior observations.

Coexisting with the punctuated libertarian dedomenocracy is a largely automated economy. There will be driverless cars, automated access control or evacuation direction at facilities, and automated accommodations such as delivering goods, collecting garbage, or policing the community. Many of these systems will be autonomous of each other. Each will collect its own data or will independently interpret common data from other sources. Optimistically, I will assume there is some type of cooperation between the systems and with the governing dedomenocracy that set off the alarm, but I expect this to be a very weak, federation-type cooperation. I don’t anticipate a single global algorithm that can optimize the operations of all of these separate automated systems.
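A toy sketch of that weak federation (every class and reaction here is hypothetical): each subsystem hears the same public broadcast but decides alone, with no optimizer above them:

```python
# Sketch: a weak federation of autonomous subsystems. Each one hears the
# same public broadcast but reacts by its own independent logic.
# All class names and reactions are hypothetical.

class Subsystem:
    def on_public_alert(self, alert: str) -> str:
        raise NotImplementedError

class TransportFleet(Subsystem):
    def on_public_alert(self, alert: str) -> str:
        if "MISSILE" in alert:
            return "suspend hailing; stage vehicles near shelters"
        return "resume normal dispatch"

class BuildingManager(Subsystem):
    def on_public_alert(self, alert: str) -> str:
        if "MISSILE" in alert:
            return "convert facility to shelter mode"
        return "run habitability checks before reopening"

# No central optimizer: the same broadcast fans out, and each system
# decides alone, which is why recovery can be uncoordinated.
federation = [TransportFleet(), BuildingManager()]
for system in federation:
    print(system.on_public_alert("BALLISTIC MISSILE THREAT INBOUND"))
```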

A central global algorithm may have the best information about the nature of the alarm. In this scenario, it would be the first to learn that the alarm was false. The problem is that the automated systems within society will be interpreting current conditions based on direct observations of the public. For example, automated transportation systems will be responding to hailing requests from riders, and automated building management systems will be converting from their primary function to that of a shelter, or declaring the site unsuitable for a shelter. These systems will also be responding to the public information: the sirens and public announcements that would continue for a considerable time after the alarm is determined to be false.

Even after the alarm is withdrawn, these systems will all be responding to the unusual distribution of people for the time of day, and to their unusual demands. The systems will be responding to the data they collect independently. They may have information about the nature of the now-rescinded alarm, but they have to contend with the conflicting information of some people still attempting to flee to shelter, others changing their plans, and others still attempting to return to their normal daily routine. I find it easiest to visualize how this chaos would play out on the streets for transportation systems. There will also be chaos in other areas, such as converting buildings back to their normal purpose, with the prerequisite checks for habitability as well as the evacuation of shelter-seekers who otherwise should not have access to the facility.
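For instance (purely illustrative), a dispatch system might reconcile the retracted central alert with the demand it still observes directly, and the cautious choice is exactly what prolongs the disruption:

```python
# Sketch: after the retraction, local observations still conflict with the
# central all-clear. This toy reconciliation simply trusts whichever signal
# implies more caution.

def dispatch_mode(central_alert_active: bool,
                  shelter_requests: int,
                  normal_requests: int) -> str:
    if central_alert_active or shelter_requests > normal_requests:
        return "emergency routing"    # caution wins while demand looks abnormal
    return "normal routing"

# The alert is withdrawn, but riders still behave as if it were live:
print(dispatch_mode(central_alert_active=False,
                    shelter_requests=120, normal_requests=40))
# -> "emergency routing": the subsystem keeps the disruption alive on its own.
```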

I assume a largely automated food service industry would be disrupted, with supplies not delivered in time or food preparation interrupted. Even if that is not a problem, there is the problem of unusual meal schedules, where some will opt for earlier dinners while things settle down and others will postpone their meals for their own reasons. There may be food shortages in the short term, and these can trigger further population reactions, such as attempts to find places where food may be available. There may be longer-term shortages due to normal shipments being abandoned (perhaps left to spoil) during the period when delivery vehicles were redirected for anticipated relief efforts.

Generally speaking, the futuristic automated economy would respond to a false alarm of a civil emergency quite differently from the current economy, which is still mostly under human control. In particular, the futuristic economy may take much longer to recover because of the near autonomy of the different automated subsystems.

In the actual case, humans facilitated the fast recovery after the false alarm because, even though each person specializes in what they would have been doing that day, they could, in large part, recognize the situation by observing the people around them going about their own restoration of normal life. The specialization among humans is fundamentally different from the specialization of machines in that, despite their different duties or goals, humans share a common human identity. They will adjust their expectations and demands in part to accommodate the difficulties they can see others going through to meet those demands. Also, people will volunteer their services in areas where they normally do not work. In contrast, the various systems within an automated economy are less equipped to cooperate with unrelated systems, whether in perceiving the need to cooperate or in having the capacity to help.

Automated commuter cars are unlikely to fill an unexpected need to complete a large delivery, while many people own personal SUVs or trucks that can make such a delivery when it is needed and theirs is the most convenient option.

More importantly, the lesson I see from this event is that the human economy is very resilient to a major alarm like this. In particular, natural human behavior has a dampening effect on the reaction to what was a major disruption to everyone’s day and to them personally. Once the alarm cleared, people responded in a way that was, overall, cooperative. Their reactions did not make the situation worse. Most often, they responded appropriately to return to normality very quickly.

I am much more pessimistic that an economy dominated by automated systems could recover as quickly. The human population will not be aware of what happens behind the scenes in automated systems. They will not be aware of the strains their demands place on these systems, or of how their demands contradict one another in terms of what has the highest priority.

Meanwhile, the process of recovering from the event will differ wildly from one system to the next, leading to new demands by the population that may require the operations of other systems. Consider the earlier example: a restaurant district without any food creates demand for people to be transported to other districts while the transportation system is still recovering from the earlier surge.

Finally, the shared humanity that dampened the impact of the false alarm in a human-operated economy will likely exaggerate the impact in an automated economy. People will see in others their frustration at not getting their needs met. They will attempt to help out through the only option available to them: requesting services themselves. As the process drags on, they will begin to express their frustration in unison.

Eventually, this frustration with the inadequate response of the automated economy will escalate into some type of unrest. The broader dedomenocracy will then have to respond to this urgency with a new alarm and a new rule. This time the alarm will be real, and the rule will have greater impact. After that point, many things may happen as a chain reaction, but this is the turning point where the dedomenocracy would fail in comparison with the human government. The dedomenocracy will have replaced the false alarm with a real emergency.
