Over the four decades of my adult life, a recurring theme has run through my education and profession: that the world works on fundamental principles and atomic units. In college, I recall the confidence that we could understand everything from quantum mechanics if only we had the computing power to…
This is just one possible scenario of synergy between humans and automation technology. We need technology for its mastery of the time domain. Technology depends on us for survival because of our biological advantages in solving problems in time-volume or in the frequency domain.
The primary advantage of Western civilization is its celebration of the concept of philosophy set down by Plato. This concept is that we expose our internally acquired wisdom to peers who have developed different wisdom while being equally able to participate in society. The love of wisdom in philosophy refers to a form of love that puts that wisdom on display for others to absorb, and this inherently creates a conflict between different models of wisdom. Machine learning automates the acquisition of privately held wisdom. The next challenge is artificial philosophy: exposing that internal wisdom for public dialog.
Dedomenology has a saturation aspect, requiring very long periods of work stretching over many days regardless of standard working hours such as the 40-hour workweek. When something needs to be tackled, it will occupy the dedomenologist continuously until some level of completion is reached. There will be an endless stream of assignments requiring someone to dive into the depths of the data ocean and stay there for a long time until the assignment is over.
Consider the case of a big-data store that was able to store all of the individual answers keyed with sequence numbers, time stamps, and specific individual identification. I don't think anyone would voluntarily discard that data in exchange for anonymized data consisting of just a few categories. The value of reducing data into categories is for people who don't have access to big data. Those people are consumers who wish to have an external assessment of what kind of person they are, giving them a shortcut for introducing themselves, similar to the 1960s practice of introducing oneself by one's zodiac sign.
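The trade-off above can be sketched in a few lines. This is a minimal illustration only: the record fields, the category rule, and the threshold are all invented for the example, not any real system's schema.

```python
# Hypothetical sketch: a full keyed big-data store vs. its anonymized
# reduction into a few categories. All field names are invented.
from collections import Counter

# Full store: every individual answer keyed with a sequence number,
# a timestamp, and an individual identifier -- nothing is discarded.
full_store = [
    {"seq": 1, "ts": "2024-01-01T09:00", "person": "id-001", "answer": 4},
    {"seq": 2, "ts": "2024-01-01T09:05", "person": "id-002", "answer": 2},
    {"seq": 3, "ts": "2024-01-02T10:15", "person": "id-001", "answer": 5},
]

def to_category(record):
    """Reduce a rich record to a coarse label (hypothetical rule),
    discarding the sequence key, timestamp, and identity."""
    return "high" if record["answer"] >= 4 else "low"

# Anonymized reduction: only a few category counts survive.
categories = Counter(to_category(r) for r in full_store)
print(categories)  # Counter({'high': 2, 'low': 1})
```

The point of the sketch is that the reduction is one-way: the categories can always be recomputed from the full store, but the keys, timing, and identities can never be recovered from the categories.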
These cases are often described as open secrets. Many people in the community are aware of the information about individual cases and about the pattern of behavior, but there is some understanding that past events have been resolved on acceptable terms and that ongoing behavior is restrained by certain conditions. The oxymoron of an open secret can be resolved by defining the open part as the observed data and the secret part as the restraints on how that data may be used in future decision making.
We should learn from recent experience with large-data technologies the lesson that decision making can benefit from streaming data in addition to (and often instead of) the publication science of one-time experiments. It is clear now that policy making needs access to a continuous stream of fresh data about old ideas, especially as that data accumulates over time. With access to the technologies to do this work, it is unacceptable to base policies on the failed approaches of the past that rely solely on published studies.
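The contrast between a one-time study and an accumulating stream can be sketched with a running estimate that improves with every new observation. This is a toy illustration under assumed data; the class name and the sample values are hypothetical.

```python
# Hypothetical sketch: a policy-relevant metric maintained as a running
# estimate refreshed by each fresh observation, rather than frozen at the
# moment a one-time study is published.

class StreamingMean:
    """Incrementally updated mean; the estimate keeps improving as
    data accumulates over time."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        # Incremental (Welford-style) update: no need to re-run the study.
        self.mean += (x - self.mean) / self.n
        return self.mean

estimate = StreamingMean()
for observation in [3.0, 5.0, 4.0, 6.0]:  # fresh data arriving over time
    estimate.update(observation)
print(estimate.n, estimate.mean)  # 4 4.5
```

A published study corresponds to stopping after the first batch; the streaming version never stops, so the estimate reflects the most recent accumulation of evidence.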