Technology vs Biology

Given the accelerating advances of technology and machine intelligence, it is reasonable to worry about technology eventually out-competing humans, leaving us with no economic contributions that can’t be performed better by machines. In earlier posts, I argued against this, but not with the optimism that others propose. The work at which biological units (such as ourselves) will always be superior to machines is work that is strenuous and dangerous. Humans are better at recovering from unfortunate circumstances that require rescue or recovery. There will always be work for non-machines, but that work will not be pleasant and the career will not be long.

Others have more optimistic views of a future where most of today’s jobs are automated. They envision that the automation would create new careers that we cannot imagine now, just as new (and generally more rewarding) careers replaced previously automated jobs. I don’t share that optimism, because machines and their intelligence will excel most in exactly the type of jobs we would covet; machines will fill those positions faster than we will.

In spite of the above discussion, I suspect I may be missing something very fundamental. In particular, human technology and biology may engage with the real physical world in fundamentally different ways. The two may be dealing with different realities, with biology able to negotiate aspects of nature where technology is inherently unable to compete.

In my last post, I described time as having multiple dimensions or facets, where just one of those is the time we use for our science and technology. In short, time is very complex, but it has a particular component that is readily analyzed by mathematics and that is obedient to causality and induction. Science and technology rely heavily on a model that simplifies time to a single line, with a fixed starting point and only one arrow.

Scientifically (as well as intuitively), the arrow of time points to the future and the fixed point is some origin (ultimately a cosmic big bang, for instance). In earlier posts, I proposed a different time model that sets the fixed point a very short time in the future, with the arrow pointing to the past. This alternative view is how data science treats time: the arrow of time points toward where additional data will come from to inform what is contemporary with the current moment. Either way, time is a one-dimensional line with a fixed origin and an arrow at the other end.

Time may have more facets than just the analytic linear time. I use the word facet to distinguish it from dimension: the volume of time is probably very unlike the volume of space. Certainly, I have no visualization of what it would look like, and thus no hope of describing it. I only suspect that time is more than the analytic linear time that we exploit in our science.

Biology may have a superior approach that exploits the volume of time, while technology is constrained to operate on just the analytic linear time. As admitted above, our exclusive focus on a linear analytic time produces technology that out-competes humans (and other life forms) in nearly all physical tasks and, increasingly, in many mental ones. I expect that machine intelligence will eventually do to mentally intensive careers what earlier automation did to manual labor. This technology may eventually use biological elements such as customized proteins or DNA to create organic implementations of the technologies currently implemented in silicon or metals. Even if implemented in organic forms, the technology will remain reliant on the analytic one-dimensional time that makes our science and technology possible. The organic-based technology will still be artificial, and that artificiality will rely on the simplification of time to the linear component that behaves well mathematically.

Technology based on organic or biological models (proteins, DNA, etc.) will still be artificial technology. For this discussion, I assert that this artificial technology will remain distinguishable from living biology. In particular, artificial technology will never have a life comparable to a living organism, even the most primitive. Let me exclude for now any discussion about our experiences of living, experiences such as consciousness, self-awareness, or soul. Even in the basics of engaging with the real world, living things operate very differently from technology, even when that technology supposedly mimics biological models.

Many technologies are built on biological models. For example, neural networks mimic what we see when we study the brain: a vast, rich network of interconnected neurons. While biological neural networks have far more nodes and interconnections than our technological implementations, our implementations may have better algorithms for faster and more reliable convergence of the weights between the nodes. Each year brings technology improvements allowing for more nodes and more interconnections, so we can reasonably expect the technology of neural networks to close the gap with what biological neural networks can perform.
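To make concrete what I mean by a converging set of weights, here is a minimal sketch of a tiny artificial neural network (in Python with numpy; the architecture, training data, and learning rate are illustrative choices of mine, not a model of any particular brain circuit):

```python
import numpy as np

# A tiny network of interconnected nodes whose weights converge
# through repeated adjustment. All choices here are illustrative.
rng = np.random.default_rng(0)

# XOR: a classic task that requires the hidden layer of nodes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass through the interconnected nodes.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)

    # Gradient descent: the "convergence of a set of weights".
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Even this toy version makes the engineering character plain: convergence is driven by a deliberate, stepwise algorithm running forward along a single sequence of steps.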

Personally, I’m not convinced that the neurons in the brain operate the way technological neural networks do. The two share the feature of many nodes with many interconnections, and both converge rapidly on capabilities that can replace humans in mental tasks (for example, self-navigating vehicles). But I suspect the living brain, like the living being as a whole, is able to engage with the natural world in ways that are intrinsically out of reach of technology.

I will set aside for now the topic of neural networks. A similar comparison between biology and technology exists between human vision and imaging technologies (cameras, signal processing, image recognition, etc.). Our surveillance technologies (including the latest developments that permit real-time image recognition) appear to achieve the same function as the human visual system. We equate the eyes with cameras, the retinal neurons with signal processing, and the visual cortex with image processing and image recognition.

In recent years, we have improved video technology to allow for real-world tasks that previously required human vision. Many of the innovations in the technology exploited new discoveries about, or understandings of, how human vision works.

We studied human vision and then set out to perform the same tasks with technology. As we did this, we discovered that our implementations were superior to the biological implementation. In particular, the biological implementation has many glaring flaws that either could have been avoided with a better design or could have been more optimally mitigated with better algorithms. While studying the biological visual system improves our technology, we confront the realization of how clumsily biology implements vision.

We are confident that our technology is in many ways superior to the biological system, at least in terms of the implementations of the component parts and functions. Meanwhile, we recognize that the biological visual system still out-performs the technology as a whole. Optimists will conclude that it is just a matter of time before technology matches the performance of human vision.

I tend to agree with those optimists, and yet I remain skeptical.

The premise of biological science is that everything we see in biology is a consequence of evolution: natural or reproductive selection acting on generally randomly occurring traits (based on random mutations in DNA). In short, we assume that the process that produced biology, and vision in particular, was completely without intelligence. Thus we are comfortable concluding that of course the result would be inferior to what we can produce with deliberate engineering designs.

My experience working with data causes me to look at the problem differently. In addition, I don’t have any allegiance to the concept of random evolution, or even of intelligent evolution. From a data perspective, I consider the evidence to have a finite horizon: there is a limit to how far back in history we can know about. I don’t know what happened millions of years ago, and I don’t see any requirement to assume I know.

What I see in the visual system is a system with some very distinct features that often seem to be mistakes, or clumsy work-arounds to mistakes. The easy explanation is that the designer was profoundly stupid, consistent with the concepts of evolution.

An alternative explanation may be that the designer was so intelligent that he came up with solutions to problems we cannot see. I cannot dismiss this possibility. The human visual system has to solve far more problems than video technology does. For example, the eye, along with its associated neural constructs in the brain, must develop gradually and concurrently through cell division from the fertilized zygote. The instructions for the visual system must be encoded in DNA that can be passed to the next generation in a way that is robust to the potentially conflicting information from the combination of maternal and paternal DNA.

More importantly, the human visual system has to promote survival of the individual to reproductive maturity, a process of at least a dozen years for humans. The visual system does promote such survival to maturity. I agree that this could be a consequence of haphazard design that just happened to provide what is essential for survival. However, I think there is another possibility: the visual system was expertly designed to solve natural-world problems that we don’t (or can’t) understand.

I’m thinking of how the visual system fills in missing information: colors in the peripheral field, the blind spot that doesn’t appear as a black dot in our vision, the momentary blindness (due to excessive blurriness) when the eyes move, and so on. Our existing technologies offer superior ways to adapt to these anomalies.

Our existing technologies do not face the challenge of survival in the real world. Like the visual-field blind spots above, which we can’t normally recognize without special tests or instrumentation, our technologies have a big blind spot: their survival depends on our survival. Our technology is not natural; it required us to be a survivable species in order to exist in the first place.

There may be more that is necessary for survival in the real world than our science can ever learn. Some aspects of the real world may occur too infrequently for science to discover, and yet frequently enough to assure the extinction of any insufficiently prepared being. Our technologies do not have to solve that problem, because our survival assures we’ll be around to recreate or service the technology.

I’m also thinking we may not be aware of some of the challenges of merely continuing to survive. The biological system may be solving problems we are not aware exist. If we assume that an intelligence at least comparable to human intelligence designed the biological visual system, then we may be led to conclude that what appear to be poor design choices may in fact be competent choices for problems we haven’t yet considered. In contrast, the evolution perspective has little motivation to consider this possibility, because it assumes the design was nothing more than accidentally sufficient for survival. In both cases, I return to the observation I outlined in my last post, where I considered a more voluminous concept of time, of which our science considers only the one-dimensional timeline that is most mathematical.

A mathematically analytic, single-dimensional, forward-pointing time is the foundation of human technology and science.

I was thinking about the analytic technique of converting a problem from the time domain to the frequency domain to make solving certain problems easier. This is a mathematical operation that makes certain calculations easier, but it is an approximation, because the frequency domain typically assumes that time extends infinitely in both directions and that an infinite number of frequencies are available. We are usually satisfied with the practical compromise of working with a limited time interval and ignoring frequencies with small contributions.
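A minimal sketch of that compromise (in Python with numpy; the sampling rate, test signal, and threshold are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# First approximation: observe over a finite window, even though the
# frequency domain formally assumes time extends forever.
fs = 1000                       # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)   # a one-second observation window
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.3 * np.sin(2 * np.pi * 120 * t)
          + 0.05 * rng.normal(size=t.size))  # small "real world" noise

# Convert the time-domain samples to the frequency domain.
spectrum = np.fft.rfft(signal)

# Second approximation: ignore frequencies with small contributions.
threshold = 0.1 * np.abs(spectrum).max()
compressed = np.where(np.abs(spectrum) > threshold, spectrum, 0)

# Convert back to the time domain and measure the cost of the compromise.
reconstructed = np.fft.irfft(compressed, n=len(signal))
print("frequencies kept:", np.count_nonzero(compressed), "of", len(spectrum))
print("max reconstruction error:", np.abs(signal - reconstructed).max())
```

The round trip is good enough for engineering, but it is not exact; the detail discarded on the way is precisely the kind of compromise I mean.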

When I first learned about converting to the frequency domain (in electrical engineering, for communications and control systems), I recall wondering whether the real world might actually be frequencies and time the illusion; because we learn about time first, we assume instead that the frequency components are the illusion, a computational convenience.

Our technology in general, and machine intelligence in particular, approaches problems from a time-domain perspective. I don’t dismiss the use of frequency-domain operations in the math, but we express the problem to be solved in terms of time, time constants, or time intervals instead of frequencies. Maybe the difference between human technology and biology (and all its consequences, such as psychology or sociology) is that human technology solves problems in the time domain while biological systems solve problems in the frequency domain.

Maybe the real problems of survival are in the frequency domain, and time is an approximation that we use by default due to the illusion of experiencing time stamps. We already know that it is useful and practical to convert from one domain to the other. However, just as a frequency-domain conversion of a time-domain problem is an approximation, a time-domain conversion of a frequency-domain problem is also an approximation. That approximation may be fatal in the context of living beings.

This is just one possible scenario of a synergy between humans and automation technology. We need technology for its mastery of the time domain. Technology depends on us for survival, due to our biological advantage of solving problems in the volume of time, or in the frequency domain.
