Assessing the "Algorithmic Origin of Life"
Recently, we told you how astrobiologist Paul Davies and NASA postdoc Sara Walker propose looking at life as "software" -- not unlike the way intelligent-design theorists do. Their new paper represents a major course-correction in origin-of-life studies. Not only have traditional "chemical evolution" approaches proved inadequate, they argue; a whole new way of thinking is required -- a new paradigm that focuses on life as a system of information control. The idea is radical and abstract, yet it remains materialistic enough that NASA's Astrobiology Magazine gave it good press.
Let's have another look at the model proposed by Davies and Walker and the context that led them to propose it. (The 10-page paper is available free on arXiv and makes a good read.) Before arguing for their proposal, they put the origin-of-life question in historical context, with a cast of major characters: information theorists like Shannon, Turing, and von Neumann; life theorists like Schrödinger and Maynard Smith; and experimentalists like Joyce and Wächtershäuser. Charles Darwin, naturally, gets the opening fanfare, but his part is soon diminished:
A landmark event in the history of science was the publication in 1859 by Charles Darwin of his book On the Origin of Species, affording for the first time in history a scientific framework unifying all life on Earth under a common descriptive paradigm. However, while Darwin's theory gives a convincing explanation of how life has evolved incrementally over billions of years from simple microbes to the richness of the biosphere we observe today, Darwin pointedly left out an account of how life first emerged, "One might as well speculate about the origin of matter," he quipped. A century and a half later, scientists still remain largely in the dark about life's origins. It would not be an exaggeration to say that the origin of life is one of the greatest unanswered questions in science. (Emphasis added.)

Like many in the origin-of-life field, Davies and Walker basically agree that once a system can undergo Darwinian evolution, the rest of the tree of life comes along naturally. They do, however, find major weaknesses in that assumption:
Often the issue of defining life is sidestepped by assuming that if one can build a simple chemical system capable of Darwinian evolution then the rest will follow suit and the problem of life's origin will de facto be solved. Although few are willing to accept a simple self-replicating molecule as living, the assumption is that after a sufficiently long period of Darwinian evolution this humble replicator will eventually be transformed into an entity complex enough that it is indisputably living. Darwinian evolution applies to everything from simple software programs, molecular replicators, and memes, to systems as complex as multicellular life and even potentially the human brain -- therefore spanning a gamut of phenomena ranging from artificial systems, to simple chemistry, to highly complex biology. The power of the Darwinian paradigm is precisely its capacity to unify such diverse phenomena, particularly across the tree of life -- all that is required are the well-defined processes of replication with variation, and selection. However, this very generality is also the greatest weakness of the paradigm as applied to the origin of life: it provides no means for distinguishing complex from simple, let alone life from non-life. This may explain Darwin's own reluctance to speculate on the subject.

We'll try to resist the urge to point out that software programs and artificial systems are intelligently designed. (Sorry, looks like we failed to resist that particular temptation.) Davies and Walker also find the assumption (that Darwinian evolution can handle the tree of life) flawed because the first replicator might have been analog instead of digital, and thus incapable of surviving without an information storage system. That brings up an interesting point: analog vs. digital in the living world. What do they mean? They label the genetics-first approaches (such as "the popular RNA World") digital, since these focus on discrete coding in the replicator.
The "metabolism-first" approaches, by contrast, are primarily analog, because they work with continuous concentrations of reactants.
Davies and Walker explain that life cannot be all analog, nor can it be all digital. Analog life would not survive geological time, they argue, because it lacks a method to encode adaptations to change. Digital-only life fails from the starting gate, though, because it cannot deal with biological function. That's because the information in life as we know it is stored in the system as a whole, not just in the genetic macromolecules: function involves all the networks of analog molecules, their feedback loops, and their ability to modify DNA itself. The hybrid "RNA World" scenario with its half-genetic, half-metabolic ribozymes is flawed because "there would be no way to physically decouple information and control from the hardware it operates on, resulting in unreliable information protocols due to noisy information channels." Indeed, they conclude "that mono-molecular systems are divided from known life by a logical and organizational chasm that cannot be crossed by mere complexification of passive hardware." For these reasons, Davies and Walker believe life had to be "'bimolecular' from the start," with analog and digital components separated, working together as a system.
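Their point about noisy information channels has a simple engineering analogue that can be sketched in a few lines (this is our own illustration, not a model from the paper; the noise level and message length are arbitrary): a digital code can be re-thresholded at every copy, so small copying errors are erased, while an analog value accumulates drift generation after generation.

```python
import random

# Our illustration (not from the paper): digital copying resists noise
# because re-discretizing after each copy erases small errors; analog
# copying lets the same errors accumulate as drift.
random.seed(0)
NOISE = 0.05          # per-copy noise level (arbitrary, assumed small)
GENERATIONS = 100     # number of successive copies

message = [random.randint(0, 1) for _ in range(16)]
analog = [float(b) for b in message]
digital = [float(b) for b in message]

for _ in range(GENERATIONS):
    # Analog copy: the noisy value itself is passed to the next generation.
    analog = [x + random.gauss(0, NOISE) for x in analog]
    # Digital copy: thresholding after each copy restores the 0/1 signal.
    digital = [1.0 if x + random.gauss(0, NOISE) > 0.5 else 0.0
               for x in digital]

analog_drift = sum(abs(a - b) for a, b in zip(analog, message)) / len(message)
digital_drift = sum(abs(d - b) for d, b in zip(digital, message)) / len(message)
print(analog_drift, digital_drift)  # drift accumulates only in the analog copy
```

With these assumed parameters, flipping a digital bit would require a ten-sigma noise excursion, so the digital message survives essentially unchanged while the analog one wanders -- the "unreliable information protocols due to noisy information channels" the authors worry about.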
Another important distinction they make is between trivial replicators and non-trivial replicators. Trivial replicators can be as simple as crystals. A highly compressible algorithm can code for such things: construct A; repeat n times. Non-trivial replicators, by contrast, are incompressible. They require "an algorithm, or instruction set, of complexity comparable to the system it describes (or creates)." (Schrödinger gets a hat tip for predicting in his 1944 classic What Is Life? that the genetic material must be some sort of "aperiodic crystal," even before the structure of DNA was known.)
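The compressibility distinction can be made concrete. Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a crude upper bound on description length, and that is enough to see the gap between a periodic "crystal" and an aperiodic sequence. A sketch of our own (the sequences and the choice of compressor are arbitrary):

```python
import random
import zlib

# Compressed size as a crude stand-in for algorithmic complexity
# (Kolmogorov complexity itself is uncomputable).
def description_length(s: str) -> int:
    return len(zlib.compress(s.encode()))

n = 1000
periodic = "ACGT" * (n // 4)                      # "construct A; repeat n times"
random.seed(0)
aperiodic = "".join(random.choices("ACGT", k=n))  # no short generating rule

# The periodic "crystal" compresses to a few dozen bytes; the aperiodic
# sequence needs a description comparable in size to itself.
print(description_length(periodic), description_length(aperiodic))
```

The periodic string collapses to roughly the size of its generating rule, while the aperiodic string resists compression -- Schrödinger's "aperiodic crystal" in miniature.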
This is still the build-up to Davies and Walker's proposal. They discuss in some detail the attempts by Turing and von Neumann to model systems that could output any algorithm, including that of the system itself. The Turing machine and von Neumann's self-replicating automata are classic attempts to model systems that can mimic what life does so effortlessly. While these attempts clearly brought key insights about the nature of life "by directing attention to the logical structure of information processing and control, and information flow in living systems," their efforts to model life fall short:
The UC [universal constructor] forms the foundation of von Neumann's theory on self-replicating automata. However, a UC is a mindless robot, and must be told very specifically exactly what to do in order to build the correct object(s). It must therefore be programmed to construct specific things, and if it is to replicate then it must also be provided with a blueprint of itself. However, as von Neumann recognized, implicit in this seemingly innocuous statement is a deep conceptual difficulty concerning the well-known paradoxes of self-reference. To avoid an infinite regress, in which the blueprint of a self-replicating UC contains the blueprint which contains the blueprint ... ad infinitum, von Neumann proposed that in the biological case the blueprint must play a dual role: it should contain instructions -- an algorithm -- to make a certain kind of machine (e.g. the UC) but should also be blindly copied as a mere physical structure, without reference to the instructions its [sic] contains, and thus reference itself only indirectly. This dual hardware/software role mirrors precisely that played by DNA, where genes act both passively as physical structures to be copied, and are actively read-out as a source of algorithmic instructions. To implement this dualistic role, von Neumann appended a "supervisory unit" to his automata whose task is to supervise which of these two roles the blueprint must play at a given time, thereby ensuring that the blueprint is treated both as an algorithm to be read-out and as a structure to be copied, depending on the context. In this manner, the organization of a von Neumann automaton ensures that instructions remain logically differentiated from their physical representation. To be functional over successive generations, a complete self-replicating automaton must therefore consist of three components: a UC, an (instructional) blueprint, and a supervisory unit.
(Italics in original.)

The authors draw fascinating comparisons between these theoretical components and the actual molecules in life. For instance, the ribosome acts as the universal constructor, the genome functions as the blueprint, and DNA polymerases (which will copy anything regardless of information content) act as the supervisory unit. Remarkable as von Neumann's theoretical model was, considering that he formulated it before the major discoveries of molecular biology, life's way of controlling information flow in a self-replicating system is even more astonishing:
In spite of the striking similarities between a UC and modern life, there are some important differences. DNA does not contain a blueprint for building the entire cell, but instead contains only small parts of a much larger biological algorithm, that may be roughly described as the epigenetic components of an organism. The algorithm for building an organism is therefore not only stored in a linear digital sequence (tape), but also in the current state of the entire system (e.g. epigenetic factors such as the level of gene expression, post-translational modifications of proteins, methylation patterns, chromatin architecture, nucleosome distribution, cellular phenotype, and environmental context). The algorithm itself is therefore highly delocalized, distributed inextricably throughout the very physical system whose dynamics it encodes. Moreover, although the ribosome provides a rough approximation for a universal constructor..., universal construction in living cells requires a host of distributed mechanisms for reproducing an entire cell. Clearly in an organism the algorithm cannot be decomposed and stored in simple sequential digital form to be read out by an appropriate machine in the manner envisioned by Turing and von Neumann for their devices.

They mention the ENCODE project as "a glimpse of the complexity involved in mapping the function of the human genome." The distributed nature of biological information leads Davies and Walker to introduce a principle that happens to be key to intelligent-design theory: the idea that information is a fundamental, non-physical property that can be expressed in disparate contexts without altering the message:
The biologically relevant information stored in DNA therefore has very little to do with its specific chemical nature (beyond the fact that it is a digital linear polymer). The genetic material could just as easily be a different variety of nucleic acid (or a different molecule altogether), as recently experimentally confirmed. It is the functionality of the expressed RNAs and proteins that is biologically important. Functionality, however, is not a local property of a molecule. It is defined only relationally, in a global context, which includes networks of relations among many sub-elements....

It is precisely these qualities of biological information that motivated Davies and Walker to look above and beyond the molecules and components for something deeper -- an abstract algorithm that organizes and controls all the information flow and functionality from the top down. They write:
One is therefore left to conclude that the most important features of biological information (i.e. functionality) are decisively nonlocal. Biologically functional information is therefore not an additional quality, like electric charge, painted onto matter and passed on like a token. It is of course instantiated in biochemical structures, but one cannot point to any specific structure in isolation and say "Aha! Biological information is here!"
As we have presented it here, the key distinction between the origin of life and other "emergent" transitions is the onset of distributed information control, enabling context-dependent causation, where an abstract and non-physical systemic entity (algorithmic information) effectively becomes a causal agent capable of manipulating its material substrate.

But how is one going to get an algorithm (a non-trivial one at that) from a blind material process? This is where their proposal comes in. Surprisingly, though, it receives only scant attention in the context of a deeper discussion about information and causation:
We have argued that living and nonliving matter differ fundamentally in the way information is organized and flows through the system: biological systems are distinctive because information manipulates the matter it is instantiated in. This leads to a very different, context-dependent, causal narrative -- with causal influences running both up and down the hierarchy of structure of biological systems (i.e. from state to dynamical rules and dynamical rules to the state). In modern life, genes may be up- or down-regulated by physical and chemical signals from the environment. For example, mechanical stresses on a cell may affect gene expression. Mechanotransduction, electrical transduction and chemical signal transduction -- all well-studied biological processes -- constitute examples of what philosophers term "top-down causation", where the system as a whole exerts causal control over a subsystem (e.g. a gene) via a set of time-dependent constraints. The onset of top-down information flow, perhaps in a manner akin to a phase transition, may serve as a more precise definition of life's origin than the "separation of powers" discussed in the previous section. The origin of life may thus be identified when information gains top-down causal efficacy over the matter that instantiates it. (Italics in original.)

Did you catch it? It was in that passing phrase, "perhaps in a manner akin to a phase transition." They never elaborate on that idea. It's merely a suggestion, an analogy, that might give someone else a new way to find a workable model. They briefly mention a "toy model, one possible candidate" in which it was theoretically possible to imagine an "algorithmic takeover," with bottom-up causation switching into top-down causation. It's clear from the paper that they were winging it with incomplete and problematic approaches, mere suggestions. All they could claim was an idea that might point others in a new direction:
In this framework, the origin of life would mark the first appearance of this reversal in causal structure, and as such is a unique transition in the physical realm (marking the transition from trivial to nontrivial information processing as discussed earlier). The utility of this approach is that it provides a clear definition of what one should look for: a transition from bottom-up to top-down causation and information flow.... The aforementioned simple model, while instructive, suffers from the fact that it cannot capture how algorithmic information alters the update rules, and thus the future state of the system.... How this transition occurs remains an open question.

This remarkable paper is important for two reasons: (1) It undermines 50 years of dogma on the origin of life by pointing out the general failure to account for the top-down, holistic information flow in living systems. (2) It emphasizes abstractions like algorithms, information flow, and information control that defy materialistic capabilities. If all Davies and Walker can say in conclusion is that somehow, in some way they cannot describe realistically or in detail, causation flipped from bottom-up to top-down, leading to the most elegant and sophisticated systems known to man, you know evolutionists are in big trouble.
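To see what an "algorithmic takeover" might even mean, here is a toy of our own devising (not the model in the paper): a one-dimensional cellular automaton that first runs bottom-up, with a fixed update rule, and then top-down, with the aggregate state rewriting entries of the rule table at each step -- "laws and states co-evolve," as the paper's Table 1 puts it.

```python
import random

# A toy of our own (not the authors' model): bottom-up vs. top-down causation
# in a 1-D binary cellular automaton with periodic boundary conditions.

def step(state, rule):
    n = len(state)
    return [rule[(state[(i - 1) % n], state[i], state[(i + 1) % n])]
            for i in range(n)]

def evolve(state, rule, steps, top_down=False):
    rule = dict(rule)  # private copy of the rule table
    for _ in range(steps):
        state = step(state, rule)
        if top_down:
            # "Algorithmic takeover": the aggregate state (its density of 1s)
            # reaches down and rewrites two entries of the dynamical rule.
            dense = sum(state) >= len(state) / 2
            rule[(1, 1, 1)] = 0 if dense else 1
            rule[(0, 0, 0)] = 1 if dense else 0
    return state, rule

# Elementary rule 110, written out as a lookup table.
rule110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
random.seed(1)
state0 = [random.randint(0, 1) for _ in range(64)]

s1, r1 = evolve(state0, rule110, 20)                 # bottom-up: rule fixed
s2, r2 = evolve(state0, rule110, 20, top_down=True)  # top-down: rule drifts
print(r1 == rule110, r2 == rule110)  # -> True False
```

In the bottom-up run only the state changes; in the top-down run the trajectory of states also determines which rule governs the next step. That reversal of causal structure is the thing the authors can name but, by their own admission, cannot yet explain.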
The paper is important in a third way as well: it re-emphasizes how stunningly well-designed life is. The authors are effusive on this point, saying that life appears to the beholder to work as if by magic. Their observation poses a problem for materialism but not for intelligent design, which comes equipped with a toolkit to explain top-down systems.
A useful summary of life's superb informational design can be found in the only chart in the paper, "Table 1: Hallmarks of Life," which offers some points to ponder. We end with that:
Information as a causal agency
Analog and digital information processing
Laws and states co-evolve
Logical structure of a universal constructor
Dual hardware and software roles of genetic material
Physical separation of instructions (algorithms) from the mechanism that implements them
Image: Comets Kick up Dust in Helix Nebula, NASA/JPL-Caltech/Univ. of Ariz.