#6 of Our Top Stories of 2015: Peer-Reviewed Paper Reveals Darwin's Unavoidable Catch-22 Problem
Editor's note: Welcome to the traditional recounting of our Top 10 evolution-related stories of the past year, as compiled in a rigorous, peer-reviewed, strictly scientific manner by Evolution News staff. Sit back and enjoy the most amusing, the most enlightening, and the most important news to come our way in 2015. The countdown will culminate on New Year's Day.
Happy New Year from your friends in the intelligent design community! If you haven't finalized your year-end contribution to support the work of the Center for Science & Culture, including Evolution News, please do so now. Any amount helps. We need you and greatly appreciate your generosity!
A new peer-reviewed paper in the journal Complexity presents a computational model of evolution which shows that evolving new biological structures may be deterred by an unavoidable catch-22 problem.
The article by physicists David Snoke, Jeffrey Cox, and Donald Petcher begins by observing that in order to produce a new system, evolution first needs to try lots of new things: it must generate many, many variations upon which natural selection can act in order to "find" something useful to retain. But that comes with a potentially fatal cost. In the scenario proposed by Darwinian theory, you'd end up with an organism full of suboptimal or useless parts. As the authors put it:
[T]here is an additional energy cost to increased complexity. ... In real systems, building new systems is costly, and the cost of carrying along useless or redundant systems is one of the arguments for the efficiency of existing living systems, as excess baggage is dropped as too costly.

The problem can be circumvented, but only by providing something like an incentive system, a reward for trying out new variations. But then the difficulty arises: If you don't make the "reward" high enough, you never evolve anything new. On the other hand, if you set the reward too high, too many new things are tried, many of which don't do anything useful, and the system tends to accumulate deleterious junk. They explain:
(David W. Snoke, Jeffrey Cox, and Donald Petcher, "Suboptimality and Complexity in Evolution," Complexity, DOI: 10.1002/cplx.21566 (July 1, 2014).)
There are two competing processes. On one hand, the energy cost of carrying vestigial systems makes them weakly deleterious, not neutral, which tends to reduce their number. Conversely, without stabs in the dark, that is, new systems which might eventually obtain new function but as yet have none, no novelty can ever occur, and no increase of complexity. Thus, if the energy cost of vestigial systems is too high, no evolution will occur.

Of course, many evolutionary biologists prefer to say it's easy to evolve new structures but lots of junk accumulates. The authors of the paper observe that this sort of reasoning "has historically led evolutionary theorists to expect, that living systems carry a significant fraction of vestigial, or nonfunctional, elements, as well as quasivestigial elements which function with much less than optimal efficiency."
To test this evolutionary expectation, they constructed a model that rewards the evolution of a new function but exacts a price for evolving systems that require many parts before providing an advantage. When the model was tuned to reward the evolution of new features, it did evolve new features. Some of those features were useful, but the vast majority were not. Then, before the simulation finished, the population crashed: the organisms had accumulated so much genetic garbage -- new features that were in fact no more than useless freeloaders -- that fitness dropped precipitously.
Thus, the authors observe another problem: "nature does not reward complexity per se, it rewards functions that enhance survival and reproduction" and "there may be many paths to the same function, some simpler and some more complex, and all will be rewarded roughly the same whether or not the function is done elegantly or not; only the overall energy cost will deter some versions of obtaining the function." Their model tries to accommodate these facts by incorporating "(1) an energy cost for increasing number of elements produced and (2) multiple paths to beneficial functions."
Briefly stated, there's a ratio of reward to the cost of trying out something new:
- If the ratio is low, then it's costly to try new things, and they will be eliminated right away. New features don't evolve.
- If the ratio is high, then new features will evolve quite easily. As noted, at first you get runaway success -- new complexity is generated, and strongly harmful or costly vestigial traits get eliminated. But trying lots of new things means you cannot weed out slightly deleterious traits. Over time, unhelpful traits accumulate. Eventually these mutations pile up to the point that the population reaches a crisis and crashes. The junk has become an unbearable burden, and the organisms go extinct.
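The two regimes described above can be caricatured in a few lines of code. To be clear, this is our own illustrative sketch, not the model from the Complexity paper: the rates, the assumption that purging strength scales inversely with the reward/cost ratio, and the cap on the number of useful functions are all assumptions made for illustration.

```python
def simulate(ratio, steps=500, try_rate=0.2, hit_rate=0.05, max_useful=5.0):
    """Mean-field toy model of the reward/cost trade-off.

    `func` and `vest` are expected counts of functional and vestigial
    parts; `ratio` is the reward per functional part divided by the
    energy cost per part. All rates are illustrative assumptions.
    """
    cost, reward = 1.0, ratio * 1.0
    func, vest = 1.0, 0.0
    fit = reward * func - cost * (func + vest)
    peak = fit
    for _ in range(steps):
        vest += try_rate                           # stabs in the dark
        # Selection purges junk; purging strength assumed ~ cost/reward,
        # so a high reward/cost ratio means junk is barely weeded out.
        vest -= min(vest, (cost / reward) * vest)
        if func < max_useful:                      # finitely many useful functions
            converted = hit_rate * vest            # a rare stab finds a function
            func = min(max_useful, func + converted)
            vest -= converted
        fit = reward * func - cost * (func + vest)
        peak = max(peak, fit)
    return func, vest, fit, peak

f_lo, v_lo, fit_lo, peak_lo = simulate(ratio=1.0)    # trying is costly
f_hi, v_hi, fit_hi, peak_hi = simulate(ratio=100.0)  # trying is cheap
```

At a low ratio, every nascent part is purged before it can find a function, so nothing new ever evolves. At a high ratio, the useful functions are all found quickly, but vestigial parts pile up far faster than selection removes them, and fitness declines from its peak -- a tame, deterministic echo of the population crash described above.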
The authors think real biological organisms are closer to position (1). Indeed, research in systems biology increasingly finds that biological systems contain very little junk, because there are efficient ways to get rid of nonfunctional features that exact a cost. But if organisms are in position (1), that would suggest new complex features cannot be built, because it's very difficult to try new things.
Again, an evolutionary biologist might reply that our genomes are in fact full of junk, showing that we have lots of "failed tries" and a few things that worked. As I said, though, the trend in research goes against this idea that our genomes are full of junk.
But the Complexity paper argues that its model applies at other levels of scale as well. If evolution proceeded by stabs in the dark at the level of organs, for example, perhaps we should have 15 hearts that don't work for every one that does. Yet everyone agrees that at macroscales our bodies are not full of junk. The trend at both the macro and micro scales is that there's almost no junk, suggesting Darwinian evolution is not the cause that produced what we observe.
Their model, then, is not meant to describe the creation of new DNA sequences per se; rather, it models the generation of larger features (those encoded by DNA sequences) such as organs, cellular interactions, proteins, or molecular machines.
For this process to work, it must be possible to make many stabs in the dark, assuming that the number of systems which lead to useful functions is a small fraction of the total of all possible systems which could be constructed. This assumption is eminently reasonable given the fact that most randomly generated strings of DNA do not lead to proteins which have a folded and compact form, and presumably also do not lead to proteins with useful function. Thus, at any point in time there must be some number of stabs in the dark going on, in the form of nonfunctional systems which might become functional with a few changes. We can call these at-present-useless systems "vestigial"...

The problem is that we don't observe systems that are full of dead weight, suggesting organisms are more like position (1), where it's very difficult for new features to evolve. As they conclude: "In existing living systems, the fitness collapse seen in this model appears to be prevented by mechanisms which quickly eliminate nonfunctional elements, while leaving functional elements untouched. This type of mechanism would seem to prevent 'stabs in the dark' of any great magnitude, and thus prevent ongoing increase of complexity."
When it comes to generating viable living systems, it's pretty much damned if you do, damned if you don't. The bottom line seems to be that whatever cause generated the biological features we observe, unguided Darwinian evolution is not it.