
Specified Complexity — Like Déjà Vu All Over Again


Writing at the blogs Panda’s Thumb and The Skeptical Zone, Joe Felsenstein (University of Washington geneticist) and Tom English (Oklahoma City computer scientist) have lately published three posts criticizing two arguments for intelligent design: specified complexity and conservation of information. However, their objections are based on misrepresentations of these arguments.

I have previously attempted to answer these misrepresentations, but my responses have either been ignored or I have been accused of surreptitiously replacing claims proven false with new ones. Well, I will try again. I plan to address Felsenstein and English’s objections here in a series of posts.

Most recently, at The Skeptical Zone, Felsenstein has contributed a post responding to Evolution News contributor Jonathan McLatchie (“Jonathan McLatchie fails to define Specified Complexity”). Felsenstein claims that specified complexity was originally a fallacious argument, which was modified in 2005 to become a useless argument. What’s the truth?

Specified complexity is a probabilistic argument, but it requires that you use some other method to determine the probability of an event. For example, if one wishes to use specified complexity to argue against Darwinian evolution, the probability of the relevant outcomes under Darwinian evolution must be calculated. Felsenstein claims that specified complexity was originally defined to compute probability on the basis of chance alone, ignoring the effects of natural selection.

This is not the first time that Felsenstein has said this. He made the same claim back in 2013 at Panda’s Thumb (“Does CSI enable us to detect Design? A reply to William Dembski”). I responded here at Evolution News (“Information, Past and Present”), citing exact page numbers in the early work on specified complexity demonstrating that Felsenstein’s claims were false. Specified complexity has always required the calculation of probabilities taking into account whatever hypothesis is under consideration, including that of natural selection.

To the best of my knowledge, Felsenstein has never responded to that post. When he did criticize the work of the Evolutionary Informatics Lab, where I am a Senior Researcher, he dropped the discussion of specified complexity, focusing on issues around conservation of information. I had hoped that this indicated that he had accepted that I was correct, even if he did not say so explicitly. It is disappointing that he now chooses to repeat claims about the history of specified complexity that I have already demonstrated to be false.

However, if specified complexity requires another method to evaluate probability, what good is it? Felsenstein describes it as “a useless add-on quantity, computable only once one has already found some other way to show that the information cannot be put into the genome by natural selection.” Essentially, Felsenstein presents specified complexity as circular. It is true that specified complexity does not in any way help establish that the probability of complex life is low under natural selection. You must have another way of showing that, for example, Michael Behe’s work on irreducible complexity, Doug Axe’s work on proteins, or Stephen Meyer’s work on the Cambrian explosion.

However, all these methods only seek to show that various biological systems are improbable under Darwinian evolution. That is a logical claim distinct from arguing that Darwinian evolution is false. Specified complexity exists to bridge this gap, arguing that we are justified in inferring the falsity of Darwinian evolution on the basis of the low probabilities established by these other arguments.

Some critics of intelligent design regard this as an obvious point. If complex life were prohibitively improbable under Darwinian evolution (an idea these critics certainly reject), Darwinian evolution would clearly be false. They find it difficult to believe that specified complexity was developed to defend such an obvious point. However, other critics insist that low probabilities of complex life would not provide evidence that we should reject Darwinian evolution.

Ingo Brigandt, for example, argues in “Intelligent Design and the Nature of Science”:

Since complex events (involving many individual events) with small probabilities happen all the time in nature, a small probability suggests neither that the hypothesis postulating this probability is probably false, nor that some intelligent intervention must have taken place.

In “Can Probability Theory Be Used to Refute Evolution,” Jason Rosenhouse writes:

For suppose that somehow we did manage to carry out such a calculation and suppose we found that it really is terribly improbable that our eye evolved by natural means. What would we learn from such a result?

Almost nothing. Improbable things happen all the time, you see, and the fact that something is improbable does not mean that it cannot happen.

In “What’s Wrong with Creationist Probability?”, John Allen Paulos writes:

The actual result of the shufflings will always have a minuscule probability of occurring, but, unless you’re a creationist, that doesn’t mean the process of obtaining the result is at all dubious.
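Paulos’s shuffling example can be made concrete with a short calculation. Any one specific ordering of a standard 52-card deck has probability 1/52!, an astronomically small number, and yet every shuffle produces some ordering. A minimal Python sketch of the arithmetic:

```python
import math

# Number of equally likely orderings of a 52-card deck: 52!
orderings = math.factorial(52)

# Probability of any one specific ordering
probability = 1 / orderings

print(orderings)    # 52! ≈ 8.07e67
print(probability)  # ≈ 1.24e-68
```

The point at issue is whether such a tiny probability, by itself, licenses any conclusion about the process that produced the outcome.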

All of these critics argue that we cannot draw conclusions about Darwinian evolution from small probabilities. To be fair, many critics would not agree with these simplistic criticisms of probability arguments. They would in fact argue that evolution makes complex biological systems highly probable. It is unfortunate that these writers feel the need to disparage specified complexity, which exists to defend against an argument they would not make.

Next: “What Does ‘Life’s Conservation Law’ Actually Say?”

Image: © David Pellicola / Dollar Photo Club.

Winston Ewert

Senior Fellow, Senior Research Scientist, Software Engineer
Winston Ewert is a software engineer and intelligent design researcher. He received his PhD from Baylor University in electrical and computer engineering. He specializes in computer simulations of evolution, genomic design patterns, and information theory. A Google alum, he is a Senior Research Scientist at Biologic Institute and a Senior Fellow of the Bradley Center for Natural and Artificial Intelligence.
