
Pink EleP(T|H)ants on Parade: Understanding, and Misunderstanding, the Design Inference

I wrote here previously (“Design Detection in the Dark”) in response to blogger Elizabeth Liddle’s post “A CSI Challenge” at The Skeptical Zone. Now she has written a reply to me, “The eleP(T|H)ant in the room.” The subject of discussion is CSI, or complex specified information, and the design inference as developed by William Dembski.

CSI is the method by which we test chance hypotheses. For any given object under study, there are a variety of possible naturalistic explanations. These are referred to as the relevant chance hypotheses. Each hypothesis is a possible explanation of what happened to produce the object. If a given object is highly improbable under a given hypothesis, i.e., it is unlikely to occur, we say that the object is complex. If the object fits an independent pattern, we say that it is specified. When the object exhibits both of these properties, we say that it is an example of specified complexity. If an object exhibits specified complexity under a given hypothesis, we can reject that hypothesis as a possible explanation.

The design inference argues from specified complexity under individual chance hypotheses to design. Design is defined as any process that does not correspond to a chance hypothesis. Therefore, if we can reject all chance hypotheses, we can attribute the object under study to design. The set of all possible chance hypotheses can be divided into hypotheses that are relevant and those that are irrelevant. The irrelevant chance hypotheses are those involving processes that we have no reason to believe are in operation; they are rejected on that basis. The relevant chance hypotheses involve processes we have reason to believe are operating, and these hypotheses may be rejected by applying the criterion of specified complexity. Thus, the design inference gives us reason to reject all chance hypotheses and conclude that an object was designed.
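The eliminative logic described above can be sketched in a few lines of code. This is only an illustrative toy, not Dembski's actual procedure: the hypothesis names, probabilities, and the use of Dembski's universal probability bound of 10^-150 as the rejection threshold are assumptions for the sake of the example.

```python
# Toy sketch of the design inference as serial elimination.
# Each relevant chance hypothesis supplies P(T|H), the probability of
# the observed target pattern under that hypothesis. A hypothesis is
# rejected when the pattern is specified AND too improbable under it.
# The 10**-150 bound is Dembski's universal probability bound, used
# here purely for illustration.

UNIVERSAL_BOUND = 1e-150

def exhibits_specified_complexity(p_target_given_h, is_specified):
    """True if the specified pattern is improbable enough to reject this hypothesis."""
    return is_specified and p_target_given_h < UNIVERSAL_BOUND

def infer_design(relevant_hypotheses, is_specified):
    """Infer design only if EVERY relevant chance hypothesis is rejected.

    relevant_hypotheses maps a hypothesis name to P(T|H) under it.
    """
    return all(
        exhibits_specified_complexity(p, is_specified)
        for p in relevant_hypotheses.values()
    )

# Made-up probabilities for two hypothetical chance hypotheses:
hypotheses = {"uniform chance": 1e-200, "simple stochastic process": 1e-180}
print(infer_design(hypotheses, is_specified=True))   # True: both rejected
print(infer_design({"uniform chance": 1e-10}, True)) # False: not improbable enough
```

Note that the inference goes through only when every hypothesis in the set is rejected; a single surviving chance hypothesis blocks the conclusion of design.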

Originally, Liddle presented a particular graphic image of unknown origin and asked whether it is possible to calculate the probability of its being the product of design. In reply, I pointed out that knowing the potential chance hypotheses is a necessary precondition of making a design inference. Her response is that we cannot calculate the probabilities needed to make the design inference, and even if we could, that would not be sufficient to infer design.

Elizabeth Liddle’s Errors

At a couple of points, Liddle seems to misunderstand the design inference. As I mentioned, two criteria are necessary to reject a chance hypothesis: specification and complexity. However, in Liddle’s account there are actually three. Her additional requirement is that the object be “One of a very large number of patterns that could be made from the same elements (Shannon Complexity).”

This appears to be a confused rendition of the complexity requirement. “Shannon Complexity” usually refers to Shannon entropy, which is not used in the design inference. Instead, complexity is measured as the negative logarithm of probability, known as the Shannon self-information. Liddle’s description of Shannon Complexity would be accurate only under a chance hypothesis in which all rearrangements of the parts are equally likely. A common misconception of the design inference is that it always calculates probability according to that hypothesis. Liddle seems to be plagued by a vestigial remnant of that understanding.
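To make the distinction concrete, self-information is computed per outcome, not per distribution. A minimal worked example, using an assumed uniform chance hypothesis over 100 equiprobable symbols purely for illustration:

```python
import math

def self_information_bits(p):
    """Shannon self-information of an outcome with probability p: I = -log2(p), in bits."""
    return -math.log2(p)

# Under an assumed uniform hypothesis over 100 equiprobable symbols,
# a particular 50-symbol string has probability (1/100)**50.
p = (1 / 100) ** 50
print(self_information_bits(p))  # ≈ 332.19 bits
```

Under a different chance hypothesis, the same string would receive a different probability, and hence a different complexity; this is why the measure cannot be reduced to counting rearrangements of parts.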

As I emphasized earlier, the design inference depends on the serial rejection of all relevant chance hypotheses. Liddle has missed that point. I wrote about multiple chance hypotheses but Liddle talks about a single null hypothesis. She quotes the phrase “relevant [null] chance hypothesis”; however, I consistently wrote “relevant chance hypotheses.”

Liddle’s primary objection is that we cannot calculate P(T|H), that is, the “Probability that we would observe the Target (i.e. a member of the specified subset of patterns) given the null Hypothesis.” However, there is no single null hypothesis; there is a collection of chance hypotheses. Liddle appears to believe that the design inference requires somehow combining all possible chance hypotheses into one master null hypothesis and calculating the probability under it. She objects that Dembski has never explained how to perform that calculation. But that is because Dembski’s method does not require it. Her objection is therefore simply irrelevant.

Unknown Probabilities

Can we calculate the probabilities required to reject the various chance hypotheses? Attempting to do so would seem pretty much impossible. What is the probability of the bacterial flagellum under Darwinian evolution? What is the probability of a flying animal? What is the probability of humans? Examples given of CSI typically use simple probability distributions, but calculating the actual probabilities under something like Darwinian evolution is extremely difficult.

Nevertheless, intelligent design researchers have long been engaged in trying to quantify those probabilities. In Darwin’s Black Box, Mike Behe argues for the improbability of irreducibly complex systems such as the bacterial flagellum. In No Free Lunch, William Dembski also offered a calculation of the probability of the bacterial flagellum. In “The Case Against a Darwinian Origin of Protein Folds,” Douglas Axe argues that under Darwinian evolution the probability of finding protein folds is too low.

While it is unrealistic to calculate the exact probabilities under a complex chance hypothesis, this does not mean that we are unable to get a general sense of those probabilities. We can characterize the probabilities of the complex systems we find in biology, and as the research above argues, those probabilities are very small.

Earman and Local Inductive Elimination

Above, I mentioned the division of possible chance hypotheses into relevant and irrelevant categories. However, what if the true explanation is an unknown chance hypothesis? That is, perhaps there is a non-design explanation, but because we are ignorant of it, we reject it along with all the other irrelevant chance hypotheses. In that case, we will infer design when design is not actually present.

Dembski defends his approach by appealing to the work of philosopher of physics John Earman, who defended inductive elimination. An inductive argument gives evidence for its conclusion but stops short of actually proving it. An eliminative argument demonstrates its conclusion by proving the alternatives false rather than proving the conclusion true. The design inference is an instance of inductive elimination: it gives us reason to believe that design is the best explanation.

Liddle objects that Dembski is not actually following what Earman wrote, and she quotes from Earman: “Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space. This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.”

According to Liddle, Dembski has not defined any sort of topology on the space of possibilities; he has not divided up the space of all possible hypotheses and systematically eliminated some or all of them. Without that topology, she argues, Dembski cannot claim to have eliminated all the chance hypotheses.

However, Liddle does not appear to have understood what Earman meant. Earman was not referring to a topology over every conceivable hypothesis, but over the set of what we might call plausible hypotheses. In Earman’s approach, inductive elimination starts by defining the set of plausible hypotheses. We do not consider every conceivable hypothesis, but only those hypotheses which we consider plausible. Only then do we define a topology on the plausible hypotheses and work towards eliminating the incorrect possibilities.

In his discussion of gravitational theories, Earman points out that the process of elimination began by an assumption of the boundaries for what a possible theory would look like. He says:

Despite the wide cast of its net, the resulting enterprise was nevertheless a case of what may properly be termed local induction. First, there was no pretense of considering all logically possible theories.

Later, Earman discusses the possible objection that because not all logically possible theories were considered, it remains possible that true gravitational theory is not the one that was accepted. He says:

I would contend that all cases of scientific inquiry, whether into the observable or the unobservable, are cases of local induction. Thus the present form of skepticism of the antirealist is indistinguishable from a blanket form of skepticism about scientific knowledge.

In contrast to Liddle’s understanding, Earman’s system does not require a topology over all possible hypotheses. Rather, the topology operates on the smaller set of plausible hypotheses. This is why the inductive elimination is a local induction and not a deductive argument.

Furthermore, Dembski discusses the issue in “Naturalism’s Argument from Invincible Ignorance: A Response to Howard Van Till,” where he considers the same quote that Liddle presented:

In assessing whether the bacterial flagellum exemplifies specified complexity, the design theorist is tacitly following Earman’s guidelines for making an eliminative induction work. Thus, the design theorist orders the space of hypotheses that naturalistically account for the bacterial flagellum into those that look to direct Darwinian pathways and those that look to indirect Darwinian pathways (cf. Earman’s requirement for an ordering or topology of the space of possible hypotheses). The design theorist also limits the induction to a local induction, focusing on relevant hypotheses rather than all logically possible hypotheses. The reference class of relevant hypotheses are those that flow out of Darwin’s theory. Of these, direct Darwinian pathways can be precluded on account of the flagellum’s irreducible and minimal complexity, which entails the minuscule probabilities required for specified complexity. As for indirect Darwinian pathways, the causal adequacy of intelligence to produce such complex systems (which is simply a fact of engineering) as well as the total absence of causally specific proposals for how they might work in practice eliminates them.

Dembski is following Earman’s proposal here. He defines the boundaries of the theories under consideration. He divides them into an exhaustive partition, and then argues that each part of the partition can be rejected, thereby inferring the remaining hypothesis: design. This is a local induction, and as such it depends on the assumption that any non-Darwinian chance hypothesis would be incorrect.

At the end of the day, the design inference is an inductive argument. It does not logically entail design, but supports it as the best possible explanation. It does not rule out the possibility of some unknown chance hypothesis outside the set of those eliminated. However, rejecting the conclusion of design for this reason requires a willingness to accept an unknown chance hypothesis for which you have no evidence, solely out of an unwillingness to accept design. It is very hard to argue that such a hypothesis is actually the best explanation.

Closing Thoughts

Liddle objects that we cannot calculate the probability necessary to make a design inference. However, she is mistaken, because the design inference requires that we calculate probabilities, not a single probability. Each chance hypothesis has its own probability, and is rejected if that probability is too low. Intelligent design researchers have investigated these probabilities.

Liddle’s objections to Dembski’s appeal to Earman demonstrate that she is the one not following Earman. Earman’s approach involves starting assumptions about what a valid theory would look like, in the same way that any design inference makes starting assumptions about what a plausible chance hypothesis would look like.

In short, neither of Liddle’s objections holds water. Rather, both appear to derive from a mistaken understanding of Dembski and Earman.

Winston Ewert is Research Assistant at the Evolutionary Informatics Lab.

Image credit: Kevin H./Flickr.

