
Design Detection in the Dark


William Dembski introduced the concept of complex specified information (CSI) in his 1998 book The Design Inference. He argued that in seeking to explain natural and other phenomena, we may reject hypotheses of chance and infer design on the basis of CSI. Under Dembski’s definition, an event is complex if it is improbable, and specified if it matches an independent pattern. Events that have both properties are said to exhibit CSI. We may infer that an event that exhibits CSI under all possible chance hypotheses is a product of design.

The subject of CSI has prompted much debate, including in a recent article I wrote for ENV, "Information, Past and Present." I emphasized there that measuring CSI requires calculating probabilities. At her blog, The Skeptical Zone, writer Elizabeth Liddle has offered a challenge to CSI that seems worth considering. She presents a mystery image and asks for a calculation of CSI. The image is in gray-scale, and looks a bit like the grain in a plank of wood. Her intent is either to force an admission that such a calculation is impossible or to produce a false positive, detecting design where none was present.

But as long as we remain in the dark about what the image actually represents, calculating its probability is indeed impossible. Dembski never intended the design inference to work in the absence of understanding possible chance hypotheses for the event. Rather, the assumption is that we know enough about the object to make this determination.

The Theory of Design Inference

Let’s review the design inference as Dembski introduced it in The Design Inference and as he further developed it in No Free Lunch and in his article "Specification: The Pattern that Signifies Intelligence." There are three major steps in the process:

  1. Identify the relevant chance hypotheses.
  2. Reject all the chance hypotheses.
  3. Infer design.

This process is outlined as the Generic Chance Elimination Argument in No Free Lunch and The Design Inference, and, with modification, in the "Design Detection" section of "Specification: The Pattern that Signifies Intelligence."

Specified complexity is used in the second of these steps. In the original version of Dembski’s concept, we reject a chance hypothesis if it assigns an overwhelmingly low probability to a specified event. Under the version he presented in the essay "Specification," a chance hypothesis is rejected when the event exhibits a high level of specified complexity under that hypothesis. In either case, we infer design only after rejecting all the relevant chance hypotheses.
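
For reference, and consistent with the numbers worked out later in this article, the quantity defined in "Specification" can be written roughly as follows, where T is the observed pattern, H is the chance hypothesis under test, φ_S(T) counts the specification resources (the number of patterns at least as simple as T), and 10^120 is Dembski’s bound on available probabilistic resources; a hypothesis is rejected when χ comes out large and positive:

```latex
\chi \;=\; -\log_2\!\bigl[\,10^{120}\cdot \varphi_S(T)\cdot P(T \mid H)\,\bigr]
```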

But what if the actual cause of an event, proceeding from chance or necessity, is not among the identified hypotheses? What if some natural process exists that renders the event much more probable than would be expected? This will lead to a false positive. We will infer design where none was actually present. In the essay "Specification," Dembski discusses this issue:

Thus, it is always a possibility that [the set of relevant hypotheses] omits some crucial chance hypothesis that might be operating in the world and account for the event E in question.

The method depends on our being confident that we have identified and eliminated all relevant candidate chance hypotheses. Dembski writes at length in defense of this approach.

Elizabeth Liddle discusses a possible way to perform a design inference:

For example, if something isn’t obviously the result of some other iterative process like crystallization or wave action (which my glacier is), or self-replication, then it might be perfectly reasonable to infer design as at least a possible, even likely, candidate (black monoliths on the moon would come into this category).

This is basically the approach that Dembski has outlined. Liddle has identified a number of relevant chance hypotheses: crystallization, wave action, self-replication. If we judge that none of these processes accounts for whatever we are investigating, then we can infer design as the best explanation.

Liddle argues that this isn’t CSI, but she has misunderstood the process of design detection. The criterion of specified complexity is used to eliminate individual chance hypotheses. It is not, as Liddle seems to think, the complete framework of the process all by itself. It is the method by which we decide that particular causes cannot account for the existence of the object under investigation. It is only by eliminating all relevant chance hypotheses that we can infer design.

A Mystery Image

Liddle demonstrates her point by presenting the mystery image and requesting a calculation of CSI for it. In fact, the mystery image has since been identified, as Liddle writes in the comments section of her original post ("It’s the Skeiðarárjökull Glacier in Iceland, showing evidence of successive eruptions of Grímsvötn as bands of black ash"). But we will pretend that we know nothing about it. Instead let’s look at the process of deciding whether or not this image was designed.

The first question we want to ask is whether or not the image is specified. Under the model used in "Specification: The Pattern that Signifies Intelligence," we need to determine the specification resources, defined as the number of patterns at least as simple as the one under consideration. We can measure the simplicity of the image by how compressible it is using PNG compression. A PNG file representing the image requires 3,122,824 bits, so we conclude that there are 2 to the 3,122,824th power simpler or equally simple images. (It is important to note that PNG files are prefix-free, which allows an accurate count. However, this count ignores the fact that multiple encodings can represent the same image and that some sequences of bits are not valid images at all.)
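
As a minimal sketch of that step (the filename is hypothetical; the point is only that the length of the PNG encoding, in bits, is taken as the base-2 logarithm of the specification resources):

```python
import os

# Hypothetical filename standing in for the mystery image.
png_bits = os.path.getsize("mystery_image.png") * 8
print(f"specification resources: roughly 2^{png_bits} patterns at least this simple")
```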

A first hypothesis to consider for this image is that it was generated by choosing uniformly over the set of all possible gray-scale images of the same size. The image is 795 by 658 pixels with 256 possible levels of gray. This gives us 2 to the 4,191,240th power possible images; expressed in terms of Shannon information, that is 4,191,240 bits. Using the formula for specified complexity given in the essay "Specification," we obtain a result of approximately 1,068,017 bits. Thus we have overwhelming reason to believe that this image was not generated by such a process.
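
A rough numerical check of that figure, using the quantities above and the formula quoted earlier (a sketch only; the variable names are mine):

```python
import math

log2_phi_S = 3_122_824   # PNG size in bits, taken as log2 of the specification resources
neg_log2_P = 4_191_240   # -log2 P(image) under the uniform chance hypothesis

# chi = -log2(10^120 * phi_S * P), expressed in bits:
chi = neg_log2_P - log2_phi_S - 120 * math.log2(10)
print(round(chi))        # approximately 1,068,017 bits
```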

But we must consider other possible hypotheses. The pixels in the image tend toward being lighter rather than darker. The image might be generated by a process biased towards lighter pixels. To test this hypothesis, we measure the distribution of colors in the image, and determine the probability of the image given that distribution. The probability is approximately 1 in 2 to the 3,716,716th power. The value of specified complexity in this case is approximately 593,493 bits. Again, we have very strong reason to reject this hypothesis. The image was not generated by a simple biased process.
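
Here is a minimal sketch of how the probability under such a biased, pixel-by-pixel model might be computed, assuming the image is available as an array of 8-bit gray levels (the function and array names are illustrative, not Ewert’s actual procedure):

```python
import numpy as np

def neg_log2_prob_iid(img: np.ndarray) -> float:
    """-log2 P(image) under a model that draws each pixel independently
    from the image's own gray-level frequencies."""
    counts = np.bincount(img.ravel().astype(np.int64), minlength=256)
    freqs = counts / counts.sum()
    observed = counts > 0
    # Each occurrence of gray level v contributes -log2(freqs[v]) bits.
    return float(-(counts[observed] * np.log2(freqs[observed])).sum())
```

Plugging the resulting value in for -log2 P(T|H) in the formula quoted earlier gives the specified complexity under this hypothesis.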

We note that the image not only tends towards lighter pixels; it also tends to have similarly colored pixels close together. To test this, we measure how often each gray level follows each other gray level, that is, how often light follows dark, how often dark follows light, and so on, and then compute the probability of the image under that model. It is approximately 1 in 2 to the 3,111,387th power. Calculating the specified complexity gives us approximately -11,836 bits. Thus the concept of specified complexity gives us no reason to reject this hypothesis.
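
One way such a neighbor-dependence model might be implemented is as a first-order model along each row, using the image’s own empirical transition frequencies; the scan order and the handling of the first pixel in each row are my assumptions, since the article does not spell them out:

```python
import numpy as np

def neg_log2_prob_markov(img: np.ndarray) -> float:
    """-log2 P(image) under a first-order model: each pixel is predicted
    from its left-hand neighbor using empirical transition frequencies.
    The first pixel of each row uses the marginal gray-level frequencies."""
    img = img.astype(np.int64)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()

    # Count how often gray level b follows gray level a within a row.
    trans = np.zeros((256, 256))
    np.add.at(trans, (left, right), 1)
    row_sums = trans.sum(axis=1, keepdims=True)
    trans_probs = trans / np.clip(row_sums, 1, None)

    # Bits contributed by every pixel that has a left-hand neighbor.
    bits = -np.log2(trans_probs[left, right]).sum()

    # Bits contributed by the first pixel of each row.
    counts = np.bincount(img.ravel(), minlength=256)
    marginal = counts / counts.sum()
    bits += -np.log2(marginal[img[:, 0]]).sum()
    return float(bits)
```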

Another hypothesis would be that there exists a deterministic process that generates this image with probability 1. It always produces this exact image, and never produces another. This gives us a specified complexity of approximately -3,123,223 bits. Thus specified complexity gives us no reason to reject this hypothesis.
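
As a check on that figure, with P(T | H) = 1 the formula quoted earlier reduces to:

```latex
\chi = -\log_2\!\bigl[10^{120}\cdot 2^{3{,}122{,}824}\cdot 1\bigr]
     = -\bigl(3{,}122{,}824 + 120\log_2 10\bigr) \approx -3{,}123{,}223 \text{ bits}
```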

However, this last hypothesis is rather implausible. It may perform well at explaining the image in question, but we have no reason to believe it actually operates. In order for a hypothesis to be considered, we need to have evidence for that hypothesis. In fact, Dembski writes in "Specification," "We must have a good grasp of what chance hypotheses would have been operating."

The assumption of Dembski’s approach is that we can identify the chance hypotheses that might have produced a given event. An arbitrary image makes this difficult, because we cannot determine what natural processes might have been operating to produce it. The best we can do is postulate processes similar to those that have been observed, which is what the first three hypotheses tested here did.

Dembski’s Contradiction?

As Liddle presents his argument, Dembski claims both that we need to know something about the history of an object and that we don’t. Her evidence is the following sentence (from "Specification"):

By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.

However, the quotation is taken out of context. Dembski is not discussing ignorance of possible chance mechanisms, but rather ignorance of possible design mechanisms. He argues that to infer design we don’t need independent evidence of the designer. Dembski is not claiming that we do not need to investigate possible chance hypotheses. Rather, his method is predicated on identifying and ruling out chance hypotheses in order to infer design.

Closing Thoughts

We have seen that Liddle has confused the concept of specified complexity with the entire design inference. Specified complexity as a quantity gives us reason to reject individual chance hypotheses. It requires careful investigation to identify the relevant chance hypotheses. This has been the consistent approach presented in Dembski’s work, despite attempts to claim otherwise, or criticisms that Dembski has contradicted himself.

Winston Ewert is Research Assistant at the Evolutionary Informatics Lab.


Image credit: The Bartender 007/Flickr.

