
A Response to Dr. Dawkins’ “Information Challenge” (Part 1): Specified Complexity Is the Measure of Biological Complexity

[Editor’s note: This was the first installment of a three-part series. The full article, A Response to Dr. Dawkins’ “The Information Challenge”, can be read here.]

Last week I posted a link to a YouTube video in which Richard Dawkins was asked to explain the origin of genetic information according to Darwinism. I also posted a link to Dawkins’ rebuttal to the video, in which he purports to explain the origin of genetic information according to Darwinian evolution. The question posed to Dawkins was, “Can you give an example of a genetic mutation or evolutionary process that can be seen to increase the information in the genome?” Dawkins famously commented that the question was “the kind of question only a creationist would ask . . .” and wrote, “In my anger I refused to discuss the question further, and told them to stop the camera.” Dawkins’ highly emotional response calls into question whether he is capable of addressing this issue objectively. This is the first installment of a three-part response assessing Dawkins’ answer to “The Information Challenge.”

What Type of “Information” Is Relevant Here?
Dawkins writes, “First you first have to explain the technical meaning of ‘information’.” While that sounds reasonable, Dawkins pulls a bait-and-switch and defines information as “Shannon information”–a formulation of “information” that applies to signal transmission and does not account for the type of specified complexity found in biology.

It is common for Darwinists to define information as “Shannon information,” which amounts to calculating the mere unlikelihood of a sequence of events. Under that definition, a functionless stretch of genetic junk can carry the same amount of “information” as a fully functional gene of the same sequence length. ID proponents don’t see this as a useful way of measuring biological information. They define information as complex and specified information: DNA that is finely tuned to do something. Stephen C. Meyer writes that ID theorists use “(CSI) as a synonym for ‘specified complexity’ to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity.” As the ISCID encyclopedia explains, “Unlike specified complexity, Shannon information is solely concerned with the improbability or complexity of a string of characters rather than its patterning or significance.”

The Inconvenient Truth for Dawkins: The difference between the Darwinist and ID definitions of information is equivalent to the difference between getting 10 consecutive losing hands in a poker game versus getting 10 consecutive royal flushes. One implicates design, while the other does not.
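To make the distinction concrete, here is a minimal Python sketch (my own illustration, not drawn from Dawkins or the ID literature) that computes the Shannon information of a DNA string under the simplifying assumption of equal, independent base frequencies. On that measure, a random stretch and a functional coding stretch of the same length score identically, which is exactly the limitation being described above.

```python
import math
import random

def shannon_info_bits(sequence, alphabet="ACGT"):
    """Shannon information of a sequence, assuming each symbol is drawn
    independently and uniformly from the alphabet. Each base then carries
    log2(4) = 2 bits, regardless of whether the sequence does anything."""
    prob_per_symbol = 1.0 / len(alphabet)
    return -len(sequence) * math.log2(prob_per_symbol)

# A short coding-style stretch and a random string of the same length
# receive exactly the same number of bits under this measure.
coding_stretch = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
random_junk = "".join(random.choice("ACGT") for _ in range(len(coding_stretch)))

print(shannon_info_bits(coding_stretch))  # 78.0 bits
print(shannon_info_bits(random_junk))     # 78.0 bits -- identical
```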

It is important to note that ID proponents did not invent the notion of “specified complexity,” nor were they the first to observe that specified complexity is the best way to describe biological information. The earliest use of the term I am aware of comes from leading origin-of-life theorist Leslie Orgel, who used it in 1973 in a fashion that closely resembles its modern usage by ID proponents:

[L]iving organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

(Leslie E. Orgel, The Origins of Life: Molecules and Natural Selection, pg. 189 (Chapman & Hall: London, 1973).)

Orgel thus captures the fact that specified complexity requires both complexity and a specific arrangement of parts or symbols: order alone, as in a crystal, is not enough. This matches Dembski’s definition of specified complexity as an unlikely event that conforms to an independently given pattern. This establishes that specified complexity is the appropriate measure of biological complexity. The point will be important in the next installment, Part 2, which rebuts the heart of Dawkins’ article.

As a final note, Richard Dawkins’ article admits that “DNA carries information in a very computer-like way, and we can measure the genome’s capacity in bits too, if we wish.” That is an interesting analogy, reminiscent of the design overtones of Dawkins’ concession elsewhere that “[t]he machine code of the genes is uncannily computer-like. Apart from differences in jargon, the pages of a molecular biology journal might be interchanged with those of a computer engineering journal.” (Richard Dawkins, River Out of Eden: A Darwinian View of Life, pg. 17 (New York: Basic Books, 1995).)
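Dawkins’ bit-counting remark is easy to make concrete. The following back-of-the-envelope sketch is my own illustration, assuming a haploid human genome of roughly 3.2 billion base pairs and 2 bits per base; it estimates raw storage capacity only, which is not the same thing as the specified, functional information discussed above.

```python
# Rough storage-capacity estimate for a haploid human genome,
# assuming ~3.2 billion base pairs and log2(4) = 2 bits per base.
genome_length_bp = 3_200_000_000   # approximate haploid genome size (assumption)
bits_per_base = 2                  # four possible bases -> 2 bits each

total_bits = genome_length_bp * bits_per_base
total_megabytes = total_bits / 8 / 1_000_000

print(f"{total_bits:.2e} bits")      # ~6.4e9 bits
print(f"~{total_megabytes:.0f} MB")  # ~800 MB of raw capacity
```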

Of course, Dawkins believes that the processes of random mutation and unguided selection ultimately built “[t]he machine code of the genes” and made it “uncannily computer-like.” But I do not think a scientist is unjustified in reasoning that, in our experience, machine codes and computers derive only from intelligence. Regardless, in the next installment, Part 2, I will assess Dawkins’ argument that gene duplication can increase biological information.

Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.
