It’s not often that a book on Intelligent Design becomes a best-seller, or is praised (in print) as one of the best books of the year by a prominent atheist philosopher.  And yet that is true of Stephen Meyer’s book, Signature in the Cell: DNA and the Evidence for Intelligent Design.  I must say it’s one of the best books I have read on the evidence for intelligent design in biology.  The information is presented in a logical, systematic order, with each chapter building naturally on the one before.  Not only is Meyer’s approach systematic, but he presents difficult concepts in very understandable ways.  Coming in at 561 pages of text, it is not a quick read, but the time spent is well worth it.

Meyer’s thesis is that the origin of life is best explained by an intelligent cause.  He begins his book by recounting how the mystery of life’s origin was not recognized in Darwin’s day, but came to be appreciated in the decades that followed as knowledge of life’s complexity began to emerge.  That mystery has not been solved; rather, it looms larger and larger the more we discover about the internal workings of the cell and what is required for even the simplest forms of life.

When it comes to explaining the origin of life (OOL), many think it’s just a matter of explaining the origin of biological complexity.  Not so.  What needs to be explained is not merely the complexity of the cell, but the origin of the biological information conveyed by the DNA molecule.  But “information” comes in two different types, so we have to be specific about which type we find in the cell and which type needs to be explained.

The first type of information is called Shannon information, named after the MIT-trained engineer Claude Shannon, who pioneered information theory in the 1940s.  Shannon noted that information and uncertainty are inversely related: information increases as uncertainty is eliminated.  In his terms, “the amount of information conveyed (and the amount of uncertainty reduced) in a series of symbols or characters is inversely proportional to the probability of a particular event, symbol, or character occurring.”  In short, the amount of information conveyed by an event is inversely proportional to the probability of its occurrence.  The greater the number of possibilities, the more improbable it is that any one event or string of symbols will be actualized, and thus the more information that event or string of symbols, if it occurs, will convey.  Furthermore, because the probabilities of individual symbols multiply, a longer sequence is less probable, and therefore bears more information, than a shorter one.

According to Shannon’s definition, “nenen ytawoi jll sn mekhdx nnx” has the same information-carrying capacity as “Four score and seven years ago,” because both strings contain the same number of symbols and are equally improbable.  And yet we recognize that the latter is clearly different from the former.  Identifying what that difference is brings us to the second definition of information.
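To make the arithmetic concrete, here is a minimal Python sketch of Shannon’s measure.  It assumes, purely for illustration, an alphabet of 27 equally probable symbols (26 letters plus the space), and it shows that the gibberish string and the Lincoln quotation have exactly the same information-carrying capacity, since both are 30 symbols drawn from the same alphabet.

```python
import math

def shannon_capacity_bits(sequence, alphabet_size=27):
    # Assume each of the alphabet_size symbols (26 letters + space) is
    # equally probable, so each symbol has probability p = 1/alphabet_size
    # and contributes -log2(p) bits; the bits add across the sequence.
    p = 1 / alphabet_size
    return len(sequence) * -math.log2(p)

gibberish = "nenen ytawoi jll sn mekhdx nnx"
lincoln = "four score and seven years ago"

print(shannon_capacity_bits(gibberish))  # ~142.6 bits
print(shannon_capacity_bits(lincoln))    # ~142.6 bits: same capacity, very different meaning
```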

The second definition of information is “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects,” or what I call “functional information.”  This definition does not contradict Shannon’s, but adds a component to it, namely functional specificity.  Shannon’s definition measures only the information-carrying capacity of a particular sequence, based on its improbability; it does not distinguish improbable sequences whose arrangements are specified to perform a function from improbable sequences that are not so specified.  There is a difference, then, between information and meaning.  What distinguishes the latter from the former is the specificity of the arrangement to perform a function.

To illustrate the difference between Shannon information and functional information, consider U.S. domestic phone numbers.  Every phone number is a 10-digit sequence, so each contains the same amount of Shannon information, and each of the 10,000,000,000 possible numbers is equally improbable.  While an assessment of the Shannon information content of phone numbers is interesting, it won’t be of much help if you are trying to call your friend in New York but have inadvertently lost the slip of paper you wrote his number on.  You’ll need more than just Shannon information to call your friend; you’ll need information that is specified to perform a certain function, namely connecting you to your friend.  While all phone numbers are equally complex, they are not all equally specified.  Only one sequence will connect you to your friend.  There is a difference, then, between mere complexity and specified complexity.
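A toy Python sketch of the same point: every 10-digit number has the identical Shannon capacity of about 33.2 bits (10 digits times log2 of 10), but only the one sequence specified to reach your friend actually performs the function.  The target number below is made up purely for illustration.

```python
import math
import random

TARGET = "2125550147"  # hypothetical: the one number that actually reaches your friend

def shannon_capacity_bits(number):
    # Each digit is one of 10 equally probable symbols, so each digit
    # contributes log2(10) bits of information-carrying capacity.
    return len(number) * math.log2(10)

def performs_function(number):
    # "Specification" here is simply: does dialing this number reach your friend?
    return number == TARGET

candidates = ["".join(random.choices("0123456789", k=10)) for _ in range(3)] + [TARGET]
for n in candidates:
    status = "connects" if performs_function(n) else "wrong number"
    print(n, f"{shannon_capacity_bits(n):.1f} bits", status)
```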

What type of information does the cell contain?  Both.  The complexity we find in the cell is not mere complexity, but specified complexity: complexity specified to perform a particular function.  And the specified arrangements we discover are highly improbable, given the sheer number of symbols (in this case nucleotides, amino acids, etc.) involved.  Of all the possible ways to order the nucleotides in DNA and the amino acids in proteins, only a limited number of sequences will result in biological function.
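The combinatorics behind that claim are easy to get a feel for.  Here is a minimal Python sketch that simply counts the possible sequences for an illustrative gene of 450 nucleotides and an illustrative protein of 150 amino acids (lengths chosen only for illustration, not taken from Meyer’s argument); it says nothing about what fraction of those sequences would be functional, only how vast the space of possibilities is.

```python
NUCLEOTIDES = 4    # A, T, C, G
AMINO_ACIDS = 20   # the standard protein-forming amino acids

gene_length = 450      # illustrative gene length, in nucleotides
protein_length = 150   # illustrative protein length, in amino acids

dna_space = NUCLEOTIDES ** gene_length
protein_space = AMINO_ACIDS ** protein_length

# len(str(x)) - 1 gives the order of magnitude (the floor of log10).
print(f"possible DNA sequences of length {gene_length}: ~10^{len(str(dna_space)) - 1}")
print(f"possible protein sequences of length {protein_length}: ~10^{len(str(protein_space)) - 1}")
```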

The information content of the cell cannot be explained in terms of the raw physical properties of DNA because information is not a physical property.  Consider a book.  While all would agree that a book contains information, where exactly is that information found?  Is it found in the chemical properties of the ink and paper?  No.  The information is found in the purposeful arrangement of those chemical properties by an intelligent agent.  The same is true of DNA.  DNA is an information-bearing molecule, but is not the source of the information itself.  It merely serves as a medium for the information content.  

I’ll continue blogging through Meyer’s book next week.  Stay tuned.