Author's note: Conservation of information is a big result of the intelligent design literature, even if to date it hasn't gotten the attention it deserves. It quantifies the amount of information needed to increase the probability of finding a needle in a haystack so that the needle can actually be found. The upshot of conservation of information is that the information needed to find a needle in a haystack in turn requires finding another needle in a haystack, implying there is no free lunch in search. I just wrote up a full account of conservation of information for the journal BIO-Complexity in a paper titled "The Law of Conservation of Information: Natural Processes Only Redistribute Existing Information." What follows is a section from that paper on the displacement fallacy. This section is accessible and helps clarify the intuitions underlying conservation of information.
The discovery of conservation of information didn't start with proving a mathematical theorem. Rather, it came from repeatedly noticing how efforts to account for the success of searches whose odds of success seemed hopeless always smuggled in information that wasn't properly accounted for. One hole was filled, but only by digging another, which in turn needed to be explained. This failure of explanation became especially evident in the evolutionary literature. Darwinian approaches to biological evolution and to evolutionary computing sought to explain the origin of information through some process that directly used or else mimicked natural selection. Yet rather than admit a fundamental gap in explanation, this literature simply invoked selection as a backstop to explain the origin of information, the backstop itself being exempt from further explanation.
The move to explain the origin of information by invoking some separate unexplained source of information, typically via a selection process, was so common in the evolutionary literature that it deserved its own name: displacement. Displacement became the tool of choice among evolutionary critics of intelligent design as they tried to invalidate the logic of the design inference, which inferred design for events both specified and improbable. Critics claimed that once natural selection came into play, it acted as a probability amplifier that removed any seeming improbability that might otherwise have made for a valid design inference. Accordingly, critics argued that seeming products of design could be explained away through evolutionary processes requiring no design.
But this attempt to invalidate the design inference was too easy. Products can be designed, but so can the processes that build them (compare a Tesla automobile with a Tesla factory that builds Tesla automobiles -- both are designed). The design inference makes sense of improbable products. Conservation of information, through the search for a search, makes sense of improbable processes that output probable products. Making sense of displacement was a crucial step in developing a precise mathematical treatment of conservation of information.
Whereas conservation of information was a mathematically confirmed theoretical finding, displacement was an inductively confirmed empirical finding. Over and over, information supposedly created from scratch was surreptitiously introduced under the pretense that it had already been adequately explained, when in fact it was merely presupposed. In effect, displacement became a special case of the fallacy of begging the question, obscuring rather than illuminating evolutionary processes.
One of the more brazen examples of displacement that I personally encountered occurred in a 2001 interview with Darwinist Eugenie Scott on Peter Robinson's program Uncommon Knowledge. Scott and I were discussing evolution and intelligent design when Robinson raised the trope about a monkey, given enough time, producing the works of Shakespeare by randomly typing at a typewriter. Scott responded by saying that contrary to this example, where the monkey's typing merely produces random variation, natural selection is like a technician who stands behind the monkey and whites out every mistake the monkey makes in typing Shakespeare. But where exactly do you find a technician who knows enough about the works of Shakespeare to white out mistakes in the typing of Shakespeare? What are the qualifications of this technician? How does the technician know what to erase? Scott never said. That's displacement: The monkey's success at typing Shakespeare is explained, but at the cost of leaving the technician who corrects the monkey's typing unexplained.
In his book The Blind Watchmaker, Richard Dawkins claims to show how natural selection can create information by appealing to his well-known METHINKS IT IS LIKE A WEASEL computer simulation. The target phrase consists of 28 characters drawn from an alphabet of 27 (the 26 capital letters plus the space), so pure random sampling would have a probability of only 1 in 27^28, or roughly 1 in 10^40, of achieving it. In evolving METHINKS IT IS LIKE A WEASEL, Dawkins's simulation overcame this improbability by carefully choosing a fitness landscape that assigns higher fitness to character sequences sharing more characters, position by position, with the target phrase.
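For reference, the arithmetic behind that figure, given an alphabet of 27 characters and 28 independent positions:

```latex
27^{28} \;=\; 10^{\,28\log_{10}27} \;\approx\; 10^{\,28 \times 1.431} \;\approx\; 10^{40.08} \;\approx\; 1.2 \times 10^{40}.
```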
Essentially, in place of pure randomness, Dawkins substituted a hill-climbing algorithm with exactly one peak and with a clear way to improve fitness at any place away from the peak (smooth and increasing gradients all the way!). But where did this fitness landscape come from? Such a fitness landscape exists for any possible target phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins explains the evolution of METHINKS IT IS LIKE A WEASEL in terms of a fitness landscape that with high probability steers evolution to this target phrase. Yet he leaves the fitness landscape itself unexplained. In so doing, he commits a displacement fallacy.
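To make the point concrete, here is a minimal sketch in Python of the kind of cumulative-selection procedure Dawkins describes. This is not his original code, and the population size and mutation rate below are illustrative choices of my own. What matters is the fitness function: it scores candidates by how many characters they share, position by position, with the target, which is exactly where knowledge of the target phrase enters the algorithm.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space: 27 characters

def fitness(candidate):
    # The smuggled-in information: score a candidate by the number of
    # positions at which it already matches the target phrase.
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(parent, rate):
    # Copy the parent, replacing each character with a random one
    # from the alphabet with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(pop_size=100, rate=0.05):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent, rate) for _ in range(pop_size)]
        # Hill climbing: keep the best of the parent and its offspring,
        # so fitness never decreases.
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations

if __name__ == "__main__":
    print("Target reached after", weasel(), "generations")
```

With these settings the algorithm reliably reaches the target in a modest number of generations rather than the roughly 10^40 trials expected of blind sampling, but it does so only because the target was written into the fitness function from the start.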
Displacement is also evident in the work of Dawkins as he shifts from computer simulations to biological evolution. Indeed, his entire book Climbing Mount Improbable can be viewed as an exercise in displacement as applied to biology. In that book, Dawkins compares the emergence of biological complexity to climbing a mountain. He calls it Mount Improbable because if you had to get all the way to the top in one fell swoop (that is, achieve a massive increase in biological complexity all at once), it would be highly improbable. But does Mount Improbable have to be scaled in one leap? Darwinism purports to show how Mount Improbable can be scaled in small incremental steps. Thus, according to Dawkins, Mount Improbable always has a gradual serpentine path leading to the top that can be traversed in baby steps.
But where is the verification for this claim? It could be that Mount Improbable is sheer on all sides, so that getting to the top via baby steps is effectively impossible. Consequently, it is not enough to presuppose that a fitness-increasing sequence of baby steps always connects biological systems. Such a connection must be demonstrated, and to date it has not been, as Michael Behe's work on irreducible complexity shows. But even if such a connection could be demonstrated, what would this say about the conditions for the formation of Mount Improbable in the first place?
Mountains, after all, do not magically materialize -- they have to be formed by some process of mountain formation. Of all the different ways Mount Improbable might have emerged, how many are sheer so that no gradual path to the summit exists? And how many do allow a gradual path to the summit? A Mount Improbable with gradual paths to the top may itself be improbable. Dawkins simply assumes that Mount Improbable must be such as to facilitate Darwinian evolution. But in so doing, he commits a displacement fallacy, presupposing what must be explained and justified, and thus illicitly turning a problem into its own solution.
In the evolutionary computing literature, examples of displacement more sophisticated than Dawkins's WEASEL can readily be found. But the same question-begging displacement fallacy underlies all of them. The most widely publicized instance appeared in Nature back in 2003. Richard Lenski, Charles Ofria, Robert Pennock, and Christoph Adami had developed a computer simulation called Avida. They claimed that this simulation was able to create complex Boolean operators without any special input or knowledge. One of the co-authors, Pennock, then went further and claimed that Avida decisively refuted Michael Behe's work on irreducible complexity. And given that irreducible complexity is a linchpin of intelligent design, Pennock in effect claimed that Avida had also refuted intelligent design.
But in fact, as Winston Ewert and George Montañez showed by tracking the information flow through Avida, the amount of information outputted through newly formed complex Boolean operators never exceeded the amount of information inputted. Indeed, Avida was jury-rigged to produce the very complexity it claimed to produce for free: it rewarded ever-increasing complexity simply for complexity's sake and not for independent functional reasons. Other examples, such as Thomas Schneider's ev, Thomas Ray's Tierra, and David Thomas's Steiner tree search algorithm, all followed the same pattern. Ewert and Montañez were able to show precisely where the information supposedly created from scratch in these algorithms had in fact been embedded from the outset. Displacement, as their research showed, is pervasive in this literature.
The empirical work of showing displacement in these computer simulations set the stage for the theoretical work on conservation of information. These simulations, and their consistent failure to explain the origin of information, prompted an investigation into the precise numerical relation between information inputted and information outputted. Showing displacement started out as a case-by-case effort to uncover precisely where information had been smuggled into a computer simulation. Once the mathematics of conservation of information was developed, however, finding exactly where the information was smuggled in became less important: theory stepped in where observation fell short.
Theory guaranteed that the information was smuggled in even if the evolutionary simulations became so byzantine that it was hard to follow their precise information flow. By analogy, if you have a hundred and one letters that must go into a hundred mailboxes, the pigeonhole principle of mathematics guarantees that one of the mailboxes must have more than one letter. Checking this empirically could be arduous if not practically impossible because of the many possible ways that these letters could fill the mailboxes. Theory in this case comes to the rescue, guaranteeing what observation alone cannot.
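Stated as a formula, the counting fact that does the work in this analogy is the pigeonhole principle:

```latex
m \text{ letters placed in } n \text{ mailboxes} \;\Longrightarrow\; \text{some mailbox holds at least } \left\lceil \tfrac{m}{n} \right\rceil \text{ letters}; \qquad \left\lceil \tfrac{101}{100} \right\rceil = 2.
```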
Displacement is a shell game. In a shell game, an operator places a small object, like a pea, under one of three cups and then rapidly shuffles the cups to confuse observers about the object's location. Participants are invited to guess which cup hides the pea, but the game often relies on sleight of hand and misdirection to increase the likelihood that participants guess incorrectly. So long as the game is played fairly, the pea is under one cup and remains under one cup. It cannot magically materialize or dematerialize. The game can become more sophisticated by increasing the number of cups and by the operator moving the cups with greater speed and agility. But an observer who carefully tracks the operator can always determine where the pea started out and where it ended up. The pea here is information. Displacement says that it was always there. Conservation of information provides the underlying mathematics to demonstrate that it was indeed always there.