…By Any Other Name – Why the Concept of Punctuated Equilibrium is More a Matter of Skilled Branding than Scientific Revolution

Branding is typically thought of as a concept whose applicability is largely confined to the world of business and marketing. However, branding can be important even within the cloistered halls of academia. Nowhere is this more obvious than in the remarkable stamina displayed by Stephen Jay Gould and Niles Eldredge’s concept of punctuated equilibrium. The product of the parochial perspective of professional paleontologists, punctuated equilibrium does two things: restates a point evolutionary biologists had been aware of long before Gould and Eldredge made up a catchy new name for it, and leads to some altogether outlandish ideas concerning the nature of selection.

Gould and Eldredge came up with the idea of punctuated equilibrium in the 1970s after observing that the fossil record displayed long periods of stasis, interrupted by relatively rapid sequences of change. Specimens of a single species recovered from rock formations spanning millions of years often display the same basic range of variation. Then, in the blink of a geological eye, observable changes measurably shift the range of variation, suggesting a rather rapid bout of evolution.

This pattern seems to contradict the standard view outlined by Darwinian or “phyletic” gradualism. In this view, evolutionary change occurs at a more or less constant and more or less glacial pace. The transition from bony fish with fins adapted to swimming to bony fish with fins adapted to swimming and crawling occurred as a result of tens of millions of years of steady evolutionary change. This type of thinking seems a natural byproduct of a perspective that holds that evolutionary change occurs as a result of changes in the frequency of genes within a given population produced by random mutations forced through a sieve of selective pressures.

Sequence of tetrapod evolution.

As phrased, it is easy to see why Gould and Eldredge thought the standard view might be flawed. Empirical evidence derived from the fossil record seems to unambiguously contradict the gradualist position. What Gould and Eldredge missed in formulating their ideas is that this view of gradualism is at best a caricature of the modern understanding of evolutionary processes. True, evolution is sometimes – almost dogmatically – viewed as a sluggish process. But evolutionary biologists had become aware of the fact that evolutionary change occurs at varying rates long before Gould and Eldredge put forward the idea of punctuated equilibrium.

Evolutionary change, as understood in the modern synthesis, is a product of four fundamental processes: natural selection, gene flow, genetic drift, and mutation. The rate of evolutionary change varies in light of the frequency and intensity of those processes. Consider for example two populations, whimsically named Population A and Population B. Populations A and B experience the same selective pressures, but Population A lives a rather monastic lifestyle, isolated high in some mountain valley. Population B, on the other hand, is rather promiscuous – its members spend a lot of time mating with members of neighboring populations. According to the principles of the modern synthesis, Population A will evolve more rapidly than Population B because Population A accumulates mutations without the buffering effects of gene flow. In principle, one can tune the four dials of natural selection, gene flow, genetic drift, and mutation up or down for any given hypothetical population and achieve differing rates of evolutionary change.
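
To make the dial-tuning concrete, here is a minimal, purely illustrative Wright-Fisher-style sketch of the Population A / Population B thought experiment. Every parameter value is invented for illustration; nothing is drawn from a real population. The only difference between the two runs is the migration rate, yet the isolated population carries a favored allele toward fixation while the promiscuous one stays pinned near its neighbors' frequency.

```python
import random

def simulate(generations=1500, N=500, s=0.05, u=1e-4, m=0.0, p_migrant=0.1):
    """Toy single-locus Wright-Fisher model; all parameter values are illustrative.

    p is the frequency of a favored allele. Each generation applies selection
    (coefficient s), symmetric mutation (rate u), gene flow (rate m from a
    neighboring pool sitting near p_migrant), and drift (resampling 2N copies).
    """
    p = 0.1  # starting frequency of the favored allele
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))                     # selection
        p = p * (1 - u) + (1 - p) * u                                 # mutation
        p = (1 - m) * p + m * p_migrant                               # gene flow
        p = sum(random.random() < p for _ in range(2 * N)) / (2 * N)  # drift
    return p

random.seed(1)
pop_a = simulate(m=0.0)   # Population A: isolated mountain valley, no gene flow
pop_b = simulate(m=0.25)  # Population B: promiscuous, heavy gene flow
print(f"Population A final allele frequency: {pop_a:.2f}")
print(f"Population B final allele frequency: {pop_b:.2f}")
```

Turning the selection, mutation, migration, and population-size dials up or down in a toy like this produces correspondingly faster or slower change – which is all the modern synthesis ever claimed.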

This is all rather humdrum, boilerplate evolutionary biology. It's dogma today and was at the time Gould and Eldredge came up with the notion of punctuated equilibrium. Indeed, Sewall Wright had laid bare these very principles in his shifting balance theory, formulated some forty-five years prior to the publication of Gould and Eldredge's seminal papers on punctuated equilibrium. Elsewhere, the idea of long term evolutionary stasis had been explored through John Maynard Smith's forays into game theory, resulting in the concept of evolutionarily stable strategies. As elaborated by Dawkins (1976; 1982), evolutionarily stable strategies implicitly involve statistically stagnant gene complexes – and therefore stable populations – because mutations are actively penalized by selection.

Really, a lot of the fuss over Gould and Eldredge’s ideas boils down to marketing. Punctuated equilibrium is a beautifully coined term, at once fluid, memorable, and imbued with the electric hum of scientific novelty. It’s a lot more gratifying to say or write “punctuated equilibrium” than it is to say or write “evolutionary change can occur at a variety of rates depending on the strength of the underlying processes”. Punctuated equilibrium, with its inherent suggestion that generations of Darwinists had gotten things fundamentally wrong, provided excellent fodder for headline-hungry popular periodicals.

This wouldn’t detract too much from Gould and Eldredge’s work, were it not for their clear attempts to paint their ideas as revolutionary. Their 1977 paper even goes so far as to cite Thomas Kuhn’s The Structure of Scientific Revolutions, suggesting they thought their ideas rather more momentous than they appear through the sober lens of retrospection. On balance, what they did was quite beneficial. The by-then banal realization that evolutionary change was not ubiquitously gradual was widely appreciated among evolutionary biologists, but frequently missed by the inexpert. Giving the general concept a name greatly increased its public visibility, spreading the word, as it were, to many who would have otherwise persisted in ignorance.

Phyletic gradualism (top) vs. punctuated equilibrium (bottom)

Gould and Eldredge’s greater sins were to completely abandon the potential for evolutionary change to occur gradually and, more severely, to suggest that species level selection plays a prominent role in shaping long term evolutionary trends. Concerning the former, Gould and Eldredge pointed out that gradualism seemed to demand steady orthogenetic selection, such that traits were more or less guided in a steady direction by consistent – but very small – selection pressures over the course of millions of years. Selection pressures of this kind would be swamped by other factors, so it was unrealistic to presume they were meaningful, especially in light of fossil evidence to the contrary (Dawkins 1982). The problem is that this assessment fails to recognize that gradual evolutionary change can be produced by a mix of countervailing forces. In no scenario is it actually realistic to presume a stable but mild selection pressure is the only force exerted on a biological population. A strong selection gradient may be counteracted by high levels of gene flow or low levels of mutation, resulting in a net rate of evolutionary change identical to what one would expect under conditions of weak selection alone. As Dawkins points out (1976, 1982), much evolution is a result of organisms evolving in response to pressures exerted by other organisms, resulting in evolutionary arms races. The selective dynamics underlying evolutionary arms races are precisely the kind that would produce steady directional change.
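
The arithmetic of countervailing forces is easy to sketch with the standard single-locus approximation (the notation is generic textbook shorthand, not anything Gould, Eldredge, or Dawkins wrote down):

\[
\Delta p \;\approx\; \underbrace{s\,p(1-p)}_{\text{selection}} \;-\; \underbrace{m\,(p - p_m)}_{\text{gene flow}},
\]

where \(p\) is the local frequency of the favored allele, \(s\) the selection coefficient, \(m\) the migration rate, and \(p_m\) the allele frequency among immigrants. A large \(s\) offset by a large \(m\) can yield the same tiny net \(\Delta p\) per generation as a small \(s\) acting alone, so a fossil sequence recording slow, steady change says nothing about whether the underlying selection pressures were mild.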

In making their case against gradualism, Gould and Eldredge also greatly oversimplified the nature of the evidence. Fossils provide an excellent record of long term change, cataloguing the results of evolutionary processes over the course of millions of years. Yet it’s worth remembering the fossil record is primarily one of morphological change in hard tissue. Soft tissues like skin and stomachs and brains are only occasionally preserved in the fossil record. Moreover, behavioral change – surely relevant to any claim about the nature of evolutionary processes – can only be studied indirectly. Gould and Eldredge’s reliance on the fossil record implicitly grants preference to morphology as the only meaningful stage for observing evolutionary change, ignoring the fact that a fossil sequence that shows little change in limb length over the course of millions of years might disguise important changes in soft tissue and, critically, behavior. Put simply, the long term evolutionary stasis Gould and Eldredge saw as a basis for punctuated equilibrium is largely a product of what kinds of information do and do not fossilize.

Which brings us to their gravest sin: the claim that punctuated equilibrium shines light on the fundamental role of species level selection in shaping evolutionary processes. This again seems a product of the parochial perspective of a person who spends most of their time looking at fossils. These are necessarily low resolution records, revealing trends that play out on the scale of tens of thousands to millions of years. Little wonder, then, that the most pronounced signal of evolutionary change in the fossil record will often be that left behind by speciation events. Gould and Eldredge seem to have mistaken the locus of evolution for the locus of selection. Populations evolve, individuals do not. That is precisely what we see in the fossil record. From here, it is easy to slip into the trap of thinking selection is operating on the level of populations.

The problem here is that populations evolve as the result of differential selection operating on either the individuals that comprise the population or, more fundamentally, the individual alleles whose frequency provides the definitional basis of evolutionary change. Evolutionary processes can be abstracted to involve the differential proliferation of replicators, bits of information that have sufficiently high levels of longevity, fidelity, and fecundity to be sensible to selective forces (Hull 2001; Dawkins 1976 & 1982). Candidates for the unit of selection must meet those criteria. A sufficiently short nucleotide sequence passes muster. But does a species?

Species do seem to last a long time, so perhaps we can tick one box in favor of the species-as-replicator position. But it’s difficult to see how a species can display either fidelity or fecundity. Species do not reproduce – the individuals of which a species is comprised do. Whatever fidelity or fecundity is exhibited by a species is a product of processes that occur at the level of individuals. Speciation events are not good candidates for instances of species reproduction, because – by definition – they involve a species changing into something else. In that case fidelity seems compromised. When we turn to fecundity, the argument for species level replication seems just as dubious: parent species don’t sire lots of copies of themselves.

On geological time scales, speciation seems to occur in the blink of an eye. But selection operates on timescales that make most speciation events appear gradual. Even if we grant the already suspicious claim that species can coherently serve as replicators, the fact nonetheless remains that selection operating on replicators with a faster turnover rate will swamp the effects of species level selection (Dawkins 1982). That is, individuals reproduce and introduce novel mutations into the gene pool at the rate of generations. Depending on the species, that can be anywhere from days to decades. Speciation, by comparison, occurs at a relatively glacial pace. Populations become reproductively isolated and evolutionarily distinct on a scale that must be measured in anywhere from millennia to millions of years. The idea that some selective pressure operates on the species as a whole, when all evolutionary change is a product of the differential reproduction of the individuals within that species, is far-fetched at best.
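
A bit of illustrative arithmetic (round numbers only, chosen for convenience) underscores the asymmetry: if a lineage persists for a million years between speciation events and its members breed every ten years, then

\[
\frac{10^{6}\ \text{years per speciation event}}{10\ \text{years per generation}} = 10^{5}\ \text{generations},
\]

meaning individual-level selection gets on the order of a hundred thousand cracks at any given problem for every single opportunity afforded to selection among species.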

As Dawkins points out (1982), species level selection also finds itself tripped up by one of the very arguments Gould and Eldredge leveled against gradualism. Gradualism seems to demand slight but consistent directional selection. We’ve already discussed the problems with this, but consider the idea when turned to species level selection for complex adaptations. A demand is placed not only on directional selection for a trait, but directional selection on many traits that might not be genetically intertwined. Species level selection falls into the same orthogenetic trap Gould and Eldredge had laid for gradualism, but does so far more deeply and devastatingly.

Species level selection is a chimera. Any given instance of speciation marks a point at which all the interesting change has already occurred at the level of individuals and genes. None of which is to say speciation and extinction aren’t evolutionarily important. They most certainly are. Rather, the crucial point is that selection can’t operate on the level of the species because selection pressures can’t make it that far up the chain. By the time a selection pressure becomes sensible at the level of the population or species, it has already been taken care of by adaptations expressed on the level of the individual. If we picture selection as a hierarchical process, the most navigable of selection pressures will never even be sensed by genes. Behavioral plasticity and learning will take care of them. If an organism proves too developmentally inflexible, a beneficial mutation resulting in a slight adaptive advantage (these are produced at the rate of generations) will take care of the problem. By the time a selection pressure makes it to the level of the species, individuals within the population will have had tens of thousands of chances to deal with it, and chances are, they already will have. Populations and species evolve as a result of the aggregate effects of selection on individuals and the genes they carry.

In the final analysis, punctuated equilibrium is a concept well worth keeping. It seems to make the notion that evolution is not merely a steady trudge into the future considerably more digestible by taking the concept embodied by the phrase “evolutionary change can occur at a variety of rates depending on the strength of the underlying processes” and compressing it into a simple, memorable term – “punctuated equilibrium”. But it’s worth remembering that punctuated equilibrium – as formulated by Gould and Eldredge – overstates the case against gradualism, misrepresents the evidence presented by the fossil record, and makes a grossly misleading – and flatly incorrect – argument about the nature of selection. Let’s use punctuated equilibrium to remember that sometimes evolution can happen very fast and discard the rest.


References and Further Reading:

Dawkins, R. 1976. The Selfish Gene. Oxford University Press

Dawkins, R. 1982. The Extended Phenotype. Oxford University Press

Hull, D. 2001. Science and Selection. Cambridge University Press

Gould, S. J. & N. Eldredge. 1993. Punctuated equilibrium comes of age. Nature. 366

Gould, S.J. & N. Eldredge. 1977. Punctuated equilibria: the tempo and mode of evolution reconsidered. Paleobiology 3 (2): 115-151

Kuhn, T. 1962. The Structure of Scientific Revolutions. University of Chicago Press

Maynard Smith, J. & G. R. Price. 1973. The logic of animal conflict. Nature 246 (5427): 15–8.

Wright, S. 1932. The roles of mutation, inbreeding, crossbreeding and selection in evolution. Proceedings of the 6th International Congress of Genetics: 356-366.

Evolutionary Psychology Isn’t Done Evolving Yet

Caricature of Charles Darwin from The Hornet, ca. 1871.

Rejecting evolutionary psychology (EP) is tantamount to rejecting evolution. Or so goes the argument put forward by evolutionary psychologist Glenn Geher in a recent Psychology Today editorial. As Geher writes, there does seem to be some disconnect involved in accepting evolution, on the one hand, and rejecting evolutionary psychology on the other. It’s the sort of about-face that seems dependent on a certain amount of cognitive dissonance. For many – particularly in the humanities and social sciences – this incongruence is probably very often politically or ideologically motivated. Rightly uncomfortable with the sort of late 19th and early 20th century typological thinking – sometimes crudely justified by a slipshod invocation of Darwinian ideas – that contributed to classist and racist social agendas, many rebel against the notion that human behavior is biologically determined.

This is peculiar for a number of reasons. For one, it seems to demand either a rejection of all Darwinian accounts of behavior or a vaguely vitalistic assertion that human behavior is governed by forces distinctly different from those that shape the behavior of other animals. The problem here is that the differences between humans and our animal cousins are largely differences of degree, not of kind. This leaves the task of identifying the point at which an organism becomes a creature ungoverned by fitness-enhancing imperatives a matter of arbitration.

To the extent that Geher is rebutting the position represented by the Standard Social Science Model, I tend to agree with him. For many, the notion that humans are more or less infinitely plastic hasn’t lost its allure. It is, after all, appealing to think that we are born a blank slate. Unfortunately, this is mostly a product of wishful thinking.

For better or worse, humans do share an array of motivations, preferences, and inclinations that are the products of natural selection. Which gets to the more interesting area where Geher might be wrong, or – at least – not entirely right, in his assertion that rejecting evolutionary psychology is equivalent to rejecting evolution. True, the rejection of EP among certain segments of the humanities and social sciences involves a liberal seasoning of cognitive dissonance. But there are also reasons why individuals with an understanding of evolutionary theory and confidence in its ability to unify biological and behavioral phenomena under a single explanatory umbrella might find EP wanting.

On a proximate level, EP seems to fall short of fully explaining the clearly context sensitive expression of human universals, much less the social, ecological, and epigenetic factors that contribute to behavioral diversity. Evolutionary psychology can expose the roots of phenomena like male aggression by pointing to male-male status competition and differential reproductive success (Wrangham & Peterson 1996; Daly & Wilson 1988). But any given case of male violence is contingent upon a variety of environmental factors. In an important sense, placing the explanatory onus on fitness – and therefore the transmission of genetic information – seems to ignore the central dogma of molecular biology. Segments of DNA are transcribed into corresponding strands of RNA, which code for protein synthesis, culminating – in terms of behavior – in the production of hormones like testosterone and cortisol that contribute to patterns of aggression. This is a simplification, but the basic point is this: the chain of causation between genes and behavior is long and complicated, and can only be understood probabilistically. Serious evolutionary psychologists are aware of this, recognizing that behavior unfolds at the interface between environmental and genetic inputs. Yet, by placing their emphasis on behavioral adaptation and, implicitly, the genetic variation underlying traits, evolutionary psychologists give short shrift to other important factors.

The disconnect between biological adaptation and behavior is particularly pronounced in the concept of massive modularity. The massive modularity hypothesis posits that individual behaviors are the product of specialized cognitive algorithms that exist explicitly because they conferred some fitness advantage on members of an ancestral population (Tooby & Cosmides 1992). The actual degree of modularity – if any – exhibited by the human mind is a thorny question, far from being resolved. Here, suffice it to say that I’m skeptical that the modularity of the human mind can be properly described as massive or that modularity is necessary to explain most human behaviors. As a heuristic for thinking about the evolutionary roots of behavior and formulating adaptationist hypotheses, modularity has some utility. But as a firm conceptualization of how the mind actually works, it lacks clear empirical support. Neurologically, there is little evidence for the existence of structures corresponding to the cognitive algorithms suggested by modularity. There is no reason to presume that human universals like cooperation or theory of mind need to be accompanied by a corresponding set of specialized cognitive modules. Furthermore, the notion of massive modularity seems to impose a level of rigidity that defies what we know about human behavioral plasticity and ignores the likely crucial but currently poorly understood influence of epigenetic changes. Evolutionary psychologists have countered this argument with a metaphor involving a globe-trotting, context sensitive jukebox, but the metaphor doesn’t yield any predictions sufficient to distinguish it from alternatives (Ermer et al. 2007).

Much explanatory work can in fact be done without assuming the burden of such a highly specialized cognitive architecture. This is particularly true when it comes down to thinking explicitly about the components of humanity’s evolved psychology. Our remarkable facility with social learning, for instance, is very likely the product of natural selection. In concert with our capacity for language – an evolved trait, for sure, though not necessarily an adaptation – social learning provides a generalized mechanism that has served as a scaffold for behaviors shaped by the accumulation and transmission of social information (Alvard 2003; Sterelny 2003). Examples such as this go a long way toward illustrating one of the primary deficits of the EP program: a broad failure to take into account the multiple scales of information that contribute to the construction of the behavior researchers are trying to explain. The genes we inherit from our parents, present in their germ line because of the role they have typically played in building reproductively successful phenotypes, only partially explain any given trait. Maternal effects and environmentally induced epigenetic changes throughout growth and development are crucial. Some behaviors are likely adaptations in precisely the sense intended by evolutionary psychologists, but others involve the dynamic interaction between ecological and social sources of information – places where the boundary between adaptation and non-adaptive plasticity (i.e. plasticity not explicable in terms of heritable genetic information) gets fuzzy.

Critics of EP have also occasionally charged its proponents with being overly adaptationist, in the pejorative sense of the term outlined by Stephen Jay Gould and Richard Lewontin in their 1979 paper, “The Spandrels of San Marco”. I’m typically sympathetic to the adaptationist perspective and find some of the arguments put forward by Gould and Lewontin less than convincing, but in the case of evolutionary psychology, the central criticism is frequently valid. In an exercise limited only by the bounds of imagination, evolutionary psychologists posit the existence of some cognitive adaptation and then postulate a set of plausible circumstances that would have selected for it among the mobile bands of foragers ancestral to modern humans. The problem here is twofold. First, testing adaptationist hypotheses can be tricky. In the strict, historical sense of the term, for a trait to be an adaptation it must have a genetic component that proliferated because it contributed to a good solution to a given adaptive challenge, such that it conferred higher fitness on its bearers than conspecifics lacking said trait (Sober 1984). Unfortunately, the adaptive challenges that shaped the trait are, by definition, in the past. If environments have been more or less stable between the point at which the trait became a fixed feature of the population and the point at which the trait is observed, this isn’t much of a problem. But if things have changed over the intervening years, researchers are confronted with the issue of adaptive lag – a result of a disparity between extant circumstances and the circumstances that selected for the trait (Laland & Brown 2006; Dawkins 1982). If adaptations are identified by the increased reproductive success they facilitate relative to a specific set of selective pressures and said pressures are no longer at work, empirically demonstrating adaptation can prove difficult. Evolutionary psychologists are thus left with ubiquity and the appearance of “design” (reasonable, because natural selection is the primary force responsible for the appearance of design in biological systems) as criteria for identifying psychological adaptations. These are heuristics that might point researchers in useful directions, but they do not provide unequivocal measures of adaptation.

The second problem extends from the first. Sometimes the distinguishing characteristic of adaptation is the point in time at which the trait in question evolved. Invoking adaptive explanations is most useful when accompanied by an understanding of the conditions that selected for the trait in question. This makes distinguishing between ancestrally derived and uniquely acquired characteristics essential (Thornhill 2007). At some point, many of the features of the general body plan shared by all tetrapods (amphibians, reptiles, mammals, and birds) were probably adaptations. But these homologous traits evolved as adaptations at some point well before the emergence of any of the aforementioned classes of animals. This is an extreme example, but it points to another flaw in certain veins of EP. Consider, for purposes of illustration, the problem of cheater detection. It has been hypothesized that humans should have some ability to detect individuals likely to defect from social contracts, because these individuals represent free-riders imposing costs on the cooperators they’ve duped. In other words, humans should be able to identify cheaters (Cosmides & Tooby 1992). This is quite reasonable, and, I think, probably true. It also might not be an exclusively human trait, because its advantages should be present whenever survival and reproduction depend on participation within a larger social unit. Considering the amount of cooperating humans do with non-kin, cheater detection may be more elaborated in our line, but it ought to be present in other primates as well.

The problem is not whether human psychology and behavior has been shaped by evolutionary processes. It clearly has, so in that sense EP is based on a truism. There are, of course, those who take issue with this. The distinguished anthropologist Marshall Sahlins, for instance, has spilled considerable ink railing against attempts to develop evolutionary explanations for human behavior, adopting the curious tactic of arguing that evolutionary explanations are false by demonstrating an apparent inability to understand any of them.

The real question is whether or not human behavior has been shaped by evolution in the manner conceived by evolutionary psychologists. There isn’t really an unequivocal answer in this regard, but there is plenty of room for skepticism. Though attempts to formulate Darwinian explanations for human behavior date back to at least Darwin himself, a relatively recent proliferation of interest has spawned a number of variously competing and complementary paradigms. This is a good thing, encouraging the kind of discourse that fuels scientific progress. Though EP has deservedly gained some traction, it’s still too early to dismiss all of its detractors as victims of the sort of tortured intellectual gymnastics exemplified by the proponents of the Standard Social Science Model. Serious evolutionists can – and in many cases should – take issue with a number of the claims leveled by EP without running the risk of being dismissed as intellectual charlatans.

Ultimately, what we are trying to explain through the application of a theoretical paradigm like EP is phenotypes. It should be unsurprising, then, that an understanding of the forces that selected for specific adaptations in past environments, the adaptive challenges that shaped them into universal components of human phenotypes, can only teach us so much. This point is enhanced when one recognizes that many behaviors aren’t necessarily explicable in the adaptive sense at the heart of the EP program. Behavior is one of those phenomena for which the most interesting explanations might often reside at the proximate end of the explanatory spectrum. This isn’t an anti-Darwinian position. The neo-Darwinian synthesis provides an exceptionally powerful toolkit for understanding and explaining behavior. Maybe, I’d venture to say, one of the best. But as our understanding of evolutionary processes deepens, it becomes more and more apparent that there is more at work in shaping phenotypes than the genes that proliferated because of their role in building fit phenotypes in ancestral environments.

References:

Alvard, Michael S. 2003. The adaptive nature of culture. Evolutionary Anthropology 12: 136–149.

Cosmides, Leda and John Tooby. 1992. Cognitive Adaptations for Social Exchange. In The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Jerome H. Barkow, Leda Cosmides, & John Tooby eds. Pp. 161-228. New York: Oxford University Press.

Daly, Martin and Margo Wilson. 1988. Homicide. Aldine Transaction.

Dawkins, Richard. 1982. The Extended Phenotype: the Long Reach of the Gene. Oxford University Press: New York

Ermer, Elsa, Leda Cosmides, and John Tooby. 2007. Functional Specialization and the Adaptationist Program. In The Evolution of Mind. Steven W. Gangestad & Jeffry A. Simpson eds. Pp. 153-160. New York: The Guilford Press.

Gould, Stephen Jay and Richard C. Lewontin. 1979. The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. Proceedings of the Royal Society B 205: 581-598.

Laland, Kevin N. & Gillian R. Brown. 2006. Niche construction, human behavior, and the adaptive-lag hypothesis. Evolutionary Anthropology. 15: 95-104

Newcombe, Nora S., Kristin R. Ratliff, Wendy L. Shallcross, and Alexandra Thyman. 2009. Is Cognitive Modularity Necessary in an Evolutionary Account of Development? In Cognitive Biology: Evolutionary and Developmental Perspectives on Mind, Brain, and Behavior. Luca Tommasi, Mary A. Peterson, & Lynn Nadel eds. Pp. 105-126. Cambridge, MA: MIT Press.

Sober, Elliott. 1984. The Nature of Selection. MIT Press.

Sterelny, Kim. 2003. Thought in a Hostile World: The Evolution of Human Cognition. Malden, MA: Blackwell Publishing

Thornhill, Randy. 2007. Comprehensive knowledge of human evolutionary history requires both adaptationism and phylogenetics. In The Evolution of Mind. Steven W. Gangestad & Jeffry A. Simpson eds. Pp. 31-37. New York: The Guilford Press.

Tooby, John and Leda Cosmides. 1992. Psychological foundations of culture. In The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Jerome H. Barkow, Leda Cosmides, & John Tooby eds. Pp. 19-136. New York: Oxford University Press.

Wrangham, Richard and Dale Peterson. 1996. Demonic Males: Apes and the Origins of Human Violence. New York, NY: Houghton Mifflin Company.

Intelligent Design and the Problem of Scientific Demarcation: The Messy Reasons Why Intelligent Design Doesn’t Count

Archaeopteryx siemensii – The Berlin Specimen

Science, in many respects, is a sprawling, messy endeavor. It does, however, have boundaries. The borders of science remain particularly pertinent in a political climate that remains remarkably tolerant of conservative assaults on science and critical thinking. In many states, conservative lawmakers and right-wing school boards have launched sweeping – and, to those with an appreciation for fact-based discourse, revolting – revisionist campaigns, attempting to rewrite science and history curriculums to better match their ideological biases. Identifying which of their crimes against free expression and public education is most heinous is something of an arbitrary affair, but their attempts to inject religion into the biology classroom have been among the most pernicious and persistent. It is for this reason that intelligent design makes a particularly useful fulcrum for analyzing scientific demarcation criteria.

Discussions of what does and does not count as science have been far-ranging and, occasionally, contradictory. Some philosophers and scientists cite a fairly narrow range of criteria as sufficient markers for the border between science and pseudoscience. Others argue that demarcation demands the inclusion of factors relating not only to what counts as scientific explanation, but also to what counts as science as a process, pointing to important elements of the methodology and sociology of science. A few have even argued that demarcation is a pseudo-problem. This latter argument probably goes too far, since demarcation has important practical consequences (e.g. what types of projects should be funded by public money, what types of claims should be invested with the most confidence, and – in particular – the type of confidence that comes with the term knowledge, and what types of information should be taught in public educational institutions). In what follows, I do not attempt to establish a comprehensive, universally satisfying set of necessary and sufficient conditions for something to be included within or excluded from the domain of science. Indeed, I think many of the criteria forwarded by philosophers of science capture at least part of the picture, and that few I have encountered are entirely wrong. Rather, the point here is to discuss demarcation in light of a socially and politically relevant problem: persistent attempts to inject religious dogma into the public science classroom. Using intelligent design (ID) as a fulcrum for analysis, I distinguish between scientific explanation and the process of science, identifying a few important criteria for each. Ultimately, intelligent design falls short of the mark in both respects, but not nearly as cleanly as some might suspect. It is a free-flowing, wide-ranging discussion. It is also, I am quite confident, a balanced one. Though my ultimate conclusion is probably obvious at the outset, I think following the argument through its entire course is worthwhile.

The Problem of Demarcation

Superficially, establishing a set of clear and simple criteria for what does and does not count as science seems like a fairly straightforward proposal. Science, as a relatively recent addition to the human social repertoire, is commonly viewed as distinct from other activities. It employs unique methods that produce original and robust answers to questions relating to the nature of reality. Additionally, the answers produced by science are often freighted with a level of confidence above and beyond those forwarded by perspectives that fail to measure up to the rigorous standards of scientific practice. For many, a scientific perspective is considered a marker of rational equanimity of thought and scientific explanations are considered the pinnacle of reliability in the otherwise precarious and erratic arena of human knowledge. Indeed, the peculiar nature of scientific claims earns them a well deserved place of privilege in the social and political discourse of some nations, such that science is considered a reasonable guide for a variety of policies, a standard for what types of endeavors are suitable targets for governmental investment, and what types of explanations are considered appropriate for discussion in publicly financed educational institutions. More to the point, scientific explanations are probably the closest humanity comes to determining many important truths about the nature of reality.

It is with regard to questions of what topics do and do not count as suitable targets of scientific education that problems of demarcation stubbornly recur as an issue of broad social import in the United States. Citing presumed deficits in the notion of biological evolution as expressed in the neo-Darwinian synthesis, some have forwarded the notion of intelligent design as a plausible alternative solution to questions pertaining to the origins and diversity of life. Proponents of this view would like to see it injected into the public education system as a means of infusing science curriculums with a measure of balance they might otherwise lack. Critics justifiably charge that intelligent design is merely religious dogma rebranded and refashioned to mimic scientific practice. In this view, it has no place in science education because its religious roots render its inclusion therein a violation of the Establishment Clause of the United States Constitution, which has been consistently (and, I think, accurately) interpreted as a prohibition against the preferential treatment of any system of religious belief with respect to the function of the federal government. Of course, beyond the issue of strict legality is a question of whether or not the claims of intelligent design have the same explanatory merit as those included within the neo-Darwinian synthesis. At issue here is not only whether or not the framework is legally admissible, but whether or not the framework is a useful guide for understanding the aspects of the structure of reality that well corroborated scientific theories are thought to address.

The problem is that establishing clear and consistent demarcation criteria for what does and does not fall within the bounds of science has proven signally thorny. Colloquially, science is regarded as a process of rational discovery, wherein researchers follow the evidence of empirical observation to whatever conclusions it might lead. This picture captures something of the spirit of the thing, but fails to fully articulate what science actually is and how it actually operates in practice. In testifying on behalf of the plaintiffs in the 1982 case of McLean v. Arkansas and defending the subsequent decision to reject the inclusion of Creationism (or “creation science” – the progenitor of intelligent design) in public school science curriculum, Michael Ruse outlined five criteria that could be used to distinguish science (e.g. Darwinian evolution) from religion and/or pseudo-science (creation science) (1,2). According to Ruse, science is:

  1. guided by natural law
  2. explanatory by reference to natural law
  3. testable against the empirical world
  4. tentative in its conclusions
  5. falsifiable

Held against these criteria, Ruse argued, Creationism was exposed as poorly disguised religious dogma.

Though admirably clear and succinct, Ruse’s demarcation criteria are imperfect. Larry Laudan criticized them as simultaneously misconstruing the nature of the process of scientific discovery and setting a bar for admission so low as to be almost meaningless (3). Creation-science made claims that were both testable and falsifiable, satisfying conditions 3 and 5. For instance, a literal reading of the Biblical Noah story suggests a number of hypotheses about the kinds of evidence we can expect as a result of a massive global flood. Comparison between said claims and observable reality demonstrates that, as a scientific explanation, Creationism is indisputably false. Being wrong, however, is not the same as being unscientific. Relative to conditions 1, 2, and 4, the problem is not whether Creationism meets those criteria (for the most part, it does not), but whether what is commonly (perhaps dogmatically) accepted as science does so either. Much of what counts as science has little to do with the invocation of explanatory laws. Newtonian mechanics, for instance, seems perfectly capable of capturing and predicting the relationships between massive objects, yet offers little in the way of an explanation for why those relationships hold. Similarly, though eschewing dogma is considered crucial to the scientific enterprise, the reality of scientific practice often tells a different story. One particularly influential philosopher of science, Thomas Kuhn, went so far as to argue that dogma is a critical component of scientific pedagogy (4), pointing out that students of any particular discipline are rarely encouraged to directly question the predominant explanatory paradigm of their field and the decisive evidence thought to justify it. With regard to Ruse’s fourth criterion, tentativeness, it is also true that the proponents of Creationism have not proven entirely intransigent, allowing their framework to morph into the more sophisticated framework of intelligent design.

Clearly, Ruse’s criteria fail to set up reliable guidelines for distinguishing between science and pseudo-science. To be clear, components of his framework seem to accurately characterize aspects of the scientific process. The problem is that they do not outline the entire corpus of necessary and sufficient conditions for inclusion within (or exclusion from) the genus of ideas and investigative practices appropriately accepted as scientific. It is at this point perhaps useful to make a distinction between the scientific process and scientific explanations. These are, of course, overlapping domains. However, the possibility (however remote) for something to match any hypothetical set of criteria characterizing scientific explanations absent the work associated with the process of scientific discovery – that is, to be a scientific explanation by pure coincidence – makes the distinction worthwhile. Similarly, something might well resemble the process of science without really going about producing any actual scientific explanations. Scientific explanations are what the process of science is meant to produce as output. In other words, scientific explanations are the ends of science, while the scientific process is the means of getting there. Science, then, must be defined with respect to certain ends (the production of scientific explanations) and the specific ways in which it goes about achieving them.

Given this, there are two facets to the question of whether or not intelligent design is science. The first asks, “does it produce scientific explanations?” Does it share the same goals as other scientific endeavors and is its success in attaining those goals evaluated by the same criteria? The second relates to how closely the process by which those explanations are produced matches the processes by which scientific explanations are produced.

Intelligent Design as Scientific Explanation

That a theory be falsifiable is a frequently cited criterion for its acceptance as properly scientific. As advocated by Karl Popper, falsificationism relates to both scientific explanations and the process of scientific discovery (5). A valid scientific explanation should be falsifiable, and researchers engaged in the work of science should go about trying to falsify it. By suggesting that researchers should go about making bold conjectures and subjecting them to observational and/or experimental tests that stand a reasonable chance of turning up contradictory evidence, Popper sought to overcome Hume’s longstanding “problem of induction”. Briefly put, the problem is this: strictly speaking, scientific theories can never be proven by corroborating evidence, no matter how plentiful. There is a neat Bayesian proof of this; a full treatment exceeds the scope of the present essay, but the gist is sketched below. Suffice it to say that the kind of evidence uncovered by individual experiments has limited extension, which is a big part of why the results of t-tests are interpreted in the rather torturous manner they are. Properly understood, positive evidence for a given hypothesis only demonstrates that, under a constrained set of circumstances, the hypothesis has a probability of being false below a certain threshold. Consequently, the most prudent course of action is to adopt a deductive approach and go about attempting to disprove hypotheses. No matter how many instances of positive evidence a researcher accrues, they can never be entirely certain that a given theory holds always and everywhere – that is, that it is an accurate description of any actual aspect of reality. However, a recalcitrant instance of disconfirmation can give us a lot of confidence about what is not true of the nature of reality. In short, opening up the potential for scientific claims (at least those pertaining to what is not true about reality) to be couched in the certainty of deductive disconfirmation eliminates the difficulties associated with inductive reasoning. Unfortunately, the actual practice of science tends to deviate from Popper’s prescriptions. That a given claim be falsifiable is a necessary condition for it to be accepted as scientific, and falsifiability is a pretty strong candidate for a demarcation criterion, but – as Kuhn pointed out – the search for falsification does not really characterize much (if not most) of the work scientists actually do (6). Scientific ideas should absolutely be disprovable, but scientists do not typically go about hunting for evidence that contradicts the paradigm in which they work. As a result, falsifiability is a necessary, but far from sufficient, criterion for scientific demarcation.
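
The gist of that Bayesian point, in generic notation, is simply that confirming evidence can push the probability of a hypothesis toward 1 but never to it:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} < 1 \quad \text{whenever } P(E \mid \neg H)\,P(\neg H) > 0,
\]

whereas an observation genuinely incompatible with \(H\) – one for which \(P(E \mid H) = 0\) – drives \(P(H \mid E)\) to zero outright. Confirmation is forever partial; disconfirmation can be decisive.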

On that note, it is difficult to fault the proponents of intelligent design for not putting a lot of apparent effort into finding out if their ideas are actually true – that is, an accurate description of some feature of observable reality by virtue of its ability to withstand frequent attempts at disconfirmation. Still, the question remains: is intelligent design, as an explanation, falsifiable? To answer that requires reference to what seems to be the central dogma of intelligent design – that certain biological systems and structures (eyes, immune systems, flagellar motors, etc.) are “irreducibly complex”. This is meant to convey the notion that, absent any component of their current structure, they would cease to function entirely and that there is no natural process that could build them. It is essentially a rejection, based largely on either a misunderstanding of, or an inability to understand, the incremental process of descent with modification.

Superficially, irreducible complexity seems like not only a falsifiable claim, but a falsified one. A human eye, for instance, might not work very well if some malicious entity were to pluck out its lens. Consequently, an eye without one would not be very useful. However, the lens-free eye (or insert whatever component of whatever structure or system you desire) is something of a straw man. All that is required for an eye to evolve through the natural selection of blindly generated genetic mutations is that each modification on the road to any given modern eye (there are many) confer on its recipient some competitive advantage. From photoreceptor cells to direction-sensitive eye cups, on up to any modern eye, all that is required is that each new feature – each subtle modification of the pre-existing structure – increase the fitness, or be associated with increased fitness, of its bearer. Eyes and flagellar motors need not appear all at once, fully formed and functional. Darwinian arguments only hold that each step along the long chain from no eyes and no flagellar motors to the modern manifestations thereof needs to confer some utility above and beyond that of extant, competing alternatives. As with creationism, it seems intelligent design is not an unscientific explanation – merely a wrong one.

This, however, seems a little too permissive. Irreducible complexity would count as a testable (and therefore falsifiable) empirical hypothesis if there were some stable, coherent, rigorously defined criteria for identifying it. Unfortunately, what does and does not count as irreducibly complex is arbitrary. It relates not to what can and cannot be reduced to a simpler functional form by some reverse-engineered process of evolution by natural selection, but to what a given human can and cannot conceive of as irreducibly complex. Irreducible complexity refers to the shortcomings of the individual researcher’s conceptual repertoire, not a distinct feature of the natural world. Consequently, it is difficult to see how the claim of irreducible complexity is testable and, more specifically, falsifiable. A. C. Grayling characterized debating the religious as boxing with jelly (7). So it is with falsifying irreducible complexity: because of its extreme malleability, it is difficult – if not entirely impossible – to disprove. Its information content and the sorts of expectations that can be derived from it are limited only by the imaginations of its proponents.

The same problem holds for the broader notion of an intelligent designer. Proponents of ID have been careful not to specify the nature of the agent holding the cosmic reins, possibly (read: probably) because they do not want to burden their idea with the sort of specific dogma that would render it inadmissible as a component of public school curriculum. The intelligent designer might be Yahweh, Zeus, advanced extraterrestrial beings, or any other entity capable of performing the necessary manipulations. Here, the problem is that the definition of the agent at work is not sufficiently circumscribed for consequences of its actions to be amenable to observational testing. Irreducible complexity is thought to be just such a consequence, but it too is unwieldy. As discussed, when something appears to be irreducibly complex, one might just as well ask why it is humans can’t conceive of the processes that formed it, rather than assume it must have been made by something other than undirected natural processes. The need here is not for scientists to be able to observe the designer directly, but for some observation to be expected as an inevitable consequence of its intervention. Prediction is a commonly cited feature of scientific explanations, included – for instance – in Carl Hempel’s deductive-nomological model (8) and James Woodward’s discussion of the manipulability conception of causal explanation (9). If the intelligent designer is going to do any explanatory work, ideas about its nature must at the very least be defined with a specificity adequate for the generation of predictions. Absent this, the intelligent designer has roughly the same explanatory merit as Hans Driesch’s entelechy (10). Which is to say that an intelligent designer is, in light of current definitions, entirely superfluous.

Intelligent Design and the Process of Scientific Discovery

Thomas Kuhn criticized Popper’s suggestion that falsificationism represented a criterion sufficient for marking the boundary between science and pseudo-science. According to Kuhn, falsification only becomes important during periods of the sort of extraordinary research that often precipitates the adoption of new paradigms (6). Falsification, as a component of scientific practice, stems from the accumulation of recalcitrant problems and only really results in theory change in the presence of an alternative capable of overcoming some of those problems. Kuhn argued that the day-to-day process of scientific discovery was a more humdrum affair, characterized by a community of researchers working to solve rudimentary puzzles and bring theory and observation into closer harmony. Guided by a set of shared standards for solving scientific puzzles and evaluating their solutions (paradigms, in the disciplinary matrix sense of the term), scientists spend most of their time using an established theoretical framework to explain phenomena. During these periods of normal science, failures to explain observational evidence or experimental results are typically interpreted as errors on the part of the researcher, not the theory they are attempting to employ.

Kuhn’s more extreme views on the incommensurability of alternative theories aside, there is much to appreciate here. As with any attempt to capture the essential components of the scientific process, Kuhn’s formulation is incomplete, but he does capture something essential – science, in some way or another, seems to involve the behavior of a community working to accomplish a common goal in accordance with a set of widely shared standards for success and failure. The scientific process is the means by which scientific explanations are produced and tested. It is a dynamic social activity, a point made by Kuhn and echoed – to varying degrees – by Imre Lakatos and Paul Thagard in their respective attempts to formulate useful and accurate demarcation criteria. Lakatos argued that science is denoted by a progressive research program, guiding the investigations of researchers and leading to the discovery of novel facts about the nature of reality (11). Thagard reiterated the need for science to be a progressive process, bringing explanations and reality into ever tighter accord (12). Both also pointed to something critical – the need for scientific theories to be evaluated not only in terms of their adequacy in explaining or predicting a given set of observations, but in terms of their relative adequacy in light of alternatives.

To be clear, highlighting the importance of the social structure of scientific communities in describing the process of scientific discovery should not be interpreted as an undervaluation of the role of empirical evidence. Empirical evidence is absolutely critical, and represents a core component of the standards used to determine the veracity of scientific claims. Nevertheless, the reality from which empirical evidence is derived is impotent in terms of explanatory and predictive content absent a community of researchers with shared values for evaluating competing claims about the nature of reality. Presumably, reality was comprised of and unfolded in accordance with the same underlying processes well before the advent of anything resembling science. What’s new – and what makes science a distinct knowledge-gaining activity – is the way in which a community of individuals relates to that reality. An accurate – if broad – definition of the process of scientific discovery might read as follows: Science, as a process, is characterized by a community of variously competing and cooperating researchers working to sculpt enhanced understandings of reality that are evaluated in accordance with a set of shared values concerning what counts as success and failure.

In this sense, science itself might be fruitfully cast as a process that is in important respects analogous to Darwinian evolution. This is an idea Kuhn, Lakatos, and Thagard hinted at (albeit in somewhat different forms) in the emphasis they placed on the role of alternative accounts in shaping scientific progress. It was more or less directly stated by Bas C. van Fraassen (13) and has been significantly elaborated by the philosopher of biology and science David Hull (14,15). In this view, scientific communities are conceived of as representing Darwinian populations. Individuals within a given community share significantly overlapping views about how to explain the phenomena of interest, including an overarching explanatory framework and notions about how to refine, test, and extend that framework. However, their ideas in this regard are not identical – there is variation within the population of scientists with respect to how to interpret a given theory and wield it as an explanatory tool. Scientific explanations are produced as a product of competition and cooperation among the individuals and different interpretations that constitute a given scientific community. External reality – through the mechanisms associated with competition and cooperation relative to shared standards of evaluation like accuracy, coherence, and parsimony – sculpts the population and the explanations it produces. As Kuhn pointed out in his seminal work, The Structure of Scientific Revolutions, individual scientists can be dogmatic and intransigent (16). Fortunately, for science to function as a process, individual scientists do not need to undergo much conceptual evolution. Certainly it would be nice if they did, but it isn’t necessary for science to experience progress. Over time, the composition of scientific communities changes, allowing the population as a whole to experience the sort of conceptual evolution necessary for science to work as a process for uncovering more and more information (or inventing increasingly useful accounts) about the nature of reality.

This, of course, is not a comprehensive treatment of what features are necessary to distinguish a scientific process from a pseudoscientific one. Nevertheless, it should provide a sufficient foundation from which to address the question of whether or not the advocates of intelligent design are engaged in something akin to a scientific process. Relative to the criteria discussed above, it is possible to identify four pertinent questions:

  1. Is there a community with a shared goal?
  2. Are the members of said community engaged in progressive research?
  3. Do the members of said community vary in their ideas concerning the interpretation and application of their governing paradigm?
  4. Do they share criteria for evaluating success or failure in achieving the shared goal (question 1) that are commensurate with the broadly recognized and accepted criteria of science?

A complete and accurate answer to these questions would require a considerable amount of what I shall term, for lack of a better word, ethnographic research within the intelligent design community. The closest thing I am aware of to this type of work is Jason Rosenhouse’s book Among the Creationists (17). Based on Rosenhouse’s account of his considerable time spent interacting with Creationists and Intelligent Design advocates at conferences built around their claims, it seems fair to answer question one in the affirmative. There is certainly a community, and its members do seem to share a common goal: destabilizing Darwinian explanations for the origins and diversity of life.

An answer to question two is somewhat more equivocal. On the one hand, it might be generously granted that the proponents of intelligent design do sometimes carry out research. On the other, it hardly seems appropriate to characterize this research as progressive. For the most part, the advocates of intelligent design seem to spend their time concocting negative evidence against evolution by natural selection, pointing to this or that feature of the natural world as something purportedly inexplicable within the Darwinian framework. They have not, however, generated anything that might even approximate a suitable alternative that could offer a potential corrective by which Darwinism might overcome its apparent flaws, much less a suitable candidate for its wholesale replacement. In short, they have not forwarded a way by which intelligent design might actually improve humanity’s understanding of the relevant facts.

For question three, I am willing to move beyond a partial yes and grant that there is variation within the intelligent design community concerning the way in which the idea should be interpreted and applied. According to Rosenhouse, there are some who recognize that, as a scientific explanation, intelligent design lacks merit, and others who appear quite content with the business of trying to sap Darwinian defenses.

Finally, there is the question of whether or not the members of the intelligent design community share criteria for evaluating success or failure that are commensurate with those employed in established sciences. Here, any answer must be considered far more tenuous. Still, the persistence of intelligent design despite its obvious explanatory stagnation suggests that the appropriate answer is probably negative. Irreducible complexity, insofar as it might be rather indulgently granted status as a falsifiable claim, has been resoundingly disproven, yet many proponents of intelligent design continue to tout it as one of their strongest arguments. In this, they do not seem to be engaged in a process that can be appropriately characterized as scientific – if the community shared criteria for evaluating claims against observational and experimental results, the population should have responded accordingly and abandoned the notion of irreducible complexity to the monumentally heaping dustbin of failed ideas. Though it is difficult to identify precisely when intelligent design emerged from the Creationist movement, it is fair to say the basic argument has been around since the mid-1980s – shortly after the judge adjudicating the McLean v. Arkansas dispute issued his decision. Thirty years (give or take) seems like plenty of time for the population to come around to the idea that the irreducible complexity argument does not work, provided they are willing to employ the same evaluative criteria used by scientists. By contrast, the evolutionary community had largely come around to the idea that genes/individuals (as opposed to groups or species) were the targets of natural selection by the early 1980s – around fifteen years (give or take) after W. D. Hamilton (18) and George C. Williams (19) had published the landmark works that facilitated the shift in outlook.

The Status of Intelligent Design: Science or Pseudoscience?

It might once have been reasonable to grant the intelligent design movement status as something of an incipient science. This, of course, would demand turning an assiduously blind eye to the notion’s obviously religious roots. But as Lakatos argued, it is probably prudent to allow emerging theories some leniency with regard to putative instances of falsification or failures to produce explanations that meet the criteria of science (11). Precisely how much wiggle-room a new idea should receive, and how long it should be permitted, is unclear, but it seems reasonable to draw the line somewhere. If intelligent design has not crossed that line yet (again, an exceptionally generous allowance), it is probably hovering somewhere very close to it. As an explanatory framework, its failure is unequivocal. Irreducible complexity – its most promising candidate for generating empirical hypotheses – has, if taken seriously, been repeatedly refuted. However, it is not entirely clear that irreducible complexity even has the coherence and stability necessary to be subjected to empirical evaluation. It seems more like an artifact of the intelligent design advocate’s inability to comprehend Darwinian explanations and apply the criteria of scientific evaluation than a reflection of something that actually exists in reality. The same is true of the central notion of an intelligent designer. This entity has not been defined rigorously enough to yield a clear picture of the consequences we should expect to observe were such an entity involved in the processes underlying the origins and diversity of life.

Answering whether or not the proponents of intelligent design are engaged in the sort of process that might produce a valid scientific explanation is somewhat more ambiguous. Some of the necessary features are there, others are not. There is a community and they do seem to share a common goal. Furthermore, members of that community vary in the way they think intelligent design ought to be interpreted and applied – there is fuel for the sorts of competition that underwrite progressive research. But at the same time, the intelligent design community does not seem to be actually engaged in progressive research. Additionally, there is reason to doubt that they take into account the criteria used to judge scientific success or failure, as illustrated by the persistent advocacy of the irreducible complexity hypothesis (again, provided we are charitable enough to grant it status as an actual scientific hypothesis). On this note, it might be fair to say that, if the advocates of intelligent design are engaged in something like a process of scientific discovery, it is a very peculiar one.

Divorced from its obsession with uncovering negative evidence against the neo-Darwinian picture of biological evolution, intelligent design shares a goal roughly commensurable with that of Darwinism. This, of course, is the desire to explain the origins and diversity of life. Superficially, then, intelligent design might have a place within an existing process of discovery – an actual alternative to Darwinian ideas. That is not to suggest it is a good – or even promising – scientific explanation of any aspect of the biological world. Rather, it is to suggest that even an idea apparently doomed to abject failure has a potential place in the process of scientific discovery. That a theory is not true is not a particularly useful criterion for exclusion from the scientific process – after all, it is hard to make a reliable statement about its veracity one way or another without it having first been subjected to the scrutiny entailed by the scientific process. The history of scientific discovery is littered with far more failed ideas than successful ones, and the scientific process is sometimes described as self-correcting, weeding out faulty ideas as a natural output of its proper function. Intelligent design’s persistence could thus be thought to relate to its exclusion from the process of scientific discovery. However, there is an issue of epistemological incommensurability (to employ some fancy philosophical lingo) that renders such a merger illusory. The advocates of intelligent design may pay lip-service to the rigorous means by which scientific ideas are evaluated, but their actions tell a different story.

Establishing a set of necessary and sufficient conditions for what does and does not count as science is exceedingly difficult. If a definition is too restrictive, it might omit things that are widely thought to count as science. If it is too permissive, it renders the demarcation problem meaningless. In this regard, the concept of methodological naturalism has been forwarded as a minimum standard for inclusion within the genus of ideas that can be properly viewed as scientific. Contrary to positions of ontological naturalism, methodological naturalism is not a comprehensive statement about what types of things exist in the universe. It is a practical recognition of what types of things are amenable to scientific investigation: things that are either directly observable as matter and energy or subject to inference through the consequences of their interaction with observable matter and energy. These are the only sorts of things that have produced scientific explanations in the past. Furthermore, they provide one of the core selective criteria that guide the process of scientific discovery. There are further issues to discuss with methodological naturalism – to insert it as a footnote at the end of a lengthy essay does the idea a disservice, a problem I intend to remedy at a later date. However, it does highlight a core difficulty with the intelligent design framework. As long as intelligent design insists on positing the existence of a governing agent without elaborating on ways in which that agent might be observed, it trespasses the bounds of science and strays into the realm of pseudoscience.

With respect to public education, this is an issue that transcends the narrow scope of legal propriety. In that respect, the answer is clear: intelligent design, like all other religious doctrine, has absolutely no place in the science classroom. Religion, as an important component of the social landscape, should not be ignored. It has important consequences for the way people behave and for the development of disparate cultures. In this sense I’m sympathetic to Daniel Dennett’s idea that the proper place of religion within the public school curriculum is as a component of some kind of humanities course that covers all religions equally, discussing their core doctrines and historical importance with absolutely no reference to their truth or falsity.

But as an explanation for how the world works, religious ideas have no place in the educational standards we should esteem within our society. Scientific ideas have a special place in this regard because they are, unequivocally, the closest we have ever come to saying what is and is not true of reality. They are the reason we enjoy the rich host of technologies that populate the modern landscape, the reason medical science has pushed the average age at death back by decades, the reason humanity was able to place a member of its species on the moon. Science is the method by which we have achieved a glimpse, however fragmentary, into the endlessly astounding and awe-inspiring tapestry of laws and processes that comprise the fabric of reality. As a result, it has been afforded a richly deserved place of privilege in modern society. It has, in many respects, usurped religion’s place as a guide for discovering our place in the cosmos. For many, this has been – and continues to be – a hard pill to swallow. Their hurt feelings and unease should not be allowed to hamper progress. The world is a far more marvelous place absent ancient myths and superstitions, however ephemerally comforting they may be.

Notes and References:

  1. Michael Ruse. “Creation-Science Is Not Science.” Science, Technology, and Human Values 7, no. 40 (Summer 1982): 72–78.
  2. Robert T. Pennock. “Can’t Philosophers Tell the Difference between Science and Religion? Demarcation Revisited.” Synthese 178 (2011): 177–206.
  3. Larry Laudan. “Commentary: Science at the Bar – Causes for Concern.” Science, Technology, and Human Values 7, no. 41 (Fall 1982): 16–19.
  4. Thomas Kuhn. “The Function of Dogma in Scientific Research.” In A. C. Crombie ed. Scientific Change (Symposium on the History of Science, University of Oxford, 9–15 July 1961). (New York and London: Basic Books and Heinemann, 1963) pp. 347–369.
  5. Karl Popper. Conjectures and Refutations. (London: Routledge & Kegan Paul, 1963).
  6. Thomas Kuhn. “Logic of Discovery or Psychology of Research?” In M. Curd, J. A. Cover, & C. Pincock eds. Philosophy of Science: The Central Issues. (New York, NY: W. W. Norton & Company, 2013) pp. 11–19.
  7. A. C. Grayling. The God Argument: The Case Against Religion and for Humanism. (New York, NY: Bloomsbury, 2013).
  8. Carl G. Hempel. “Explanation in Science and History.” In R. G. Colodny ed. Frontiers of Science and Philosophy. (London and Pittsburgh: Allen and Unwin and University of Pittsburgh Press, 1962).
  9. James Woodward. Making Things Happen: A Theory of Causal Explanation. (New York: Oxford University Press, 2003).
  10. Rudolf Carnap. “The Value of Laws: Explanation and Prediction.” In M. Curd, J. A. Cover, & C. Pincock eds. Philosophy of Science: The Central Issues. (New York, NY: W. W. Norton & Company, 2013) pp. 651–656.
  11. Imre Lakatos. “Science and Pseudoscience.” Philosophical Papers, vol. 1. (Cambridge: Cambridge University Press, 1977).
  12. Paul R. Thagard. “Why Astrology Is a Pseudoscience.” In P. Asquith and I. Hacking eds. Proceedings of the Philosophy of Science Association, vol. 1. (East Lansing, MI: Philosophy of Science Association, 1978).
  13. Bas C. van Fraassen. The Scientific Image. (Oxford: Clarendon Press, 1980).
  14. David Hull. Science as a Process. (Chicago, IL: The University of Chicago Press, 1988).
  15. David Hull. Science and Selection. (Cambridge: Cambridge University Press, 2001).
  16. Thomas Kuhn. The Structure of Scientific Revolutions: 50th Anniversary Edition. (Chicago: The University of Chicago Press, 2012).
  17. Jason Rosenhouse. Among the Creationists: Dispatches from the Anti-Evolutionist Front Line. (Oxford: Oxford University Press, 2012).
  18. W. D. Hamilton. “The Genetical Evolution of Social Behaviour I & II.” Journal of Theoretical Biology 7, no. 1 (1964): 1–52.
  19. George C. Williams. Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought. (Princeton, NJ: Princeton University Press, 1966).

I originally wrote this essay for another purpose, but liked it enough that I thought I might rework it into a blog post to share with the world (read: the handful of people who might actually read this post).