Read This Book, Dammit: The Secret of Our Success

In the late 1970s and early 1980s, a new program of research began to emerge in the study of human culture and behavior. Building on pre-existing tools from population genetics and evolutionary biology, researchers like Luigi Cavalli-Sforza, Marcus Feldman, Robert Boyd, and Peter Richerson began to construct a theory of cultural evolution rooted in Darwinian principles. They showed that attention to the functional roots of culture could be couched in a larger framework capable of explaining both the nature of culture and the processes behind cultural change.

The notion that cultures evolve was hardly new. Archaeologists and anthropologists had been working under that assumption for a few decades, striving to refine their theoretical and methodological approaches into the roots of a mature, rigorous science. Ultimately, these efforts yielded a framework that used a thoroughly Darwinian lexicon – adaptation, selection, evolution – in only loosely Darwinian ways. Researchers developed a focus on local ecological specialization – a useful step forward – but frequently situated their insights in a framework that was both incomplete and inconsistent. The recognition that cultural change was not only evolutionary, but sensibly Darwinian, provided the tools necessary to build formal – and testable – explanations of cultural phenomena.

In the years since Cavalli-Sforza, Feldman, Boyd, and Richerson laid down their pioneering work, the theory of cultural evolution as a Darwinian process – capable of both causing and responding to new patterns of biological evolution – has been consistently vindicated, demonstrating its utility in the lab and field. Joseph Henrich’s book, The Secret of Our Success, is a thrilling exploration of the frontiers of that research. Henrich makes a strong case that underlying humanity’s broad ecological success and expansive behavioral repertoire is our faculty for creating, transmitting, manipulating, storing, and accumulating massive amounts of non-genetic information in the form of culture.

Cumulative cultural evolution, as it is called, is unique to humans (putting aside emerging evidence for simple forms in New Caledonian crows – the difference in degree is large enough that we might as well call it a difference of kind). Other species have cultural traditions – those crows, for instance, make tools, as do chimpanzees – but none of them retain that information and build on it in any meaningful way over successive generations. The techniques individual chimps learn for termite fishing or nut-cracking are lost at death, inevitably hung up on the barriers that inhibit the transmission of all the other traits organisms acquire throughout their lifetimes.

In technical parlance, this is called Weismann’s barrier. Put simply, it means that inheritance is a one-way street – information moves from germ (sex) cells to somatic (body/tissue) cells, but not the other way around. A chimp might learn a great deal about how to use tools to access otherwise inaccessible resources throughout its lifetime, but it has no way to get that information from the neurons in its brain to the eggs or sperm in its reproductive system.

Somewhere in the hominid line, our ancestors found a way around that obstacle, sidestepping the whole business of one-way genetic transmission by transmitting a lifetime’s worth of acquired information from individual to individual in the form of culture. The foundations of this remarkable evolutionary transition rest in humanity’s spectacular facility for social learning. Other species are, of course, capable of social learning, but these abilities are vastly enhanced in humans. We pay far more attention to each other than other animals do, selectively targeting individuals that exhibit signals of above-average proficiency or expertise. In the same vein, we have a highly developed theory of mind (the ability to think about what other people are thinking) allowing us to understand each other in terms of intentionality and purpose. It has even been suggested that our unusually small iris, set against a very white sclera, is an adaptation for non-verbal communication – making it easier for us to keep track of other people’s attention and communicate our own.

Critically, growing evidence indicates our social learning expertise – unlike many other forms of human knowledge and behavior – is innate. Henrich discusses experiments in which children were matched against adult chimps and orangutans on a variety of tasks. In most domains, they do about the same or a little worse than the other primates. But in social learning, human children massively outperform their hairier cousins.

Such highly evolved adaptations for social learning provide a scaffold for extraordinary levels of information sharing. Even absent language, humans watch one another and pay special attention to signals of above-average proficiency and prestige in order to learn new or better ways to solve adaptive challenges. They create and maintain social norms that encourage cooperation, foster stable traditions, and aid in patterns of ingroup-outgroup competition. Social learning allows us to make and accumulate culture.

This, more than anything else, explains why a species with relatively little genetic variation displays such a sweeping range of behavioral variation and ecological specialization. It gets to the why of the descriptive insights uncovered by earlier cultural evolutionists – that humans display local ecological adaptation – by presenting a plausible and, increasingly, empirically justified mechanism. Humans can meet the challenges of living on frigid ice sheets in the high arctic and sweltering jungles in the subtropics because we have the capacity to accumulate information about how to live in those environments at a rate far in excess of that afforded by strict biological adaptation. And critically, it’s not a matter of individual genius. Humans learn how to live in new environments through the accumulated wisdom of generations of trial-and-error learning, resulting in cultural packages that are expertly tailored to the challenges of specific ecosystems.

Clearly, this point stands in contradiction to those who would link humanity’s extreme success as a species to extraordinary – and innate – individual intelligence. Individual humans can be pretty smart, but they rarely (if ever) have the cognitive horsepower necessary to build from scratch the sophisticated cultural innovations necessary to survive in novel environments. This is as true of modern technology and scientific progress as it is of forager subsistence and ritual observance. There is a popular tendency to think of technological innovation as a matter of lone geniuses and marvelous insights. But James Watt’s steam engine was inspired by the earlier Newcomen steam engine. Similarly, Albert Einstein’s theories of special and general relativity drew inspiration from and built on insights in the work of Gottfried Leibniz and Bernhard Riemann. Individual genius is real (who could argue that Einstein wasn’t a genius?) but the fruits of genius accrue incrementally.

For his purposes, Henrich makes this point another way. To illustrate the failings of individual intelligence – and, by contrast, the power of cumulative cultural evolution – he relates a variety of historical anecdotes. In this light, we might think of them as little natural experiments. In each, healthy, intelligent European explorers found themselves forced to survive on their wits alone in an unfamiliar environment. Be it the muggy, swampy coasts of the Gulf of Mexico, the icy wastes of the high arctic, or the arid sprawl of the Australian outback, the outcome was inevitably the same: suffering and death. Those who survived did so because kindly natives, with the cumulative knowledge necessary to survive in a particular ecosystem, lent the naive Europeans a hand.

Tellingly, the causal mechanics of successful cultural adaptations are usually opaque to the people who employ and perpetuate them. Most people don’t understand the physics of bow and arrow technology or the insulative properties of snow and ice. They don’t understand the chemistry of effective poisons for hunting or detoxifying otherwise inedible plants. Yet, using the cumulative intelligence of many individuals over multiple generations, they develop technologies that successfully exploit principles of aerodynamics, thermodynamics, and chemistry to build sophisticated suites of cultural know-how that allow them to live and thrive in almost any environment.

The breakthrough Henrich presents is not that culture is useful. That’s pretty obvious, intuitively speaking. It’s in the emerging understanding of how humans make culture – and how culture makes humans – in dynamic patterns of feedback and response between our genetic architecture and cultural developments over successive generations. Learning how to process plants and meat, and passing that information down from generation to generation, has worked extraordinary changes on our guts. Domesticating certain ungulates and incorporating their milk into our diets has modified certain populations’ ability to metabolize milk well into adulthood. Specialized adaptations allowed humans to move up into otherwise inhospitable latitudes, eventually altering the skin pigmentation of some European populations. The Darwinian framework of gene-culture coevolution allows researchers to move beyond merely plausible accounts of the roots of human cultural and behavioral variation and get down to the serious business of scientifically explaining these things.

And that is the core point. The revolution here isn’t descriptive, it’s explanatory. Placing our understanding of cultural change in a comprehensive, unified Darwinian framework has moved the study of human behavior forward in a way that other, similarly minded attempts have so far failed to achieve. As more and more researchers across the social sciences – from psychology and sociology to economics and anthropology – come to appreciate and accept the utility of the Darwinian perspective, these fields (particularly anthropology) are beginning to move out of the aimless shadows of what Thomas Kuhn called pre-paradigmatic science.

The reasons for this are simple: the more researchers who work within a coherent, mutually intelligible framework, the greater a field’s capacity for real scientific progress. This is because science itself is something of a Darwinian process. It works through patterns of competition and cooperation among individual researchers (and research groups), who collaborate on complex problems and criticize each other’s work where it falls short of established criteria. This process doesn’t work very well if everyone is working under an entirely different framework – Marxist anthropologists can’t add much to the discussion of Darwinian approaches because they lack both the specialized knowledge and the shared values needed to make sense of and properly evaluate Darwinian work (and vice versa).

In this line, the work of Henrich and other evolutionarily minded social scientists has been immensely beneficial, forging as it has a deeper, broader understanding of the roots of human behavior. And there’s a compelling case to be made that the growing popularity of this theoretical framework isn’t some intellectual fad. Rather, it’s a product of people who share similar goals (to explain things) and similar standards for judging how well those aims have been met (internal coherence, experimental and observational evidence, falsifiability) responding to relevant evidence. The array of approaches couched under the wider framework of gene-culture coevolution just seem to work.

Henrich’s synthesis of this research is among the best that I have read, carefully explaining how evolved psychological traits – like a bias toward watching and mimicking prestigious or successful individuals or a tendency to monitor and enforce social contracts – work in concert with our ever-increasing capacity for high-fidelity information storage and transmission – language, writing, printing presses, the internet – to create a potentially boundless realm of cultural innovation. Humans are a remarkable species. But, as Henrich argues, our singularity comes not from our innate intelligence – which has been much overblown. Instead, it comes from our ability to put our heads together, creating resilient forms of collective intelligence that allow us to survive – and thrive – practically anywhere we find ourselves.

The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter – by Joseph Henrich

Bruce G. Trigger, A History of Archaeological Thought & the Growth of Archaeology as a Scientific Discipline

It probably is not too large an exaggeration to describe Bruce G. Trigger’s A History of Archaeological Thought as a monumental work. The breadth and depth of knowledge Trigger brings to bear in elucidating the development of archaeology as a serious academic discipline is often extraordinary. No less so is the equanimity with which he approaches most of the material, sensibly applauding landmark developments yet refraining from lambasting absurd trends (even when the latter might have been entirely appropriate). Lucidly written and rich in detail, A History of Archaeological Thought’s primary significance stems from Trigger’s scrupulous eye for context, not only in articulating the relationships among individual paradigms, but in situating said paradigms within the broader social contexts of their development. Such an approach not only renders salient the particular merits and weaknesses of various approaches, but charts the sometimes agonizing route archaeology has taken in becoming an ever more rigorous scientific discipline*.

Ostensibly a history book, A History of Archaeological Thought is really about theory and how any given theory relates to the world it purports to describe – a point made clear in the opening chapter. If it were a novel, Trigger’s work would derive its primary narrative propulsion from the tension between positivist, realist, and idealist epistemologies. These ideas have ontological implications (as they must, if they are to function as coherent ways of knowing) but the primary fulcrum of interaction concerns how individuals might cultivate knowledge about the world, rather than what that world consists of. To that end, Trigger paints positivism and idealism as occupying opposite ends of the epistemological spectrum, with realism striking something of a pragmatic middle ground (though it is certainly more closely aligned with positivism than idealism). This is a fair treatment: the most ardent positivists see science as an epistemology of the exclusively observable, while radical idealists would counter that sensory observations are so grossly manipulated by cognitive biases that objective – and therefore scientific – knowledge is illusory. Realism, for its part, permits scientific endeavors some freedom to draw conclusions about the unobservable from the observable consequences thereof. In practice, many research programs fall within the interstices of these positions, and the investigators working within said programs do not always make their philosophical positions explicit. Nevertheless, the three categories do serve to broadly encapsulate the competing (and occasionally cooperating) schools of thought that have shaped archaeology’s development as a scientific discipline.

In relatively short order, Trigger makes it clear that he believes social context is of considerable importance in understanding the way archaeology has changed over time. Although this position is now considered uncontroversial by all but the most naïve of empiricists, it was once considered anathema by staunch positivists. At the same time, the realization that scientific endeavors can be influenced by factors other than empirical results has lent an unwarranted veneer of credibility to extreme idealists championing the absolute subjectivity of knowledge. Rather than robbing science of its claims to progress, a thorough recognition of the social malleability of research reveals something far more subtle and interesting: that scientific progress, though certainly real, is far messier, more nebulous, and harder won than commonly believed.

Trigger begins his history proper by pointing out that curiosity about the past extends well into prehistory. Even so, such interests were presumably fleeting, lacking as they did any official investment in sustained research. In Medieval Europe, transitory investigations of the past were limited by the at once fanciful and myopic scope of Biblical interpretation. During the European Renaissance, studies of the past took on a more formal structure. A nostalgic, idealized view of Greek and Roman cultures encouraged an interest in classical studies. Work in said field continued throughout the succeeding centuries, eventually leading to increasingly specialized and systematized research, despite a noteworthy trend toward theoretical conservatism. Regarding the progress of archaeology as a science, the most significant development during its disciplinary infancy (beyond a basic recognition of the value of empirical evidence) was almost certainly the rise of a community structure and the growing importance of peer review.

The value of the aforementioned development is, of course, debatable. In keeping with Trigger’s externalist explanations, the context in which a science develops is often just as important as the internal changes that might alter it. Certainly, without the increasing secularization of society that followed in the wake of the Protestant Reformation and the Enlightenment’s subsequent reinvigoration of rational and empirical inquiry, scientific progress – of any kind – would have been impossible. Nonetheless, the recognition that scientific progress was contingent upon a range of historical developments is not the same as recognizing what it takes for a given discipline to be scientific. Here, something of a digression into what exactly constitutes science is worthwhile.

As is often the case with any intellectual endeavor worth its salt, the delineation of what is science and what constitutes scientific progress has engendered considerable debate. Philosophers and scientists alike have offered opinions, some more convincing than others. Karl Popper’s doctrine of falsificationism has proven massively influential, primarily due to its proposed solution to the problem of induction: rendering scientific postulates falsifiable should open them up to the exacting razor of deductive logic. Unfortunately, scientific practice has proven far more unwieldy – and purportedly falsified theories far too resilient – for falsifiability to serve as a reasonable criterion for demarcating scientific practice or progress. Naturally, good scientific theories should be open to potential disconfirmation, at least providing a deductive trapdoor for fatally flawed concepts, but the general trend in science has been to cleave to a more broadly verificationist stance – in line with both positivist and realist schools of thought. Even so, it is not sufficient to say an idea or paradigm counts as scientific simply because it has garnered a certain amount of corroborating evidence.

For one thing, individual scientists often lack the objectivity to fairly evaluate their theories in light of available evidence. Mutually incompatible theories can persist side by side, bolstered by considerable apparent evidence. Similarly, ideas that are known in retrospect to be inescapably (even ridiculously) wrong are, when in vogue, substantiated by impressive arrays of what seem to be empirical facts. Ptolemaic geocentrism, though convoluted, matched and explained a considerable range of observational evidence. In contesting it, Copernicus relied primarily on parsimony – a concept the natural world need not necessarily reflect. Along similar lines, the naïve evolutionism espoused by some 19th- and early 20th-century scholars fell out of favor largely as a consequence of ideological and sociological shifts, rather than a failure to accumulate substantiating evidence. Though Trigger never shows anything more than a correlation between the shifting social landscape and patterns of archaeological thought, A History of Archaeological Thought’s emphasis on social context is remarkably convincing when it comes to demonstrating the vicissitudes of scientific progress.

To this end, Trigger often adopts a Kuhnian perspective on what constitutes science and how scientific progress comes about. Often misconstrued as a drastically relativist take on science, Thomas Kuhn’s seminal work, The Structure of Scientific Revolutions (2012), called into question the absolute rationality of the process of scientific discovery. For Kuhn, a scientific enterprise could be distinguished by its community structure and its “puzzle-solving” orientation. Within scientific communities, debates over the validity of competing theories are not resolved by exclusive reference to empirical findings, but through a process of persuasion that invokes certain value-laden criteria, such as parsimony or intelligibility. Kuhn did not disregard empirical results as a driving force in scientific change – indeed, he recognized a sustained lack of empirical corroboration as an important impetus driving scientific debates in the first place (1970) – but cast science and scientific progress as more or less emergent products of community structures. This, above all else, was Kuhn’s core insight: without a community structure, composed of researchers with variably commensurable positions (as dictated by the exigencies of social context and individual history), science, or at least scientific progress, does not occur. This is precisely why the development of a professional community and systems of peer review are absolutely critical steps in the cultivation of any rigorous scientific discipline.

Bruce G. Trigger

This is a point Trigger’s work repeatedly substantiates as it articulates the dynamic patterns of critique and transformation that characterized work within the archaeological community. In the 19th century, an optimistic – albeit crude and ethnocentric – faith in human progress gave rise to what appeared to be convincing pictures of cultural evolution. Critiques rooted in historical particularism, in concert with a growing pessimism concerning the ingenuity of humanity and an ideological rejection of European racism, eventually stripped early evolutionary explanations of their scientific credibility. Over time, the changes stemming from cross-disciplinary critiques became more nuanced, even as the techniques developed within individual disciplines became more sophisticated. The primary causal explanations forwarded by historical-particularists – diffusion and migration – were shown to be insufficient to account for a wide range of archaeological observations. Nevertheless, the application of historical-particularism as an overarching paradigm for structuring research and explanations persisted, facilitating the development of improved methods for establishing ever finer chronological controls over the archaeological record. It was only with the rise of neoevolutionism as pioneered by Leslie White and Julian Steward, and the subsequent prominence of processualism, that historical-particularism fell out of favor.

It was in the second half of the 20th century that archaeology began to reach its full expression as a scientific discipline (that is, a discipline engaged in the ongoing process of using empirical evidence and a simultaneously cooperative and competitive community structure to solve archaeological puzzles). That is not to say this is the point at which archaeology became a science, or that it was here that archaeology really started to explain things. Rather, it seems that by the middle of the 20th century, archaeology had accumulated enough data, methodological sophistication, and theoretical diversity to become a mature, progressive scientific endeavor.

Kuhn was onto something when he stressed the importance of community structure, but his picture of what constituted science and scientific progress was only partial. First, he did not place enough emphasis on the role of empirical findings. Writing after the publication of The Structure of Scientific Revolutions, Imre Lakatos (1977) and Paul Thagard (1978) both highlighted the need for a successful scientific discipline to be capable of empirically resolving outstanding problems. Lakatos stressed the need for continuing verification and the discovery of novel facts, while Thagard suggested that scientific progress was to some degree relative – successful scientific programs could be distinguished by their ability to explain more phenomena than alternatives. Later, David Hull (2001; 1988) would further elaborate on the importance of competition (and cooperation) both within and among paradigms, developing an explicitly selectionist interpretation of scientific progress. A general – if ultimately incomplete – synthesis of the aforementioned works would suggest that a successful and progressive scientific discipline must, at the very least, include the following:

  • A professional community, comprised largely of highly trained individuals (sufficient for discourse to be non-redundantly predicated on previous findings).
  • A proliferation and diversification of perspectives within said discipline (facilitating conflict and cooperation).
  • The critique of said perspectives with regard to criteria of coherence and corroborative evidence, in addition to a priori value criteria (simplicity, intelligibility, etc.).
  • The subsequent refinement and further diversification of theoretical perspectives.
  • The continued production of a professional community, theoretical diversity, empirical evaluation, and so forth…

As universities began to churn out more and more trained archaeologists, the discipline came to have a substantial population of competing (both subtly and overtly) ideas, both within and among paradigms. Marxists could debate the proper way to do Marxist archaeology, ethnoarchaeologists could debate the proper way to relate modern subsistence practices to archaeological evidence, and both Marxist archaeologists and ethnoarchaeologists could debate the propriety of each other’s core assumptions. Additionally, through the use of ever more refined analytical methods, 20th century archaeology was beginning to acquire the empirical stores against which hypotheses could be judged, even as the proliferation and diversification of approaches under the processualist umbrella (ecological, behavioral, cognitive, general systems, etc.) encouraged the sort of cooperation and competition needed to drive scientific progress.

As Trigger illustrates, even postprocessualism – often retrospectively regarded as somewhat less than scientific – has contributed to the scientific progress of archaeological research. By making salient the difficulties inherent in archaeological interpretation and shedding light on the deeply ingrained political biases held by many researchers, postprocessual critiques have encouraged a level of self-awareness that has proven immensely valuable in the pursuit of scientific explanations for the patterns in the material record. Over time, the overemphasis on subjectivity encouraged by some postprocessual subdisciplines has undermined their own authority and validity as independent knowledge-gaining activities, even as the convoluted use of reified constructs to investigate reified constructs has served to obfuscate – rather than clarify – anything relevant to archaeological method or interpretation. Nonetheless, the postprocessual school provides perhaps the most useful demonstration of the importance of theoretical diversity, mutual criticism, and competition. By stimulating thoughtful discussion concerning which types of questions are amenable to archaeological investigation and highlighting the types of socio-political baggage that tend to erode individual objectivity, postprocessualism has helped cultivate a more rigorous scientific approach among those who continue to practice archaeology as science.

In the concluding chapters of A History of Archaeological Thought, Trigger makes note of the value inherent in theoretical diversity, praising the growth of multidisciplinary approaches and calling for more of the same. Darwinian archaeologists, for instance, might attempt to understand a given artifact sequence relative to potential selective pressures, drawing on the sophisticated methodological work of historical-particularists to create high-fidelity battleship curves in order to better understand the chronology of change, while simultaneously deploying the methods of zooarchaeologists and behavioral ecologists to tease out potential correlations between artifact change and subsistence practices. Such work can be further bolstered by employing an array of working hypotheses derived from a number of theoretical perspectives. Multidisciplinary approaches implicitly acknowledge the multicausal nature of human behavior, while multivocality decreases the chance that important patterns will be missed or illuminating questions ignored. The use of either – and preferably both – increases both the analytical tractability of archaeological questions and the confidence assigned to empirically justifiable answers.

Throughout the 20th century – in particular with the rise of processual archaeology – researchers have turned to the philosophy of science for guidance in building and refining their discipline. While this has been a fruitful pursuit, it has also been misleading. Inspired by the work of Carl Hempel, Lewis Binford cast the goals of archaeology as a search for the laws behind the regularities of human behavior. In essence, he was arguing that archaeology should model itself on physics. Such a desire was fundamentally misplaced. As Trigger points out (perhaps echoing Nicholas Maxwell), the logic of any science is given by the questions it seeks to ask. Human behavior and cultural systems are often extraordinarily complex and open to influence by a vast assortment of potential causes. The processes that govern human behavior and cultural change are shaped, to a considerable degree, by contingency. They are, in the language of complexity theorists, rather sensitive to initial conditions. Take the aforementioned problems, submerge them in the miasma of potential ambiguity that is a dynamic depositional environment, remove all potential for manipulative experimentation, and what results is a set of monumentally complex questions. Given this, Trigger argues that in some respects the goals of archaeological science probably dictate a different set of investigative parameters than those applicable in physics or chemistry[1]. Above all else, Trigger’s A History of Archaeological Thought – erudite and perspicacious as it is – makes one thing clear: the complexity of the questions is such that finding answers is only feasible through the application of a piecemeal, multidisciplinary approach. Furthermore, that recognition has been hard won. It should not be undervalued.

* A monothetic description of archaeology as scientific can be misleading. There are those within the discipline that practice archaeology as a science, but characterizing archaeology as a whole as a science is not entirely accurate. Some archaeologists produce research that fails to meet the criteria of science, while others outright disavow scientific veracity as a reasonable goal for archaeological research.

[1] I have to disagree with Trigger’s suggestion that parsimony might not be a good value criterion for assessing archaeological explanations. Parsimony, as a functional component of the scientist’s heuristic repertoire, is only useful when distinguishing between two competing explanations of more or less equal empirical content. In such cases parsimony dictates that researchers reject the explanation with more ad hoc elements, a stricture that is still useful for archaeology – no matter how complex the phenomena in question turn out to be.

References:

Hull, David L. (2001) Science and Selection: Essays on Biological Evolution and the Philosophy of Science. Cambridge: Cambridge University Press.

Hull, David L. (1988) Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago, IL: University of Chicago Press.

Kuhn, Thomas S. (2012) The Structure of Scientific Revolutions: 50th Anniversary Edition. Chicago, IL: University of Chicago Press.

Kuhn, Thomas S. (1970) Logic of discovery or psychology of research? In I. Lakatos & A. Musgrave (Eds.), Criticism and the Growth of Knowledge (pp. 4-10). Cambridge: Cambridge University Press.

Lakatos, Imre. (1977) Science and pseudoscience. In Philosophical Papers, Vol. 1 (pp. 1-7). Cambridge: Cambridge University Press.

Maxwell, Nicholas. (1974) The Rationality of Scientific Discovery. Philosophy of Science, 41, 123-153 and 247-295.

Popper, Karl. (2002) The Logic of Scientific Discovery. New York, NY: Routledge.

Popper, Karl. (1963) Conjectures and Refutations. London: Routledge & Kegan Paul.

Thagard, Paul R. (1978) Why astrology is a pseudoscience. In P. Asquith & I. Hacking (Eds.), Proceedings of the Philosophy of Science Association, Vol. 1 (pp. 223-34). East Lansing, MI: Philosophy of Science Association.


This review was initially produced a few months back for an archaeological theory class. Considering how good I thought Trigger’s work was, I thought I would make my opinions available online – albeit in slightly modified form.

Paleo-Hokum: The Human Tendency to Build Romanticized Versions of the Past

Conservatives often seem gripped by an almost crippling nostalgia for days gone by, idealizing the 1950s as some kind of wholesome social Eden or arguing that the moral strictures concocted by people living 3000 years ago provide a useful template for how to live in 2014. Occasionally characterized as hallmarks of the conservative disposition, such willfully romanticized, ferociously uncritical views of the past are products of a type of delusional sentimentality wherein one constructs a largely fictitious picture of history and argues that the present should be structured accordingly. The notion that conservatives have a proclivity toward adopting signally imaginative pictures of history is not entirely unfair. Indeed, the Right’s habit of repeatedly attempting to rewrite public school history and science curricula to better match their ideological sensitivities has been well documented. The flaw in this perspective has nothing to do with its veracity. Rather, it comes from the notion that it is a trait unique to conservatives.

Artist’s rendering of a Paleolithic hunt.

Personally, I am more sympathetic to the perspective that this tendency to construct and subsequently fetishize incongruous versions of history is a more broadly human characteristic. Take for example the recent “Paleodiet/Paleo Lifestyle” trend. Personal experience suggests that the rank-and-file of the Paleodiet movement consists of left-leaning folks. Unfortunately, such anecdotal evidence makes for a rather wobbly foundation upon which to build broader conclusions, and reliable data on the political demographics of the Paleo movement are hard to come by. So, for the sake of inclusivity, let’s just say for now that the Paleo trend seems to be a load of bullshit just about anyone from anywhere on the political spectrum can get behind. Liberals might find the rhetoric more readily palatable, appealing as it does to their instinctive revulsion regarding all things industrialized, capitalistic, or otherwise offensive to a strong sense of equity. Conservatives – especially those huddled out in the feral, intellectual hinterlands of the Far Right – might be somewhat more likely to find Paleo lifestyles unsettling, given their appeal (both implicit and explicit) to the notion that humans have evolved, or their not-so-subtle suggestion that there was a period of history that could be called Paleolithic. After all, reputable scholars have demonstrated that the world is only 6000 or so years old.

In any event, the exact demographics of the Paleo movement are largely immaterial to my overall point. The core of my argument is that the Paleo diet is based on a nonsensical view of the past, and that such views are not entirely monopolized by people sympathetic to Right Wing ideology – rather, the construction of romanticized versions of the past is a broadly human theme.

To be clear at the outset, I do not take issue with whether or not this trend is actually healthy. Its health consequences are more or less incidental to the core philosophy. Those who experience health benefits probably do so because it is beneficial to cut back on high calorie, low nutrient foods, not because they are eating a diet that more closely resembles the one that humans have “evolved to eat”. And therein lies my primary grievance: it’s not that a Paleo lifestyle is somehow deleterious, it’s that it is based on both a distorted picture of the past and a shoddy understanding of the process of evolution.

The Paleodiet is essentially a hodge-podge of pseudoscience and outright fantasy, concocted more out of imagination than a real understanding of the evolutionary history of modern Homo sapiens. It would hardly be unfair to cast it as a modern manifestation of the primitive Eden Jean-Jacques Rousseau invented in the 18th century. Rousseau argued that humanity resided in a state of simple, gentle savagery, until the disease of avarice tore us from our position of grace. Such a rosy view – especially absent empirical evidence – appropriately warrants a hearty dose of incredulity. Nonetheless, Rousseau’s ideas about humanity’s utopian ancestry have proven immensely influential, particularly in the humanities, where researchers have seized upon his narrative as a reflex against the abject barbarity of European colonialism and the pervasive racism that clouded anthropological and sociological thought throughout the 19th and early 20th centuries. It is hard to see some of the more spurious ethnographic work produced by subsequent generations of scholars as anything other than ideologically motivated reactions to colonialism, racism, and “social Darwinism” (a misnomer among misnomers), fashioned in the image of Rousseau’s initial musings.

Jean-Jacques Rousseau, 1712-1778

The Paleodiet is heir to this tradition of romanticizing the primitive. As pioneered by Rousseau it is – make no mistake – a liberal tradition. It sets itself the task of vilifying the aspects of modernity some people find uncomfortable in light of their ideological disposition by casting them as “unnatural” (more on that shortly). Humans, the argument goes, are the behavioral and physiological progeny of selective pressures that would have been prevalent during the millennia our ancestors spent as bands of nomadic foragers, scratching a living out of the Pleistocene landscape. Our fall from grace began with plant and animal domestication, spiraling toward the cacophonous dénouement that is a modern world populated by Big Macs, Hot Pockets, microwave burritos, and white bread.

In a way, the basic claim is a truism. Modern humans are the product of millions of years of evolution. The vast majority of our genetic architecture was in place well before the invention of agriculture. Similarly, the majority of the selective pressures that would have been significant in shaping the suite of behaviors that distinguish us from our primate relatives would have related to the environment(s) hominids inhabited during the five or so million years of the Plio-Pleistocene. During that time, our hominid ancestors were almost certainly foragers, and definitely didn’t have access to Big Macs. Shouldn’t we be adapted to eat a diet of fruit, grass fed meat, and tubers, rather than corn fed, hormone enriched beef product slathered in special sauce, sandwiched between two gluten-rich buns?

Though such a perspective might have some intuitive appeal, it rests on a number of fallacious assumptions. First, it bespeaks a teleological view of the evolutionary process. Humans, according to the Paleo philosophy, evolved to subsist on a diet typical of the average Paleolithic forager. Once we had reached that stage, selection ceased to operate and our evolution came to a halt. For several reasons, this is a clumsy way of looking at the process of evolution. For one, organisms – humans included – don’t evolve to do anything. Genetic variation is generated in a manner blind to the challenges that will arbitrate its proliferation or eradication. Additionally, such arguments suggest that evolution has a stopping point – that once creatures have become appropriately adapted to their environment, they cease to evolve. The notion that we – or any other creature – evolved toward some end point and stopped once we got there is simply wrong. Some might argue that this is an overly simplistic caricature of the Paleo movement’s core arguments. The more sophisticated Paleo adherents probably don’t think humans have ceased to evolve. Instead, they recognize that evolution typically occurs gradually, so humans haven’t had time to adapt to a post-agricultural, and – more importantly – post-industrial diet.

Hieroglyphic depiction of agricultural practices.

In a sense, this is true, but it exposes more unfounded assumptions. Most evolutionary change is a product of the gradual accumulation of advantageous mutations. Populations evolve by slow, subtle steps, so the notion that humans might not have experienced a lot of evolutionary change since the advent of agriculture, some 10,000 to 12,000 years ago, is not completely unreasonable. However, there is evidence that some human populations have experienced non-negligible genetic change since the Neolithic Revolution (i.e. the widespread adoption of agriculture as a primary means of subsistence). Adult lactase persistence, for instance, is associated with a pair of single-nucleotide polymorphisms that arose – or were at least selected for – after certain human populations began to consistently engage in ungulate husbandry. The ability some people have to metabolize milk into adulthood is a product of recent evolutionary change, probably due to the fitness gains associated with prolonged access to a new source of caloric energy. So while evolutionary change tends to occur quite slowly, it can still happen rapidly enough to make us better suited to modern diets than the staunchest Paleo advocates would have us believe.

Consider also that plant and animal domestication did not occur as suddenly as is popularly conceived. The long dance of coevolution that is domestication began long before the Neolithic Revolution, as humans began to interact with their ecological neighbors in more and more complex ways. The apparent suddenness with which agriculture became ubiquitous in places like Mesopotamia and Mesoamerica is a result of a cascade of innovations at the tail end of a longer process of give and take. Prior to building irrigation systems and tilling fields, humans were likely setting up camp near useful perennials and annuals, actively encouraging their growth through more subtle types of landscape modification. Later, individuals began to engage in broadcast sowing, actively distributing the seeds of useful plants around productive and accessible stretches of land. Modern wheat, corn, cows, and chickens may be recent innovations, but we’ve been eating their ancestors for millennia.

Furthermore, the very notion that humans are specifically adapted to a certain diet is fallacious on at least two fronts. First, it ignores the extraordinary range of phenotypic plasticity Homo sapiens displays. We are a species marked by remarkable behavioral and physiological flexibility. To suggest we are adapted, even in the broadest possible sense, to a particularly well-bounded lifestyle ignores one of the primary components of our nature. Paleoecological evidence indicates the Plio-Pleistocene was a time of considerable environmental variability. Under such circumstances, a rigid dietary regime would have been hard to maintain. Indeed, the overall trend of hominid dietary evolution seems to be one of increasing generality, with successive generations becoming better and better suited to eating a wider and wider variety of plants and animals.

A second, but not unrelated, point is that we do not have a perfect picture of what prehistoric diets really consisted of. Even if humans were adapted to eating a certain range of forager staples, exactly what those staples were remains cloudy. More than likely, our prehistoric diet was frequently dictated by what we could physically capture and metabolize, rather than some idealized set of nutritional guidelines. Basic behavioral ecology suggests most species, including humans, will preferentially target those food items that yield the largest caloric returns relative to the costs associated with capture and/or collection. That means – and both archaeological and ethnographic evidence bears this out – that what people ate likely varied from ecosystem to ecosystem. Homo sapiens was a more or less global species before the dawn of institutionalized agriculture. Different populations occupying a wide range of environments seemed to get by just fine by exploiting an expansive variety of food items. In the high Arctic, forager populations subsisted (and still do) on a diet massively biased toward animal protein, while groups living in the verdant tropics incorporated (and still do) myriad fruits and nuts, in addition to a greater diversity of animal protein, into their diet. Given this apparent versatility, the argument that there is one diet humans are designed to eat looks somewhat less than convincing.

Paleo Lifestyle rhetoric is littered with appeals to nature, as if we’ve somehow lost our way and become something abhorrent before the eyes of the All-Knowing Universe. Advocates of this lifestyle have set up an arbitrary demarcation between natural and unnatural, as if the innovations associated with agriculture mark a frontier beyond which humans suddenly began to behave “unnaturally”. This sets up a false dichotomy between the types of behavior we engage in today and the types of behavior our ancestors engaged in during the largely invisible past. Eating Hot Pockets and watching YouTube videos may accurately be considered behavioral novelties in the broad scope of human evolution. But it is not reasonable to infer from said novelty that such behaviors are somehow unnatural.

On a purely philosophical level, the very idea of unnatural behavior subverts the Paleo advocate’s attempts to find an evidentiary basis for their dietary choices, implying as it does a certain level of mysticism. Methodological naturalism – the primary scaffolding around which all scientific research is constructed – denies the very possibility of anything “unnatural” occurring. How could it? Nothing can or ever will transpire in violation of what we might loosely call the Laws of Nature. To suggest otherwise is a direct invocation of the supernatural or at the very least a crude argument for some kind of vitalism, both of which strain scientific credulity and provide sufficient room for motivated advocates to weasel around unwelcome dispositive evidence. Either way, it boils down to a pile of fluffy nonsense.

Brass tacks, the Paleo Lifestyle is based on a romanticized version of the past. Like the conservative fantasies of 1950s suburban utopia or Wild West individualism, it is a canvas onto which people project their dissatisfaction with the present, crying, “if only things were thus…” Perhaps such fantasies provide a star by which people reckon their course, assuaging their fears that there might not be a right way to live. After all, many people probably find the idea that humanity has lost its way more comforting than the idea that it never really had one to begin with.