Combating Political Religion: How Small-Government, Free-Market Dogma Fails to Account for Observable Reality

There is a growing sense that those interested in finding out what is true of the world are becoming a rarer and rarer breed. Everywhere we look, someone is trumpeting some blatant inanity. Vaccines cause autism. Adding fluoride to water is a government conspiracy. Genetically modified organisms are dangerous. Organic food is particularly nutritious. Christians are a persecuted minority. The 44th President was a foreign national and communist agent. The 9/11 Terror Attacks were an inside job. The world is only 6000 years old. Humans can’t influence the climate.

Nonsense is everywhere, but the impression that it is more prevalent than ever is mostly a matter of appearances. Humans are innately tuned to focus on the negative aspects of their environment. Good reasons for this abound, easily distilled into the recognition that it is far more consequential for us to spend our time thinking about the things that could be better than it is to spend it thinking about the things that are going just fine. On the landscapes of our ancestors, where decisions about what to pay attention to were a regular matter of life and death, it was vitally important to take note when things were about to turn sour – when herds of prey were about to migrate to a new territory, when seasonal changes were about to reduce the availability of edible fruits, when an unfriendly band of visitors turned up in your neighborhood.


The Righteous Mind: Religion, Cooperation, and Evolution

I’ve read a book.

In perfect candor, this is a feat I’ve accomplished once or twice in the past, but it never fails to stoke a certain sense of accomplishment and smug self-adulation. After all, I’ve forsaken untold hours of watching TV and playing video games in favor of an identical amount of time spent turning pages and reading words. Basically, the sort of opportunity cost only saints are meant to bear.

In this case, the book came with the additional reward of containing a surfeit of the sort of information the late French pedant Claude Levi-Strauss might have called “good to think”.

Without further delay, the book: The Righteous Mind, by social psychologist Jonathan Haidt. I won’t go so far as to give an exhaustive review – suffice it to say that the book was good and you ought to read it, providing as it does a succinct and provocative run-down of research into the psychological underpinnings of our moral and political inclinations.


This Changes Everything: Capitalism vs. the Climate (and Naomi Klein vs. Science)

This Changes Everything is a strange book. I agree with its central premise. Capitalism is a fundamentally flawed ideology that, unchecked, has the capacity to cause untold social and ecological destruction. Within the bounds of the market, there are no mechanisms suitable to address climate change. The energy corporations responsible for pumping greenhouse gases into the atmosphere aren’t sensitive to the prospect of rising sea levels, more severe fire seasons, long-term drought, or more frequent and intense natural disasters. They might contain people who recognize these problems, but the ultimate arbiter of their decisions is short-term profit accumulation. Suddenly developing a social and ecological conscience – and acting accordingly – would be economic suicide. Corporations that remained responsive to the interests of shareholders would swiftly swoop in and happily gobble up the share of the energy market abandoned by their more environmentally friendly competitors.

Much has been made of finding market-friendly solutions to climate change. The idea that corporations that make billions off the extraction and production of hydrocarbons will somehow responsibly and organically respond to the social and ecological threat of climate change is pure fantasy. There’s too much inertia in their current mode of production, and too little incentive for them to change it. Indeed, as Klein reports, some fossil fuel giants have tens of billions of dollars invested in future extraction initiatives. Shifting away from burning hydrocarbons would entail huge immediate blows to their bottom line. Which brings us to Klein’s central thesis: while the burning of fossil fuels is directly linked to the increases in atmospheric carbon heating the planet, the ultimate cause of climate change is the profit motive and the haphazard paths followed in pursuit thereof. In this regard, Naomi Klein makes a pretty good case.

Klein’s critique of capitalism is bold and refreshing. Interestingly, it’s a point the fossil fuel industry’s most zealous advocates had seized upon well before the terms “global warming” and “climate change” had entered the popular vernacular or become the focus of intense, widespread public scrutiny. Klein reports on conferences held by organizations like the Heartland Institute and Heritage Foundation where attendees issued dire prognostications about the social, political, and economic implications of climate change. Not in terms of the direct ramifications of large-scale environmental change, mind you, but in terms of the large-scale social planning that will inevitably be needed to address them. Surprisingly often, people at these conferences accepted the reality of climate change. Many had even made peace with humanity’s role in causing it. Their concern was not whether climate change was real. It was whether and how the reality of climate change might redefine the social order, undermining decades of neoliberal policy and the ceaseless march of privatization and deregulation. In short, they were concerned that a public tuned in to the threats posed by a changing climate could begin to use their influence as voters to exert control over the behavior of markets – via the intermediary control of representative governance. In other words, they recognized that addressing climate change demands top-down intervention – i.e. socialism.

It’s hard to overstate the perversity and cynicism of this outlook. Recognizing that unregulated energy markets contain no mechanism for responding to the social and ecological toll exacted by a changing climate, these people are more concerned with protecting their bank accounts than working to ensure the wellbeing of future generations.

Of course, one could go too far in tarring the intentions and motivations of people so concerned about the threat of democratic socialism that they are willing to openly deceive the public about the risks associated with a changing climate. These people are unbridled greed-heads, to be sure. But deeper down, they’re also true believers, possessed of such desperate, unwavering faith in the wisdom of the invisible hand that they are willing to ride full-bore into the maw of ecological chaos, confident that, in the end, the market will provide.

It was once well understood and widely accepted that some economic transactions involve variables and produce outcomes that can’t be accounted for in the price of goods – what economists call externalities. Recognizing the potential for markets to accumulate unforeseen costs and produce unpredicted benefits, economists advocated the use of taxes and subsidies, selectively increasing the price of goods that are ecologically or socially harmful and reducing the prices of those that engender surprising benefits. Economists modelled markets as if they were comprised of perfectly rational, infinitely selfish, all-knowing agents as a matter of mathematical convenience. For simplicity, theorists conceived of transactions as instantaneous auctions, wherein everyone knew all the relevant information – including all the potential downstream costs and benefits, however distantly realized – and were entirely open about their values and motivations. Everyone knew everything they needed to know and no one tried to deceive anyone.

Somewhere along the line, this thinking leaped off the rails, and the market principles espoused and enumerated by the likes of Smith, Pareto, Walras, Keynes, and Hayek morphed into an ideological religion. Enough people indifferent to nuance and obsessed with the myth of the self-made man read Hayek and Friedman – filtering their works through a hazy lens of Ayn Rand – that market liberalization became a religious crusade. They began to take the simplifying assumptions of economists too seriously. Instead of treating them as convenient idealizations of how things would work in a perfect world, they began to treat them like divine ordinances about how things should work in the real world. Market fundamentalists and their allies have since made it their mission to shape the world into an Eden of free, unmitigated exchange – a perfect paradise for the idealized creatures of economic theory. Sadly, this is about as reasonable as setting up a game preserve for unicorns.

Their vision of utopia is one entirely divorced from the realities of human behavior and the natural world from which it emerged. In this vein, market fundamentalists have come to mirror the hardline communist ideologues against whom the United States fought a nearly five-decade cold war – punctuated here and there by intense moments of southeast Asian or central American heat. They’re so enamored of a romanticized ideology that they’ve been rendered blind to the unburnished strictures of reality. Decades of work in behavioral economics, psychology, sociology, and anthropology have revealed that humans are irrational, myopic, parochial, tribal creatures, riddled with internal contradictions – capable of awe-inspiring altruism and selflessness, yet possessed of a sickening taste for self-indulgence and materialism. The governing tenets of normative theories of economics have been repeatedly shown to be false. They make the modelling simpler, but lose descriptive fidelity with reality in the process.

When it comes to their personal fortunes, the people placing such fevered conviction in the benevolent providence of markets probably aren’t wrong. For them, the market will provide – at least for the time being. Incredibly wealthy and politically influential, they’ve got what it takes to ride out whatever storms (both literal and metaphorical) anthropogenic climate change might throw their way. Some of them will even likely make a tidy profit doing so. Already a market has emerged in the insurance sector for companies interested in buffering themselves against the potential costs and disruptions that are bound to come with a changing climate. Climate change is the perfect storm for disaster capitalism (which Klein has written about elsewhere), opening the door for people to make millions off the suffering of others.

For those unfortunate enough to occupy rungs farther down the economic ladder, outlooks are considerably more grim. Market fundamentalists and their cronies in various world governments have placed so much rabid faith in the wisdom of the market that they are unwilling to budge an inch from the territory they’ve staked out on the frontiers of ideological fanaticism. Their belief that a market, sensitive only to the feedback of profits lost and profits gained, will always provide the best possible outcome for the most possible people has no more basis in reality than Karl Marx and Friedrich Engels’ visions of communist utopia. The men who flew a Boeing 767 into the South Tower of the World Trade Center had just as much justification for their belief that they’d be greeted in the afterlife by 72 virgins as a man like Ted Cruz does for his belief that a market entirely unleashed from the shackles of government oversight and regulation will maximize human flourishing.

Even without the looming specter of climate change, the idea that wholesale privatization and deregulation will benefit anyone outside a small minority of wealthy elites is difficult – if not fundamentally impossible – to justify. Markets simply don’t have the ingredients necessary to set prices that account for all the potential costs and benefits that come with production and consumption. Nor is there a compelling argument to be made that, were prices thus set, humans would respond to them in a way consistent with their own long-term best interests. This crude reality alone should be sufficient to derail campaigns for endless market liberalization. Unfortunately, the zealots have sunk their claws so deep into the fabric of modern society that the precepts of unmitigated capitalism are treated like features of the divine order of the cosmos, built and bred into the marrow of social, political, and economic institutions the world over.

Now, as humanity has finally become sensitive to the full range of costs associated with centuries of barrelling growth and consumption, the need to overturn this fanaticism has grown more urgent than ever. This Changes Everything’s greatest strength is the force and clarity with which Klein makes this point, supporting it with detailed reporting and mountains of evidence.

Yet, for a book with such a compelling central thread, I was surprised by how frequently I found myself disagreeing with the author. Klein consistently invokes apocalyptic language, writing of human extinction and the habitability of the planet earth as if either is actually at stake. Climate change could spell untold human suffering and ecological devastation, but it’s very unlikely to drive the human species to extinction. Likewise, she romanticizes the primitive past and indigenous lifeways, treating pre-industrial societies like expert conservationists, living in perfect, blissful harmony with the earth. Based on available archaeological evidence, this view is naive at best. Globalization is painted as a ubiquitous evil – never mind the fact that, inasmuch as it has contributed to climate change, it has also raised billions of people out of crushing poverty and perpetual hunger. Her treatment of GMOs, geoengineering, and nuclear power evinces a relationship to science that is more a matter of ideological opportunism than a devotion to reason and evidence.

Each of these points is worth addressing, because each entails a breed of error that does much to undermine the strength of Klein’s larger argument.

First, there’s the issue of human extinction. Klein oft references the final end of the human species, written in some onrushing future by the blind avarice and indulgence of past and present generations. Perhaps she’s being deliberately hyperbolic, but I don’t see how that level of exaggeration and emotionalism serves her point. It is disturbingly likely that the drought, biodiversity loss, superstorms, ocean acidification and sea level rise caused by unmitigated climate change could unleash a cascade of escalating disasters, each one feeding into the next, locking humanity into an endless, frantic cycle of catch-up. Klein is savvy enough to recognize how these crises will be handled within the logic of the market – profiteering and exploitation will run rampant, as a small minority reaps enormous benefits from the misery of everyone else.

Yet it is exceedingly unlikely that climate change will cause the extinction of Homo sapiens. This isn’t really much of a ray of hope – it’s entirely plausible that the compounding cycles of disaster released in the wake of worst-case-scenario climate change could reduce the human population to scattered bands living on the fringes of the high arctic, scavenging rancid scraps from the shores of poisoned seas and eating one another to survive. More probable scenarios – mass human displacement, massive social and economic inequality, bloody conflicts over bread and water, the emergence of hardcore corporate feudalism – aren’t much more appealing. But short of making the planet earth literally uninhabitable to our kind of organism (which it seems very unlikely to do) climate change will not drive humans extinct. Our facility with social learning, coupled with our capacity to store and transmit cultural knowledge from generation to generation, make us one of the most adaptable organisms to have ever existed. You’d have to look to tardigrades (water bears) or certain strains of fast-adapting bacteria for a more resilient species.

As with her dire prognostications, Klein’s approach to primitive and/or indigenous lifeways leaves much to be desired. She readily and consistently falls into Rousseau’s old trap, speaking wistfully, if not explicitly, of the wisdom and probity of the “noble savage”. There’s a sort of magnanimous racism to this kind of thinking, which I’m sure anyone given to it would be damn quick to deny. It suggests there’s something fundamentally different about “primitive” peoples, something that makes them more finely tuned to nature and equality than the rapacious scalawags that spilled out of Western Europe and the Mediterranean. This is pure rubbish. In terms of behavior, the only meaningful differences between industrial and pre-industrial peoples are cultural. Europeans happened to inherit and modify innovations in agriculture, animal husbandry, food production, preservation, and storage – significantly conditioned by ecological happenstance – that facilitated massive population increases. Later, they became widely infected by the ideological prescription that material surplus and increase were desirable above all else.

The exact processes that led to this have been dealt with extensively. I won’t dwell on them here. Instead, suffice it to say that the concept of primitive utopia that emerged in the 19th century – persisting, in various forms, to muddy the thinking of an otherwise intelligent author in the first decades of the 21st – is a rosy-eyed fiction. Primitive societies often give credence to the Hobbesian diagnosis of a life that was “nasty, brutish, and short”. Rates of interpersonal violence are higher. Infanticide is commonplace. People die preventable deaths from injury, disease, and animal attack. And, more to the precise point of Klein’s romanticism, they are hardly conservationists. Strong evidence indicates that the first Americans played a major role in the extinction of the North American megafauna – mammoths, saber-toothed cats, short-faced bears, giant bison, and ground sloths. In Australia, the first humans played a role in the eradication of a menagerie of bizarre giants – marsupials the size of hippos, carnivorous kangaroos, and an eight-foot-long tortoise, among others. In New Zealand, the Maori drove the giant moa to extinction. More prosaically, the archaeological record implicates humans in a number of resource depressions, extinctions, and extirpations. By around 1500 years ago, California hunter-gatherers around the Sacramento Valley had significantly reduced deer and elk populations. The earliest inhabitants of Easter Island introduced invasive species and over-exploited local resources, destroying the local ecology. My own research reveals a series of local depressions in Steller sea lion populations in the seas around Sanak Island off the Alaska Peninsula – well before the arrival of Russian sailors – caused at least in part by human hunting.

Humans are humans, gifted with the same level of foresight, cursed with the same level of myopia, wherever they live. The idea that there has ever been, anywhere, a perfect relationship between humanity and nature (or even a clear demarcation between the two) is a point of hopeful fiction. The places where superficial appearances are otherwise relate not to the pure, uncorrupted conservationist ethos of indigenous peoples, but to a lack of technology or a sufficient resource base to sustain long-term population growth.

Globalization is a trickier beast. Free trade agreements have been disparaged across the political spectrum, often due to their perceived role in job loss. Because they make it easier for companies to outsource work to wherever they can find the cheapest employees, free trade agreements are often implicated in the decline in the availability of local manufacturing jobs. There is some truth to this – free trade agreements have resulted in job loss – but the larger reality is that, in the United States, most of the manufacturing jobs lost in recent years have gone to robots, not foreigners.

More insidiously, globalized commerce tends to increase the carbon footprint of economic endeavors and undermine localized efforts at political self-determination. As Klein notes, an increasingly globalized economy depends on the transportation of goods over longer and longer distances. This inherently entails pumping more carbon into the atmosphere, as cargo planes circle the globe and massive container ships plow through the oceans. Not only does this intensify and accelerate climate change, it also distances consumers from the direct ecological costs of their economic decisions. The already frail and unreliable tools consumers might have available to punish a local factory for producing goods in a way that damages local water and air supplies are entirely extinguished when the poisoned water and smoggy air are thousands of miles away. Throw in an international court system that allows foreign polluters to sue local governments for establishing regulations that favor cleaner businesses closer to home and the prospects for constructing sustainable, environmentally conscious markets look incredibly dim.

Klein’s solution is to refocus economies on a local level. This is all well and good, but it ignores the ways in which people in the developing world have benefitted from globalized trade. Certainly something must be done – urgently – to address the ecological costs of international commerce, but it shouldn’t be done at the cost of throwing billions of people back into poverty. Localized trade is generally a good idea: it lowers the carbon footprint of economic transactions and puts people in direct contact with the consequences of their economic behavior. But in finding a way to realize those benefits in local communities, it’s essential that we don’t fall into the trap of placing a higher premium on local lives simply as a consequence of proximity. The people who have been lifted out of poverty by global trade matter too – let’s find a way to limit the harm produced by a globalized economy without eliminating its benefits.

Which brings me to my final major criticism: Klein’s selective science-phobia. She has a wealth of praise for solar and wind technology, but every other potential energy source is either greeted with the wary eye of a hardened Luddite or outright dismissed as too scary or too tainted by corporate greed to be a feasible alternative to fossil fuels. In my view, the best solutions to some of the environmental problems posed by global trade are technological. Let’s not do away with international commerce because of its large carbon footprint. Instead, let’s just do away with the large carbon footprint and power humanity’s aerial, terrestrial, and aquatic shipping fleets with clean fuels. This, of course, involves developing more and more efficient means of converting solar energy into electricity and inventing more efficient and resilient storage techniques (i.e. better batteries) – both realistic, if unrealized, prospects. It might be the science fiction enthusiast in me, but I have a hard time swallowing the argument that the solution to any of our problems will somehow involve less technology and innovation.

Klein seems to have a narrow and rigid list of technologies she considers worthy of approbation. Invariably, they are the technologies perceived to make the most unobtrusive use of existing natural resources: wind and solar power. I don’t disagree with her that these are technologies that should be pursued with vigor. Especially not when it comes to solar power. Rather, my point of contention is with her ideologically defined disregard for any technology that involves manipulating the natural world in ways that might trespass some vague bound of permissible use.

The most glaring example is nuclear power, which Klein brings up and dismisses repeatedly without offering any solid justifications for doing so. As near as I can tell, her concerns boil down to the fact that nuclear power seems scary, representing as it does a distillation of man’s abusive dominion and exploitation of the natural world. There seems to be an arbitrary boundary between good innovation – where humans create novel materials and systems to harness solar energy – and bad innovation – where humans create novel materials and systems to harness the energy of nuclear fission. Probably this has a lot to do with the relationship between nuclear power and nuclear weapons, in addition to the looming specter of disasters like Fukushima and Chernobyl.

Nuclear power, however, is safe. Since its invention, nuclear power has been linked to 300 deaths worldwide (and that’s a rather generous estimate). Over the same period (starting with Fermi’s discovery in 1934) coal mining has been the direct cause of 29,949 deaths in the United States alone. Globally, the count is surely much higher. The generation of energy from coal has killed 100 times more people in the United States than the generation of nuclear power has killed in the entire world.* Obviously, this doesn’t even begin to account for the colossal environmental costs of coal – even if we were to stop mining and burning coal tomorrow, the environmental toll would still be counted for generations to come.
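The comparison above is simple enough to check with back-of-envelope arithmetic. A minimal sketch, using only the figures quoted in this post (the inputs are illustrative estimates from the text, not authoritative statistics):

```python
# Figures quoted above; both are rough estimates from the text, not official data.
nuclear_deaths_worldwide = 300   # generous estimate, all countries, since 1934
coal_deaths_us = 29_949          # US coal-mining deaths over the same period

# Ratio of US coal deaths to worldwide nuclear deaths
ratio = coal_deaths_us / nuclear_deaths_worldwide
print(f"US coal deaths ~ {ratio:.0f}x worldwide nuclear deaths")  # ~100x
```

Note that this comparison actually stacks the deck against coal only mildly: it counts every nuclear death worldwide against coal deaths in a single country.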

The idea that we should dismiss nuclear power because of a few frightening accidents is patently absurd, especially when one considers the fact that technology already exists to build reactors that are inherently safe. In the 1970s, ’80s, and early ’90s, researchers at the Argonne National Laboratory developed and tested the integral fast reactor (IFR) – a reactor made safe by the very physics upon which it operates. The IFR was tested by simulating a loss of coolant flow (the problem at Fukushima) with the normal shutdown systems disabled: it shut itself down, proving itself a meltdown-proof reactor.

Klein, I’m sure, would be quick to point out that there are additional hazards associated with an IFR. Some of its constituents – like liquid sodium – are inherently dangerous. Though the IFR produces less waste, it still comes with attendant waste disposal problems. These are real concerns, but they aren’t cause for a wholesale abandonment of nuclear energy. They should motivate further research, not outright dismissal. The raw reality is that every technology comes with its share of problems. Windmills kill bats and produce vibrations that disturb burrowing animals and subterranean communities. Solar photovoltaics are often produced using heavy metals like cadmium, which have the potential to accumulate in food chains. There are no perfect solutions. And, often enough, the only way to find out how good a solution is and what costs it carries is to try it out.

This is precisely the reason why I don’t see the use in taking large-scale geoengineering options entirely off the table. Researchers have dubbed the leading approach the Pinatubo Option, after the 1991 eruption of Mount Pinatubo, which laid the seed for thinking about climate change in terms of solar radiation management (SRM). The basic idea is to ameliorate the effects of harmful greenhouse gases by pumping aerosols like sulfur dioxide into the atmosphere to cut down on the amount of solar radiation (i.e. sunlight) that reaches the earth’s surface. These are clearly last-resort options, but the idea that they should be automatically shunted into the intellectual dustbin because they come with a fog of unknowns – some of them likely dangerous – is truly strange.

In general, Klein is eager to steer the safest possible route, both ideologically and environmentally, eschewing all but the most well-established green technologies. Investments in nuclear power and geoengineering are risky, and therefore anathema. I used to be sympathetic to that kind of thinking – less and less so, as I grow and learn. The fact of the matter is that all human progress entails some amount of risk and uncertainty. Dealing with that fundamental fact is the flat fee that comes with living in a dynamic, vibrant society that values curiosity and exploration.

The puzzling thing about This Changes Everything is that it can simultaneously be such an incredibly forceful, scrupulously sourced argument against the perilous excesses of unrestrained capitalism and so gravely misguided when it touches on issues of human nature and the power of innovation. In the final analysis, Klein’s book is as much an ideological screed as it is a cold assessment of the facts. She meets the barking madness of free market fundamentalists with an ideological fervor redeemed only by the fact that it currently aligns with humanity’s best interests. This, of course, makes her an ally – not only to progressives, but to all humanity. She’s not wrong: capitalism, left unchecked, will devour the world.

Klein fetishizes indigenous lifestyles, exaggerating their cultural commitment to sustainability and regeneration. Low-impact living is a natural outgrowth of forager lifestyles, not an internalized ideological commitment to perpetual balance. At the same time, she casts the Enlightenment and Scientific Revolution as Icarian follies – humanity learned too much, too fast, hungrily consuming the world’s resources in callous indifference to the potential consequences. Certainly it’s true that the fruits of the Enlightenment, Scientific, and Industrial revolutions have poisoned the natural world. That point is virtually inarguable. But the endless cycles of discovery, criticism, debate, and revolution they set in place are also responsible for every good thing in existence. The notion of individual human rights was an Enlightenment invention. Thanks to science, we now have treatments or cures for hundreds of terrible diseases. Our very understanding of the natural world – including the ways in which we’ve harmed it and the ways to cease doing so – is due to the scientific revolution. Klein’s most cherished ideal – sustainability – is not a vestige of the primitive past, but a modern discovery.

Climate change is a problem. As are the underlying patterns of production and consumption. More fundamentally, the driving ethos of the industrialized West – that markets, unleashed, are pristine, unimpeachable optimality engines, spelling the best lives for the most possible people anywhere and everywhere they reach – is not only blatantly fallacious, it’s wantonly destructive. Markets are very good at some things (e.g., stimulating innovation) and very bad at others (e.g., adapting to variables that can’t be accounted for in price, or basically anything that requires even a modicum of foresight about the social or environmental implications of market behavior). For those of us willing to accept this rudimentary truth, the necessity of top-down – yes, that is, socialist – intervention is obvious. This isn’t a matter of surrendering to a colorless dystopia of central planning. It means whipping the dusty, tattered, rapidly decaying tools of representative government into shape and using them to assert our will in systems that are otherwise beyond our control.

This really isn’t a radical proposition. Within the confines of the market, few of us have the capital necessary to exert meaningful influence over the behavior of giants like Exxon and BP, but all of us will be affected by the environmental consequences of their business model. Using our powers as voters and citizens (diminished – and diminishing – though they may be) is the only viable option left to us. That’s why the efforts of the protestors at places like Standing Rock are so important. Civil disobedience is rapidly becoming our last line of defense against an economic system hell bent on devouring the world.

Nowhere in this recognition is there an obvious repudiation of the larger framework of values and methods that emerged out of the Enlightenment and Scientific Revolutions. There’s no denying that they gave us the coal-fired steam and gas-fueled internal combustion engines whose exhaust is currently warming the planet. But there’s also no denying that, inasmuch as scientific discovery has the power to doom us, it is also the only thing that can save us. Modernity comes with its own litany of woes. It also comes with a wealth of invisible comforts that make life today better than life at literally any other time in human history. Fewer people live in poverty or die preventable deaths. The attendant ecological problems are real and in urgent need of redress. But in no way is that a matter of trading one delusional ideology for another. The truth is much deeper and far more difficult to master: there are no perfect solutions. Utopia is an illusion. It has not and will never exist. But progress is real. It’s just riddled with error and struggle, giving way to faltering improvements – each new order better than the last, but still flawed and ripe for replacement.

Such is the case with the mythological market of the rational, all-knowing, self-made man. It has its merits. Hard work and self-determination are great. Competition is a powerful engine of innovation. A market wound-up and left to its own devices is a blind behemoth. Let’s use the tools of scientific discovery and representative government to give it a little discipline and foresight.

* It might be contended that these numbers aren’t fair. More people have worked in coal than in nuclear power, so obviously more people have died in the former than the latter. I considered this, and tried to calculate per capita fatalities. Unfortunately, good labor statistics for the nuclear sector are notoriously difficult to find. That said, I gave it my best shot. Between 1934 and 2015, 1 in every 680 coal workers in the United States died of job-related illness or injury. Grossly underestimating the number of nuclear power workers (assuming that no one has ever worked in nuclear before 2015 and using the Nuclear Energy Institute’s best workforce estimates), nuclear power related injuries and illnesses in the United States have claimed the lives of 1 in every 11,111 employees. Coal has gotten safer over time. In 2015, just 1 in every 8,567 workers died. The same is true of nuclear, however: in the same year, 0 of an estimated 100,000 nuclear workers died.
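The footnote’s per-capita arithmetic can be sketched in a few lines of Python. Note the caveat: the nuclear death count is never stated directly in the text, so the value of nine used below is a hypothetical reconstruction, inferred only from the cited “1 in 11,111” ratio and the ~100,000-employee workforce estimate.

```python
def one_in_n(deaths, workers):
    """Express a fatality rate as '1 death in every N workers'."""
    if deaths == 0:
        return float("inf")  # no recorded fatalities (e.g., nuclear in 2015)
    return workers / deaths

# Hypothetical reconstruction: a ~100,000-employee workforce and a
# '1 in 11,111' ratio jointly imply roughly nine fatalities.
assumed_nuclear_deaths = 9  # inferred from the ratio, not an official count
print(round(one_in_n(assumed_nuclear_deaths, 100_000)))  # prints 11111
```

The same function reproduces the coal comparison: a “1 in 680” career rate versus nuclear’s “1 in 11,111” is roughly a sixteen-fold difference, which is the gap the footnote is gesturing at.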

God these things are long…

Here’s the book. Criticism notwithstanding, it’s well worth a read.


The Use and Abuse of Cultural Relativism

A little over a year and a half ago, my wife and I were in Cambodia, sitting cross-legged on the wooden bow of a weathered prop boat, chatting with a local guide as he took us through a floating village populated – he told us – by ethnic Vietnamese immigrants. The boat cut through a wide channel in a patch of inundated forest on the Tonle Sap, our wake fanning out lazily to disappear among the trees or slosh mildly against the wooden bases of floating houses.

I don’t recall the precise details, but somehow our conversation turned to taxonomy. Our guide wanted to know what, according to our view, counted as an animal. Was an ant an animal? What about a human? This initially struck me as something of an odd question, but through the course of our back-and-forth it became apparent that it was quite sensible when understood from his perspective. Though the Western system of taxonomic classification exudes a comfortable aura of familiarity to those who have been raised with it, it is far from the most obvious way to order the natural world. No doubt this is part of the reason it didn’t occur to anyone until the past three centuries of human history.

In the view of our guide, the relationships among living things are defined by principles of opposition and symmetry. There must be two of each kind in a category and the relationships among categories are defined by an array of similarities and differences. That dogs and monkeys and snakes count as animals seemed to him common sense. That ants and humans are also members of the same category struck him as a little more peculiar.

When it comes to ordering the world of birds and beasts, the primary difference between our guide and ourselves was that we happened to subscribe to a classification system that orders the living world according to relationships of descent with modification. We apply a Linnaean classification system structured around the notion that all living things share a common origin, and that their relationships are defined by how recently they diverged from a shared ancestor in a nested hierarchy extending back to a successful batch of single-celled organisms that emerged from the primordial soup some 3.5 billion years ago. Though many of the features that seemed most salient in his classification system are irrelevant to our Darwinian framework, they nonetheless have their own internal logic.

Unfortunately, our conversation was far too short for anyone involved to build a comprehensive picture of the other’s worldview. Nevertheless, a few general points are obvious. Foremost, that our guide’s view differed so substantively from our own is surely not the product of any innate difference in cognitive capacity. Nor can it be attributed to any inherent difference between his most recent ancestors and our own.

Instead, his perspective is primarily a product of vast networks of highly contingent influences. A non-exhaustive list might include the distinct and ever-changing traditional beliefs of whatever ethnic group (or groups) he might belong to, his parents’ interpretation thereof, the ideas of his friends, the recent political history of Cambodia, and so forth. It’s highly unlikely that his ideas about the world are identical to those of his friends and neighbors, but they are nonetheless shaped by sampling a more widely shared cultural repertoire. The same can be said of my wife and me – the fact that we have learned about biological evolution is a consequence of the circumstances in which we were born and raised, and the distinct cultural trajectory pursued by a certain subset of Western Europeans following the invention of the printing press and the subsequent intellectual revolution of the Enlightenment.


A Brief History of Cultural Relativism

The view that human differences in knowledge, belief, or practice are usefully (if only partially) explained by the proximate influence of culture, and that these differences can be usefully illuminated through an understanding of the internal logic of the cultures that produce them, is the perspective offered by cultural relativism.

Unfortunately, cultural relativism is a widely misunderstood principle. The concept (though not the term itself) probably first emerged in the writings of the anthropologist Franz Boas in the closing decades of the 19th century. Since then, it has been a useful tool for ethnographers seeking to understand culturally motivated behavior. At the same time, it has also been a plague upon the larger enterprise of producing scientifically justifiable explanations for human behavior. Upon entering the popular vernacular, it has offered a veneer of intellectual rigor and scholarly nuance to facile arguments about the plurality of knowledge and the primacy of subjective experience. It has granted succor to the idea that there are “other ways of knowing” as reliable and equally deserving of confidence as the scientific method. It has sired pleas for “trigger warnings”, outrage over “microaggressions”, and an exaggerated emphasis on the subjective experience of feeling offended as a suitable justification for curtailing speech.

In its most extreme incarnations, cultural relativism has been advanced as a prohibition on any and all cross-cultural evaluation of beliefs, practices, and knowledge-claims. Everything from traditional accounts of cosmic creation and religious explanations of natural phenomena to moral arguments about the nature of good and bad behavior is taken off the table, shielded from scrutiny by the notion that cultural differences are sufficiently deep to render the humans they divide mutually unintelligible. Superficially, this stance has often been made to look like a sophisticated embrace of uncertainty, but it is really nothing of the sort. Instead, it is a pedantic, dehumanizing, and pusillanimous retreat from the hard work of uncovering truth and defending modern ideas about universal human rights.

Cultural relativism can be decomposed into three components. The first is methodological cultural relativism. This is a tool used by anthropologists and ethnographers to understand the beliefs, customs, ideas, and behavior of people with cultures and histories different from their own. In many respects mundane, methodological cultural relativism has proven extremely useful, allowing researchers to strip away the often blinding baggage of personal history and cultural bias.

Sadly, methodological relativism has come to be associated with the parasitic vices of normative (or moral) and cognitive relativism. Too often, these have served to rob mundane cultural relativism of its methodological utility. Cognitive relativism is what emerges when one looks upon the real problems inherent in the human quest for knowledge – the things that make rock-solid, axiomatic certainty so difficult to achieve – and withdraws from the search entirely. Similarly, normative relativism represents a withdrawal from the challenges that emerge from the fluid, turbulent ambiguity inherent in developing and applying coherent ethical systems.

Taken together, cognitive and normative relativism are the seeds for a system of intellectual and ethical obfuscation that will henceforth be referred to as fundamentalist cultural relativism (FCR). Rather than a useful system of intellectual criticism and methodological prescriptions, FCR is the intellectual equivalent of a mountaineer collapsing at the base of a gnarly, ferocious peak and declaring it impossible to summit before even giving it a serious attempt. Put more simply, it’s what happens when someone looks at a problem, notes that it’s hard, and immediately declares it intractable.

As a consequence, many serious academics have abandoned cultural relativism entirely. The physicist David Deutsch has cast it as a brand of “persistently popular irrationality.” Insofar as he is referring to the idea’s most common flavor, so thoroughly infused with cognitive and normative relativism and particularly pervasive among the social justice crowd, he isn’t far off the mark. Indeed, the extreme relativism that flourished in the humanities and social sciences in the 1980s and 1990s is nothing if not a persistently popular brand of irrationality. With the spread of the vapid school of vigorous intellectual masturbation and enthusiastic pedantry politely referred to as “postmodernism,” the notion that all knowledge-claims or moral positions are equally valid became dogma throughout many of the subdisciplines that fall within the scope of the humanities and social sciences. My own discipline, anthropology, is still recovering from the effects of this intellectual rot.

One of the chief problems with the brand of cultural relativism (justifiably) decried by thinkers like Deutsch is that it represents an abuse of a more useful principle. The notion that the beliefs, customs, and behaviors of different cultural groups could be usefully analyzed and understood in their own terms was a radical advance over previous anthropological methods. When the earliest forms of the concept were first advanced, beliefs about the social – and even biological – supremacy of white Western men were scarcely given a second thought. White men in Western, industrialized societies occupied the pinnacle of a long chain of social and biological evolution.

According to these views, Western culture represented the purest, most complete manifestation of a natural order, the metric against which all other human cultures should be judged. The cultural differences uncovered in places like the equatorial rain forests of Papua New Guinea or the frigid expanses of the high Arctic were commonly viewed as deviations from perfection. Today, these notions are rightly considered signally ridiculous and repugnant by all but the most intransigent racists and xenophobes. But to many prominent and serious thinkers occupying important positions of academic and political authority in the late 19th and early 20th century, nothing could have been more obvious than their own innate superiority.

Eventually, ethnographers and anthropologists like Franz Boas came to recognize that the rampant ethnocentrism of their peers was translating into bad science, fundamentally hobbling efforts to actually understand the roots of human behavioral variation. The idea that “traditional” societies, such as those found practicing foraging or horticulturalist lifestyles in the rainforests of Brazil, were somehow frozen in time – primitive vestiges of points along the slow but inevitable march toward the industrialized modernity of centralized state authority and market economics – represented a roadblock to intellectual progress. Recognizing the faults in this perspective, some turn-of-the-century anthropologists began to advocate analyzing cultures according to their own internal logic.

It’s worth noting that the kind of relativism espoused by Boas and his students was almost certainly more permissive than the version being advocated here. Nonetheless, as an analytical tool, cultural relativism represented a considerable improvement over the parochial, Western-centric worldview that had been distorting the thinking of early explorers and ethnographers. Rather than viewing non-Western cultures as primitive holdovers, serious anthropologists began to recognize that individual cultures – and the differences among them – could be fruitfully illuminated by seriously attempting to understand them in their own terms. Substantial utility could be found in recognizing that each culture is a product of a vast web of historical contingency.


Cultural Relativity as a Modern Research Tool

To understand why this is so, it’s useful to couch the argument in more modern terms. Human behavior, like that of other animals, is a product of a complex stew of influences. This includes a suite of genes – some highly conserved, others more recently selected as a result of the specific challenges that came along with being a bipedal, largely hairless, highly social primate living on the African savannah for the better share of the Pleistocene. It also includes a capacity to learn and usefully modify our behavior over the span of hours, days, weeks, months, or years – allowing us to respond to environmental variability much more rapidly than the relatively glacial pace at which natural selection sorts the genetic wheat from the chaff. But the inheritance of genetic information is ubiquitous across the living world, and learning is not particularly rare. Humanity’s peculiar spark is found in our unique capacity to create, accumulate, store, and transmit vast sums of non-genetic information – i.e. culture.

Culture itself is an evolving system, emerging as generations of individuals interact with one another while attempting to cope with the challenges of living in particular environments and trying to build stable social arrangements therein. Ideas about how best to live are dictated not only by hard practicality (e.g. what foods offer the best caloric returns relative to caloric energy spent in acquiring them), but by the beliefs, customs, superstitions, and additional cultural bric-a-brac that tends to accumulate as a byproduct of human efforts to live in a world they don’t fully comprehend. The ability to accumulate, compress, retain, and manipulate all of that extra information symbolically, and transmit it across generations, is the primary feature that has allowed humans to not only survive, but actively flourish, in environments as diverse as the frozen expanses of the Canadian Arctic and the tropical rain forests of Papua New Guinea. The capacity for culture transformed an African primate into a global species. Without understanding culture – and the processes that shape it – it is fundamentally impossible to produce a complete account of human behavioral variation.

Cultural relativism, then, is a useful lens for investigating one of the principal variables shaping human behavior. If you want to understand why people think and act the way they do, it is often useful to try your best to see things from their angle – to adopt what anthropologists call an “emic”, or “inside-looking-out”, view of culture. This is not controversial. It is simply a specific incarnation of what philosopher James Woodward has called the manipulability conception of causal explanation, a counterfactual account of causal relationships that suggests one can find out why things are the way they are by imagining how they might be otherwise. People are the way they are, in part, because of the culture they experience. If you changed or removed that culture, they would be different. Thus, to understand why they are the way they are, you should understand the set of beliefs, customs, practices, ideas, rituals, and what-not that are caught under the umbrella term “culture”.

This is also an idea that has been hijacked and corrupted into something that, in its own ways, has come to prove almost as debilitating as 19th century Western ethnocentrism. Extreme relativists advocate the position that cultures can only be understood in their own terms, that outsiders have no justifiable basis for cross-cultural evaluation, that all culturally derived knowledge is equally true, and that all culturally derived moral positions are equally valid. Proponents of this school of cultural relativism argue not merely that the differences between my perspective and that of my guide are usefully illuminated by understanding the details of the differing cultural contexts from which they emerged, but that they are also equally true pictures of the way the world works.


Fundamentalist Cultural Relativism and Science

By completely abandoning the prospect of “etic” (or “outside-looking-in”) analysis, the proponents of fundamentalist relativism place themselves in a thorny situation. Extreme relativism implies a fairly strict allegiance to idealist epistemologies. This means that they subscribe to the notion that subjective experience is the absolute arbiter of reality – or, roughly, that our minds make the world, rather than our minds being a product of a world with an existence independent of and discoverable by human observers.

Under this view, the special utility of science as a knowledge-gaining activity is implicitly (and often explicitly) denied. This represents a shift from the rather mundane observation that the process of scientific discovery is a cultural phenomenon to the much more drastic (and far less tenable) position that scientific knowledge – the product of that process – is culturally contingent as well. Force is only equivalent to the mass of an object multiplied by its acceleration in cultures with the concepts supplied by Newtonian physics, a molecule of water only consists of two hydrogen atoms covalently bonded to an oxygen atom in societies with chemists, and the electrical resistance of a conductive metal only decreases with lower temperatures in societies that have developed an understanding of electromagnetism.

Such a line of thinking may seem ridiculous – a hyperbolic straw-man set to topple in the slightest breeze – but it is in fact an accurate representation of positions forwarded by serious and influential intellectuals in the postmodern movement. This has resulted in a disconcerting number of otherwise intelligent people (typically either heavily informed by or professionally engaged in the humanities and social sciences) taking fundamentalist cultural relativism seriously. Instead of placing special emphasis on the process of scientific discovery, progressive intellectuals too often celebrate “other ways of knowing”, as if astrology or eastern spiritualism can produce a knowledge-claim half as worthy of confidence as rigorous empirical investigation and peer review.

To a degree, this breed of extreme relativism may seem harmless enough. After all, as astrophysicist and science-popularizer Neil deGrasse Tyson is fond of pointing out, the findings of science are true regardless of whether or not anyone buys into them. Most of the people who fall into the trap of FCR aren’t likely to be engaged in the process of scientific discovery – they can watch from the sidelines and pooh-pooh the veracity of scientific claims without impinging on their actual veracity in any way, just as I could sit on the sidelines of a basketball game and deny that a player has successfully made a free-throw without affecting whether or not a new point has actually been scored.

This would be true if a tolerance for nonsense didn’t inevitably yield ugly results. Making important life choices based on one’s zodiac sign is foolish and risky, but generally innocuous. But what of an impotent Chinese man who thinks ground-up rhinoceros horn will give him a powerful erection? Or parents who treat their child’s meningitis with maple syrup? How about parents who believe in faith healing, praying their hearts out as their child withers and dies in agony?


Fundamentalist Cultural Relativism and Morality

Dangerous consequences also emerge when the principles of FCR are applied to problems of moral and ethical evaluation. Here, identifying the deficits of extreme relativism requires slightly more nuance than is necessary to articulate its failings with regard to the problem of scientific truth. This is because the discovery or demonstration of moral absolutes is extraordinarily difficult. Moral systems emerge from culture and cultures change from place to place and evolve over time. As a result, ideas about right and wrong behavior change along with the cultural substrate in which they are embedded.

Consider, for example, the notion of individual human rights. Today, the idea that individuals within human societies have certain rights is frequently taken for granted, and the moral consequences of either supporting or infringing on those rights are often taken into account when debating the merits of law or political action on the local, federal, and international stage. Prior to the advent of modern notions of human rights, the idea that certain humans should have access to a certain range of social resources and be granted a certain level of equal treatment as a de facto condition of their existence was an alien proposition. Rights instead flowed from monarchs or religious officials. But since the Enlightenment, the Western notion of human rights has continued to expand, becoming increasingly inclusive as people have come to identify and labor to eradicate previously unexamined strains of prejudice and bigotry. One day – probably very soon – they will expand to encompass certain non-human animals. Our ideas about human rights have changed over time, continually altering the standards by which we judge moral propriety.

Another example should drive the point down to bedrock. Nowhere is the temporal plasticity of moral strictures more clearly demonstrated than in religion, where interpretations of the dictates of moral propriety outlined in sacred texts are constantly renegotiated in light of secular changes. The precise phrasing varies with translation, but the content of the Bible possessed by modern Episcopalians is basically the same as the content of the one possessed by 17th century Puritans. Nevertheless, the two sects exhibit significant differences concerning what kinds of behavior are considered morally acceptable. The primary cause of these deviations is that modern Episcopalians have shifted their understanding of doctrine to accommodate wider cultural changes regarding the perception of what does and does not count as righteous behavior.

So clearly, moral prescriptions change with time and context. Given this, anyone advocating a cross-cultural evaluation of moral propriety might seem to be on shaky ground. Here, it becomes important to recognize that cultural relativism, as a methodological tool, says nothing about moral evaluation. Appropriately applied, cultural relativism supplies an observer with the perspective needed to see – for instance – why parents in certain sects of fundamentalist Christianity might deny their child life-saving medical care. It does not prevent anyone from feeling or expressing moral revulsion at the underlying beliefs and the practices they engender. Nor does it prevent a society that places value on the preservation of human life from interceding on the child’s behalf and putting the parents in prison for criminal negligence or even homicide.

The same can be said of any of the horrors that flow from religious fundamentalism: honor killings, homophobic hate crimes, female genital mutilation, child brides, suicide bombings, the torture and murder of heretics in places like Saudi Arabia, and the full litany of offenses that trespass the bounds of the moral intuitions of people who value human life and equality. Cultural relativism should be deployed as a tool to understand why these things occur – why the confluence of certain social contexts and religious beliefs leads people to kill and mutilate one another. It says nothing about how we should react to these things. By positing cultural relativism as a prohibition on moral evaluation, extreme relativists retreat from all responsibility for upholding modern liberal values like universal education, racial and gender equality, freedom from oppression, and access to basic healthcare.

Indeed, though the articulation of universal human values is a notoriously thorny problem, one can only deny their existence by adopting a fantastical view of the forces structuring human behavior. As a rather odious stew of idealism, FCR entails a rank denial of the existence of anything resembling human nature – replaced, presumably, with a Lockean conception of humans as infinitely malleable blank slates. In this view, human moral prescriptions are fashioned from the aether, their only worldly determinant the cultural milieu in which they arise.

But worldwide, humans create and enforce prohibitions on certain types of in-group killing. Likewise, a capacity to monitor social contracts and detect cheaters seems to be innate, offering a clear indication that humans universally appreciate fairness in social arrangements. An aversion to incest is similarly widespread. Though they are differently expressed in myriad taboos and moral prescriptions, these are strong contenders for universal moral preferences. They very likely have a basis in humanity’s evolved psychology, and therefore offer the crude foundations for a universal code of ethics.

On the other end of the spectrum, humans have innate predispositions toward misconduct that can be exacerbated by an exaggerated emphasis on a culture as an unassailable fount of moral knowledge. For instance, humans have an ugly impulse toward tribalism. Allowing cultural boundaries to play a greater role in shaping values than our shared identity as humans not only grants tribalism succor, it is a natural consequence of taking absolute moral relativism seriously. It is also detrimental to the project of building and maintaining humanistic moral codes and the universalized standards of human thriving they entail. With its exaggerated celebration of boundless pluralism, FCR has the potential to prove inimical to the practical goals of building and maintaining the social institutions most amenable to human success. Consider the very project of building stable social institutions. While it is important to encourage diversity, it is also true that some degree of assimilation is critical to the formation and long-term stability of societies. When the locus of identity in a given society is a multitude of distinct religions, ethnic affiliations, or political subgroups, the resulting fragmentation is a recipe for long-term instability and strife. Each group is bound to pursue clannish interests, guided by moral codes that may be both mutually exclusive and entirely divorced from the best interests of the collective.

This is precisely what has happened in the East Ramapo Central School District in New York, where Orthodox Jews seized the local school board and began cutting services in an attempt to alleviate their tax burden. Their goal was to avoid paying taxes on schools their children didn’t attend, but their myopic focus on religious and ethnic affiliation has led them to neglect – and even harm – the wellbeing of their neighbors. Adherence to the moral proscriptions of an ancient faith, in concert with a very likely evolved predisposition to favor cultural familiars, has led them to place value on their identity as Orthodox Jews to the exclusion of their identity as human beings – and all the ethical imperatives that identity implies.

Yet even in the absence of any evolved, universal preferences to form the foundation of widely applicable moral prescriptions, it is impossible to advocate extreme forms of cultural relativism without abandoning modest claims about right and wrong. For example, it might be argued without a lot of protest that individual behaviors that encourage or contribute to the physical health or well-being of others are laudable. Conversely, those that can be shown to be directly or indirectly harmful to others are not. It’s good to feed your kids, bad to starve them. A parent who poisons the lungs of her offspring with second-hand smoke can be sensibly accused of engaging in some form of wrongdoing. But what of the men who have foisted upon their wives the bizarre and inarguably sexist tradition of wearing burqas? Not only is this practice reflective of the possessive, proprietary interests of a regressive patriarchy, it has also been linked to vitamin D deficiency and associated problems like rickets.

According to FCR, the latter claim offers no basis for ethical evaluation. Lacking any direct knowledge of what it is like to be a Muslim woman, I have no basis for suggesting that their adherence to the tenets of Islamic faith leads them to live a less healthy and fulfilling life than they could otherwise. Boiled down, extreme relativism argues that my desire to see all humans, everywhere, granted basic human rights is ill-founded. Moreover, it posits that my opinion that certain cultural practices or religious beliefs are inimical to that goal is bigoted. Instead of expressing an interest in the well-being of all humans, the proponents of FCR see this view as “Islamophobic”.


Cultural Relativism as Ethical Obstructionism

By suggesting that cultural differences are essentially unbridgeable and denying the possibility of either uncovering or negotiating a universal standard of human thriving, FCR has the curious consequence of more substantially “otherizing” (to borrow a rather obscurantist term from the social justice world) people from distinct cultural backgrounds. In one of its more popular incarnations, it argues that everyone is gifted with a “positionality” (yet another term of polite PC pedantry) that renders their ideas about right and wrong both externally unintelligible and permanently unassailable. It is to say, in effect, “your worldview is so different from my own that I can hardly justify feeling or expressing outrage when you either abuse or are abused by a member of your cultural subgroup.” At best, this is a recipe for social stagnation. At worst, it’s a way of abrogating centuries of moral progress – dispensing with hard-earned notions of human rights in favor of milquetoast ideas about cultural sensitivity.

FCR is also a direct progenitor of the modern strain of intellectual and moral sensitivity sweeping college campuses in the form of pleas for “trigger warnings” and concern over “microaggressions”. These violations of Enlightened, humanistic ethics seem superficial in comparison to some of the more heinous transgressions countenanced by FCR, but they presage something sinister. Where once FCR might have precipitated innocuously nondescript rhetorical utterances along the lines of “who are we to judge?”, it now motivates an authoritarian push for the establishment of pristine thought-sanctuaries – places where ideas are vetted for the slightest hint of potential trespass against the ideals and preferences of cultural subgroups. In this regard, FCR has turned from a bastion for moral cowardice to a direct assault on civil liberties – and it is here that it most significantly earns a sobriquet typically reserved for regressive strains of religious intolerance and slavish adherence to ideology: fundamentalism.

Thomas Paine was absolutely correct when he observed, in The Rights of Man, that natural rights are irrevocable. Though it may be a recent invention, the introduction of the concept of “human rights” represents a monumental transition in our understanding of human ethics. Absent its cataclysmic obliteration from the realm of human thought, it will remain a critical component of our modern assessments of right and wrong. The secular notion of individual human rights is, inarguably, an improvement over previous moral codes, yet FCR – manifest in the mutual unintelligibility implied by overwrought concerns over “positionality” and an obsequious devotion to cultural sensitivity – would have us abandon that progress by asserting that people’s culturally derived beliefs about the will of Allah or the efficacy of vaccines are more important than anyone’s right to live a healthy, independent life. Brass tacks, there are good reasons to think some ways of living are better than others.

I would humbly submit that we shouldn’t throw the proverbial baby out with the bathwater. Cultural relativism, in its original, lighter form, has proven immensely useful to anthropologists and ethnographers seeking to understand the proximate causes of locally expressed human differences. It was a substantial leap over the ham-fisted, Western-centric, deeply racialized theorizing of 19th century intellectuals. The underlying principle, that all human cultural systems deserve to be understood in their own terms, is useful as more than just a methodological tool for ethnographers. On the global stage, it has a role to play in the shaping of foreign policy and international affairs. On the more humble scale of individual lives, it has a role to play in helping neighbors understand one another.

It just needs to be deployed under the recognition that, inasmuch as it is a useful tool for building human understanding, it has little value as a tool for evaluating knowledge claims and moral prescriptions. To the extent that cultural relativism, manifest in its fundamentalist form, is interpreted as endorsing the primacy of subjective experience in dictating the structure of reality or mandating abstinence from adopting or defending human rights on the grounds that those rights are a “Western construct”, it deserves all the ridicule it receives. Science is the best tool for gauging truth. Cultural practices that inhibit the achievement of universal human rights can be justifiably viewed as harmful and ought to be stridently opposed and vigorously critiqued. These assertions aren’t oppressive or marginalizing or bigoted. They’re true.

Jeffrey Guhin was Absolutely Right About Neil deGrasse Tyson and Absolutely Wrong About Science

Writing rebuttals to the random thoughts that emerge from Neil deGrasse Tyson’s twitter feed has become something of a cottage industry of late. He appears to make a game of trying to cram profundity into 140 characters. The results might be generously described as mixed. His most recent misfire came in the form of a proposal to build a virtual nation called “Rationalia”, where all policy decisions are adjudicated by evidence.

In response, sociologist Jeffrey Guhin entered the ‘rebut Tyson’s twitter feed’ industry with a perversely ill-conceived takedown. The flaws with Tyson’s reasoning are rather elementary and simple to articulate. Brass tacks, a nation in which policy was dictated by the weight of evidence wouldn’t be able to make much policy. While it’s hard to think of an issue where evidence is entirely immaterial, there are plenty of issues where the weight of that evidence is far less than decisive. Choices about what kinds of policy to enact on issues like abortion, capital punishment, and resource redistribution can and should be informed by evidence, but they are ultimately decided by the ceaseless competition among changing value systems.

It’s clear that Guhin has some sense of this, but instead of driving the point home, he turns to an attack on the entire process of scientific discovery and the veracity of the results it yields. In doing so, he reveals an embarrassing misunderstanding of the way science works and the reasons for which it is granted special credence as a knowledge-gaining activity. Indeed, it’s difficult to read Guhin’s piece without coming away with the impression that he literally does not understand science at all.

Guhin’s primary gripe with science seems to be that scientists are people and, like all other people, they are driven by irrational impulses and blinkered by unexamined prejudices. This is an extraordinarily mundane observation, but it has long provided fodder for assaults on science from people across the “other ways of knowing” spectrum, from eastern spiritualists to vehement anti-vaxxers. In terms of originality and impact, it might fit somewhere between the observation that rocks tend to be hard and you’ll die if you don’t eat.

The fact that scientists can be just as biased and irrational as anyone else is precisely why science, as a process, eschews appeals to authority. General relativity isn’t considered a powerful scientific theory because the man who came up with it, Albert Einstein, was a well-respected scientist. It’s considered powerful because its predictions match observable reality with incredible precision. Other scientists checked Einstein’s work, making observations and performing experiments to test how closely it aligned with reality. Their results indicated that general relativity is an immensely successful explanatory framework.

This is the feature of science that Guhin really overlooks. Much of the rationality of science emerges from the structure of scientific communities. Guhin’s ignorance of this fundamental point suggests he spends more time cataloging the perceived moral infractions of science than actually thinking about how science works. Myriad researchers compete and cooperate with one another in the shared pursuit of new knowledge. Though any individual scientist might be blind to the flaws of her experimental methods or pet hypotheses, plenty of her peers will gladly assist her in uncovering every point of error. The community structure of science serves as a course-corrective for the subjective biases, irrationality, and dogmatism exhibited by any of its individual constituents.

So when Guhin points to social Darwinism and phrenology as scientific failures, he neglects to mention that their eventual dismissal is a clear indication that the process of scientific discovery works just fine. Those ideas fell out of favor because the cold arbitration of observable reality, in concert with the relentless scrutiny of peer review, found them wanting. Recognizing that they didn’t do any explanatory work, scientists cast those ideas aside, where they joined the colossal dust-heap of failed scientific ideas.

As Guhin rightly suggests, the history of science is, more than anything else, a story of failure. In the long run, most scientific ideas turn out to be wrong in some way or other. Many just need to be tweaked, but others are discarded outright. Usually the results are pretty innocuous. J.J. Becher’s phlogiston theory of combustion never hurt anybody, nor did Joseph Priestley’s recalcitrant defense of it.

Indeed, almost all of the failures Guhin seeks to cast as instances where science grossly violated the bounds of human ethics are really nothing of the sort. Hitler’s Third Reich and Stalinist Russia labored to present a veneer of scientific credibility, but never really exhibited anything of the sort. Both were expressions of state religion, where ideological fundamentalism and political fanaticism actively stifled scientific research and trampled many of the values most esteemed in science. Other sins Guhin tries to pin on science, like scientific Marxism, were dismissed decades ago because they were never really scientific in the first place.

It’s absolutely critical to remember that every time a scientific idea has turned out wrong, it has been a scientist or a community of scientists that discovered its faults. More importantly, in the quest to understand the nature of reality – to construct reliable explanations of how the real world actually functions – science is the only thing that has ever worked. Measured against its litany of failures, the halls of successful scientific explanations can seem rather sparsely populated. But science is also the only process capable of landing a robot on a comet and building enormously sophisticated pocket computers. It’s the only source to turn to when you want to explain the structure of the recurrent laryngeal nerve in a giraffe or pluck information about the origins of the cosmos from data on the temperature of empty space and the Doppler shift of distant galaxies. It’s the only method for identifying the causal linkages between patterns of global climate change and human behavior. It’s the only tool for uncovering the causes of diseases and successful methods for treating them. With the right kind of belligerent myopia, it’s easy to forget that all these things are the product of science. Though it might only do so rarely, science is literally the only method for uncovering truths that transcend the boundaries of language and culture.

Given all this, it might be possible to see a nugget of truth beneath Tyson’s otherwise unrefined suggestion. It points to a more modest claim: that in any decision-making process where scientific evidence can be brought to bear, that evidence absolutely should be granted special emphasis. It’s not that values don’t have a role to play. It’s that values independent of reason and evidence are a recipe for unmitigated disaster.


Read This Book, Dammit: The Secret of Our Success

Starting in the late 1970s and early 1980s, a new program of research began to emerge in the study of human culture and behavior. Building on pre-existing tools from population genetics and evolutionary biology, researchers like Luigi Cavalli-Sforza, Marcus Feldman, Robert Boyd, and Peter Richerson began to construct a theory of cultural evolution rooted in Darwinian principles. They showed that attention to the functional roots of culture could be couched in a larger framework capable of explaining both the nature of culture and the processes behind cultural change.

The notion that cultures evolve was hardly new. Archaeologists and anthropologists had been working under that assumption for a few decades, striving to refine their theoretical and methodological approaches into the roots of a mature, rigorous science. Ultimately, these efforts yielded a framework that used a thoroughly Darwinian lexicon – adaptation, selection, evolution – in only loosely Darwinian ways. Researchers developed a focus on local ecological specialization – a useful step forward – but frequently situated their insights in a framework that was both incomplete and inconsistent. The recognition that cultural change was not only evolutionary, but sensibly Darwinian, provided the tools necessary to build formal – and testable – explanations of cultural phenomena.

In the years since Cavalli-Sforza, Feldman, Boyd, and Richerson laid down their pioneering work, the theory of cultural evolution as a Darwinian process – capable of both causing and responding to new patterns of biological evolution – has been consistently vindicated, demonstrating its utility in the lab and field. Joseph Henrich’s book, The Secret of Our Success, is a thrilling exploration of the frontiers of that research. Henrich makes a strong case that underlying humanity’s broad ecological success and expansive behavioral repertoire is our faculty for creating, transmitting, manipulating, storing, and accumulating massive amounts of non-genetic information in the form of culture.

Cumulative cultural evolution, as it is called, is unique to humans (putting aside emerging evidence for simple forms in New Caledonian crows – the difference in degree is large enough that we might as well call it one of kind). Other species have cultural traditions – those crows, for instance, make tools, as do chimpanzees – but none of them retain that information and build on it in any meaningful way over successive generations. The techniques individual chimps learn for termite fishing or nut-cracking are lost at death, inevitably hung up on the barriers that inhibit the transmission of all the other traits organisms acquire throughout their lifetimes.

In technical parlance, this is called Weismann’s barrier. Put simply, it means that inheritance is a one way street – information moves from germ (sex) cells to somatic (body/tissue) cells, but not the other way around. A chimp might learn a great deal about how to use tools to access otherwise inaccessible resources throughout its lifetime, but it has no way to get that information from the neurons in its brain to the eggs or sperm in its reproductive system.

Somewhere in the hominid line, our ancestors found a way around that obstacle, sidestepping the whole business of one-way genetic transmission by transmitting a lifetime’s worth of acquired information from individual to individual in the form of culture. The foundations of this remarkable evolutionary transition rest in humanity’s spectacular facility for social learning. Other species are, of course, capable of social learning, but these abilities are vastly enhanced in humans. We pay far more attention to each other than other animals, selectively targeting individuals that exhibit signals of above-average proficiency or expertise. In the same vein, we have a highly developed theory of mind (the ability to think about what other people are thinking) allowing us to understand each other in terms of intentionality and purpose. It has even been suggested that our unusually small iris, set against a very white sclera, is an adaptation for non-verbal communication – making it easier for us to keep track of other people’s attention and communicate our own.

Critically, growing evidence indicates our social learning expertise – unlike many other forms of human knowledge and behavior – is innate. Henrich discusses experiments in which children are matched against adult chimps and orangutans on a variety of tasks. In most domains, they do about the same or a little worse than the other primates. But in social learning, human children massively outperform their hairier cousins.

Such highly evolved adaptations for social learning provide a scaffold for extraordinary levels of information sharing. Even absent language, humans watch one another and pay special attention to signals of above average proficiency and prestige in order to learn new or better ways to solve adaptive challenges. They create and maintain social norms that encourage cooperation, foster stable traditions, and aid in patterns of ingroup-outgroup competition. Social learning allows us to make and accumulate culture.

This, more than anything else, explains why a species with relatively little genetic variation displays such a sweeping range of behavioral variation and ecological specialization. It gets to the why of the descriptive insights uncovered by earlier cultural evolutionists – that humans display local ecological adaptation – by presenting a plausible and, increasingly, empirically justified mechanism. Humans can meet the challenges of living on frigid ice sheets in the high arctic and sweltering jungles in the subtropics because we have the capacity to accumulate information about how to live in those environments at a rate far in excess of that afforded by strict biological adaptation. And critically, it’s not a matter of individual genius. Humans learn about how to live in new environments through the accumulated wisdom of generations of trial-and-error learning, resulting in cultural packages that are expertly tailored to the challenges of specific ecosystems.
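The contrast between cumulative social learning and lone reinvention can be made concrete with a toy simulation. To be clear, this is my own illustrative sketch, not a model from Henrich’s book: agents who can imperfectly copy the best performer of the previous generation accumulate skill over time, while agents who must start from scratch each generation never exceed what a single lifetime of innovation can produce.

```python
import random

random.seed(42)

def simulate(generations=50, pop=100, social=True):
    """Toy model: each agent holds a 'skill level' for some task.
    Social learners inherit (with lossy copying) from the best
    performer of the previous generation, then innovate a little.
    Individual learners reinvent everything from scratch."""
    skills = [0.0] * pop
    for _ in range(generations):
        best = max(skills)
        new_skills = []
        for _ in range(pop):
            if social:
                base = best * random.uniform(0.9, 1.0)  # imperfect copying
            else:
                base = 0.0  # acquired know-how dies with its owner
            new_skills.append(base + random.uniform(0.0, 1.0))  # innovation
        skills = new_skills
    return max(skills)

print(simulate(social=True))   # keeps climbing across generations
print(simulate(social=False))  # capped by one lifetime of invention
```

Run over many generations, the social learners’ best skill keeps climbing (limited only by copying fidelity), while the individual learners’ best stays flat – a crude analog of the gap between cumulative culture and raw individual intelligence.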

Clearly, this point stands in contradiction to those who would link humanity’s extreme success as a species to extraordinary – and innate – individual intelligence. Individual humans can be pretty smart, but they rarely (if ever) have the cognitive horsepower necessary to build the sophisticated cultural innovations necessary to survive in novel environments from scratch. This is true of modern technology and scientific progress as much as it is of forager subsistence and ritual observance. There is a popular tendency to think of technological innovation as a matter of lone geniuses and marvelous insights. But James Watt’s steam engine was inspired by the earlier Newcomen steam engine. Similarly, Albert Einstein’s theories of special and general relativity drew inspiration from, and built on, the work of Gottfried Leibniz and Bernhard Riemann. Individual genius is real (who could argue that Einstein wasn’t a genius?) but the fruits of genius accrue incrementally.

For his purposes, Henrich makes this point another way. To illustrate the failings of individual intelligence – and, by contrast, the power of cumulative cultural evolution – he relates a variety of historical anecdotes. In this light, we might think of them as little natural experiments. In each, healthy, intelligent European explorers found themselves forced to survive on their wits alone in an unfamiliar environment. Be it the muggy, swampy coasts of the Gulf of Mexico, the icy wastes of the high arctic, or the arid sprawl of the Australian outback, the outcome was inevitably the same: suffering and death. Those who survived did so because kindly natives, armed with the cumulative knowledge necessary to survive in a particular ecosystem, lent the naive Europeans a hand.

Tellingly, the causal mechanics of successful cultural adaptations are usually opaque to the people who employ and perpetuate them. Most people don’t understand the physics of bow and arrow technology or the insulative properties of snow and ice. They don’t understand the chemistry of effective poisons for hunting or detoxifying otherwise inedible plants. Yet, using the cumulative intelligence of many individuals over multiple generations, they develop technologies that successfully exploit principles of aerodynamics, thermodynamics, and chemistry to build sophisticated suites of cultural know-how that allow them to live and thrive in almost any environment.

The breakthrough Henrich presents is not that culture is useful. That’s pretty obvious, intuitively speaking. It’s in the emerging understanding of how humans make culture – and how culture makes humans – in dynamic patterns of feedback and response between our genetic architecture and cultural developments over successive generations. Learning how to process plants and meat, and passing that information down from generation to generation, has worked extraordinary changes on our guts. Domesticating certain ungulates and incorporating their milk into their diets has modified certain populations’ ability to metabolize milk well into adulthood. Specialized adaptations allowed humans to move up into otherwise inhospitable latitudes, eventually altering the skin pigmentation of some European populations. The Darwinian framework of gene-culture coevolution allows researchers to move beyond insightful explanation about the plausible roots of human cultural and behavioral variation and get down to the serious business of scientifically explaining these things.

And that is the core point. The revolution here isn’t descriptive, it’s explanatory. Placing our understanding of cultural change in a comprehensive, unified Darwinian framework has moved the study of human behavior forward in a way that other, similarly minded attempts have so far failed to achieve. As more and more researchers across the social sciences – from psychology and sociology to economics and anthropology – come to appreciate and accept the utility of the Darwinian perspective, these fields (particularly anthropology) are beginning to move out of the aimless shadows of what Thomas Kuhn called pre-paradigmatic science.

The reasons for this are simple: the more researchers who work within a coherent, mutually intelligible framework, the greater a field’s capacity for real scientific progress. This is because science itself is something of a Darwinian process. It works through patterns of competition and cooperation among individual researchers (and research groups), who collaborate on complex problems and criticize each other’s work where it falls short of established criteria. This process doesn’t work very well if everyone is working under an entirely different framework – Marxist anthropologists can’t add much to the discussion of Darwinian approaches because they lack both the specialized knowledge and the shared values needed to make sense of and properly evaluate Darwinian work (and vice versa).

In this vein, the work of Henrich and other evolutionarily minded social scientists has been immensely beneficial, forging as it has a deeper, broader understanding of the roots of human behavior. And there’s a compelling case to be made that the growing popularity of this theoretical framework isn’t some intellectual fad. Rather, it’s a product of people who share similar goals (to explain things) and similar standards for judging how well those aims have been met (internal coherence, experimental and observational evidence, falsifiability) responding to relevant evidence. The array of approaches couched under the wider framework of gene-culture coevolution just seem to work.

Henrich’s synthesis of this research is among the best that I have read, carefully explaining how evolved psychological traits – like a bias toward watching and mimicking prestigious or successful individuals or a tendency to monitor and enforce social contracts – work in concert with our ever-increasing capacity for high-fidelity information storage and transmission – language evolution, writing technology, printing presses, internet – to create a potentially boundless realm of cultural innovation. Humans are a remarkable species. But, as Henrich argues, our singularity comes not from our innate intelligence – which has been much overblown. Instead, it comes from our ability to put our heads together, creating resilient forms of collective intelligence that allow us to survive – and thrive – practically anywhere we find ourselves.

The Secret of Our Success: How Culture is Driving Human Evolution, Domesticating Our Species, and Making us Smarter – by Joseph Henrich