Genetic determinism: why we never learn—and why it matters

Here it is, 2014, and we have “Is the will to work out genetically determined?” by Bruce Grierson in Pacific Standard (“The Science of Society”).

Spoiler: No.

The story’s protagonist is a skinny, twitchy mouse named Dean who lives in a cage in a mouse colony at UC Riverside. Dean runs on his exercise wheel incessantly—up to 31 km per night. He is the product of a breeding experiment by the biologist Ted Garland, who selected mice for the tendency to run on a wheel for 70 generations. Garland speculates that Dean is physically addicted to running—that he gets a dopamine surge that he just can’t get enough of.

Running in place

Addiction theory long ago embraced the idea that behaviors such as exercise, eating, or gambling may have effects on the brain similar to those of dependence-forming drugs such as heroin or cocaine. I have no beef with that, beyond irritation at the tenuous link between a running captive mouse and a human junkie. What’s troubling here is the genetic determinism. My argument is about language, but it’s more than a linguistic quibble; there are significant social implications to the ways we talk and write about science. Science has the most cultural authority of any enterprise today—certainly more than the humanities or the arts. How we talk about it shapes society. Reducing a complex behavior to a single gene gives us blinders: it tends to turn social problems into molecular ones. As I’ve said before, molecular problems tend to have molecular solutions. The focus on genes and brain “wiring” tends to suggest pharmaceutical therapies.

To illustrate, Grierson writes,

File this question under “Where there’s a cause, there’s a cure.” If scientists crack the genetic code for intrinsic motivation to exercise, then its biochemical signature can, in theory, be synthesized. Why not a pill that would make us want to work out?

I have bigger genes to fry than the misuse of “cracking the genetic code,” although it may be indicative of the naiveté about genetics that allows Grierson to swallow Garland’s suggestion about an exercise pill. Grierson continues, quoting Garland,

“One always hates to recommend yet another medication for a substantial fraction of the population,” says Garland, “but Jesus, look at how many people are already on antidepressants. Who’s to say it wouldn’t be a good thing?”

I am. First, Jesus, look at how many people are already on antidepressants! The fact that we already over-prescribe antidepressants, anxiolytics, ADHD drugs, statins, steroids, and antibiotics does not constitute an argument for over-prescribing yet another drug. “Bartender, another drink!” “Sir, haven’t you already had too much?” “Y’know, yer right—better make it two.”

Then, what if it doesn’t work as intended? Anatomizing our constitution into “traits” such as the desire to work out is bound to have other effects. Let’s assume Dean is just like a human as far as the presumptive workout gene is concerned. Dean is skinny and twitchy and wants to do nothing but run. Is it because he “wants to exercise” or is it because he is a neurotic mess who takes out his anxiety on his wheel? Lots of mice in little cages run incessantly—Dean just does it more than most. His impulse to run is connected to myriad variables (genes, brain nuclei, and more), and the reported results say nothing about mechanisms. We now know that the physiological environment influences the genes as much as the genes influence the physiological environment. The reductionist logic of genetic determinism, though, promotes thinking in terms of a unidirectional flow of causation, from the “lowest” levels to the “highest.” The more we learn about gene action, the less valid that a priori assumption seems. The antiquated “master molecule” idea still permeates both science and science writing.

Further, when you try to dissect temperament into discrete behaviors this way, and design drugs that target those behaviors, side effects are sure to be massive. Jesus, look at all those antidepressants, which decrease libido. Would this workout pill make us neurotic, anxious, jittery? Would we become depressed if we became injured or otherwise missed our workouts? Would it make us want to work out, or would it make us want to take up smoking or snort heroin? In the logic of modern pharmacy, the obvious answer to side effects is…more drugs: antidepressants, anxiolytics, antipsychotics, etc. A workout pill, then, would mainly benefit the pharmaceutical industry. When a scientist makes a leap from a running mouse to a workout pill, he is floating a business plan, not a healthcare regimen.

And finally, what if it does work as intended? It would still be a detriment to society, because a pill would remove yet another dimension of a healthy lifestyle from the realm of self-discipline, autonomy, and social well-being. It becomes another argument against rebuilding walkable neighborhoods, promoting public transportation, and commuting by bicycle. A quarter-mile stroll to an exercise addict would be like a quarter-pill of codeine for a heroin junkie—unsatisfying. Not only is this putative workout pill a long, long stretch and rife with pitfalls, it is not even something worth aspiring to.

And that’s just one article. Scientific American recently ran a piece about how people who lack the “gene for underarm odor” (ABCC11) still buy deodorant (couldn’t possibly have anything to do with culture, could it?). Then there was their jaw-dropping “Jewish gene for intelligence,” which Sci Am had already taken down by the time it appeared in my Google Alert. I’d love to have heard the chewing out someone received for that bone-headed headline. Why do these articles keep appearing?

The best science writers understand and even write about how to avoid determinist language. In 2010, Ed Yong wrote an excellent analysis of how, in the 1990s, the monoamine oxidase A (MAOA) gene became mis- and oversold as “the warrior gene.” What’s wrong with a little harmless sensationalism? Plenty, says Yong. First, catchy names like “warrior gene” are bound to be misleading. They are ways of grabbing the audience, not describing the science, so they oversimplify and distort in a lazy effort to connect with a scientifically unsophisticated audience. Second, there is no such thing as a “gene for” anything interesting. Nature and nurture are inextricable. Third, slangy, catchy phrases like “warrior gene” reinforce stereotypes. The warrior gene was quickly linked to the Maori population of New Zealand. Made sense: “everyone knows” the Maori are “war-like.” Problem was, the preliminary data didn’t hold up. In The Unnatural Nature of Science, the developmental biologist Lewis Wolpert observed that the essence of science is its ability to show how misleading “common sense” can be. Yet that is an ideal; scientists can be just as pedestrian and banal as the rest of us. Finally, Yong points out that genes do not dictate behavior. They are not mechanical switches that turn complex traits on and off. As sophisticated as modern genomics is, too many of us haven’t moved beyond the simplistic Mendelism that enabled the distinguished psychologist Henry H. Goddard to postulate—based on reams of data collected over many years—a single recessive gene for “feeblemindedness.” The best method in the world can’t overcome deeply entrenched preconception. As another fine science writer, David Dobbs, pithily put it in 2010, “Enough with the ‘slut gene’ already…genes ain’t traits.”

As knowledge wends from the lab bench to the public eyeball, genetic determinism seeps in at every stage. In my experience, most scientists working today have at least a reasonably sophisticated understanding of the relationship between genes and behavior. But all too often, sensationalism and, increasingly, greed induce them to oversell their work, boiling complex behaviors down to single genes and waving their arms about potential therapies. Then, public relations people at universities and research labs are in the business of promoting science, so when writing press releases they strive for hooks that will catch the notice of journalists. The two best hooks in biomedicine, of course, are health and wealth. The journalists, in turn, seek the largest viewership they can, which leads the less scrupulous or less talented to reach for cheap and easy metaphors. And even though many deterministic headlines cap articles that do portray the complexity of gene action, the lay reader is going to take away the message, “It’s all in my genes.”

Genetic determinism, then, is not monocausal. It has many sources, including sensationalism, ambition, poor practice, and the eternal wish for simple solutions to complex problems. Science and journalism are united by a drive toward making the complex simple. That impulse is what makes skillful practitioners in either field so impressive. But in clumsier hands, the simple becomes simplistic, and I would argue that this risk is multiplied in journalism about science. Science writing is the delicate art of simplifying the complexity of efforts to simplify nature. This is where the tools of history become complementary to those of science and science journalism. Scientists and science writers strive to take the complex and make it simple. Historians take the deceptively simple and make it complicated. If science and science journalism make maps of the territory, historians are there to move back to the territory, in all its richness—to set the map back in its context.

Studying genetics and popularization over the last century or so has led me to the surprising conclusion that genetic oversell is independent of genetic knowledge. We see the same sorts of articles in 2014 as we saw in 1914. Neither gene mapping nor cloning nor high-throughput sequencing; neither cytogenetics nor pleiotropy nor DNA modification; neither the eugenics movement nor the XYY controversy nor the debacles of early gene therapy—in short, neither methods, nor concepts, nor social lessons—seem to make much of a dent in our preference for simplistic explanations and easy solutions.

Maybe we’re just wired for it.

New findings suggest scientists not getting smarter

Certain critics of rigid genetic determinism have long believed that the environment plays a major role in shaping intelligence. According to this view, enriched and stimulating surroundings should make one smarter. Playing Bach violin concertos to your fetus, for example, may nudge it toward future feats of fiddling, ingenious engineering, or novel acts of fiction. Although this view has been challenged, it persists in the minds of romantics and parents–two otherwise almost non-overlapping populations.

If environmental richness were actually correlated with intelligence, then those who live and work in the richest environments should be measurably smarter than those not so privileged.  And what environment could be richer than the laboratory? Science is less a profession than a society within our society–a meritocracy based on an economy of ideas. Scientists inhabit a world in which knowledge accretes and credit accrues inexorably, as induction, peer review, and venture capital fuel the engines of discovery and innovation. Science has become the pre-eminent intellectual enterprise of our time–and American science proudly leads the world. The American biomedical laboratory is to the 21st century what the German university was to the 19th; what Dutch painting was to the 17th; the Portuguese sailing ship to the 16th; the Greek Lyceum to the minus 5th.

According to this view, then, scientists should be getting smarter. One might measure this in various ways, but Genotopia, being quantitatively challenged, prefers the more qualitative and subjective measure of whether we are making the same dumb mistakes over and over. So we are asking today: Are scientists repeating past errors and thus sustaining and perhaps compounding errors of ignorance? Are scientists getting smarter?

Yes and no. A pair of articles (1, 2) recently published in the distinguished journal Trends in Genetics slaps a big juicy data point on the graph of scientific intelligence vs. time–and Senator, the trend in genetics is flat. The articles’ author, Gerald Crabtree, examines recent data on the genetics of intelligence. He estimates that, of the 20,000 or so human genes, between 2,000 and 5,000 are involved in intelligence. This, he argues, makes human intelligence surprisingly “fragile.” In a bit of handwaving so vigorous it calls to mind the semaphore version of Wuthering Heights, he asserts that these genes are strung like links in a chain, rather than multiply connected, as nodes of a network. He imagines the genes for intelligence to function like a biochemical pathway, such that any mutation propagates “downstream,” diminishing the final product–the individual’s god-given and apparently irremediable brainpower.
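
To see why the chain picture yields “fragility” while a network picture would not, a back-of-the-envelope sketch may help (the numbers below are illustrative assumptions, not Crabtree’s). If a trait requires every one of $n$ genes to remain intact, and each gene independently acquires a disabling mutation with probability $\mu$ per generation, then the chance that the whole chain comes through a generation unscathed is

$$P(\text{intact}) = (1 - \mu)^{n} \approx e^{-n\mu}.$$

Taking, say, $n = 3{,}000$ genes and $\mu = 10^{-5}$ gives $e^{-0.03} \approx 0.97$: roughly a three percent chance per generation of picking up at least one new “hit.” In a redundant network, by contrast, a single hit is buffered by the remaining connections, and the same mutation rate barely dents the trait. The entire fragility argument, in other words, hangs on choosing the chain over the network.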

IQ

Beginning in 1865, the polymath Francis Galton fretted that Englishmen were getting dumber. In his Hereditary Genius (1869) he concluded that “families are apt to become extinct in proportion to their dignity” (p. 140). He believed that “social agencies of an ordinary character, whose influences are little suspected, are at this moment working towards the degradation of human nature,” although he acknowledged that others were working toward its improvement. (1) The former clearly outweighed the latter in the minds of Galton and other Victorians; hence Galton’s “eugenics,” an ingenious scheme for human improvement through the machinations of “existing law and sentiment.” Galton’s eugenics was a system of incentives and penalties for marriage and childbirth, meted out according to his calculations of social worth.

This is a familiar argument to students of heredity. The idea that humans are degenerating–especially intellectually–persists independently of how much we know about intelligence and heredity. Which is to say, no matter how smart we get, we persist in believing we are getting dumber.

Galton was just one exponent of the so-called degeneration theory: the counter-intuitive but apparently irresistible idea that technological progress, medical advance, improvements in pedagogy, and civilization en masse in fact are producing the very opposite of what we supposed; namely, they are crippling the body, starving the spirit, and most of all eroding the mind.

The invention of intelligence testing by Alfred Binet just before the turn of the 20th century provided a powerful tool for proving the absurd. Though developed as a diagnostic to identify children who needed a bit of extra help in school–an enriched environment–IQ testing was quickly turned into a fire alarm for degeneration theorists. When the psychologist Robert M. Yerkes administered a version of the test to Army recruits during the First World War, he concluded that better than one in eight of America’s Finest were feebleminded–an inference that is either ridiculous or self-evident, depending on one’s view of the military.

These new ways of quantifying intelligence dovetailed perfectly with the new Mendelian genetics, which was developed beginning in 1900. Eugenics—a rather thin, anemic, blue-blooded affair in Victorian England—matured in Mendelian America into a strapping and cocky young buck, with advocates across the various social and political spectra embracing the notion of hereditary improvement. Eugenics advocates of the Progressive era tended to be intellectual determinists. Feeblemindedness—a catch-all for subnormal intelligence, from the drooling “idiot” to the high-functioning “moron”—was their greatest nightmare. It seemed to be the root of all social problems, from poverty to prostitution to ill health.

And the roots of intelligence were believed to be genetic. In England, Cyril Burt found that Spearman’s g (for “general intelligence”)—a statistical “thing,” derived by factor analysis and believed by Spearman, Burt, and others to be what IQ measures—was fixed and immutable, and (spoiler alert) poor kids were innately stupider than rich kids. In America, the psychologist Henry Goddard, director of research at the Vineland Training School for the Feebleminded in New Jersey and the man who had introduced IQ testing to the US, published Feeblemindedness: Its Causes and Consequences in 1914. Synthesizing years of observations and testing of slow children, he suggested–counter to all common sense–that feeblemindedness was due to a single Mendelian recessive gene. This conclusion was horrifying, because it made intelligence seem so vulnerable–so “fragile.” A single mutation could turn a normal individual into a feebleminded menace to society.

As Goddard put it in 1920, “The chief determiner of human conduct is the unitary mental process which we call intelligence.” The grade of intelligence for each individual, he said, “is determined by the kind of chromosomes that come together with the union of the germ cells.” Siding with Burt, the experienced psychologist wrote that intelligence was “conditioned by a nervous mechanism that is inborn,” and that it was “but little affected by any later influence” other than brain injury or serious disease. He called it “illogical and inefficient” to attempt any educational system without taking this immovable native intelligence into account. (Goddard, Efficiency and Levels of Intelligence, 1920, p. 1)

This idea proved so attractive that a generation of otherwise competent and level-headed researchers and clinicians persisted in believing it, again despite its being as obvious as ever that the intellectual horsepower you put out depends on the quality of the engine parts, the regularity of the maintenance you invest in it, the training of the driver, and the instruments you use to measure it.

The geneticist Hermann Joseph Muller was not obsessed with intelligence, but he was obsessed with genetic degeneration. Trained at the knobby knees of some of the leading eugenicists of the Progressive era, Muller–a fruitfly geneticist by day and a bleeding-heart eugenicist by night–fretted through the 1920s and 1930s about environmental assaults on the gene pool: background solar radiation, radium watch-dials, shoestore X-ray machines, etc. The dropping of the atomic bombs on the Japanese sent him into orbit. In 1946 he won a Nobel prize for his discovery of X-ray-induced mutation, and he used his new fame to launch a new campaign on behalf of genetic degeneration. The presidency of the new American Society of Human Genetics became his bully pulpit, from which he preached nuclear fire and brimstone: our average “load of mutations,” he calculated, was about eight damaged genes–and growing. Crabtree’s argument thus sounds a lot like Muller grafted onto Henry Goddard.

In 1969, the educational psychologist Arthur Jensen produced a 120-page article asserting that compensatory education–the idea that racial disparities in IQ correlate with opportunities more than with innate ability, and accordingly that they can be reduced by enriching the learning environments of those who test low–was futile. Marshaling an impressive battery of data, most of which were derived from Cyril Burt, Jensen insisted that blacks are simply dumber than whites, and (with perhaps just a hint of wistfulness) that Asians are the smartest of all. Jensen may not have been a degenerationist sensu stricto, but his opposition to environmental improvement earns him a data point.

In 1994, Richard Herrnstein and Charles Murray published their infamous book, The Bell Curve. Their brick of a book was a masterly and authoritative rehash of Burt and Jensen, presented artfully on a platter of scientific reason and special pleading for the brand of reactionary politics that is reserved for those who can afford private tutors. They found no fault with either Burt’s data (debate continues, but it has been argued that Burt was a fraud) or his conclusions that IQ tests measure Spearman’s g, that g is strongly inherited, and that it is innate. Oh yes, and that intellectually, whites are a good bit smarter than blacks but slightly dumber than Asians. Since they believed there is nothing we can do about our innate intelligence, our only hope, they argued, is to “marry up” and try to have smarter children.

The Bell Curve appeared in the early years of the Human Genome Project. By 2000 we had a “draft” reference sequence for the human genome, and by 2004 “the” human genome was declared complete. Since the 1940s, human geneticists had focused on single-gene traits, especially diseases. One problem with Progressive era eugenics, researchers argued, was that the eugenicists had focused on socially determined and hopelessly complex traits; once geneticists set their sights on more straightforward targets, the science could at last advance.

But once this low-hanging fruit had been plucked, researchers began to address more complex traits once again. Disease susceptibility, multicausal diseases such as obesity, mental disorders, and intelligence returned to the fore. Papers such as Crabtree’s are vastly more sophisticated than Goddard’s tome. The simplistic notion of a single gene for intelligence is long gone; each of Crabtree’s 2,000-5,000 hypothetical intelligence genes hypothetically contributes but a tiny fraction of the total. If you spit in a cup and send it to the personal genome testing company 23andMe, they will test your DNA for hundreds of genes, including one that supposedly adds 7 points to your IQ (roughly 6 percent for an IQ of 110).

Thus we are back around to a new version of genes for intelligence. Despite the sophistication and nuance of modern genomic analyses, we end up concluding once again that intelligence is mostly hereditary and therefore also racial, and that it’s declining.

Apart from the oddly repetitious and ad hoc nature of the degeneration argument, what is most disconcerting is this one glaring implication: that pointing out degeneration suggests a desire to do something about it. If someone were, say, sitting on the couch and called out, “The kitchen is sure a mess! Look at the plates all stacked there, covered with the remains of breakfast, and ick, flies are starting to gather on the hunks of Jarlsberg and Black Twig apples hardening and browning, respectively, on the cutting board,” you wouldn’t think he was simply making an observation. You’d think he was implying that you should get in there and clean up the damn kitchen. Which would be a dick move, because he’s just sitting there reading the Times, so why the heck doesn’t he do it himself? But the point is, sometimes observation implies action. If you are going to point out that the genome is broken, you must be thinking on some level that we can fix it. Thus, degeneration implies eugenics. Not necessarily the ugly eugenics of coercive sterilization laws and racial extermination. But eugenics in Galton’s original sense of voluntary human hereditary improvement.

And thus, scientists do not appear to be getting any smarter. Despite the enriched environs of the modern biomedical laboratory, with gleaming toys and stimulating colleagues publishing a rich literature that has dismantled the simplistic genetic models and eugenic prejudices of yore, researchers such as Crabtree continue to believe the same old same old: that we’re getting dumber–or in danger of doing so.

In other words, sometimes the data don’t seem to matter. Prejudices and preconceptions leak into the laboratory, particularly on explosive issues such as intelligence and race, no matter how sophisticated our understanding of heredity becomes. Plenty of scientists are plenty smart, of course. But rehashing the degeneracy theory of IQ does not demonstrate it.