Tag Archives: genomics

Genetic determinism: why we never learn—and why it matters

Here it is, 2014, and we have “Is the will to work out genetically determined?,” by Bruce Grierson in Pacific Standard (“The Science of Society”).

Spoiler: No.

The story’s protagonist is a skinny, twitchy mouse named Dean who lives in a cage in a mouse colony at UC Riverside. Dean runs on his exercise wheel incessantly—up to 31 km per night. He is the product of a breeding experiment by the biologist Ted Garland, who selected mice for the tendency to run on a wheel for 70 generations. Garland speculates that Dean is physically addicted to running—that he gets a dopamine surge that he just can’t get enough of.


Addiction theory long ago embraced the idea that behaviors such as exercise, eating, or gambling may have effects on the brain similar to those of dependence-forming drugs such as heroin or cocaine. I have no beef with that, beyond irritation at the tenuous link between a running captive mouse and a human junkie. What’s troubling here is the genetic determinism. My argument is about language, but it’s more than a linguistic quibble; there are significant social implications to the ways we talk and write about science. Science has the most cultural authority of any enterprise today—certainly more than the humanities or arts. How we talk about it shapes society. Reducing a complex behavior to a single gene gives us blinders: it tends to turn social problems into molecular ones. As I’ve said before, molecular problems tend to have molecular solutions. The focus on genes and brain “wiring” tends to suggest pharmaceutical therapies.

To illustrate, Grierson writes,

File this question under “Where there’s a cause, there’s a cure.” If scientists crack the genetic code for intrinsic motivation to exercise, then its biochemical signature can, in theory, be synthesized. Why not a pill that would make us want to work out?

I have bigger genes to fry than the misuse of “cracking the genetic code,” although it may be indicative of a naiveté about genetics that allows Grierson to swallow Garland’s suggestion about an exercise pill. Grierson continues, quoting Garland,

“One always hates to recommend yet another medication for a substantial fraction of the population,” says Garland, “but Jesus, look at how many people are already on antidepressants. Who’s to say it wouldn’t be a good thing?”

I am. First, Jesus, look at how many people are already on anti-depressants! The fact that we already over-prescribe anti-depressants, anxiolytics, ADHD drugs, statins, steroids, and antibiotics does not constitute an argument for over-prescribing yet another drug. “Bartender, another drink!” “Sir, haven’t you already had too much?” “Y’know, yer right—better make it two.”

Then, what if it doesn’t work as intended? Anatomizing our constitution into “traits” such as the desire to work out is bound to have other effects. Let’s assume Dean is just like a human as far as the presumptive workout gene is concerned. Dean is skinny and twitchy and wants to do nothing but run. Is it because he “wants to exercise” or is it because he is a neurotic mess and he takes out his anxiety on his wheel? Lots of mice in little cages run incessantly—Dean just does it more than most. His impulse to run is connected to myriad variables, genes, and brain nuclei, and the reported results say nothing about mechanisms. We now know that the physiological environment influences the genes as much as the genes influence the physiological environment. The reductionist logic of genetic determinism, though, promotes thinking in terms of a unidirectional flow of causation, from the “lowest” levels to the “highest.” The more we learn about gene action, the less valid that seems to become as an a priori assumption. The antiquated “master molecule” idea still permeates both science and science writing.

Further, when you try to dissect temperament into discrete behaviors this way, and design drugs that target those behaviors, side effects are sure to be massive. Jesus, look at all those anti-depressants, which decrease libido. Would this workout pill make us neurotic, anxious, jittery? Would we become depressed if we became injured or otherwise missed our workouts? Would it make us want to work out or would it make us want to take up smoking or snort heroin? In the logic of modern pharmacy, the obvious answer to side effects is…more drugs: anti-depressants, anxiolytics, anti-psychotics, etc. A workout pill, then, would mainly benefit the pharmaceutical industry. When a scientist makes a leap from a running mouse to a workout pill, he is floating a business plan, not a healthcare regimen.

And finally, what if it does work as intended? It would be a detriment to society, because a pill would remove yet another dimension of a healthy lifestyle from the realm of self-discipline, autonomy, and social well-being. It becomes another argument against rebuilding walkable neighborhoods and promoting public transportation and commuting by bicycle. A quarter-mile stroll to an exercise addict would be like a quarter-pill of codeine for a heroin junkie—unsatisfying. Not only is this putative workout pill a long, long stretch and rife with pitfalls, it is not even something worth aspiring to.

And that’s just one article. Scientific American recently ran a piece about how people who lack the “gene for underarm odor” (ABCC11) still buy deodorant (couldn’t possibly have anything to do with culture, could it?). Then there was their jaw-dropping “Jewish gene for intelligence,” which Sci Am had already taken down by the time it appeared in my Google Alert. I’d love to have heard the chewing out someone received for that bone-headed headline. Why do these articles keep appearing?

The best science writers understand and even write about how to avoid determinist language. In 2010, Ed Yong wrote an excellent analysis of how, in the 1990s, the monoamine oxidase A (MAOA) gene became mis- and oversold as “the warrior gene.” What’s wrong with a little harmless sensationalism? Plenty, says Yong. First, catchy names like “warrior gene” are bound to be misleading. They are ways of grabbing the audience, not describing the science, so they oversimplify and distort in a lazy effort to connect with a scientifically unsophisticated audience. Second, there is no such thing as a “gene for” anything interesting. Nature and nurture are inextricable. Third, slangy, catchy phrases like “warrior gene” reinforce stereotypes. The warrior gene was quickly linked to the Maori population of New Zealand. Made sense: “everyone knows” the Maoris are “war-like.” Problem was, the preliminary data didn’t hold up. In The Unnatural Nature of Science, the developmental biologist Lewis Wolpert observed that the essence of science is its ability to show how misleading “common sense” can be. Yet that is an ideal; scientists can be just as pedestrian and banal as the rest of us. Finally, Yong points out that genes do not dictate behavior. They are not mechanical switches that turn complex traits on and off. As sophisticated as modern genomics is, too many of us haven’t moved beyond the simplistic Mendelism that enabled the distinguished psychologist Henry H. Goddard to postulate—based on reams of data collected over many years—a single recessive gene for “feeblemindedness.” The best method in the world can’t overcome deeply entrenched preconception. As another fine science writer, David Dobbs, pithily put it in 2010, “Enough with the ‘slut gene’ already…genes ain’t traits.”

As knowledge wends from the lab bench to the public eyeball, genetic determinism seeps in at every stage. In my experience, most scientists working today have at least a reasonably sophisticated understanding of the relationship between genes and behavior. But all too often, sensationalism and, increasingly, greed induce them to oversell their work, boiling complex behaviors down to single genes and waving their arms about potential therapies. Then, public relations people at universities and research labs are in the business of promoting science, so when writing press releases they strive for hooks that will catch the notice of journalists. The two best hooks in biomedicine, of course, are health and wealth. The journalists, in turn, seek the largest viewership they can, which leads the less scrupulous or less talented to reach for cheap and easy metaphors. And even though many deterministic headlines cap articles that do portray the complexity of gene action, the lay reader is going to take away the message, “It’s all in my genes.”

Genetic determinism, then, is not monocausal. It has many sources, including sensationalism, ambition, poor practice, and the eternal wish for simple solutions to complex problems. Science and journalism are united by a drive toward making the complex simple. That impulse is what makes skillful practitioners in either field so impressive. But in clumsier hands, the simple becomes simplistic, and I would argue that this risk is multiplied in journalism about science. Science writing is the delicate art of simplifying the complexity of efforts to simplify nature. This is where the tools of history become complementary to those of science and science journalism. Scientists and science writers strive to take the complex and make it simple. Historians take the deceptively simple and make it complicated. If science and science journalism make maps of the territory, historians are there to move back to the territory, in all its richness—to set the map back in its context.

Studying genetics and popularization over the last century or so has led me to the surprising conclusion that genetic oversell is independent of genetic knowledge. We see the same sorts of articles in 2014 as we saw in 1914. Neither gene mapping nor cloning nor high-throughput sequencing; neither cytogenetics nor pleiotropy nor DNA modification; neither the eugenics movement nor the XYY controversy nor the debacles of early gene therapy—in short, neither methods, nor concepts, nor social lessons—seem to make much of a dent in our preference for simplistic explanations and easy solutions.

Maybe we’re just wired for it.

Why aren’t genome profiles free? A cynic’s view

Charles Seife’s piece over at sciamblogs the other day gave me one of those forehead-slapping, “if-it-was-a-double-helix-it-would-have-bit-me” moments. For 23andMe, the “test” or genome profile is small potatoes. The real product of 23andMe isn’t the saliva test.

It’s the database, stupid.

Think about gmail or Facebook. When you sign up for a “free” account you agree to allow them to send you targeted promotions, in the form of ads that appear in the margins. It’s hilarious how bad their algorithms are. When I rant about some bonehead right-wing politician, I start getting suggestions to follow Mitt Romney. When my wife posted wedding pictures, she got ads for wedding registries. My Google profile is amazingly bad–mixed in with a few genuine interests, it lists dozens of things I have no interest in whatsoever (fishing, dolls and accessories, apartments and residential rentals) as well as things so general they say nothing meaningful about me (consumer resources, search engines). I’ve often found that reassuring. Though I know the software is bound to get better, it’s comforting at least for the moment to know that they don’t actually know me that well. I’m not naive enough to think that will last forever.

What 23andMe is really about, says Seife, is doing that same kind of profiling with your genome. Historians and anthropologists of science have long been interested in “biologization” and “medicalization.” Ugly words, useful concepts. Biomedicine tends to shift our gaze from labor to biology (and especially health). Is violence a crime or a disease? Do you identify more strongly as a carpenter, soldier, or professor, vs. as celiac, PTSD, or a breast-cancer survivor? Conceiving some corner of our world in biological terms can have profound implications. If violence is a crime, we treat it with fines, incarceration, or death. If it is an organic disease, we treat it with drugs and counseling. Conceivably, we may someday treat it with gene therapy. As Foucault pointed out, both criminalization and medicalization involve behavior modification–just in different formats. Medicalization can be more humane, but it can also strip away one’s autonomy and subtly and dangerously shift power relationships.

Every time someone sends in their little vial of spit to 23andMe, the company adds to a large-and-growing database of genomic data linked to a broad range of personal tastes and behaviors. Like Google and Facebook, they make you agree to let them send you ads based on the data they collect, which is augmented by their social-media site. It is a genomic version of Google. Welcome to the all-volunteer biological surveillance state.

Seife points out that Anne Wojcicki was married to Google co-founder Sergei Brin, that Google is a heavy investor in 23andMe, and that the price of the saliva test has been dropping steadily and is now below $100.

My cynical view is that the company has an easy end-around available for the FDA letter demanding that they halt marketing their test: give it away. It’s not marketing if they’re not selling, right? Investopedia defines marketing as “The activities of a company associated with buying and selling a product or service.” It does go on to list advertising as one key aspect of marketing, but I wonder whether they could successfully argue that promoting a free product isn’t marketing per se.

Legal definitions aside, if Seife is right, my guess is that before long genome profiling will be like web browsers and email: something almost everyone does, and that except for a few willing to pay for a premium private service, something that provides so many benefits that we tolerate its ads as a necessary trade-off of modern life.

Is this any more insidious than gmail? If we say yes, we risk running headlong into the genetic determinism this blog rails against. We have to be careful about privileging biological information over social information. If Facebook’s suggestions are laughable, they’d likely seem prescient in comparison with what they’d predict based on my genomic profile. The “genes for” most things explain tiny amounts of variance and tend to have low penetrance. Other than a few strongly Mendelian diseases, a genome profile currently says very little about you, simply because it’s based on small probabilities of uncertain precision.

But like the algorithms analyzing your social profile, those combing your genomic profile will improve, and probably at a rate faster than any of us expect. Most importantly, your genomic profile will merge with your social profile, which will greatly enhance the accuracy of both. Your social profile will become biologized–rooted in and interwoven with your DNA.

The gradual way in which 23andMe is heading toward an open-source business model may simply reflect the high cost of getting the biotech version of Google Plus off the ground. As profits increase, they can afford to drop the price. When it hits zero–when they start giving away the test–rest assured that ad revenues will then be enough to keep the shareholders happy.

A blow for personal genome testing

Hey honey–remember when I accidentally left the chicken coop open and they all flew away? Well I think they’ve come back home to roost!

Last summer, we did an analysis of the 23andMe commercial promoting their genetic testing service and the egotistical identity politics it both taps into and contributes to. The ad was all about how your genes were “You” and knowing about them would enable you to predict your genetic future. Genetic profiling can in some cases give robust statistical estimates of likelihood of certain genetic conditions, but it is safe to say that we rarely know what that means. And it’s presented as though we do.

Now we find that FDA is ordering 23andMe to stop marketing their tests.

The 23andMe saliva sample kit, says FDA, is a “medical device,” “intended for use in the diagnosis of disease or other conditions or in the cure, mitigation, treatment, or prevention of disease, or is intended to affect the structure or function of the body.”

They cite the company’s claims to allow patients’ genome profiles to help them assess “health risks,” and “drug response,” and specifically as a “first step in prevention” that enables users to “take steps toward mitigating serious diseases” such as diabetes, coronary heart disease, and breast cancer.


This is not a shot over the bow–it’s the last straw. FDA has warned 23andMe repeatedly, going back to July, 2012, that they were making health claims about their product that they couldn’t back up.

The company offers two types of products: a genealogical “panel” or profile, and a health panel. The genealogical panel is popular but is apparently considered a harmless hobby, or at least outside the purview of the Public Health Service. It is not clear whether FDA (which, like the National Institutes of Health and the Centers for Disease Control, falls under the sprawling PHS) will have any concerns about genealogical applications of the saliva test, but that would seem unlikely. The problem for 23andMe is that, as shown by the ad we analyzed earlier, they have been pushing the health panel very hard. Family trees are a hobby; health is where the real money is.

Direct-to-consumer medicine trails an appealing democratic, anti-authoritarian perfume that seems to make people slightly drunk. Mild intoxication can be pleasant, need not be dangerous, and sometimes can be a spur to creativity. But it can also impair your judgment. When you’ve gotta drive the kids home, you may need a couple cups of good strong regulatory coffee and a couple hours to sober up before getting behind the wheel.

A good deal of “preventive, participatory, personalized” medicine is profit-driven, and stockholders don’t necessarily have the public’s health foremost in mind. The FDA warning is a good illustration of why it’s important to balance the goal of stimulating innovation and economic growth with the goal of maximizing health. For the former, the free market can be a powerful tool. But for the latter, sometimes you need a little good old-fashioned meritocratic oversight.

h/t Robert Resta, Mark Largent

 

Neonatal genome screening: preventive medicine or prophylactic profiteering?

Thoughtful blog post over at Nature recently by Erika Check, on a $25M set of 4 studies that will sequence the exomes of 1500 neonates, whether ill or not. Called the Genomic Sequencing and Newborn Screening Disorders program, it is essentially a pilot study for universal newborn genome sequencing. One could see such a study coming down the pike. But if this is a direction in which medicine is heading, we should be moving like a wary cat, not like a bounding puppy.

The dominant rhetoric for whole-genome screening sketches a benevolent world of preventive care and healthier lifestyles. “One can imagine a day when every newborn will have their genome sequenced at birth,” said Alan Guttmacher, director of NICHD, which co-sponsors the program with the Genome Institute. In his genotopian vision, a baby’s sequence “would become a part of the electronic health record that could be used throughout the rest of the child’s life both to think about better prevention but also to be more alert to early clinical manifestations of a disease.”

But deeper in her article, Check responsibly quotes a skeptic, Stephen Kingsmore of Children’s Mercy Hospital and Clinics in Kansas City, who estimates that the program is likely to find 20 false positives for every true positive. In other words, only around 5% of what will loosely be called “disease genes” will in fact lead to disease. One of the reasons for that low rate of true positives is that many of the disease alleles we can screen for concern diseases of old people: Alzheimer’s, various cancers, and so on. Life experience plays a large and still imperfectly understood role in such diseases. Sure, we can test at birth or even before for the SNPs we know correlate with those diseases, but, Check asks, what does that really tell us?
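For the quantitatively inclined, the arithmetic behind that roughly-5% figure is a one-liner; here is a minimal sketch using only the ratio reported in the article:

```python
# Kingsmore's estimate: 20 false positives for every true positive.
# The share of positives that are real (the positive predictive value)
# is therefore 1 true positive out of 21 total positives.
false_pos_per_true_pos = 20
ppv = 1 / (1 + false_pos_per_true_pos)
print(f"True positives among all positives: {ppv:.1%}")  # about 4.8%
```

In other words, even before asking what a “disease gene” means for any individual child, roughly 19 out of every 20 flags would be noise.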

In Guttmacher’s sunny scenario about early prevention, the parents and later the child could be regularly reminded of this individual’s elevated risk. This itself has not only direct health risks but potentially a significant inadvertent impact on the patient’s social life. Everything from the child’s temperament (is she anxious by nature?) to family situation (ill siblings? Alcoholic parent? Suicide?) to many other factors could profoundly modulate how this genetic knowledge would affect the child. Social context matters.

But such an individualized, lifelong health-maintenance program is unlikely ever to be accessible beyond medicine’s most elite customers. Personalized medicine has been around since the ancient Greeks, and, logically enough, it’s expensive. Only the rich have ever been able to afford truly individualized care. “Personalized medicine” seems to have almost as many meanings as people who use the term, but if what you mean by personalized medicine is a physician who knows you as an individual and tracks your healthcare over a significant part of your lifetime, you’re talking about elite medicine.

Medicine for the middle and lower classes tends to be much more anonymous and impersonal. Throughout medical history, the masses–if they can afford a doctor at all–have gotten more routinized, generalized care. Even many in that fortunate segment of the population today who have health insurance attend clinics where they do not see the same doctor every time. In any given visit, their doctor is likely to know them only by their chart. No one asks, “Has your family situation settled down yet? Are you sleeping better? How’s your new exercise program going?” What you get is a 15-minute appointment, a quick diagnosis, and, usually, a prescription. Genomic technology is unlikely to change this situation. If anything, it will reinforce it.

For the hoi polloi, then, personalized medicine will likely mean personalized pharmacology. Some of those most excited about personalized medicine are biotech and pharma companies and their investors, because some of the most promising results from genomic medicine have been new drugs and tests. Should neonatal genome screening become part of routine medical care, middle and lower-class parents would likely be given a report of their child’s genome, the associated disease risks, and a recommended prophylactic drug regimen. Given an elevated risk of high cholesterol or heart disease, for example, you might be put on statins at an early age. A SNP associated with bipolar disorder or schizophrenia might prompt preventive anti-depressants or anti-psychotics. And so forth.

Such a program would be driven first by the principles of conservative medical practice. Medicine plays it safe. If there’s a risk, we minimize it. If you go to the ER with a bad gash, you’ll be put on a course of antibiotics, not because you have an infection but to prevent one. Second, it would be driven by economics. Drug companies obviously want to sell drugs. So they will use direct-to-consumer marketing and whatever other tools they have to do so. That’s their right, and in a comparatively unregulated market, arguably their duty.

But now recall Kingsmore’s figure of 20 false positives for every true positive. This may sound high, but again, medical practice is conservative: we’d rather warn you of a disease you won’t get than fail to notify you of a disease you will get. False positives, in other words, are preferable to false negatives. Add to that the scanty state of our knowledge of gene-environment interactions. We are rapidly accumulating mountains of data on associations between SNPs and diseases, but we still know little about how to interpret the risks. We needn’t invoke any paranoid conspiracy theory: that kind of data is devilishly hard to acquire. Science is the art of the soluble.

If Kingsmore is even in the ballpark, then, the more neonatal genome screening reaches into the population, the more unnecessary drugs people will be taking. Unnecessary medication of course can have negative effects, especially over the long term. Indeed, the long-term and developmental effects of many medications–especially psychiatric medications–are unknown.

The Genomic Sequencing and Newborn Screening Disorders program is purely an investigative study. Parents in this study won’t even be given their children’s genome reports. But the study is obviously designed to investigate the impact of widespread neonatal whole-genome screening. Currently, all 50 states administer genetic screening for phenylketonuria and other common diseases. The historian Diane Paul has written a superb history of PKU screening. It’s not hard to imagine a similar scenario playing out, with one state leading the way with a bold new program of universal newborn exome screening and, in a decade or two, all other states following its lead.

“Personalized medicine” is a term that’s used increasingly loosely. It covers a multitude of both sins and virtues, from old-fashioned preventive regimens to corporate profiteering. From here, widespread neonatal genome screening looks like an idea that will benefit shareholders more than patients.

 

The gene for hubris


A recent post by Jon Entine on the Forbes website leads with a complimentary citation of my book–and then goes on to undermine its central thesis. He concludes:

Modern eugenic aspirations are not about the draconian top-down measures promoted by the Nazis and their ilk. Instead of being driven by a desire to “improve” the species, new eugenics is driven by our personal desire to be as healthy, intelligent and fit as possible—and for the opportunity of our children to be so as well. And that’s not something that should be dismissed lightly.

Well, first of all, as the recent revelations of coerced sterilization of prisoners in California show, “draconian, top-down” measures do still occur. Genetics and reproduction are intensely potent, and wherever we find abuse of power we should be alert to the harnessing of biology in the service of tyranny.

Second, there’s more than one kind of tyranny. Besides the tyranny of an absolute ruler, perhaps the two most potent and relevant here are the tyranny of the commons and the tyranny of the marketplace. The fact that they are more subtle makes them in some ways more dangerous. The healthcare industry does much good in the world, but it is naive to treat it as wholly benign.

Further, putting human evolution in the hands of humans means accepting long-term consequences for short-term goals. The traits we value–health, intelligence, beauty–are the result of the action of many genes interacting with each other and with a dynamic environment. The entire system is contingent, inherently unpredictable. Yet we treat it as simple and deterministic. Until now, technology has been the major obstacle to guiding human evolution. It may be that now the major obstacle is our reasoning ability, our capacity for grasping contingency and probability and change. We’re tinkering with the machinery of a system whose complexity is still unfolding before us. The probability of unforeseen consequences is 100%. The only question is how severe they will be. We will only know in retrospect.

If we now have the tools to meaningfully guide our own evolution–as eugenicists have always wanted to do–we cannot take a blithe and Panglossian attitude. We have to be alert to the risks and take them seriously. That is not traditionally science’s strong suit. The public face of science is sunny, optimistic, fun. It strides boldly into the future, laughing and making striking promises. The industries behind science and health are wealthy and politically powerful. Not everything they do is benign.

To be a critic of that public-relations machine–of hype, in other words–is not to be a critic of health or knowledge or progress. Genetic science has the potential to bring us enormous benefits in health and well-being, and as those benefits arrive, I stand in line with my fellow humans for my fair share. But that science also carries huge and unforeseeable risks, rooted, perhaps, in arrogance–a failing whose consequences are painfully evident in the historical record.

 

Criminomics: stopping crime before it starts

Criminomics Bears Fruit: 2037 Murder Rate Lowest Since 1964

 

Dylan looks like any normal six-year-old. He is bright and a little mischievous, has many friends, and is praised by his teachers as a model student. But his normalcy is only skin deep. In his cells lies the DNA of a murderer.

Though Dylan has gene variants that give him a more than 90% chance of premeditated mass murder, he will never commit a crime. Thanks to early intervention by doctors, Dylan’s criminal tendencies were identified before birth. Rather than abort the fetus, however, Dylan’s parents agreed to an intensive program of medication and counseling that will all but ensure that Dylan will lead a happy, normal, peaceful life.

Dylan is one of the success stories of the Criminal Genome Project, or CGP, the effort to sequence the complete set of genes involved in murder and other antisocial behaviors. The controversial science on which this project is based—criminomics—is winning converts, now that the latest crime figures are in. Last year, the annual murder rates in 8 American cities dropped to double digits for the first time since the middle of the twentieth century. In Washington, DC, only 90 people died by gunshot last year, down from 103 in 2036. Experts attribute the drop to criminopathy, a medical and public-health approach to crime based on criminomics. The criminomic method uses high-speed genome sequencing to identify criminal tendencies at birth and begin treatment early in life. Clinical trials for criminal-gene therapy, which would eliminate antisocial tendencies permanently, are underway and showing promising early results. The first criminopathic patients are just hitting their 20s now—and the peace is deafening.

The CGP is run by Dr. Bart O’Day, a criminomicist at the Baler Agricultural and Behavioral University in the Republic of Texas. O’Day wrote the grant proposal that funded the project after a tragic shooting at an elementary school in Connecticut in 2012 in which 20 children and 6 adults were killed by a lone gunman. Thankfully, such a crime has since become unimaginable, thanks to the efforts of O’Day and his colleagues.

“It was an obvious thing to do,” O’Day said. “In the years just before the CGP, we had sequenced the cancer genome, the influenza genome, the pseudome, the schizome, and the retardome. The criminome was just lying in wait for us. So the science was there. All we needed was the motivation.”

In fact, the motivation had been there for 150 years. In the 1870s, the Italian criminologist Cesare Lombroso defined a “criminal type,” characterized by distinctive facial features and, ironically, the excessive use of tattooing, which he used in one of the first systematic attempts to prevent crime by biological methods.[1] About the same time, the Victorian polymath Francis Galton developed “composite photography,” in which he superimposed images of faces as a means of identifying the “criminal type.” “If criminals are found to have certain special types of features, that certain personal peculiarities distinguish those who commit certain classes of crime,” observed Edmund DuCane, one of the leading criminologists of Victorian England, “the tendency to crime is in those persons born or bred in them, and either they are incurable or the tendency can only be checked by taking them in hand at the earliest periods of life.”[2]

With the creation of the science of genetics after the turn of the last century, vague notions such as “inborn tendencies” and “heredity” hardened into “genes.” In 1914, the American psychologist Henry H. Goddard wrote, “The criminal is not born; he is made.” Goddard traced criminality to mental retardation, or “feeble-mindedness,” in the term of the day. By compassionately treating feeble-mindedness, Goddard believed, one could prevent crime. The feeble-minded type, Goddard wrote, was “misunderstood and mistreated, driven into criminality for which he is well fitted by nature. It is hereditary feeble-mindedness not hereditary criminality that accounts for the conditions.”[3] Goddard believed he had found a single Mendelian gene for feeble-mindedness. By breeding it out of the population, he thought he could eliminate crime, as well as poverty, prostitution, and much illness. Though the feeblemindedness gene has been discredited, Goddard’s belief that crime is a genetic disease rather than a perverse exercise of free will has transformed our criminal justice system.

The decisive step was in reframing crime in terms of public health rather than justice. In the early 1990s, the National Institute of Alcohol, Drug Abuse, and Mental Health (today subsumed under the National Institute of Genomics) undertook a massive Violence Initiative based on similar principles. It pursued a public health approach to urban crime which, proponents recognized, was based on biology (and therefore, ultimately, genes).[4] Uncontroversial at first, the effort drew mounting liberal opposition, which ultimately led to the cancellation of a scientific conference on genetic factors in crime in 1992.[5] This first Violence Initiative died a rather brutal and noisy death. Yet work on the biological basis of crime continued apace. In 1995, a Danish twin study identified the first crime gene, and more were identified shortly after the turn of the century.

But it was high-speed genome sequencing, combined with sophisticated methods of correlating complex behaviors with DNA sequence, that finally provided the technological breakthrough to stop crime before it starts. After the 2012 school shooting, it took a full year for O’Day’s team to sequence the criminal genome (today it could be done in an afternoon). But in 2014, they published a paper describing 112 gene variants that together account for more than 99% of the predisposition to murder. The genes were patented and licensed to pharmaceutical companies, and seven new targeted therapies were quickly added to the standard psychiatric armamentarium of anti-depressants and anti-psychotics. The federal Violence Initiative was reinstated in 2015 as the Institute of Crime Prevention (ICP), a branch of the National Institute of Mental Health.

The first mandatory screening for criminal tendencies was put in place in Washington, DC, in 2018. Other states quickly followed; today, only West Dakota and North Virginia lack screening laws. Convicted murderers were the first to be screened. The ICP then tied crime screening to the back-to-school vaccination requirements for students in primary and secondary schools. Most states now test babies at birth, with blood from the standard heel-stick. Babies born with a greater than 50% chance of committing murder have their standard RFID chips, implanted in every child at birth, encoded with the designation “Precrim.”

Individuals identified as precriminal are placed under the care of a criminopathic physician, assigned a health care worker, and given criminal prophylaxis: a treatment regimen tailored to their genetic and environmental circumstances. In all cases, this involves a combination of medications and counseling designed to maintain equanimity, promote sociality, and minimize the risk of triggers, including certain music and video games. Teachers and the parents of friends can discreetly scan the child and take steps to minimize conflict and quickly intervene should violence erupt. Most states now prohibit the guardians of precrims from keeping firearms in their homes. NRA members oppose such bans, pointing out that since precrims can be dosed so as to ensure docility with a wide margin of safety, prohibiting guns in precrim homes is overkill.

Combined, these methods have proven remarkably effective. Murder rates began dropping as soon as the programs were put in place, but as the first neonatal precrims hit their teens, rates began to plummet. The rates of other violent crimes have also begun to fall, though somewhat more slowly: rapes are down in most states, as are armed robberies and even graffiti and illegal dumping. Scientists at the CGP explain these results by hypothesizing that many criminal behaviors share a common genetic mechanism, possibly related to emotional intelligence.

For all its success, the program has its opponents. Eugene Galton, a member of the Galton dynasty of scientific criminologists, recognizes the benefits of the criminopathy program but thinks the social costs are too high. “Liberty is too high a price to pay for safety,” he says. “We’re ceding our free will to an iatrocracy—a government by the doctors.”

Such philosophical musings carry little weight with inner-city residents who now sleep more peacefully, without the constant pops of gunfire that once punctuated the night. Dylan’s mother sees safety as the best kind of freedom: “I prefer a war with drugs to a War on Drugs,” she says. “I love my son; I’d rather put chemical bars around his mind than steel ones around his body.”


[2] Galton, “Composite portraits,” 143.

[4] Extrapolating slightly from Breggin, Reclaiming our children, p. 52.

[5] New York Times, Sept. 5, 1992, front page. See also Garland E. Allen, “Modern Biological Determinism: The Violence Initiative, the Human Genome Project, and the New Eugenics,” in The Practice of Human Genetics (1999), 1-23.


New findings suggest scientists not getting smarter

Certain critics of rigid genetic determinism have long believed that the environment plays a major role in shaping intelligence. According to this view, enriched and stimulating surroundings should make one smarter. Playing Bach violin concertos to your fetus, for example, may nudge it toward future feats of fiddling, ingenious engineering, or novel acts of fiction. Although this view has been challenged, it persists in the minds of romantics and parents–two otherwise almost non-overlapping populations.

If environmental richness were actually correlated with intelligence, then those who live and work in the richest environments should be measurably smarter than those not so privileged.  And what environment could be richer than the laboratory? Science is less a profession than a society within our society–a meritocracy based on an economy of ideas. Scientists inhabit a world in which knowledge accretes and credit accrues inexorably, as induction, peer review, and venture capital fuel the engines of discovery and innovation. Science has become the pre-eminent intellectual enterprise of our time–and American science proudly leads the world. The American biomedical laboratory is to the 21st century what the German university was to the 19th; what Dutch painting was to the 17th; the Portuguese sailing ship to the 16th; the Greek Lyceum to the minus 5th.

According to this view, then, scientists should be getting smarter. One might measure this in various ways, but Genotopia, being quantitatively challenged, prefers the more qualitative and subjective measure of whether we are making the same dumb mistakes over and over. So we are asking today: Are scientists repeating past errors and thus sustaining and perhaps compounding errors of ignorance? Are scientists getting smarter?

Yes and no. A pair of articles (1, 2) recently published in the distinguished journal Trends in Genetics slaps a big juicy data point on the graph of scientific intelligence vs. time–and Senator, the trend in genetics is flat. The articles’ author, Gerald Crabtree, examines recent data on the genetics of intelligence. He estimates that, of the 20,000 or so human genes, between 2,000 and 5,000 are involved in intelligence. This, he argues, makes human intelligence surprisingly “fragile.” In a bit of handwaving so vigorous it calls to mind the semaphore version of Wuthering Heights, he asserts that these genes are strung like links in a chain, rather than multiply connected as nodes of a network. He imagines the genes for intelligence to function like a biochemical pathway, such that any mutation propagates “downstream,” diminishing the final product–the individual’s God-given and apparently irremediable brainpower.
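To see why the “chain” framing makes intelligence look fragile, a back-of-the-envelope calculation helps (mine, not Crabtree’s actual model): if every one of N genes is an essential link, a single deleterious hit anywhere breaks the trait, so the probability of escaping damage falls off exponentially with N.

```python
# Illustrative sketch, not Crabtree's model: in a "chain," every gene
# is essential, so the trait survives only if no gene is hit.

def p_chain_intact(n_genes, p_mut):
    """Probability the trait survives when every one of n_genes is
    essential and each independently mutates with probability p_mut."""
    return (1 - p_mut) ** n_genes

# With 2,000-5,000 essential genes, even a tiny per-gene mutation
# probability implies nearly everyone carries a "broken link."
for n in (2000, 5000):
    print(n, round(1 - p_chain_intact(n, 1e-3), 3))
```

A networked architecture with redundancy would not compound failures this way, which is why the chain-versus-network distinction carries so much weight in the fragility argument.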


Beginning in 1865, the polymath Francis Galton fretted that Englishmen were getting dumber. In his Hereditary Genius (1869) he concluded that “families are apt to become extinct in proportion to their dignity” (p. 140). He believed that “social agencies of an ordinary character, whose influences are little suspected, are at this moment working towards the degradation of human nature,” although he acknowledged that others were working toward its improvement. (1) The former clearly outweighed the latter in the mind of Galton and other Victorians; hence Galton’s “eugenics,” an ingenious scheme for human improvement through the machinations of “existing law and sentiment.” Galton’s eugenics was a system of incentives and penalties for marriage and childbirth, meted out according to his calculations of social worth.

This is a familiar argument to students of heredity. The idea that humans are degenerating–especially intellectually–persists independently of how much we know about intelligence and heredity. Which is to say, no matter how smart we get, we persist in believing we are getting dumber.

Galton was just one exponent of the so-called degeneration theory: the counter-intuitive but apparently irresistible idea that technological progress, medical advance, improvements in pedagogy, and civilization en masse in fact are producing the very opposite of what we supposed; namely, they are crippling the body, starving the spirit, and most of all eroding the mind.

The invention of intelligence testing by Alfred Binet early in the 20th century provided a powerful tool for proving the absurd. Though developed as a diagnostic to identify children who needed a bit of extra help in school–an enriched environment–IQ testing was quickly turned into a fire alarm for degeneration theorists. When the psychologist Robert M. Yerkes administered a version of the test to Army recruits during the first world war, he concluded that better than one in eight of America’s Finest were feebleminded–an inference that is either ridiculous or self-evident, depending on one’s view of the military.

These new ways of quantifying intelligence dovetailed perfectly with the new Mendelian genetics, which was developed beginning in 1900. Eugenics–a rather thin, anemic, blue-blooded affair in Victorian England–matured in Mendelian America into a strapping and cocky young buck, with advocates across the various social and political spectra embracing the notion of hereditary improvement. Eugenics advocates of the Progressive era tended to be intellectual determinists. Feeblemindedness–a catch-all for subnormal intelligence, from the drooling “idiot” to the high-functioning “moron”–was their greatest nightmare. It seemed to be the root of all social problems, from poverty to prostitution to ill health.

And the roots of intelligence were believed to be genetic. In England, Cyril Burt found that Spearman’s g (for “general intelligence”)—a statistical “thing,” derived by factor analysis and believed by Spearman, Burt, and others to be what IQ measures—was fixed and immutable, and (spoiler alert) poor kids were innately stupider than rich kids. In America, the psychologist Henry Goddard, superintendent of the Vineland School for the Feebleminded in New Jersey and the man who had introduced IQ testing to the US, published Feeble-Mindedness: Its Causes and Consequences in 1914. Synthesizing years of observations and testing of slow children, he suggested–counter to all common sense–that feeblemindedness was due to a single Mendelian recessive gene. This conclusion was horrifying, because it made intelligence so vulnerable–so “fragile.” A single mutation could turn a normal individual into a feebleminded menace to society.

As Goddard put it in 1920, “The chief determiner of human conduct is the unitary mental process which we call intelligence.” The grade of intelligence for each individual, he said, “is determined by the kind of chromosomes that come together with the union of the germ cells.” Siding with Burt, the experienced psychologist wrote that intelligence was “conditioned by a nervous mechanism that is inborn,” and that it was “but little affected by any later influence” other than brain injury or serious disease. He called it “illogical and inefficient” to attempt any educational system without taking this immovable native intelligence into account. (Goddard, Efficiency and Levels of Intelligence, 1920, p. 1)

This idea proved so attractive that a generation of otherwise competent and level-headed researchers and clinicians persisted in believing it, despite its being as obvious as ever that the intellectual horsepower you put out depends on the quality of the engine parts, the regularity of the maintenance you invest in it, the training of the driver, and the instruments you use to measure it.

The geneticist Hermann Joseph Muller was not obsessed with intelligence, but he was obsessed with genetic degeneration. Trained at the knobby knees of some of the leading eugenicists of the Progressive era, Muller–a fruitfly geneticist by day and a bleeding-heart eugenicist by night–fretted through the 1920s and 1930s about environmental assaults on the gene pool: background solar radiation, radium watch-dials, shoestore X-ray machines, etc. The dropping of the atomic bombs on the Japanese sent him into orbit. In 1946 he won a Nobel prize for his discovery of X-ray-induced mutation, and he used his new fame to launch a new campaign against genetic degeneration. The presidency of the new American Society of Human Genetics became his bully pulpit, from which he preached nuclear fire and brimstone: our average “load of mutations,” he calculated, was about eight damaged genes–and growing. Crabtree’s argument thus sounds a lot like Muller grafted onto Henry Goddard.

In 1969, the educational psychologist Arthur Jensen produced a 120-page article that asserted that compensatory education–the idea that racial disparities in IQ correlate with opportunities more than innate ability, and accordingly that they can be reduced by enriching the learning environments of those who test low–was futile. Marshaling an impressive battery of data, most of which were derived from Cyril Burt, Jensen insisted that blacks are simply dumber than whites, and (with perhaps just a hint of wistfulness) that Asians are the smartest of all. Jensen may not have been a degenerationist sensu stricto, but his opposition to environmental improvement earns him a data point.

In 1994, Richard Herrnstein and Charles Murray published their infamous book, The Bell Curve. Their brick of a book was a masterly and authoritative rehash of Burt and Jensen, presented artfully on a platter of scientific reason and special pleading for the brand of reactionary politics that is reserved for those who can afford private tutors. They found no fault with either Burt’s data (debate continues, but it has been argued that Burt was a fraud) or his conclusion that IQ tests measure Spearman’s g, that g is strongly inherited, and that it is innate. Oh yes, and that intellectually, whites are a good bit smarter than blacks but slightly dumber than Asians. Since they believed there is nothing we can do about our innate intelligence, our only hope is to “marry up” and try to have smarter children.

The Bell Curve appeared just at the beginning of the Human Genome Project. By 2000 we had a “draft” reference sequence for the human genome, and by 2003 “the” human genome was declared complete. Since the 1940s, human geneticists had focused on single-gene traits, especially diseases. One problem with Progressive-era eugenics, researchers argued, was that it had focused on socially determined and hopelessly complex traits; once scientists set their sights on more straightforward targets, the science could at last advance.

But once this low-hanging fruit had been plucked, researchers began to address more complex traits once again. Disease susceptibility, multicausal diseases such as obesity, mental disorders, and intelligence returned to the fore. Papers such as Crabtree’s are vastly more sophisticated than Goddard’s tome. The simplistic notion of a single gene for intelligence is long gone; each of Crabtree’s 2,000-5,000 hypothetical intelligence genes hypothetically contributes but a tiny fraction of the overall. If you spit in a cup and send it to the personal genome testing company 23AndMe, they will test your DNA for hundreds of genes, including one that supposedly adds 7 points to your IQ (roughly 6 percent for an IQ of 110).

Thus we are back around to a new version of genes for intelligence. Despite the sophistication and nuance of modern genomic analyses, we end up concluding once again that intelligence is mostly hereditary and therefore also racial, and that it’s declining.

Apart from the oddly repetitious and ad hoc nature of the degeneration argument, what is most disconcerting is this one glaring implication: that pointing out degeneration suggests a desire to do something about it. If someone were, say, sitting on the couch and called out, “The kitchen is sure a mess! Look at the plates all stacked there, covered with the remains of breakfast, and ick, flies are starting to gather on the hunks of Jarlsberg and Black Twig apples hardening and browning, respectively, on the cutting board,” you wouldn’t think he was simply making an observation. You’d think he was implying that you should get in there and clean up the damn kitchen. Which would be a dick move, because he’s just sitting there reading the Times, so why the heck doesn’t he do it himself. But the point is, sometimes observation implies action. If you are going to point out that the genome is broken, you must be thinking on some level that we can fix it. Thus, degeneration implies eugenics. Not necessarily the ugly kind of eugenics of coercive sterilization laws and racial extermination. But eugenics in Galton’s original sense of voluntary human hereditary improvement.

And thus, scientists do not appear to be getting any smarter. Despite the enriched environs of the modern biomedical laboratory, with gleaming toys and stimulating colleagues publishing a rich literature that has dismantled the simplistic genetic models and eugenic prejudices of yore, researchers such as Crabtree continue to believe the same old same old: that we’re getting dumber–or in danger of doing so.

In other words, sometimes the data don’t seem to matter. Prejudices and preconceptions leak into the laboratory, particularly on explosive issues such as intelligence and/or race, regardless of how heredity is constructed. Plenty of scientists are plenty smart, of course. But rehashing the degeneracy theory of IQ does not demonstrate it.