Friday, August 30, 2013

The taste of heat

We've posted a few times recently about the ease of identifying genes and the inherent difficulty of determining their function.  E.g., there are more than 2000 alleles linked with cystic fibrosis, most of them guilty by association rather than by demonstrated function.  Function is often assumed, and has traditionally gone unquestioned, based on the 'theory' that this is the gene 'for' the trait.  Genome mapping should make us wary, but it doesn't seem to do that as often as it should.  Of course, the theoretical assumption could be true--it's just very hard to tell experimentally or otherwise based on present knowledge.  Many of the more than 100 genes associated with X-linked intellectual disability may in fact have nothing to do with the trait.

Now a paper in this week's Nature ("A gustatory receptor paralogue controls rapid warmth avoidance in Drosophila," Ni et al.) reports that the response of Drosophila to a steep temperature gradient may be mediated by a taste receptor.  The receptor's name is another legacy of the "gene for" approach to understanding genetics. 

Most organisms prefer a narrow range of external temperatures, because it helps them maintain body temperature, but the molecular mechanisms underlying thermal preference had not been understood.  Now Ni et al. suggest one.
Here we [show] that thermal preference is not a singular response, but involves multiple systems relevant in different contexts. We found previously that the transient receptor potential channel TRPA1 acts internally to control the slowly developing preference response of flies exposed to a shallow thermal gradient. We now find that the rapid response of flies exposed to a steep warmth gradient does not require TRPA1; rather, the gustatory receptor GR28B(D) drives this behaviour through peripheral thermosensors.
That is, fruit flies respond to a shallow temperature gradient with one mechanism, the TRPA1 channel, and to a steep one with another, a taste receptor. 

Previous work has found conflicting results with respect to thermal sensing in Drosophila -- the anterior cell neurons, inside the head, were thought to be one sensing mechanism, and the hot cell neurons, in the bristly arista outside the head, were thought to be another, among others.  The response of the internal cells is mediated by TRPA1 but that of the external neurons is not, and Ni et al. sought to determine the molecular basis of thermal sensing by these external cells.

Arista, Drosophila; Wikipedia

Six neurons reside in the arista, three that respond to warmth and three to cold.  Ni et al. found three previously unidentified cells in the arista that resembled thermoreceptors and that were subsequently found to express gustatory receptors. "Gustatory receptors and odorant receptors have been studied extensively as chemoreceptors for sweet and bitter tastants, food odours, carbon dioxide and other chemicals, but have not previously been implicated in thermosensation."

By inhibiting the detection ability of the neurons in the arista one by one, exposing the flies to rapid temperature changes, and then allowing them to choose between two test tubes of different temperatures, Ni et al. determined that when the gustatory receptors were blocked, the flies did not show their usual preference for the cooler tube. Expressing GR28B(D) ectopically transferred thermodetection to sites that normally don't detect temperature--fairly strong evidence that this gustatory receptor is involved in detecting temperature as well as chemicals.


Gene names and attributed functions can be both enlightening and constraining. Above, for example, is a section from an in situ experiment on a 14-day-old mouse embryo, taken from the extensive gene expression database, GenePaint.  The purpose of the experiment was to detect expression of a gene called Spata5.  The darker areas in the photo are evidence of that.  In this case, the strongest expression can be seen in the brain and the central nervous system generally, as well as in the tongue, thymus, lungs, incisors, and other tissues. And what does this gene, with such widespread expression, do?  As characterized by GeneCards, Spata5 "may be involved in morphological and functional mitochondrial transformations during spermatogenesis."  That is, its only presumed function has to do with the development of sperm.  

Our point is not to suggest that researchers get it wrong with respect to gene function and so forth, but that there's a lot that we don't yet know about genes.  Gene mapping studies, which aim to identify genes 'for' diseases or traits, necessarily rely heavily on previous knowledge about genes to whittle down what can be extensive lists of potential candidate genes.  This spermatogenesis gene apparently does a whole lot more than help make sperm, but if you were interested in genes involved in the developing lungs, or CNS, and knew nothing more about Spata5 than its putative function, you'd reject it as a candidate immediately.

Mapping studies also typically find that many genes and hence many different genotypes contribute to similar traits in different individuals.  Indeed, single-gene causation is usually due to rare strong-effect variants, but often other instances of the same trait don't involve the gene, and successful single-gene discoveries can seduce investigators into thinking that the whole world is a simple Mendelian garden.

Temperature sensing by a taste receptor is an unexpected finding, and in fact it seems to have been a lucky by-product of the experiment, not something the investigators set out to test.  Good science involves not just doing experiments well, but seeing around corners, questioning everything, assuming there's a lot we still don't understand -- and that there always will be -- and a lot of dumb luck, and humility. 

Wednesday, August 28, 2013

Cystic fibrosis, genetic variation and gene function

We wrote last week about the difficulty in assessing gene function, including determining whether a given genetic variant is in fact responsible for a disease or trait. The ever-decreasing cost of DNA sequencing will add to the rapidly growing set of variants of undetermined function -- some will be associated with a disease, but some will have no effect.  This is a challenge not only to human geneticists trying to characterize gene function, but also to clinicians trying to explain, predict or treat disease. Last week's post was in the context of X-linked intellectual disorders, but the problem is ubiquitous.  Today we write about cystic fibrosis.

More than 2000 variants in the cystic fibrosis transmembrane conductance regulator gene (CFTR) have been linked to the disease.  About 70% of Europeans with cystic fibrosis have 2 copies of a 3-base deletion, called ΔF508 because it removes the amino acid (a phenylalanine) at position 508 of the protein.  This deletion causes a particular piece of the CFTR protein not to be made when the protein is being synthesized, and this leads to an abnormal protein, and disease. That there can be one predominant allele in this 'recessive' disease is consistent with population genetics theory and also with the early discovery of the gene: the fact that a high fraction of cases carry two copies of the allele (or at least one copy, along with some other variant) made the gene findable with the techniques of a generation ago.
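To make that population-genetic point concrete, here is a minimal sketch in Python (our illustration, not any published analysis) of why one predominant allele makes a recessive-disease gene findable: if a single variant accounts for a large share of disease-causing chromosomes, and a patient's two disease alleles are treated as independent draws from that pool, most patients will carry at least one copy of it.  The allele shares below are arbitrary illustrative values, not estimates.

# Illustrative only: relate the share of CF-causing chromosomes carrying a
# given allele to the fraction of patients expected to carry two copies,
# one copy, or none of it, assuming a patient's two disease alleles are
# independent draws from the pool of CF-causing variants.

def patient_genotype_fractions(allele_share):
    """allele_share: fraction of CF-causing chromosomes carrying the allele."""
    two_copies = allele_share ** 2
    one_copy = 2 * allele_share * (1 - allele_share)
    zero_copies = (1 - allele_share) ** 2
    return two_copies, one_copy, zero_copies

for share in (0.5, 0.7, 0.85):
    two, one, zero = patient_genotype_fractions(share)
    print(f"allele share {share:.2f}: {two:.0%} of patients homozygous, "
          f"{one:.0%} with one copy plus another variant, "
          f"{zero:.0%} with other variants only")

Even at a 50% allele share, three-quarters of patients carry at least one copy of the common allele, which is what made the gene accessible to the methods of a generation ago.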

Why this deletion causes the particular symptoms of cystic fibrosis is well understood, but whether and how most of the remaining 2000 variants cause disease has not been demonstrated. Indeed, most of them are quite rare and, since this disease is "recessive", have little if any effect on their own.  Now, a new paper in Nature Genetics ("Defining the disease liability of variants in the cystic fibrosis transmembrane conductance regulator gene," Sosnay et al.) addresses this question.

Data collected for the study; Sosnay et al., Nature Genetics, 8/25/13
Sosnay et al. collected phenotype and genotype data from nearly 40,000 people with cystic fibrosis in Europe and North America.  Phenotypes vary in how severe they are and in whether, for example, they involve just lung and breathing issues or also obstruction of pancreatic ducts, causing other problems, and so on.

In this sample, 159 CFTR variants were at a frequency greater than 0.01%, accounting for 96% of the cases--that is, of people known from symptoms to be affected, even if the severity is not uniform.  Of these variants, 80% were confirmed to meet the clinical and functional definition of cystic fibrosis.  Information from about 2000 fathers of people with CF allowed the investigators to determine that a handful of the remaining variants seemed to have no effect, and that the effect of another handful is still undetermined.  That the investigators looked for an effect in cases' fathers is somewhat curious if the disease is recessive, because one would expect no effect--but see below.

This study contributes new and useful information to knowledge of the genetics of the disease.  Understanding whether, or better, how, a variant causes disease is important for determining carrier status in prospective parents, and can be an important part of understanding how to treat the disease. But the task is never-ending, as new variants arise with every meiosis -- most will be benign, but some will not.

Some 'meta' explanation of this approach
Once the ΔF508 allele was found, it was shown to have a strong effect in carriers of two copies.  This was consistent with the idea that CF is "recessive", though in a kind of circular strategy: two-copy carriers were what the method could find, because the variant is so common.

But then other patients were found who had only one copy of that allele, plus an abnormal sequence in their other copy.  Because the trait was assumed to be "recessive", this other variant was incorporated into the database as if it were causal.  This is circular reasoning taken another step: if the disease is assumed to be recessive, then the variants in both of the patient's copies of the gene must be causal!

In some cases, the nature of both mutations in the gene--in terms of where in the protein structure the variant occurred--could confirm a likely causal effect.  This is consistent with the current paper.  However, it was obvious that the phenotypes were not all the same, and many if not most individuals had two different variants.  That is not what "recessive" is classically supposed to mean--that is, two copies of the same 'bad' variant.  Instead, we have a quantitative relationship between the diploid genotype (that is, both copies of the gene) and the severity of the trait.

If this assumption is wrong, it could be that one or even both variants the patient carries are not themselves causal, but are causal only in the context of some other site(s) in the genome that interact with the CFTR gene, or even that have similar effects on their own.  So to continue to use a term we essentially get from Mendel and his peas ("recessive"), when what we really have is a quantitative relationship between genotype and phenotype that may not even always involve the same gene, is an example of a theory lasting beyond the evidence, because investigators 'want' the trait to fit the simple model.  The persistence of the simple model shows our addiction to it, and the historical legacy by which terms and concepts cling on when they should be modified--or abandoned.  This is a common issue that applies to many purportedly single-gene traits.  It's why we put the word in quotes in this post.
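To illustrate the difference between an on/off "recessive" label and a quantitative genotype-phenotype relationship, here is a toy Python sketch (entirely our illustration; the allele names, residual-function values and thresholds are hypothetical, not from Sosnay et al. or anyone else): let severity depend on the combined residual CFTR function contributed by both copies, so that two different 'bad' variants can yield anything from classic disease to a mild phenotype to no phenotype at all.

# Toy model, not real data: each allele retains some fraction of normal
# CFTR function, and the phenotype depends on the combined residual function
# of the diploid genotype rather than on a simple two-bad-copies rule.

RESIDUAL_FUNCTION = {
    "wild_type": 1.00,
    "severe_variant": 0.02,   # strong loss-of-function allele (illustrative value)
    "mild_variant": 0.30,     # hypothetical partial-function allele
}

def combined_function(allele1, allele2):
    """Average residual function of the two gene copies."""
    return (RESIDUAL_FUNCTION[allele1] + RESIDUAL_FUNCTION[allele2]) / 2

def phenotype(allele1, allele2, disease_cutoff=0.10, mild_cutoff=0.25):
    f = combined_function(allele1, allele2)
    if f < disease_cutoff:
        return "classic disease"
    if f < mild_cutoff:
        return "milder or partial phenotype"
    return "unaffected"

for genotype in [("severe_variant", "severe_variant"),
                 ("severe_variant", "mild_variant"),
                 ("severe_variant", "wild_type")]:
    print(genotype, "->", phenotype(*genotype))

Whether the real relationship is anything like this simple averaging is of course an empirical question; the point is only that the word "recessive" obscures the quantitative picture.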

Of course, when a variant is very rare, and is generally seen in patients who also carry one of the strong-effect alleles, we have almost no way to test whether that variant is causal or not.  If it hits a known major part of the gene, some severity correlations can be identified--and this has been known since about 1990.  The current paper provides some larger-scale documentation, but it is really confirming what has long been well known.  Naturally, it is no surprise that the authors could not attribute a specific mechanism to the almost 2000 variants in their study, many of which will be singletons, which is why they'll be so hard to confirm.  Doing so requires enough samples, or some clear form of experimental evidence--difficult to obtain for many different alleles of various types--to show statistically that the specific variant is involved.

So here we see an example of the interplay between sampling, theory, inference, and the challenging problem of understanding genetic causation--even in one of the classical 'simple' diseases.

Tuesday, August 27, 2013

Humility before the facts, and the melanopsin example

Thomas Huxley, Darwin's famous 'bulldog' advocate for the theory of evolution, wrote in an 1860 letter that in science one should “Sit down before fact like a little child, and be prepared to give up every preconceived notion, follow humbly wherever and to whatever abyss Nature leads or you shall learn nothing.”   He wrote this in the context of a discussion (by letter) about religion and related topics with his friend, the theologian Charles Kingsley.

This represents Huxley's view about theology and empiricism.  Essentially, he's advising to confront Nature with an open mind, and see what lessons one can learn; don't accept dogma, be humble before the facts.

Science should be the area of human affairs that, above all, is humble before the facts.  But in many ways science is all too human for that.  Although very empirical, scientists tend, like any other group, to accept, defend, and stick to a canon of dogma that only changes under the force of substantial new evidence, or pressure from a rival camp.

So, for example, when aspects of today's genomics are questioned, the typical response is that, sure, we don't know everything, but if we can just acquire more data (usually meaning more Big Data) the uncertainties will become clear. It is a view that the fault, dear Brutus, is in our sample size and technology, not in ourselves.

We often feel that while more data, or more forms of data than we currently have, will lead to some new discoveries, and plenty of incremental new knowledge, it won't automatically yield deeper insights.  Rather than being so cock-sure that we have the right theory but only insufficient measurement technologies, we tend to think that there may be wholly unsuspected factors that could account for things not currently being explained very well.

We tend to call such possibilities "dark matter", in quotes because we're not suggesting that, as with the dark matter of current physics, there is (necessarily) a missing ingredient that could turn our current theory every which way, as has happened in other major changes in scientific view. We have no particular insight, and no way to know whether any such transformational theory is possible.  Perhaps the complexity we try to wrestle with these days really is as irreducible as it seems, and no new factor will change that.

But there are, occasionally, discoveries that show us how deep convictions can be upset by new findings.  An example that has actually been known for some time, but which we only learned of recently, via BBC Radio 4's 'The Life Scientific' interviewing Oxford's Professor of Circadian Neuroscience Russell Foster, is our 'third' form of vision.

Everyone knew of light-sensing and color-sensing pigments, called opsins, that are coded by genes and expressed in our retinas. Likewise, the pineal gland has other light sensing and time-cycle rhythm monitoring functions.  It isn't directly exposed to light in us or other mammals (that we know of), but it does respond to neural signals of light perception sent from the retina.

Blind people were long thought to have no light perception, either because they had no rod or cone cells in the retina or because, for other related problems, these cells did not transmit signals to the brain (including the pineal gland).  The idea that blind people could not sense light was very strong dogma and led, for example, to the replacement of real eyes with glass eyes in blind people--sometimes for legitimate reasons such as infection of a blind eye, but generally because 'of course' the natural eye was of no use.

But various investigators, notably Russell Foster, noticed facts such as that blind people could sense and maintain diurnal (day/night) cycles.  Controlled experiments showed that, while not able to 'see' in the sense that vernacular and formal theory described, blind people often could report with very high accuracy whether the lights were on or off.  Such findings were received with dismissive hostility until they became clear enough that the older dogma simply had to be adapted.

Since then, a gene coding for the protein melanopsin has been found and shown to be expressed in neural cells in the eye.  Melanopsin acts as a cell receptor for light.  It does not participate, as retinal rods and cones do, in a pattern-recognizing field (so far as we happen to know, at least).  But areas of the brain not involved in image recognition do perceive and 'understand' the incoming information.

This discovery was not a transformative one in terms of genetics or evolution.  Once discovered, melanopsin and relevant mechanisms have been found to have a long evolutionary history and so on.  It's an ordinary gene, and no new basic principles of genetic causation or mechanism are involved.

The point here is that this is a rather mundane example of a finding, even within current working ideas about genomics and evolution, that was totally unexpected and whose implications were rejected because there was too much acceptance of dogma.

We have to have accepted theory from which to work, and we have to resist random or wild-seeming ideas, which are always being proposed.  Investigators design studies based on theories they have about life generally or about their particular topic.  Balancing the need for a framework for daily activity with the need to remain humble before the facts is not easy in science.  Scientists may be better at self-challenge than religious or political ideologues, but we're not as good at it as we like to think.  It's a lesson we need to learn on a regular basis.

Monday, August 26, 2013

GM plants: when the evolutionary issues trump all others

We discussed GM technology and some of the debate about it on Friday.  Today we're addressing some other aspects of the debate that may not be as clear even to many of the disputants over the issues. These are issues that are relevant to things we often talk about here on MT.

Flavr Savr tomatoes engineered to delay ripening; Wikipedia
Why not GM?
There are numerous reasonable objections to genetic modification, some of which we touch on below.  We ignore the ill-informed reactions -- such as that tomatoes with a fish gene inserted will taste fishy, or that orange trees with a spinach gene will produce green fruit.

One objection is economic.  It is a resistance to the greedy monopoly of economic control of farming, and the exploitation of domestic and (especially) developing-world farmers.  It offends a sense of equity. 

Another source of resistance is epidemiological.  It is distrust of the produce itself, such as that it might trigger unexpected toxic or allergic reactions. We always eat foreign proteins (we're not corn or chickens, after all), and unless we're allergic, we readily digest them and reap their nutritional benefits.  Still, there are concerns that some GMOs may engender reactions.  There have been various claims, few if any as yet strongly substantiated, for other sorts of effects (such as on fertility). 

A third is sociological, involving resistance to the displacement of more putatively sustainable multicrop, multispecies, mixed, smaller-scale farms by huge monocropping.  For long-term sustainability, mixed crops save soil from runoff, require less fertilizer and fewer pesticides, and are more natural.  Small, mixed farms are run by communities of local farmers, not itinerant workers, and so on. 

And there is the evolutionary objection, which is related to the sociological one.  It is in some ways the most scientifically inflammatory, or so it would seem, and it is the one most relevant to MT and our personal interests in evolutionary genetics. The idea is that if you make a crop inedible to pests, or make it resistant to a pesticide or herbicide, you put heavy selection pressure on pests, animal and plant, eventually favoring those with resistance mutations--and then you've got a real problem on your hands. Wild relatives may gain transgenes through cross-pollination (see Friday's post), and crop diversity, a major evolutionary asset, will be reduced. Further, as with antibiotic resistance, every herbicide or pesticide that can no longer be used is an economic jolt to farmers practicing conventional agriculture.  Evolutionary biologists opposing the Monsantoizing of the world have pointed this out in quite strong terms--and their warnings have been borne out repeatedly.

Still....
However, this objection is not always warranted; again, not all GM is alike. Many GM changes should not have particular evolutionary implications. Transgenes that keep fruit from spoiling, or that affect its taste or color, seem like examples. While such changes dramatically affect the plant or animal's reproductive fitness through the intense artificial selection that is agriculture, it's harder to envision these transgenes conferring a fitness advantage on related species if they escaped into the wild, as is seen with pesticide or herbicide resistance.

The transgene would perhaps be more likely to have negative effects (just because it's a random gene tossed into the genomic mix of the recipient wild species). But most likely it would be selectively neutral--that is, it would have no effect on the recipient species' reproductive chances.  In this case, the most likely fate of the transgene is to be lost just by chance--because this is the typical fate of any new mutation in any species.  But even if it persisted, it would have no implications for agriculture.
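The claim that a neutral transgene entering a wild relative is most likely lost by chance is standard population genetics, and a minimal simulation makes the point.  Here is a sketch in Python (our illustration; the population size, generation count and number of replicates are arbitrary choices) of a single neutral copy entering a wild population.

# A single neutral gene copy in a randomly mating population is usually
# lost by drift within a few generations; rarely, it drifts to fixation.
import random

def neutral_copy_survives(pop_size=100, generations=200):
    """Wright-Fisher drift starting from one copy among 2N gene copies."""
    total = 2 * pop_size          # diploid population: 2N gene copies
    count = 1                     # one transgene copy in the founding generation
    for _ in range(generations):
        freq = count / total
        # each copy in the next generation is an independent draw
        count = sum(random.random() < freq for _ in range(total))
        if count == 0:
            return False          # lost
        if count == total:
            return True           # fixed
    return count > 0              # still segregating after 'generations'

replicates = 1000
survivors = sum(neutral_copy_survives() for _ in range(replicates))
print(f"transgene persisted in {survivors} of {replicates} replicate populations")

With these made-up numbers the transgene is lost in the overwhelming majority of replicates, as expected for any new neutral variant.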

This seems to be the story of golden rice, engineered to make a vitamin A precursor (beta-carotene) in the rice grain--a potential boon to many underdeveloped countries where vitamin A deficiency is highly prevalent and rice is overwhelmingly the staple food.  A Sunday NY Times commentary discusses the issues.  The article doesn't describe the transgenic engineering that was done, but it does note that it's not a particularly strange modification, even for plants, that it was developed by a non-profit institution, and that farmers can replant seeds they've grown.  So many of the ethical objections would seem to be avoided.  Still, there appear to be many reflex objections of a generic type.

Could something go awry with this plant, grown densely over very widespread areas of the globe?  Any new gene could, in principle, interact somehow with local pests from bacteria to insects, and it is plausible in principle that some allergic or other reactions could occur.  However, this seems far less likely than with GM species engineered around strong selective agents like antibiotics, pesticides and herbicides.  So such objections do, at this point, seem rather a stretch, unless one is just nostalgic for some supposed good old days in agriculture.  After all, artificial selection has been tinkering with all our domesticates' genomes for millennia.  On the other hand, if someone really is pining for older ways, that's a legitimate viewpoint (even if it may ignore the real-world plight of our heavily populated globe).

Whatever one feels about GM plants, a productive debate can only happen if people actually understand the issues.  It is perfectly legitimate to object to economic, epidemiological, sociological, or evolutionary aspects of GM foods.  But many have opposite views, and are unconcerned about the industrialization aspects, for example (after all, many have jobs related to industrial ag).  In any case, it is not legitimate to mix issues.  That can't move the debate forward.

Whether GM crops are here to stay or will be displaced for either scientific or political reasons, only time will tell.  That there are very serious evolutionary issues is by now very clear indeed.  To many, they trump all the other issues.  In any case, while the epidemiological and sociopolitical sides of GM technology's impact are absolutely suitable for the policy arena, they are largely separate from the issues stemming from the evolutionary problems.

If the evolutionary issues prove to be intractable--if agribusiness can't keep one step ahead of natural selection--then we will eventually, maybe even rapidly, revert to other means of maintaining high global agricultural output.  If the problems can be addressed by big-science methods, then the overall economics--whether you like it or not--may well leave them in the driver's seat of future agriculture.  However the issues play out, billions of people's lives will be affected.

Friday, August 23, 2013

Are GMOs safe? Define 'safe'

GMO: thumbs up or thumbs down?
People tend to feel strongly about genetically modified organisms, all good or all bad, but of course the story is too complex for a simple up or down. Organisms are genetically modified for many reasons, including to protect against disease or against pesticides or herbicides, but also to alter some characteristic such as the ability to withstand freezing temperatures (tomatoes) or to speed growth (salmon).  Each case needs to be evaluated on its own merits.

This undated 2010 handout photo provided by AquaBounty Technologies shows two 18 month old salmon, a genetically modified salmon, rear, and a non-genetically modified salmon, foreground.  (AP Photo/AquaBounty Technologies, Huffington Post)

GMO oranges
Amy Harmon had a piece in the New York Times a few weeks ago reporting on sour orange disease affecting oranges in Florida and around the world, and efforts to genetically modify trees to be resistant to the bacterium that's killing them.  No citrus that is naturally resistant has been found, so no 'natural' solution to the problem was possible.  Instead, after a number of transgenic trials, a gene from the spinach plant that attacks the bacterium has been introduced into orange trees for testing.  It's too early to know whether or not it will save oranges, but it seems more likely than any other transgene tried to date.

Harmon notes in her piece that, "Leading scientific organizations have concluded that shuttling DNA between species carries no intrinsic risk to human health or the environment, and that such alterations can be reliably tested."  The link there is to a 2012 statement by the American Association for the Advancement of Science on GMOs, which begins:
There are several current efforts to require labeling of foods containing products derived from genetically modified crop plants, commonly known as GM crops or GMOs. These efforts are not driven by evidence that GM foods are actually dangerous. Indeed, the science is quite clear: crop improvement by the modern molecular techniques of biotechnology is safe. Rather, these initiatives are driven by a variety of factors, ranging from the persistent perception that such foods are somehow “unnatural” and potentially dangerous to the desire to gain competitive advantage by legislating attachment of a label meant to alarm. Another misconception used as a rationale for labeling is that GM crops are untested.
So, broadly speaking, science says GMOs are safe; it's lay people who say they're not.  But not so fast.  Leading food writer Michael Pollan took issue with Harmon's treatment of the GMO question.  He said on Twitter, in a now famous 140 characters, that the piece included too many industry talking points, and he then went on vacation without following up.  He has now explained himself to a reporter for the Columbia Journalism Review, and Harmon has responded. Their differences seem primarily to be differences in worldview, rather than arguments over facts -- Pollan critiques Harmon for not challenging the monoculture approach to orange growing, among other things, for example, and Harmon responds that she was writing about a single problem, not the general issue.  And so on. 

Harmon's piece does pick up on the widespread misconceptions about GMOs.  She quotes someone wondering if their orange juice will now be green, with the spinach gene introduced into the trees.  It's certainly true that much of the mistrust of transgenics is based on faulty knowledge about genetics and the technology itself, but there is also some well-grounded mistrust, fueled by science.

In the US most of our corn, soy, and canola, among other crops, is transgenic.  We've been eating it for years, to apparently no ill effect, or at least none that has been clearly documented. But it's not only human health that is of concern.  There are also environmental issues.  Glyphosate-resistant weeds, for example.  Monsanto's Roundup Ready transgenic crops, engineered to be resistant to glyphosate-containing herbicides, have led to "superweeds", weeds that are also resistant to glyphosate and thus require more toxic herbicides to kill. We've written about this subject before (here, e.g.), but it's also all over the web.

Similarly, transgenic crops engineered to resist or kill pests have naturally led to resistant pests, the increased use of more toxic pesticides, and other sequelae (we wrote about that here, but this issue, too, is all over the web). Transgenic orange trees bred to kill bacteria could potentially lead to a similar kind of problem, with intense selection for bacteria that are able to resist.  That's evolution.  Only time will tell if this becomes a problem. 

GMO rice 
So, even if Monsanto's Roundup Ready crops are benign for human consumption, there are environmental consequences.  And the mistrust of GM is likely to spread, given new evidence.  A paper published in New Phytologist (Wang et al.), and described in Nature last week, addresses the concern about gene flow from transgenic crops to their wild relatives.  The paper describes the results of introducing the Monsanto Roundup Ready transgene into weedy rice.  Wang et al. introduced the gene experimentally, but the transfer also happens naturally in the field.

Weedy rice can pick up transgenes from genetically modified crop rice through cross-pollination.
XIAO YANG; Nature Aug 16 2013
They found that the transgenic weedy rice, a noxious weed that infests rice fields everywhere, "produced 48-125% more seeds per plant than nontransgenic controls in monoculture- and mixed-planting designs without glyphosate application."  That is, the weedy rice with the transgene seems to have substantial fitness advantages, even when not exposed to the herbicide.  (This still needs to be replicated -- see Andrew Kniss's blog post for discussion.  But if it's not true in this particular case, everything we know about evolution suggests it could be, and it's only a matter of time.)  And hybridization happens: "pollen-mediated gene flow from rice to weedy rice has been reported."  If this is so, Wang et al. can't say whether it's also true of other plants or whether weedy rice is unique, but if increasing herbicide resistance in a crop increases the hardiness of an already noxious weed, this is potentially a huge problem.
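For a sense of what a seed-output advantage of that size could mean, here is a simple deterministic selection sketch in Python (our illustration, not Wang et al.'s analysis; the starting frequency and the 1.5 fitness ratio, standing in for a 50% seed advantage, are arbitrary choices).

# How fast an allele spreads if carriers leave 50% more seeds than
# non-carriers, ignoring drift, mating system, and everything else.

def allele_trajectory(start_freq=0.01, relative_fitness=1.5, generations=30):
    freqs = [start_freq]
    p = start_freq
    for _ in range(generations):
        # standard one-locus update: p' = p*w / (p*w + (1 - p)*1)
        p = p * relative_fitness / (p * relative_fitness + (1 - p))
        freqs.append(p)
    return freqs

traj = allele_trajectory()
for gen in (0, 5, 10, 20, 30):
    print(f"generation {gen:2d}: transgene frequency {traj[gen]:.2f}")

Under these assumptions the transgene goes from 1% to a majority of the weed population within a dozen or so generations, which is why a fitness advantage even in the absence of the herbicide is so worrying.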

And this is aside from the problem of the spontaneous evolution of resistance to glyphosate that has been documented in at least 24 weed species.  The superweeds.  Wang et al. write, "Our findings have broad implications for plant biology, crop breeding, weed management, biotech risk assessment, and the ongoing evolution of herbicide-resistant weeds."

So, are genetically modified organisms safe?  It depends on what you mean by 'safe'. 

Thursday, August 22, 2013

The recreation of creationism; a sporting event

Here's a piece from 2009, which we repost with very little revision.  We were reminded that it's still timely, particularly in light of the acknowledgement by Ken Ham, one of the leading young earth creationists, that Intelligent Design is not a science (HT to Holly Dunsworth). At the end, we'll have a comment or two about this....

Recreating Creationism
Teaching evolution in the schools is still a big issue. Workshops, forums, debates and so forth are frequently held to discuss how it should be done, and they draw much interest.  They sound like they are serious, sober academic discussions about policy and the like. But since this is anything but a new phenomenon, and the issues have been aired countless times, and science invariably wins in the courts, why is it still going on?

It is not really that we have to arm or prepare ourselves in any technical sense to present compelling new facts--the evidence that evolution is a truth about life is solid, and has been for a long, long time. The facts are out there, in public and on daily television. Instead, these meetings are a kind of recreation: again and again it's the re-creation of creationism as a sporting event.

Revival meeting at the Olympic Theatre in Charters Towers, Queensland, 1912; Wikimedia Commons
And they are generally preaching to the converted. The audience already knows the facts, and likes to cite them aggressively, needing no cue cards to know when to give a derisive snort. In fact, as sincere as they may be, and as much public service as they may be doing, some advocates for teaching evolution don't really understand evolution at a very deep level relative to the knowledge we now have in genetics and population biology.  Nor is sufficient appreciation given to aspects about life that we don't know or even may not know that we don't know.

These meetings are not being held because creationism has amassed legitimate arguments that threaten evolution and require careful rebuttal. Anyone who is even halfway aware of biology and geology knows that literal creationism is simply bunk, and truthfully, most Intelligently Designed Creationists know this too, in their heart of hearts. They know the Devil didn't plant Ardipithecus to mislead us into lives of sin.

But bashing these benighted people has become something of a self-congratulatory blood sport. Indeed, many biology blogs--that get thousands of hits--cover this subject tirelessly, usually with great glee, thumbing their noses at the benighted creationists, but rarely is anything new being said. For that reason we try to avoid evolution-creation 'lectures', and don't regularly follow the food-fight blogs. Our open minds are closed to the point that we've never been to an Intelligent Design-sponsored event, and we'd be willing to bet that anyone reading this who has didn't go to honestly weigh the evidence in favor of intelligent design.

Indeed, we all come at this "debate" with our preconceived notions safely intact--and leave with them untouched. At these events it is repeatedly said that the organizers want 'dialog', not confrontation, but that's insider-code for "we want to educate these poor souls, so they'll convert to our point of view."

It's 154 years since the publication of The Origin of Species but the evolution-creation debate, with all of its current vituperation, goes back well before Darwin. Darwin in fact tapped into discussions that had been fueled (in the UK) by an 1844 book called Vestiges of the Natural History of Creation, by an amateur naturalist named Robert Chambers, and as Darwin acknowledged in his Preface, it made the environment more receptive to his own book. And even Chambers basically rested on already known facts and ideas.

People often cite figures about the large proportion of the US population that doesn't 'believe in' evolution as evidence that science education is failing in this country. But, this is not about educating the uneducated. If our kids, or your kids, were taught ID in school, we, and you, would unteach what they'd learned as soon as they got home. Symmetrically, when evolution is taught, this is what fundamentalists do when their kids get home. If they sincerely feel that a belief in evolution will jeopardize their child's chance of life everlasting, they must do such reconditioning.  Every child is largely home-schooled, or at least home-acculturated--in spite of the number of hours they spend in class.

And even if science education demonstrably is lacking, and should be seriously upgraded, the divide is not about the facts anyway. Most creationists readily accept that animal breeds can be molded by artificial selection, that bacteria can evolve antibiotic resistance, and so on--it's not about whether evolution per se can happen. It's partly at least about not being able to accept that humans aren't above Nature, and weren't landed intact on Earth by a Creator.

But, really, how much difference does it make? Theodosius Dobzhansky famously said, in the context of the school debate in 1973, that nothing in biology makes sense except in the light of evolution. This statement is cited almost religiously by evolution proponents. But it's not really true--much of biology can be done very productively with no reference to evolution at all. Many life scientists, especially for example in biomedical research, have little more than a cardboard cutout understanding of evolution and do their work perfectly, or imperfectly, well without it.  We write about these shortcomings regularly.

And whether or not creationists accept the truth of evolution is not an issue with many practical implications to speak of--it's not like not accepting vaccinations or the importance of clean water.

We said above that this was only 'partly' about the fact of human origins. No matter how manifestly uneducated the uneducated truly are in this area, this is a clash of world views, a conflict over cultural and even economic power, a battle against fear (of death), for feelings of belonging to a comforting tribe, and so on. And the often vehement intolerance of biologists, equally convinced about their own tribal validity, is of a similar nature.

Perhaps it's easy for us to be cavalier about this because science always wins in court in the long run.  In the industrialized world the practical facts and empirical results of their application are far too powerful to be stymied, at least at present.  Indeed, like many of you, we followed the 2005 Dover trial here in Pennsylvania with interest, and we like others found Judge Jones's decision to be brilliantly argued, and routinely assign it to our students, generally when we also teach about the somewhat symmetric misconceptions about the 1925 Tennessee Scopes Trial (see Ken's article in his Crotchets & Quiddities column: the trial was less of a straightforward win for Truth than it is often portrayed).

Many if not most senior evolutionary biologists alive today (including us) were trained with essentially evolution-free textbooks in their high school biology, in public schools in which they (we) recited a morning prayer, every school day from grades 1 through 12. Evolution had deliberately been purged by the textbook publishers to prevent loss of sales to states in certain regions of our country. Lack of early evolutionary training clearly doesn't hamper later learning. And evolutionary training manifestly doesn't generate understanding. Other factors are involved.

What the answer is, is unclear, if the question is "Why can't they be like us???" Maybe it's a selfish one: let those in the Creationist world have their schools as they want them. That will mean those with correct knowledge, your children and ours, will face fewer competitors for the desirable jobs in science and other areas where properly educated people are required.

At least, those who can't stay away from the Creationist food fight should recognize that they are largely exercising their egos--and recreating Creationism over and over, to knock it down again and again. It's their tribal totem-dance, war paint and all. Serious resolution will come at a much higher price than blogging, no matter how many followers anti-creationist bloggers pull in, and it will come in a form as yet unknown. But that form will almost surely have to involve increased teacher salaries, higher education-major SATs, and education certificates given to those whose major was science and not education.

Some afterthoughts (August 2013)
These points, edited in minor ways for clarity but basically as written in 2009, are still relevant.  The anti-creationist forces in the country are doing an important job in resisting mindless creationism, but at the same time they often promulgate simplistic views of evolution.  Too many scientists in this arena are afraid to acknowledge all that we don't know.  Here, 'evolution' is a word with unclear connotations, and that can be important.

Does 'evolution' just mean descent from a common materially induced ancestry, without divine intervention, through historical processes?  Or does it imply some particular role for natural selection, and if so, what is that role?  What role does it allow for chance?  How deterministic are its assumptions about genes and the nature of organisms?

We can never know when and whether we will discover some fundamentally different facts about how life works, not just adjustments in our current knowledge (which is the incremental way that much of biology deals with the 'unknowns', with perhaps more confidence than is warranted).  But what seems entirely secure is the descent from common origins and a role for differential proliferation related to the nature of environments.  We can't know as much about the details of events in the distant past, but new discoveries never suggest that the basic idea is wrong.

On the other hand, some people simply believe that we are wrong, in some morally important ways, and they cling to non-materialistic views.  Unfortunately, as we said in 2009 and it's still relevant, these are struggles for cultural and resource control, not just discussions about whether sabre-toothed cats lived and were related to today's cats.

Wednesday, August 21, 2013

The beetle age

If the way of ushering in modern science in the 1600s was largely based on instruments such as lenses and clocks, the 1700s became an age of exploration, as recently characterized by a BBC Radio 4 series on the history of science (7 Ages of Science, episode 2).  Extensive commercial, imperial, and military shipping (thanks to better ship-building and navigational instruments) included the gathering of plant, animal, and rock specimens from around the world. Darwin and many others were examples. In biology that time is now often dismissively referred to as the age of beetle collecting, to denote the unconstrained and often unsystematic assembly of specimens.

As part of this, in the mid-1700s the development of better telescopes led to huge advances in understanding the stars and their motion.  Among the pioneers of this work were William Herschel and his sister Caroline.  They discovered Uranus, then the most distant planet known, catalogued nebulae and galaxies, and began to reveal to us for the first time how immense the universe is. 

However, other questions also arose, or so one would think.  But when Herschel said to the Astronomer Royal "I want to know what the stars are made of," the latter replied, somewhat annoyed, "What we're interested in is mapping."

Replica of the telescope the Herschels used to discover Uranus; Wikipedia
Modern mapping
Once, about 15 years ago, I was on a site visit to evaluate a proposal and make a funding decision on whether to award the requested grant.  The investigator was describing how the trait, call it Disease X, would be investigated with the latest GWAS tools.  The argument was that one required greater sample sizes than had been available before to identify spots in the genome that might be potential causes of X, and that is what the investigator proposed to do.

There had already been smaller studies that had rather convincingly identified five or so genes that seemed to lead to high risk of the disease.  The mechanism and reason that these genes might be causal were, however, not at all clear, nor could anything be done about the problem for persons carrying the seemingly causal variation in these genes.  By analogy to Herschel's question to the Astronomer Royal (what were these genes made of?), when I asked why the investigator was proposing to search for even weaker signals than the ones already known, rather than working on understanding how the latter worked, the investigator replied: "Because mapping is what I do."

In the 15 years since then, lots of mapping has been done in this spirit, and a modest number of additional genes or possible-genes have been found, but no real progress has been made in the treatment or prevention of X, nor has there been much increased understanding of the mechanisms of the previously known genes, nor any gene-based therapy.  

This reveals the attitude of so many in science today.  "What I do is collect [specify some form of] Big Data!"  'Big Data' has become a fashionable term that, when dropped, may suggest gravitas or insight.  But we aren't doing astronomy, and while we certainly are highly ignorant about much of the nature of genomes, when it comes to funding the study of disease for purposes of public health, this is a lame excuse for business as usual. What we need are more instances of what is (also fashionably) called proof of principle: proof that knowing about risk-conferring genetic variants leads to doing something about them.  That's very tough, and we have precious few instances of such principle in genetics, but we do have enough to suggest, to us, that we should not be funding further gene-gazing expeditions into the astronomical realm of genomic complexity, at the astronomical costs of such studies.  We should instead be focusing and intensifying efforts to learn how to do something to alleviate the problems caused by the genes we already know about--and there are many--or the problems that seem to be truly 'genetic'. 

We are not supposed to be just star gazing with public health funds; society expects and is promised benefits (whereas only NASA expects benefits, in the form of more funding, for star-gazing; for the rest of us, it's basically no different from watching science fiction on television).

Science requires data, and we never have enough.  There are areas where modern style beetle collecting is still very much worth doing, because our knowledge is sufficiently rudimentary.  But there are areas in which we've done enough of that, and the challenge is to concentrate resources on mechanism and intervention.  In many instances, that really has nothing to do with technology, but has everything to do with lifestyle, because we have clear enough understanding to know that the diseases are largely the result of environmental exposure (too many calories, smoking, and so on).  Funding for public health should go there, in those important instances.

From a genome point of view there are certainly areas where primary data-collection is still crucial.  To take one example, it has been clear for decades that we don't yet have nearly a clear enough idea of how mutations arise in cells during development and subsequent life, nor of how those mutations lead to disease and its distribution by age and sex, including how they interact with environmental components and with each other.  But identifying mutations and their patterns in our billions of cells, as they arise by age, and how they affect cells is a major technological challenge, far harder than collecting larger case-control studies for traits we've already studied that way before.

Tuesday, August 20, 2013

Adam's sin and Next Generation Sequencing

Before the Enlightenment, in western scholarship at least, our understanding of the world was based on data we could detect with our senses, without much of a systematic sampling framework (except, perhaps, in astronomy).  Thinkers generalized from this with inborn reasoning power.  Aristotle, Plato, and many others worked basically in this way.  The idea was, in essence, that there were truths to be found, and our brains and reason were structured in ways that would lead us to them.  There were general principles of natural organization as well, such as were represented by mathematics.

Adam's sin
What else could we expect?  While not part of Aristotelian or classical Greek scholarship, the Judeo-Christian understanding of the cosmos was that when Eve fell to temptation and ate from the forbidden fruit of the tree of knowledge of good and evil, and in turn tempted Adam, mankind's vision was blurred: in our sin, we lost the ability to see Nature clearly.  Forever benighted, then, we had to turn to the religious authorities, and they (that is, mainly the Catholic authorities) had accepted Aristotelian worldviews as basically correct--and hence unchallengeable.

Rubens' Adam and Eve
However by about the 17th century, as discussed in the first in a seven-part radio series on the BBC, "Seven Ages of Science", a variety of activities in Europe led to an increase in instrument-making, for various purposes.  But the most relevant examples, so far as we know at least, were the making of lenses and of mechanisms for measuring time.  Machines to generate vacuums were another.  These led rapidly over the next century or two to all sorts of things, including steam engines, accurate navigation instruments, and so on.

Telescopes, microscopes, and clocks opened huge new frontiers, and their major impact was on what became modern science as we know it.  The iconic instances are Galileo's use of telescopes on the moon and planets, and Newton's use of mathematics and more accurate astronomical data, together showing us that the accepted Aristotelian wisdom about the nature of the universe was inaccurate (or, sometimes, just plain wrong). Meanwhile, as astronomers gazed and measured things big, the likes of Robert Hooke looked at things small.  Various devices like the vacuum pump (Hooke helped make one for Robert Boyle) contributed to a new sense of chemistry and physiology.

Hooke microscope; 1665

The Enlightenment period that these activities helped usher in was one in which empirical observation of Nature, rather than mere thought and dogma, became the watchword.  We could understand--and control--the world through what we now would call technology.  The ideas were extended to the belief that by acting rationally, by being reasonable, we would understand.  This became the belief in what science should be about, and of course we still share that view today.

At the time, there was no substitute for technology, when so much became quickly discernible to us that our biological senses could not detect.  Chemistry, physics, astronomy, and biology all became formalized disciplines through various uses of technology.  Geology and evolution owed their modern origins in part to better navigation and global economies, and so on.

In those times, technology drove science because, in a sense, we were seeing things for the very first time, and there was so much to see.  But in the process, technology became part of the scientific method: if you did not document something with appropriate detection and measurement technology, your 'theory' about Nature would be viewed as mere speculation.  Even Darwin used microscopes, global travel technology, and his era's version of 'Big Data', along with globally expanded measures of geological formation and change, to develop his theory of evolution.

Instrumentation became more elaborate and expensive, and science gradually morphed from private rich-men's basement labs to university and other institutionalized research centers.  In a cultural sense instrumentation became what was needed for cachet, as well as the vital tool for understanding the problems of the day from physics to medicine.  Generally, this was a correct understanding of the state of things, though one could argue that investigators thought up the ideas they did because they had instrumentation.

The clockwork universe
The Enlightenment idea was to find, through such instrumentation, that God's creation was a clockwork universe that, like the instruments being developed, worked universally, according to specific--and discoverable!--principles that gave certainty to cause-and-effect patterns, if only we knew the laws of Nature that drove them.

But the laws of the creation had been kept from our vision by the serpent who had tricked Eve into her forbidden taste test.  All it showed to Adam and Eve was that they were naked and needed to find the nearest fig leaf.  But now, thousands of years later, we flawed humans had developed ways to take the scales off our eyes and see what we would not otherwise have been able to see.

The astounding realization, one might say, was that the universe was a clockwork place, a place of laws that applied always and everywhere and thus were ordained by God when the universe was created.  What else could it mean?  The success of the many great and lesser luminaries who avidly pursued instrument-based observations, and the exciting new ethos of empiricism before theory, livened up a basically new area of human endeavor: organized science and technology.

Next-Generation sequencing as an icon of current thinking
This has evolved rapidly and has integrated well with commercially based economies, so that multiple inter-dependencies grew.  We not only depend on technology for finding things not findable before, but we use technology to drive what we will study, even as a rationale.  That is the system today.  We drop various terms, knowingly, to justify or give gravitas to our work.  Thus 'next generation' DNA sequencing becomes what you have to propose if you expect to get grant funding, even if the old technology could have answered your question.  Indeed, that's how you get more DNA than the next fellow, so you can study what s/he cannot 'see', and then getting more becomes part of the point.

Older generation, Sanger sequencing; Wikipedia

The problem, for those of us who see it as a problem, is not the value of the technology where appropriate, but that it first of all determines the very questions we ask (or the ones that being in the science tribe will let you get resources for), and secondly leads to repetitive, me-too, ever-larger but often not better thought-out work.  The costly, imitative nature of this work will reach, if it hasn't already reached, a point where the incremental new knowledge is as trivial as collecting yet another tropical flower or beetle would have been to the Victorians.

When the point of diminishing returns is reached, so is a kind of conceptual stagnation.  When and whether we're at such a point, or where we are, is certainly widely debated.  Is another, bigger GWAS, with sequence rather than marker data, going to deliver a body blow to, say cancer or heart disease?  Or will it just add some pin-striping to what we already know?  Will identifying every neural connection in a brain be worth billions?  Or will it be mainly another form of very costly beetle collection?

Where and how can we focus the incredibly powerful tools that we now have on things that we truly have never been able to see before?  Are we pressing Enlightenment science to its useful limits, in need of better kinds of insights than we can get from ever more Big Data?  Or is the priority given to sexy instrumentation still worth the central or even driving role it plays in today's science?

It is a cultural fact that we are so assured of, believe in, depend on, or hide behind (?) technology in our industrial-scale science system, that technology has pride of place these days.  That cultural fact gives a lot of us jobs and a feeling of importance.  In many areas it also gives a feeling of profound discovery.  Some of that really is profound.  But just as profoundly, much of it is a waste of creative energy. Finding the right balance is something that will have to happen on its own, probably through the chaos of modern forms of disputation (such as blogs), since it can't be ordered up by some sort of administrative decision.

Monday, August 19, 2013

Genes 'for' XLID -- the replication/falsifiability fallacy

How do we know when a particular gene variant actually causes a trait?  
A recent paper in the American Journal of Human Genetics (XLID-Causing Mutations and Associated Genes Challenged in Light of Data From Large-Scale Human Exome Sequencing, Piton et al.) raises some important questions about identifying gene variants associated with X-linked intellectual disability (XLID).  But in doing so, the paper raises questions about identifying causal variants in general.

The incidence of intellectual disability (ID) is 1-2% in children.  Patterns of occurrence have long suggested X chromosome-linked causation, and numerous X-linked variants have been found to be associated with ID.  FMR1, associated with fragile X syndrome, was one of the first found, but the number has grown since then, "from only 11 in 1992 to 43 in 2002 and over 100 genes now".  That is, over 100 genes on the X chromosome are now thought to be associated with intellectual disability.

Some XLID is syndromic, and some is not -- identifying causal gene variants is easier for syndromic forms because unrelated individuals with some or all of a specific phenotype can be more easily matched, and the assumption made that the same mutation causes the multiple symptoms.  But, as Piton et al. point out, phenotypic variation (milder forms of a syndrome, incomplete penetrance, not to mention multiple genetic pathways to a given trait) can mask, or create, similarities, making it harder to match unrelated probands (people affected with the trait you're interested in explaining).

Gene: Wikimedia Commons
Function is hard to demonstrate
But, let's say a potentially causal mutation has been identified in one or more people or families.  Now it has to be validated, and its function deciphered.  Mark Wanner at Jackson Labs recently interviewed leading human geneticist Aravinda Chakravarti on determining gene function. Finding a gene is the easy part, Aravinda said.
But while finding and localizing genes isn’t a big challenge any more, it is still very difficult to figure out what they actually do and how their malfunctions lead to disease. Clearly, one aim is to understand their normal function and another to figure out how changes in that normal genetic program leads to disease. For chronic diseases of humankind—cancer, heart disease, neurological diseases, anything—we still are largely ignorant of how genetic abnormalities lead to disease. That is still mostly a black box. That’s the part that many labs, including my own, are focusing on more closely and looking to solve. Unfortunately, this may require a disease-by-disease solution.
When the variant has been found in only one or a handful of individuals, this makes it even more difficult.  And, as Chakravarti said,
...typically a single gene has many alternative functional forms with many different functions. Understanding these functions across development and aging remains challenging since the universe of possibilities is so large. Moreover, few genes function by themselves and many functions are only evident by one gene interacting with another . . . figuring out this aspect is still in its infancy since the universe of these possibilities is even larger. One has then also to consider that the gene may have different functions in different cell types and tissues. That's why it’s hard.
Piton et al. note that even if a functional study shows that a candidate variant has an effect at the protein or cellular level, this doesn't necessarily demonstrate that the effect is responsible for the trait.  This is further complicated by the observation that people may carry a supposed causal gene variant but not have the trait.  Does that mean the variant is not causal after all?

Given the difficulties in validating a gene variant's involvement in a trait, Piton et al. wanted to assess the likelihood that the genes currently reported to be associated with XLID in fact are.  They used the National Heart, Lung, and Blood Institute's database of 6,500 sequenced exomes, collected for genetic studies of traits unrelated to ID, as a set of controls in which to search for variants in the 104 genes currently thought to be causal.

Most of the validating is done statistically, though this can be ambiguous.  E.g., when a variant is present in a single affected individual, this can mean either that it's a false positive or that it's a rare causal variant, and it's impossible to determine which from a single observation.  It can also be impossible to determine which from multiple observations, because some people can carry the variant and not be affected, or multiple affected relatives might share the variant simply because they share ancestry.
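To make the point concrete, here is a minimal sketch (in Python, with purely illustrative numbers that are not from Piton et al.) of how easily a rare but harmless variant can turn up in a patient sample by chance alone:

```python
# Illustrative only: the variant frequency and sample size below are assumptions,
# not figures from the paper.

def p_at_least_one_carrier(freq, n_people):
    """Probability that at least one of n_people carries a variant
    whose population frequency is freq (assuming independent draws)."""
    return 1 - (1 - freq) ** n_people

benign_freq = 1e-4    # a rare but functionally neutral variant
probands = 500        # a hypothetical number of sequenced XLID probands

print(p_at_least_one_carrier(benign_freq, probands))  # ~0.05
# Multiply that by ~100 candidate genes, each harboring many such rare variants,
# and chance hits are essentially guaranteed; a single observation cannot
# separate a false positive from a rare causal variant.
```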

If a variant is also found in the NHLBI database, in which perhaps 50 males might have XLID (an estimate based on population prevalence coupled with the terms of inclusion in the database, such as being able to sign informed consent forms), it becomes statistically more likely that the variant is a false positive, though there are numerous reasons why it might not be.

So, even addressing the question head on, aware of the issues, in many cases it's still hard, if not impossible, to definitively determine function.  Piton et al. found 22 of the mutations previously identified in the 104 XLID-associated genes in the NHLBI control group.  But if the variants were truly causal, they might have expected to find at most one or two, given the expected prevalence of XLID in the sample.  So there were many more than expected, and all but two were detected in both males and females, which makes them unlikely to be causal, given that XLID, like most X-linked traits, is overwhelmingly seen in males.
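That "one or two" expectation can be roughly reconstructed with a back-of-the-envelope calculation.  The sketch below is only one way to recover a number of that size; the per-variant share of cases is an invented, illustrative assumption, not a figure from the paper:

```python
# All numbers are illustrative assumptions except those quoted in the post
# (6,500 exomes; roughly 50 males who might have XLID; 22 observed hits).

total_exomes = 6500
possible_xlid_males = 50        # prevalence x database inclusion criteria (from the post)

# Suppose, purely for illustration, that any single reported causal variant
# accounts for at most a few percent of XLID cases:
assumed_share_per_variant = 0.03

expected_hits = possible_xlid_males * assumed_share_per_variant
print(round(expected_hits, 1))               # ~1.5, i.e. "at most one or two"

observed_hits = 22                           # what Piton et al. actually found
print(round(observed_hits / expected_hits))  # roughly an order of magnitude more
```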

They further determined that 10 of the genes previously considered to be associated with XLID are most likely not, and another 15 are doubtful.  But their results are only suggestive.  They ranked other genes on the list from 'highly questionable' to 'likely' to 'needs verification', based on what is known about the gene, its variants, and the associated traits.

The more we know about genetics, the more it seems we don't know.  As with many other traits, candidate genes and variants thought to be associated with XLID keep getting added to the list of possible causal genes, but they are rarely validated.  Why are they considered causal?  Most often for statistical reasons, but also because of the sometimes deceptive assumption that if a variant is found in a gene already on the list of genes associated with a trait, it's a very likely candidate.

Cystic fibrosis is another example -- thousands of variants in the CFTR gene have been found in patients with CF, and, as with many traits, most of the rare variants are assumed, not demonstrated, to be causal, based on the knowledge that other variants in the gene are.  Similarly, Piton et al. determined that some variants in genes known to be associated with XLID are apparently innocuous.  To their surprise, they also found, in unaffected males, a mutation that had previously been shown to affect the protein encoded by its gene and had therefore been assumed to cause XLID.

Another cautionary tale appeared in Cell not long ago.  A paper called "Exome Sequencing of Ion Channel Genes Reveals Complex Profiles Confounding Personal Risk Assessment in Epilepsy" (Klassen et al.), from Jeffrey Noebels' lab, reported the results of sequencing 237 ion channel genes in the exomes of people with and without sporadic idiopathic epilepsy.  They found that "rare missense variation in known Mendelian disease genes is prevalent in both groups at similar complexity, revealing that even deleterious ion channel mutations confer uncertain risk to an individual depending on the other variants with which they are combined."  That is, the same ion channel variants assumed to cause epilepsy are found in people without the trait.

But causation is genomic!
This all raises an important issue that is often overlooked.  Perhaps the important issue, as the hunt for disease genes continues.  Causation is genomic, and background and environment matter.  Marginal gene effects may therefore often depend more on the sampled genomic backgrounds than on the candidate gene itself.

This means that if we assume genes are causal but allow that they don't always act that way, then neither 'positive' nor 'negative' findings can undermine the theory.  That lets us keep our genomic-causation axiom, but it impedes conceptual progress.  The underlying reason, consistent with the above, is that we do not have a theory for how genes affect traits.  We would say that life is an evolutionary, historical process that proliferates ad hoc diversity, with no prior constraint on which genes, or how many, produce which trait, or how.  In this sense, genomics is not like chemistry, in which every carbon atom is identical.

This is also why genomic risk, rather than single-gene risk, may be the better concept, and why we've argued that single-site GWASsy results are suspect.  But even if one takes genomic background into account, the backgrounds are so complex and varied that they don't help much except in relatively unusual circumstances.

Friday, August 16, 2013

How I Became Sick



by Jim Wood

I am a patient.  I used to be a regular human being, but that was a long time ago.  Back in the 1980s, as a result of a routine office visit and blood work, I was found to have slightly elevated blood pressure and blood sugar.  Nothing to worry about, merely in the upper range of normal, but probably worth keeping an eye on in the future.  Then, a few years later, I was diagnosed with two conditions called prediabetes and prehypertension, neither of which I had ever heard of before; so I was put on a lifetime’s worth of medication.  A few years after that, I was diagnosed with full-blown type 2 diabetes and essential (if mild) hypertension.  So my meds were augmented and the dosages upped.  By now I was a definite patient, requiring frequent monitoring and more aggressive treatment.

Sphygmomanometer; Wikimedia Commons
 But here’s the curious thing: my numbers (blood pressure and blood sugar) had not become worse.  In fact, they had improved a bit, which presumably had something to do with the meds.  Moreover, I had not developed any symptoms.  I did not have peripheral neuropathy, I had never experienced a coma, my feet and kidneys were just dandy, I had never had a stroke or any sign of heart disease.  In fact, the only major health problem I had turned out to be the result of one of the medications I was on, which actually caused serious hypotension, at least in me.  The scrip was withdrawn and now I’m fine.  I remain a patient, but a patient without a disease.

If there was no deterioration in my physical condition, how did I go from having minor not-quite-symptoms to having two life-threatening illnesses?  The answer is simple: the magic trick was performed by a series of Expert Medical Panels – well-known physicians and epidemiologists who were aided by representatives from the pharmaceutical industry.  These worthies decided on several occasions to lower the threshold values used to diagnose hypertension and type 2 diabetes.  Were these changes inspired by new evidence that even mild elevations in blood pressure or blood sugar caused serious disease and a greater risk of death?  No.  They were inspired by a longstanding commitment to “preventive medicine” – the well-intentioned idea that we can intervene in people’s lives before they develop overt disease (i.e. before they display real symptoms) and delay or prevent those diseases from ever happening.  I remember that, when I was a callow youth, the idea of preventive medicine was fairly new and the public reaction to it was overwhelmingly positive.  This was something that just plain made sense.  And, in fact, some preventive measures worked really well: don’t smoke, get some exercise, watch your diet, and get a bunch of vaccinations.  But other preventive measures had effects that were more equivocal, improving people’s health only at the margin and mostly turning a whole bunch of perfectly normal people into patients.

Take hypertension as an example.  The thresholds (both systolic and diastolic) for diagnosing hypertension were reduced in 1997 as a result of the Sixth Report of the Joint National Committee on Detection, Evaluation, and Treatment of High Blood Pressure, which appeared in Archives of Internal Medicine.  It has been estimated that this change increased the number of people in the U.S. diagnosed each year with the disease by about 13 million.  (This figure comes from the excellent book by Dr. Gilbert Welch et al., Overdiagnosed, Beacon 2011.)  That’s a lot of new customers, er, patients.  Did it help any of those new patients?  Well, some no doubt, but while the number of people being treated for hypertension skyrocketed, the mortality rate from causes of death associated with hypertension changed pretty much not at all.  What the change in diagnostic criteria did was greatly increase the number of normal people who have to be treated without benefit in order to increase the chance that you help just one person who really needs it (given that you can’t empirically distinguish the normals from the ones needing help).  In the past, when the diagnostic criteria were set high, nearly everyone who was diagnosed and treated probably benefitted to some degree (if they didn’t die from some other cause or indeed from hypertension meds first).  Using the current standards, however, a large fraction, perhaps a majority, of people being diagnosed and treated probably have no underlying disease and are unlikely to develop it in the future even without treatment.  Am I one of those people?  I have no way of knowing, short of going off all my medications and waiting to see if I die in short order.  Do I want to do that?  Not really (being both a moral and a physical coward).  Do I suspect that I may be receiving treatment that either is unnecessary or at best will have only trivial benefits?  Well, yes.
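For a sense of scale, here is a purely hypothetical sketch (not the JNC VI analysis, and not the source of the 13-million figure) of how lowering a diagnostic cut-off on a continuously distributed measurement reclassifies enormous numbers of people; the distribution parameters and population size are invented for illustration:

```python
# Hypothetical illustration: assume systolic blood pressure in adults is
# roughly normally distributed.  Mean, SD, and population size are assumptions.
from statistics import NormalDist

adults = 200_000_000                       # assumed adult population
systolic = NormalDist(mu=122, sigma=15)    # assumed distribution, mmHg

def count_above(threshold):
    """Number of adults at or above a given systolic threshold."""
    return adults * (1 - systolic.cdf(threshold))

old_cutoff, new_cutoff = 160, 140
extra = count_above(new_cutoff) - count_above(old_cutoff)
print(f"{extra / 1e6:.0f} million more people now meet the definition")
# The exact figure depends entirely on the assumed parameters; the point is
# that a modest shift in the cut-off reclassifies tens of millions of people.
```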

The basic underlying epistemological questions here are (1) can you ever make valid predictions about future disease risks; (2) how can you turn continuous biological variation in, say, blood pressure into a simple, dichotomous (yes/no) criterion for clinical decision-making; and (3) how can you capture inter-individual physiological variation with a single set of numbers?  The answers are: (1) no, (2) you can’t – at least not without injecting a big dose of arbitrariness and error into the process – and (3) you can’t, period.  In assessing the value of some diagnostic criterion like a blood pressure cut-off, we speak of sensitivity and specificity.  Sensitivity is the probability that your criterion correctly identifies everyone with the condition of interest, including people who are not yet displaying clear-cut pathologies (one minus the probability of a false negative).  Specificity is the probability of correctly diagnosing only the people with the condition of interest and not those with some other condition or no condition at all (one minus the probability of a false positive).  Both sensitivity and specificity are virtues, and you’d like to design your diagnostic methods so as to maximize both of them simultaneously.  But, generally speaking, you can’t because they compete with each other.  To guarantee that you catch all criminals, jail everyone (a rather Javert-like solution, I admit).  By the same token, to guarantee that you treat all cases or potential cases of a disease, diagnose and treat everyone.  Sure, most people won’t actually have the specific disease you’re trying to diagnose (the specificity will be as low as it could ever get), but that’s the price you pay for perfect sensitivity.  Although what I’ve just described is, of course, a gross exaggeration of current clinical practice, there’s no doubt that things have been moving in that general direction.  The whole goal of preventive care is to maximize sensitivity (in this case the ability to detect a disease even when there’s no overt sign of it) at the expense of specificity in order to “improve” prediction.  But that’s a mug’s game because a prediction with a high false positive rate is not a good prediction.
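The trade-off can be made concrete with a toy calculation.  The sketch below assumes two overlapping, normally distributed blood-pressure populations, those who will eventually develop disease and those who never will; all parameters are invented for illustration:

```python
# Toy illustration of the sensitivity/specificity trade-off; all numbers assumed.
from statistics import NormalDist

future_cases = NormalDist(mu=150, sigma=15)   # assumed distribution for true cases
never_cases = NormalDist(mu=120, sigma=15)    # assumed distribution for non-cases

def sens_spec(threshold):
    sensitivity = 1 - future_cases.cdf(threshold)  # fraction of true cases flagged
    specificity = never_cases.cdf(threshold)       # fraction of non-cases left alone
    return sensitivity, specificity

for cutoff in (160, 140, 120):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
# Lowering the cut-off raises sensitivity but erodes specificity:
# you catch more future cases only by sweeping in more people who would
# never have developed the disease.
```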

Is all this the result of an evil conspiracy of greedy drug manufacturers and health-care providers?  I honestly don’t know (even if I sometimes have my suspicions).  I think it’s entirely possible that it’s simply the result of misdirected good intentions and a fervent if wrongheaded faith in preventive medicine.  I genuinely like and respect my primary care physician, who assures me that I’m going to die if I go off my meds.  (I’m pretty sure I’m going to die anyway, but that’s another story.)  I truly think she has my best interests at heart.  But do I think she’s bought into a mind-set that seriously needs to be rethought?  Absolutely.  Do I enjoy her treating me like a patient in the absence of any discernible disease?  Not a bit.  Do I have the courage to tell her so?  See the comment about moral cowardice above.

There are many reasons that unnecessary patienthood is annoying and costly (to me, never mind the general public) and maybe sometimes even dangerous.  As already noted, I’ve had a non-trivial medical emergency (which landed me in the hospital) caused by one of my medications.  And I have to do “co-pays” for all my exams, treatments, and prescriptions – even though, by U.S. standards, I’m ridiculously well-insured.  And God knows I’ve experienced my fair share of annoyance scheduling frequent physical exams and blood tests and taking the time to refill my many prescriptions.  But my real objection to being a patient runs deeper than any of that – being treated like a patient is infantilizing.  I bristle at that subtle hint of moral disapproval when my doctor tells me my numbers have “worsened” (often within the range of measurement error).  I bristle even more at that slight tone of “What a good boy!” that accompanies any “improvement” in my numbers.  I was treated like a child for the first several years of my life, and I got thoroughly fed up with it.  Why is it happening to me again?

There are those in our society who need health care and are not getting it.  I am not one of them.  If we want to improve the health of the U.S. population in general, the single best thing we can do is eliminate disparities in access to health care.  My being treated is not furthering that goal at all, although it is increasing the costs of health care for everyone.  Let’s face it: the lovely idea of preventive medicine comes in a poisoned chalice.