Thursday, May 28, 2015

Discovering the "laws of life": where are we?

For several years, I've been in an off-and-on discussion about topics relevant to MT readers with a thoughtful undergraduate student named Arjun Plakett.  He has now graduated, with a degree in chemical engineering, but he is interested in the wider issues of science, and we're still in at least occasional contact.

I mention this because Arjun recently brought to my attention an old but thought-provoking and, I think, relevant book.  It is by the late Milton Rothman and is called Discovering the Natural Laws (1972; reprinted by Dover, 1989).  It is a very interesting book, and I wish I had known of it long ago.  It concerns the history by which some of the basic 'laws' of physics were shown to be just that: laws.  And, while its specific topic is physics, it is very relevant to what is going on in genetics at present.

Source: my copy's cover

Rothman deals with many presumed laws of Nature, including Newton's laws of motion, conservation, relativity, and electromagnetism.  For example, we all know that Newton's universal law of gravitation says that the gravitational attraction between two objects, of masses M and N, a distance r apart, is

F = GMN / r²

Here G is the universal gravitational constant.  But how do we know this?  And how do we know it is a 'universal law'?  For example, as Rothman discusses, why do we say the effect is due to the exact square (power 2) of the distance between the objects?  Why not, for example, the 2.00005 power?  And what makes G a constant, much less the constant?  Why constant?  Why does the same relationship apply regardless of the masses of the objects, or what they're made of?  And what makes it universal--how could we possibly test that?
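As a toy illustration of what that question involves (this is my own sketch, not an example from Rothman's book, and every number in it is invented), one could simulate noisy force measurements at different separations and ask how precisely the data pin down the exponent.  Only as measurement error shrinks does the estimate home in on exactly 2:

```python
# A minimal sketch (invented numbers): how could data tell us the exponent
# is exactly 2 rather than, say, 2.00005?  Simulate force measurements
# between two hypothetical masses at varying separations, add measurement
# noise, and fit the exponent by least squares on log scales.
import numpy as np

G = 6.674e-11          # universal gravitational constant, m^3 kg^-1 s^-2
M, N = 1000.0, 10.0    # two hypothetical masses, kg
true_exponent = 2.0

rng = np.random.default_rng(0)
r = np.linspace(0.5, 5.0, 50)                  # separations, m
F_true = G * M * N / r**true_exponent          # F = GMN / r^2

for noise in (1e-2, 1e-4, 1e-6):               # shrinking relative error
    F_obs = F_true * (1 + noise * rng.standard_normal(r.size))
    # log F = log(GMN) - p * log r, so the slope of a straight-line fit is -p
    slope, intercept = np.polyfit(np.log(r), np.log(F_obs), 1)
    print(f"relative error {noise:g}: estimated exponent = {-slope:.6f}")
```

The point of the sketch is only that 'knowing' the exponent is 2 is an asymptotic claim about ever-better measurement, which is exactly the kind of convergence Rothman describes.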

The fact that these laws are laws, as Rothman details for this and other laws of physics, was not easy to prove.  For example, what two objects would be used to show the gravitational law, where measurement errors and so on would not confound the result?  The Earth and Moon won't do, despite Newton's use of them, because they are affected by the pull of the Sun and other planets etc.  Those effects may be relatively small, but they do confound attempts to prove that the law is in fact a true, exact law.

A key to this demonstration of truth is that as various factors are accounted for, measurement becomes more accurate, and different approaches triangulate, the data become asymptotically close to the predictions of the theory: predictions based on theory get ever closer to what is observed.

In fact, and very satisfyingly, to a great approximation these various principles do appear to be universal (in our universe, at least) and without exception.  Only when we get to the level of resolution of individual fundamental particles, and quantum effects, do these principles break down or seem to vary.  But even then the data approach a specifiable kind of result, and the belief is that this is a problem of our incomplete understanding, not a sign of the fickleness of Nature!

Actually, if a value like G or the power 2 were not universal but instead were context-specific, yet replicable in any given context, we could probably show that, and characterize the situations in which a given value of the parameter held, or we could define with increasing accuracy some functional relationship between the parameter's value and its circumstances.

But Mermaid's Tale is about evolution and genetics, so what is the relevance of these issues?  Of course, we're made of molecules and have to follow the principles--yes, the laws--of physics.  But at the scale of genes or organisms or evolution or disease prediction, what is their relevance?  The answer is epistemological, that is, about the nature of knowledge.

Theory and inference in genetics and evolution: where are the 'laws'?  Are there laws?
When you have a formal, analytic or even statistical theory, you start with that, a priori one might say, and apply it to data either to test the theory itself or to estimate some quantity (such as F in a particular setting, say a planet's orbital motion).  As I think it can be put, you use or test an externally derived theory.

Rothman quotes Karl Popper's view that "observation is always observation in the light of theories" which we test by experiments in what Rothman calls a 'guess and test' method, otherwise known as 'the scientific method'.  This is a hypothesis-driven science world-view.  It's what we were all taught in school.  Both the method and its inferential application depend on the assumption of law-like replicability.

Popper is best known for his idea that replication of observations can never prove a hypothesis, because the next observation might falsify it; it only takes one negative observation to do that.  There may be some unusual circumstance that is valid but that you haven't yet encountered.  In general, but perhaps particularly in the biomedical sciences, it is faddish and often self-serving to cite 'falsifiability' as our noble criterion, as if one had a deep knowledge of epistemology.  Rothman, like almost everyone, quotes Popper favorably.  But in fact, his idea doesn't work.  Not even in physics!  Why?

Any hypothesis can be 'falsified' if the experiment encounters design or measurement error or bad luck (in the case of statistical decision-making and sampling).  You don't necessarily know that's what happened, only that you didn't get your expected answer.  But what about falsifiability in biology and genetics?  We'll see below that even by our own best theory it's a very poor criterion in ways fundamentally beyond issues of lab mistakes or bad sampling luck.

We have only the most general theories of the physics-like sort for phenomena in biology.  Natural selection and genetic drift, genes' protein-coding mechanisms, and so on are in a sense easy and straightforward examples. But our over-arching theory, that of evolution, says that situations are always different.  Evolution is about descent with modification, to use Darwin's phrase, and this generates individuals as non-replicates of each other.  If you, very properly, include chance in the form of genetic drift, we don't really have a formal theory to test against data.  That's because chance involves sampling, measurement, and the very processes themselves.  Instead, in the absence of a precisely predictive formal theory, we use internal comparisons--cases vs controls, traits in individuals with, and without, a given genotype, and so on.  What falsifiability criterion should you apply and, more importantly, what justifies your claim of that criterion?

Statistical comparison may be an understandable way to do business under current circumstances, in which we do not have a precise enough theory, but it means that our judgments are highly dependent on subjective inferential criteria like p-value significance cutoffs (we've blogged about this in the past).  They may suggest that 'something is going on' rather than 'nothing is going on', but even then only by the chosen cutoff standard.  Unlike the basic laws of physics, they do not generate asymptotically closer and closer fits to formally constructed expectations.

What is causal 'risk'?
We may be interested in estimating 'the' effect of a particular genetic variant on some trait.  We collect samples of carriers of one or two copies of the variant and determine the risk in them, perhaps comparing this to the risk in samples of individuals who don't carry it.  But this is not 'the' risk of the variant!  It is, at best, an estimate of the fraction of outcomes in our particular data.  By contrast, a probability in the prediction sense is the result of a causal process with specific properties, not an empirical observation.  We usually don't have the evidence to assert that our observed fraction is an estimate of that underlying causal process.

Collecting larger samples will not, in the relevant sense, lead asymptotically to a specific risk value.  That's because we know enough about genes to know that a variant's effect depends on the individual's genomic background and life experience.  Each individual has his or her own, unique 'risk'.  In fact, the term 'risk' itself, applied to an individual, is rather meaningless, because it is simply inappropriate to apply such group-based concepts to an individual when the value is estimated from a group sample.  Or, put another way, the ideas about variance in estimates and so on just don't apply, because individuals are not replicable observations.  The variance of group estimates is a different thing from what we might want to know about individuals.
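To make the point concrete, here is a minimal sketch with invented numbers (not real genetic data): if each carrier has a different, context-dependent risk, then collecting more carriers only nails down the group average ever more precisely; it never converges on any particular individual's 'risk'.

```python
# A minimal sketch of the point above (invented numbers, not real data):
# each hypothetical carrier of a variant gets a unique, context-dependent
# risk; the usual 'risk of the variant' is just the observed fraction of
# affected carriers, which stabilizes on the *group average* as n grows.
import numpy as np

rng = np.random.default_rng(1)

for n in (100, 10_000, 1_000_000):
    # hypothetical carriers, each with a unique risk drawn from a wide range
    individual_risks = rng.uniform(0.01, 0.40, size=n)
    outcomes = rng.random(n) < individual_risks        # who got the disease
    estimated_risk = outcomes.mean()                   # the usual 'risk of the variant'
    print(f"n={n:>9,}: estimated group risk = {estimated_risk:.3f}, "
          f"spread of individual risks = {individual_risks.std():.3f}")

# The group estimate settles near the average (about 0.2 here), but the
# individual risks still range from 1% to 40%; the asymptote is an average,
# not a law-like constant that applies to any one person.
```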

These are the realities of the data, not a fault of the investigators.  If anything, it is a great triumph, one we owe to Darwin (whatever his overstatement of selection as a 'law' of nature), that we know these realities!  But it means, if we pay attention to what a century of fine life science has clearly shown, that fundamental context-dependence deprives us of the sort of replicability found in the physical sciences.  That undermines many of the unstated assumptions in our statistical methods (the existence of actual probability values, proper distributions from which we're sampling, replicability, etc.).

This provides a generic explanation for why we have the kinds of results we have, and why the methods being used have the problems they have.  In a sense, our methods, including statistical inference, are borrowed from the physical sciences, and may be inappropriate for many of the questions we're asking in evolution and contemporary genetics, because our science has shown that life doesn't obey the same 'laws' at the levels of our observation.

Darwin and Mendel had Newton envy, perhaps, but gave us our own field--if we pay attention
Rothman's book provides, to me, a clear way to see these issues.  It may give us physics envy, a widespread phenomenon in biology.  I think Darwin and Mendel were both influenced indirectly by Newton and the ideas of rigorous 'laws' of nature.  But they didn't heed what Newton himself said in his Principia in 1687: "Those qualities of bodies that . . . belong to all bodies on which experiments can be made should be taken as qualities of all bodies universally." (Book 3, Rule 3, from the Cohen and Whitman translation, 1999)  That is, for laws of nature, if you understand the behavior of what you can study, those same laws must apply universally, even to things you cannot or have not yet studied.

This is a cogent statement of the concept of laws, which for Newton was "the foundation of all natural philosophy."  It is what we mean by 'laws' in this context, and why better-designed studies asymptotically approach the theoretical predictions.  But this principle is one that biomedical and evolutionary scientists and their students should have tattooed somewhere they won't forget.  The reason is that this principle of physics is not what we find in biology, and we should be forced to keep in mind, and take seriously, the differences between the sciences and the problems we face in biology.  We do not have the same sort of replicability or universality in life, unless non-replicability is our universal law.

If we cannot use this knowledge to ferret out comparable laws of biology, then what was the purpose of 150 years of post-Darwinian science?  It was to gain understanding, not to increase denial.  At least we should recognize the situation we face, which includes stopping the misleading or false promises about our predictive, retrodictive, or curative powers that essentially assume as laws what our own work has clearly shown are not laws.

Wednesday, May 27, 2015

Does Mendel fit in contemporary high school curriculum?

Ken and I met Tuomas Aivelo in person last August when we were in Finland teaching Logical Reasoning in Human Genetics, though we'd known each other on Twitter for a while.  He's a PhD student in Ecology and Evolutionary Biology at the University of Helsinki, with a keen interest in how genetics is taught in secondary schools.  He frequently speaks and writes on this topic (e.g., here), and has been active in efforts to revise genetics textbooks in Finland and elsewhere.  He has a blog of his own, with a very large following (which I know because occasionally he links to our blog and it seems that most of Finland follows the link), but unfortunately for those of us who don't speak Finnish, it's in Finnish.  Google Translate has a long way to go to make sense of Finnish.  We're pleased today to have Tuomas's thoughts in this guest post on teaching genetics.

Does Mendel fit in contemporary high school curriculum?

By Tuomas Aivelo

It’s not too often that one gets real eureka moments, but I had one of those last August.  I participated in the Logical Reasoning in Human Genetics course taught by Ken Weiss, Anne Buchanan and others at the University of Helsinki.  I promised to write this up for Mermaid's Tale (as Anne has been continuously complaining that it's difficult to read my own blog, Kaiken takana on loinen, via Google Translate).

One of my self-set objectives for the course was to figure out what kind of curriculum high school biology (or, in the Nordic context, upper secondary school biology) should have.  Finland is about to update its national core curriculum, and since I have a soft spot for genetics education I have been involved in the curriculum design.

The problems with genetics education are well known: students have a poor understanding of what genes actually are and how they function, and they are not able to perceive that 'gene' means different things in different fields of biology.  Lack of understanding of scientific models is very pervasive in biology.  While our peers in chemistry and physics education have done better work in explaining what scientific models are (e.g., the “Bohr model”, the “Standard Model”), we have been overwhelmed by many different models of the ‘gene’.

I remember when I first understood that 'gene' can actually mean multiple things.  It wasn’t that long ago, probably at the beginning of my PhD studies.  I wish somebody had told me this explicitly much earlier.  Suddenly I understood that most of the debates which cross the fields of biology and involve genes are about the different meanings of 'gene' in different fields, rather than about anything based in reality.

Scientific models are above all helpful in making reality more approachable for scientific inquiry.  They are very powerful heuristic tools.  The problem is that genetics textbooks contain several different scientific models of the ‘gene’ which are not explicitly outlined as scientific models.  A standard high school biology textbook can contain several contradictory gene models while not explaining why they are contradictory.

All of this should cause worry, because genetics education is actually very important.  The most often discussed result of misconceptions in genetics is genetic determinism.  Here I refer to the idea that traits are fixed by genes as genetic determinism.  Genetic determinism excludes any meaningful impact of the environment from the genotype-to-phenotype relation, and it has never been a scientifically sound theory.  (Genetic determinism can also refer to the broader idea that genes have an effect on phenotype, which is obviously a widely accepted idea.  I’ve suggested using the term “hard genetic determinism” to differentiate it from this “scientific genetic determinism.”)

School curricula, teachers and textbooks are not the only problem; science journalism is also rife with ”gene for” news.  “A gene for a trait” is a very deterministic view of gene function.  School should at least equip students with the tools to assess such news.  The real problem is that genetic determinism is implicated in leading to racist views and to a lack of interest in the role of personal behaviour in disease risk.  In the Logical Reasoning in Human Genetics course, Joseph Terwilliger also gave a good reason why every geneticist should be worried about genetic determinism.  He attributed people's worry about genetic privacy (for example, in relation to insurance companies and job applications) to the overselling of genetic determinism: people feel unsafe sharing their genome because they think a meaningful understanding of them as humans can simply be read off from DNA.  Furthermore, this worry could lead to overly strict laws that prohibit the use of samples, and to a lack of consent for genetic studies.


So, the question is: what should we teach the students?  At the moment, genetics education is dominated by Mendelian genetics.  The canonical learning sequence in genetics starts with Mendel and his peas and works through pea pedigrees and segregation analysis.  One of the central exercises is to figure out from pedigrees what kinds of genotypes and phenotypes individuals have and what the probabilities are that a given phenotype will be expressed.  This approach has been strongly attacked in recent years.  For example, Michael Dougherty and Rosie Redfield (here and here) have attacked the Mendelian approach in genetics education in high school and in introductory university courses, respectively.

There have also been efforts to defend the importance of the Mendelian approach.  Mike Smith and Niklas Gericke argued that Mendel's work belongs to general scientific literacy, as it is a central part of our culture.  Furthermore, they suggested that a chronological approach to genetics increases student motivation and understanding of the nature of science.  They also suggested using Mendel as an example that has heuristic value as a simple model of inheritance.

I'm not impressed with these arguments.  If we have a culture of learning genetic determinism in school, purging Mendel from the course contents would only make it easier to change this culture.  Historical or chronological approaches to different fields of biology are rare: genetics and evolution (e.g., Mendel, and Lamarck-Darwin) are normally the only ones widely used.  Furthermore, it is highly contentious whether the Mendelian ”laws of inheritance” really are a simple model of inheritance that could serve as the basis for a more detailed view of genetics.  Simple, yes, but they work in a very limited context.  In fact, in contemporary teaching, while “the historical” aspect is often included, the contemporary part of genetics is rather limited.  In our survey of gene models used in Finnish high school textbooks, we did not find any modern models: all the models used were formulated before the 1960s.  We have an obvious problem: history is taking space from the contemporary understanding of genetics.

This is the point where the eureka moment comes to help: it’s not about Mendel or history.  It’s not about the contents.  It’s about what we are teaching at a more fundamental level.

Until now, Mendel and his peas have been used to answer how traits are inherited.  In fact, the current Finnish curriculum notes that the course contents should address “the laws of inheritance”.  The right question, as I see it, is how the inheritance of traits is studied.  Let’s not focus on how inheritance happens but rather on how we can study the phenomenon.  Here we are at the heart of one of the central problems in science education in general.  In school, we learn how things are, rather than how we know.

When we present Mendel as an example of how inheritance patterns were studied early on, it brings all the contentious concepts into the right context.  'Dominance' and 'recessiveness' are highly useful concepts in the Mendelian context.  This approach would also offer a natural opportunity to discuss why Mendel used peas for his studies and how useful those studies are for inferring the inheritance patterns of other traits.  This would highlight the differences between the inheritance of traits in humans and in peas.

Obviously, this approach would give Mendelian inheritance a less dominant place in biology classes, as Mendel would then be only one of many approaches to studying the inheritance of traits.  In any case, it would still make it possible to take a historical approach, or other innovative approaches, to the teaching.  It allows for an easy route to the models in genetics and what they mean, without making Mendelian inheritance the model, but rather one of many models.

The current trend in science education is to put the emphasis on the Nature of Science (e.g., what science is and how it is done) and on socioscientific issues (e.g., how knowledge of science can be used by active citizens).  The question of how things are studied obviously suits the Nature of Science emphasis nicely, and it shouldn't be too much of a stretch to expect that understanding how inheritance is studied makes it easier to make informed decisions about inheritance.

One of the real problems in biology education is that teaching strategies are rarely studied in the classroom context.  Only empirical testing can answer whether this emphasis on how and why, more than on what, leads to better learning in genetics.

Tuesday, May 26, 2015

Medical research ethics

In today's NYTimes, there is an OpEd column by bioethicist Carl Elliott about biomedical ethics (or its lack) at the University of Minnesota.  It outlines many of what sound like very serious ethical violations and a lack of ethics-approval (Institutional Review Board, or IRB) scrutiny of research.  IRBs don't oversee the actual research, they just review proposals.  So, their job is to identify unethical aspects, such as lack of adequate informed consent, unnecessary pain or stress to animals, inadequate control of confidential information, and so on, so that the proposal can be adjusted before it can go forward.

As Elliott writes, the current iteration of IRBs--in which each institution sets up its own review system to approve or disapprove any research proposal that a faculty or staff member wishes to carry out--was established in the 1970s.  The problem, he writes, is that this is basically just a self-monitored, institution-specific honor system, and honor systems are voluntary, subjective, and can be subverted.  More ongoing monitoring, with teeth, would be called for if abuses are to be spotted and prevented.  The commentary describes many cases, in the psychiatry department at Minnesota alone, that seem to have been rather horrific.

But there are generalizable problems.  Over the years we have seen all sorts of projects approved, especially those involving animals (usually, lab mice).  We're not in our medical school, which has a distant campus, so we can't say anything about human subjects there or generally, beyond that occasionally one gets the impression that approval is pretty lax.  We were once told by a high-placed university administrator at a major medical campus (not ours), an IRB committee member there, that s/he regularly tried to persuade the IRB to approve things they were hesitant about....because the university wanted the overhead funds from the grant, which they'd not get if the project were not approved.

There are community members on these boards, not just the institution's insiders, but how often, or how effectively, they stop questionable projects (given that they are not specialists, and for the other usual social-pressure reasons) is something we cannot comment on--but it should be studied carefully (perhaps it has been).

What's right to do to them?  From Wikimedia Commons

The things that are permitted to be done to animals are often of a kind that animal-rights people have every reason to object to.  Not only is much done that does cause serious distress (e.g., genetically transforming animals so that they develop abnormally or get disease, surgeries of all sorts, or intrusively monitoring function in live animals), but much is done that is essentially trivial relative to the life and death of a sentient organism.  Should we personally have been allowed to study countless embryos to see how genes were used in patterning their teeth and tooth-cusps?  Our work was to understand basic genetic processes that led to the complex, nested patterning of many traits, of which teeth were an accessible example.  Should students be allowed to practice procedures such as euthanizing mice that would not otherwise be killed?

The issues are daunting, because at present many things we would want to know (generally for selfish, human-oriented reasons) can't really be studied except in lab animals.  Human subjects may be irrelevant if the work is not about disease, and even for disease-related problems cell culture is, so far, only a partial substitute.  So how do you draw the line?  Don't we have good reason to want to 'practice' on animals before, say, costly and rare transgenic animals are used for some procedure that may take skill and experience (even if just to minimize the animal's distress)?  With faculty careers depending on research productivity--and, one must be frank, that means universities' interest in getting the grants, with their overhead, as well as the consequent publications their offices can spin about--how much or how often is research on humans or animals done in ways that, really, are almost wholly about our careers, not theirs?

We raise animals, often under miserable conditions, to slaughter and eat them.  Lab animals often have protected, safe conditions until we decide to end their lives, and then we do that mostly without pain or terror to them.  They would have no life at all, no awareness experience, without our breeding them.  Where is the line to be drawn?

Similar issues apply to human subjects, even those involved in social or psychological surveys that really involve no risk except, perhaps, a possible breach of confidentiality about sensitive issues.  And medical procedures really do need to be tested to see if they work, and working on animals can only take this so far.  We may have to 'experiment' on humans in disease-related settings by exploring things we really can't promise will work, or that will not leave the test subjects worse off.

More disturbing to us is that the idea that subjects are really 'informed' when they sign informed consent is inevitably far off the mark.  Subjects may be desperate, dependent on the investigator, or volunteer because they are good-willed and socially responsible, but they rarely understand the small print of their informedness, no matter how educated they are or how sincere the investigators are. More profoundly, if the investigators actually knew all the benefits and risks, they wouldn't need to do the research.  So even they themselves aren't fully 'informed'.  That's not the same as serious or draconian malpractice, and the situation is far from clear-cut, which is in a sense why some sort of review board is needed.  But how do we make sure that it works effectively, if honor is not sufficient?

What are this chimp's proper civil 'rights'?  From the linked BBC story.

Then there are questions about the more human-like animals.  Chimps have received some protections.  They are so human-like that they have been preferred or even required model systems for human problems.  We personally don't know about restrictions that may apply to other great apes. But monkeys are now being brought into the where-are-the-limits question.  A good journalistic treatment of the issue of animal 'human' rights is on today's BBC website. In some ways, this seems silly, but in many ways it is absolutely something serious to think about.  And what about cloning Neanderthals (or even mammoths)?  Where is the ethical line to be drawn?

These are serious moral issues, but morals have a tendency to be rationalized, and cruelty to be euphemized.  When and where are we being too loose, and how can we decide what is right, or at least acceptable, to do as we work through our careers, hoping to leave the world, or at least humankind, better off as a result?

Monday, May 25, 2015

QC and the limits of detection

It’s been a while since I’ve blogged about anything – things have been quite busy around SMRU and I haven't had many chances to sit down and write for fun [i] (some press about our current work here, here, here and here). 

Today I’m sitting in our home, listening to waves of bird and insect songs, and the sound of a gentle rain.  It is the end of another hot season and the beginning of another rainy season, now my 5th in a row here in Northwestern Thailand and our (my wife Amber and son Salem, and me) 3rd rainy season since moving here full time in 2013.  The rains mean that the oppressive heat will subside a bit.  They also mean that malaria season is taking off again. 


Just this last week the final chapter of my dissertation was accepted for publication in Malaria Journal.  What I’d like to do over a few short posts is to flesh out a couple of parts of this paper, hopefully to reach an audience that is interested in malaria and infectious diseases, but perhaps doesn’t have time to keep up on all things malaria or tropical medicine.  This also gives me a chance to go into more detail about specific parts of the paper that I needed to condense for a scientific paper. 


The project that I worked on for my dissertation included several study villages, made up of mostly Karen villagers, on the Thai side of the Thailand-Myanmar border. 

This particular research had several very interesting findings. 

In one of the study villages we did full blood surveys, taking blood from every villager who would participate, every 5 months, for a total of 3 surveys over 1 year.  These blood surveys included making blood smears on glass slides as well as taking blood spots on filter papers that could later be PCRd to test for malaria.  Blood smears are the gold standard of malaria detection (if you're really interested, see here and here).  A microscopist uses the slides to look for malaria parasites within the blood.  Diagnosing malaria this way requires some skill and training.  PCR is generally considered a more sensitive means of detecting malaria, but isn’t currently a realistic approach to use in field settings.[ii]

Collecting blood, on both slides (for microscopic detection of malaria)
and filter papers (for PCR detection of malaria)


The glass slides were prepared with a staining chemical and immediately read by a field microscopist, someone who works at a local malaria clinic, and anyone who was diagnosed with malaria was treated.  The slides were then shipped to Bangkok, where an expert microscopist, someone who has been diagnosing malaria using this method for over 20 years and who trains others to do the same, also read through the slides.  Then, the filter papers were PCRd for malaria DNA.  In this way we could compare three different modes of diagnosing malaria: the field microscopist, the expert microscopist, and PCR.

And basically what we found was that the field microscopist missed a whole lot of malaria.  Compared to PCR, the field microscopist missed about 90% of all cases (detecting 8 cases compared to the 75 that were detected by PCR).  Even the expert microscopist missed over half of the cases, detecting only 34 infections.

What does this mean though? 

Let's start with how this happens.  To be fair, it isn’t just that the microscopists are bad at what they do.  There are at least two things at play here: one has to do with training and quality control (QC), while the other has to do with limits of detection.

Microscopy requires proper training, upkeep of that training, quality control systems, and upkeep of the materials (reagents, etc.).  In a field setting, all of these things can be difficult.  Mold and algae can grow inside a microscope.  The chemicals used in microscopy, in staining the slides, etc., can go bad, and probably will more quickly under very hot and humid conditions.  In more remote areas, retraining workshops and frequent quality control testing are more difficult to accomplish and therefore less likely to happen.  There is a brain-drain problem too.  Many of the most capable laboratory workers leave remote settings as soon as they have a chance (for example, if they can get a better salary and benefits for doing the same job elsewhere – perhaps by becoming an “expert microscopist”?).

Regarding the second point, I think that most people would expect PCR to pick up more cases than microscopy.  In fact, there is some probability at play here.  When we prick someone’s finger and make a blood smear on a glass slide, there is a possibility that even if there are malaria parasites in that person’s blood, there won’t be any in the blood that winds up on the slide.  The same is true when we make a filter paper for doing PCR work.  Moreover, the microscopist is unlikely to look at every single spot on the glass slide, so there is further probability at play: there could be parasites on the slide, but they may only be in a far corner where the microscopist doesn’t happen to look.  These are the infections that would hopefully be picked up through PCR anyway.

Presumably, many of these infected people had a low level of parasitemia, meaning relatively few parasites in their blood, making it more difficult to catch the infection through microscopy.  Conversely, when people have lots of parasites in their blood, the infection should be easier to catch regardless of the method of diagnosis.
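To get a rough feel for why low parasitemia matters, here is an illustrative sketch of my own (it is not an analysis from the paper, and the examined-volume figure is an assumption): treat the number of parasites landing in the tiny volume of blood a microscopist actually scans as a random draw.  At low densities, the chance that the scanned blood contains no parasites at all becomes substantial.

```python
# A rough sketch of the sampling problem (illustrative numbers only):
# if parasites are sparse, the small volume of blood actually examined
# may simply contain none of them.  Assume a thick-smear read covers
# very roughly 0.25 microlitres of blood and that parasites land in that
# volume at random (Poisson), so
#   P(at least one parasite seen) = 1 - exp(-density * volume).
import math

volume_examined_ul = 0.25   # assumed blood volume actually scanned, microlitres

for density_per_ul in (0.1, 1, 4, 16, 64, 256):   # parasites per microlitre
    expected_in_smear = density_per_ul * volume_examined_ul
    p_detect = 1 - math.exp(-expected_in_smear)
    print(f"{density_per_ul:>6} parasites/uL -> "
          f"chance of any parasite in the examined blood: {p_detect:.1%}")

# At low parasitemia an infection can easily be 'submicroscopic' even for a
# flawless microscopist; PCR, which amplifies DNA, has a lower limit of detection.
```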


These issues lead to a few more points. 

Some people have very few parasites in their blood while others have many.  The common view on this is that in high transmission[iii] areas, people will be exposed to malaria very frequently during their life and will therefore build up some immunity.  These people will have immune systems that can keep malaria parasite population numbers low, and as a result should not feel as sick.  Conversely, people who aren’t frequently exposed to malaria would not be expected to develop this type of “acquired” (versus inherited or genetic) immunity.  Here in Southeast Asia, transmission is generally considered to be low – and therefore I (and others) wouldn’t normally expect high levels of acquired malaria immunity.  Why then are we finding so many people with few parasites in their blood? 

Furthermore, those with very low numbers of parasites may not know they’re infected.  In fact, even if they are tested by the local microscopist they might not be diagnosed (probably because they have a “submicroscopic” infection).  From further work we’ve been doing along these lines at SMRU and during my dissertation work, it seems that many of these people don’t have symptoms, or if they do, those symptoms aren’t very strong (that is, some are “asymptomatic”).

It also seems like this isn’t exactly a rare phenomenon and this leads to all sorts of questions:  How long can these people actually carry parasites in their blood – that is, how long does a malaria infection like this last?  In the paper I’m discussing here we found a few people with infections across multiple blood screenings.  This means it is at least possible that they had the same infection for 5 months or more (people with no symptoms, who were only diagnosed by PCR quite a few months later, were not treated for malaria infection).  Also, does a person with very few malaria parasites in her blood, with no apparent symptoms, actually have “malaria”?  If they’re not sick, should they be treated?  Should we even bother telling them that they have parasites in their blood?  Should they be counted as a malaria case in an epidemiological data system?

For that matter, what then is malaria?  Is it being infected with a Plasmodium parasite, regardless of whether or not it is bothering you?  Or do you only have malaria when you're sick with those classic malaria symptoms (periodic chills and fevers)?  

Perhaps what matters most here though is another question: Can these people transmit the disease to others?  Right now we don’t know the answer to this question.  It is not enough to only have malaria parasites in your blood – you must have a very specific life stage of the parasite present in your blood in order for an Anopheles mosquito to pick the parasite up when taking a blood meal.  The PCR methods used in this paper would not allow us to differentiate between life stages – they only tell us whether or not malaria is present.  This question should, however, be answered in future work. 




*** As always, my opinions are my own.  This post and my opinions do not necessarily reflect those of Shoklo Malaria Research Unit, Mahidol Oxford Tropical Medicine Research Unit, or the Wellcome Trust. For that matter, it is possible that absolutely no one agrees with my opinions and even that my opinions will change as I gather new experiences and information.  

    





[ii]  PCR can be expensive, can take time, and requires machinery and materials that aren’t currently practical in at least some field settings.
[iii] In a high transmission area, people would have more infectious bites by mosquitoes per unit of time when compared to a low transmission area.  For example, in a low transmission area a person might only experience one infectious bite per year whereas in a high transmission area a person might have 1 infectious bite per month.  

Thursday, May 14, 2015

Coffee - a guilt-free pleasure?

A lot of research money has been spent trying to find the bad stuff that coffee does to us.  But Monday's piece by Aaron Carroll in the New York Times reviewing the literature concludes that not only is it not bad, it's protective against a lot of diseases.  If he's right, then something that's actually pleasurable isn't sinful after all!  

The piece had so many comments that the author was invited to do a follow-up, answering some of the questions readers raised.  First, Carroll reports that a large meta-analysis looking at the association between coffee and heart disease found that people who drank 3-5 cups a day were at the lowest risk of disease, while those who drank 5-10 had the same risk as those who drank none.

And, by 'coffee', Carroll notes that he means black coffee, not any of the highly caloric, fat- and sugar-laden drinks he then describes.  But it can't be that all 1,270,000 people in the meta-analysis drink their coffee black, so it's odd that he brings this up.  Fat and sugar are our current food demons, yes (speaking of pleasurable!), but really, does anyone have "a Large Dunkin’ Donuts frozen caramel coffee Coolatta (670 calories, 8 grams of fat, 144 grams of carbs)" (Carroll's words) 5-10 times a day?

A lifesaving cup of black coffee; Wikipedia

So: two to six cups of coffee a day are associated with lower risk of stroke; 'moderate' consumption is associated with lower risk of cardiovascular disease; and in some studies (but not others) coffee seems to be associated with a lower risk of cancers, including lung cancer -- unless you're a smoker, in which case the more coffee you drink, the higher your risk.  The more coffee you drink, the lower your risk of liver disease, or of your current stage of liver disease advancing; and coffee is associated with reduced risk of type 2 diabetes.  And, Carroll reports, two meta-analyses found that "drinking coffee was associated with a significantly reduced chance of death."  Since everyone's chance of dying is 100%, this isn't quite right -- what he means, presumably, is that it's associated with lowered risk of death at a given age and, by implication, longer life (though whether that means longer healthy life or not is unclear).

In the follow-up piece, he was asked whether this all applies to decaffeinated coffee as well.  Decaf isn't often studied, but when it is the results are often the same as for caffeinated coffee, though not always.  And sometimes true for tea as well.  So, is it the caffeine or something else in these drinks?

This is an interesting discussion, and it raises a lot of questions, but to me they have more to do with epidemiological methods than with the idea that we should all now feel free to drink as much coffee as we'd like, guilt-free (though, frankly, among 'guilty pleasures', for me coffee isn't nearly pleasurable enough to rank very high on the list!).  Indeed, after presenting all the results, Carroll notes that most of the studies were not randomized controlled trials, the gold standard of epidemiological research.  The best way to actually determine whether coffee is safe, dangerous, or protective would be to compare the disease outcomes of large groups of people randomly assigned to drink 1, 2, 3....10 cups of (black) coffee a day for 20 years.  This obviously can't be done, so researchers do things like ask cancer patients, or people with diabetes or heart disease, how much coffee they drank for the last x years, and compare coffee drinkers with non-drinkers.

So right off the bat, there are recall issues.  Though it's probably true that many people routinely have roughly the same number of cups of coffee every day, so the recall issues won't be as serious as, say, asking people how many times they ate broccoli in the last year.  But still, it's an issue.

More importantly, there are confounding issues, and the lung cancer association is the most obvious one.  If the risk of lung cancer goes up with the number of cups of coffee people drink a day, that's most likely because smokers have a cigarette with their coffee.  Or, have coffee with their cigarette.

Or, less obviously, perhaps people who drink a lot of coffee don't drink, I don't know, diet soda, and diet soda happens to be a risk factor for obesity, and obesity is associated with cancer risk (note: I made that up, more or less out of whole cloth, to illustrate the idea of confounding).

Una tazzina di caffè; Wikipedia

And what about the idea that decaffeinated coffee, and black but not green tea, can have the same effect?  If there really is something protective about these drinks, and we're going to get reductionist about it and finger a single component, what about water?  Could drinking 10 cups of water a day protect against liver disease, say?  True, not all the studies yield the same results, and the black-but-not-green-tea association suggests it's not the water, but not all studies show a protective effect of coffee, either.  But this idea would be easy to test -- Italians drink espresso by the teaspoon and on the run.  Do Italian studies of coffee drinking show the same protective effect?

Remember when drinking drip coffee was associated with increased cholesterol levels?  Carroll writes:
[A]s has been reported in The New York Times, two studies have shown that drinking unfiltered coffee, like Turkish coffee, can lead to increases in serum cholesterol and triglycerides. But coffee that’s been through a paper filter seems to have had the cholesterol-raising agent, known as cafestol, removed. 
High blood pressure and high cholesterol would be of concern because they can lead to heart disease or death. Drinking coffee is associated with better outcomes in those areas, and that’s what really matters.
So, high blood pressure and high cholesterol aren't in fact associated with heart disease or death?  Or only in non-coffee drinkers?  A word about methods is in order.  The results Carroll reviews are based on meta-analyses, that is, analyses that combine the results of sets of independently done studies.  As even Carroll said, some individual studies found an association between coffee and cancer at a particular site, but the effect of meta-analysis was to erase these.  That is, what showed up in a single study was no longer found when studies were combined.  In effect, this sort of pooling assumes homogeneity of causation and eliminates heterogeneity that may be present when considering individual studies, for whatever reason.  Meta-analysis allows far greater total sample sizes, and for that reason has become the final word in studying causation, but it can in fact introduce its own issues.  It gains size by assuming uniformity, and that is a major assumption (almost always untested or untestable), which can amount to a pragmatic way of wishing away subtleties that may exist in the causal landscape.
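Here's a toy version of that pooling problem (made-up numbers, not from any of the coffee studies): a standard fixed-effect, inverse-variance meta-analysis assumes every study is estimating one common effect, so studies with real but opposite-direction findings can average out to roughly nothing.

```python
# A toy illustration (invented numbers) of the pooling issue discussed above:
# inverse-variance (fixed-effect) meta-analysis assumes one common effect.
# If the true effects differ across studies, the pooled summary can sit near
# 'no effect' even though individual studies found real associations.
import numpy as np

# hypothetical log relative risks of disease for coffee drinkers, by study
log_rr  = np.array([ 0.40,  0.35, -0.30, -0.45,  0.05])   # study estimates
std_err = np.array([ 0.10,  0.12,  0.11,  0.15,  0.08])   # their standard errors

weights   = 1 / std_err**2                     # inverse-variance weights
pooled    = np.sum(weights * log_rr) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled log relative risk = {pooled:.3f} (SE {pooled_se:.3f})")
# The pooled estimate hovers near zero, 'erasing' the study-level signals --
# fine if the studies really measure one homogeneous effect, misleading if not.
```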

I'm not arguing here that coffee is actually bad for us, or that it really isn't protective.  My point is just that these state-of-the-art studies exhibit the same methodological issues that plague epidemiological studies of asthma, or obesity, or schizophrenia, or most everything else.  Epidemiology hasn't yet told us the cause of these, or of many other diseases, because of confounding, because of heterogeneity of disease, because of multiple pathways to disease, because it's hard to think of all the factors that should be considered, because of biases and correlated factors, because meta-analysis has its own effects, and so on.

One should keep in mind that last year's True Story, and many from every year before that, had coffee -- a sinful pleasure by our Puritanical standards -- implicated in all sorts of problems, from heart disease to pancreatic cancer, to who knows what else.  Why should we believe today's Latest Big Finding?


Even if drinking coffee is protective to some extent, the effect can't be all that strong, or the results would have been obvious long ago.  And the protective effects surely can't cancel out the effects of smoking, say, or overeating, or 5-10 coffee Coolattas a day.  The moral, once again, must be moderation in all things, not a reductive approach to longer life.

Wednesday, May 13, 2015

Just-So Babies

If you've ever watched a baby eat solid foods, that's DuckFace.

If you've ever seen a shirtless baby, that's DadBod.


Why are we so into these things, whatever they are, right now?


Because whether we realize it or not, they're babylike, which means they're adorable. And all things #adorbs are so #totes #squee right now for the millions (billions?) of social media users in our species. And if they're babylike, they're especially adorable to women and women are more frequently duckfaces than men. And women are increasingly open to embracing, maritally, the non-chiseled men of the world...who knew?


Well, anyone and everyone who's spent a damn second raising a baby, that's who. Especially those with mom genes.


Understanding babies, how they develop, and our connections to them while they do so is key to explaining just about everything, and perhaps literally eh-vuh-ray-thing, about humanity. 
How can I be so sure? Well aren't you?

We all know that the most attractive women are the ones that look like babies. 

source
And to help Nature out, makeup, lasers, and plastic surgery neotenize us temporarily or permanently, making our skin smooth, our eyes big, our lips pouty, our cheeks pinchable and rosy, and our noses button-y.

That stuff about beauty is common knowledge isn't it? We do these things to ourselves because of our evolved preferences for babies. We find them to be so extremely cute that this adaptive bias for babies affects much of the rest of our lives. Beauty is just the tip of the iceberg because, like I said, babies explain everything: DuckFace, DadBod, ...


And, yes, I do have more examples up my sleeve.


All that weight we gain while pregnant? You think it's to stockpile fat for growing a superhuge, supercharged baby brain both before and then after it barely escapes our bipedal pelvis?

Me and Abe, with hardly an inkling that there's still a whopping five more weeks ahead of us... suckers.
Or maybe I gained 20 pounds above and beyond the actual weight of the pregnancy so that I could protect my baby from calorie lulls from disease or food shortage, especially when those things happened more frequently to my ancestors.

Nope. And nope.


Pregnant women gain all that weight so that its lightning fast loss while lactating leaves behind a nice saggy suit of skin for the baby to grab and hold onto--not just on our bellies, but our arms and legs too. Our ancestors were dependent on this adaptation for quite a while, but over time mothers and infants became less dependent on it when they started crafting and wearing slings. Slings reduced selection on a baby's ability to grasp, you know.


Before slings, selection would have been pretty intent on favoring baby-carrying traits in both mothers and babies.  For example, the way that our shoulder joints are oriented laterally, to the side, is unlike all the apes' shoulders, which are oriented more cranially, so they're always kind of shrugging.  You think we have these nice broad shoulders for swinging alongside us while running, for seriously enhancing our throwing ability, and, of course, for making stone tools?

No. No. No.

All that's great for later in our lives, but our lateral-facing shoulder joints are for being picked up and carried around while we're helpless babies. Our sturdy armpits are necessary for our early survival. And, biomechanically, those shoulder joints are oriented in the optimal way for carrying babies too. It's a win-win. Combine that with the shorter hominin forearm, oh, and that itty-bitty thing called hands-free locomotion and it's obvious that we're designed to carry our babies and also to be carried as babies.


Bums come into play here too.


You probably think your big bum's for bipedal endurance running don't you? Or you might assume it evolved to give a stone-tipped spear a lot of extra oomph while impaling a wooly rhino hide.


Wrong. And wrong again.


Our big bums develop early in life because, like armpits, they build grab'n'go babies as well as well-designed grown-up baby carriers.

source
Bums plop nicely on a forearm and most certainly give babies and moms an advantage at staying together. Bums on moms (if not completely liquefied and fed to baby) steady her while holding such a load and also provide something for a baby slung on her back to sit on. Once babies lost the ability to grasp onto moms, babies' bodies had to adapt to be portable objects and moms' bodies had to adapt to never drop those portable objects (at least not too far). No doubt big bums, like sturdy armpits, evolved before slings and home bases were ubiquitous in our species. 

Here's another one: The pregnancy "mask."


All those pigmentation changes that we describe as a side effect of the hormones are much more than that. Those new brown and red blotches that grow on a mother's chest and face, those are functional. They're fascinators. A mother's body makes itself more interesting and loveable for the busy, brainy baby on its way. Once we started decorating our bodies with brown and red ochre and pierced shell, bones and teeth, selection on these biological traits was relaxed. But they still persist. Why not? A human baby can't be over-fascinated, can it?


Oh, and fire. That was the best thing that ever happened to babies which means it was the best thing that ever happened to everyone living with babies. Quiet, serene, fascination, those flames... which also happen to process food for toothless babies whose exhausted, stay-at-foraging parents, would much rather swallow the food they chew up for themselves.


The baby also grows fascinators of its own. The big long hallux. Yep. Our big toe is long compared to other apes'. This is where you say it's an adaptation for bipedalism but you'd be only half right.



© naturepl.com / Ingo Arndt / WWF
The length makes it easier to reach with our mouths, as babies. And we teethe on that big toe. Imagine a world with no Sophies! That's what our ancestors had to deal with. Toes as teething toys doesn't seem so ridiculous when you remember that our long thumbs evolved for sucking.

Anyway, this long hallux was a bit unwieldy so thanks to a lucky mutation we stuck it to the rest of the foot and this turned out to work rather well for bipedalism.


Now that it's been a few minutes into this post, you must be sitting there at your computer thinking about boobs.


Yep, babies explain those too! The aesthetic preference for large breasts, by both males and females, is just nostalgia and allometry. You know how when you go back to visit your old kindergarten it looks so tiny compared to your memory? While you're a small human, you spend quite a lot of time with breasts, focused intently on them. But grow your early impression of breasts up in proportion to your adult body's sense of the world and, well, that's quite a big silicone kindergarten!


Your desires, your preferences, your tastes, your anatomy now, your anatomy when you were a baby... everything is babies, babies, babies. Even bipedalism itself.


Gestating a large fetus would not be possible if we were not bipedal. Think about it. All apes are bipedal to a significant degree. What pressured us into being habitual bipeds? Growing big fat, big-brained babies, that's what. Can you imagine a chimpanzee growing a human-sized fetus inside it and still knuckle-walking? I doubt the body could handle that. The spine alone! If you walk upright and let your pelvis help to carry that big fetus, you're golden. Obviously it worked for us.


I could go on forever! But I'll just give you one more example today. It's one you didn't see coming.


Women live longer than men, on average, and a large portion of that higher male mortality rate (at older ages) is due to trouble with the circulatory system. Well, it's obvious why. I'm looking at my arms right now and, complementing these brown and red fascinators, another part of my new-mom suit is this web of ropy blue veins. Is this because my baby's sucked up all my subcutaneous fat from under my saggy skin, or... Or! Is it because my plumbing's stretched after housing and pumping about 50% more blood than normal by the third trimester? If my pipes are now, indeed, relatively larger for my blood volume and my body size then, all things being equal, that should reduce my risk of clogging and other troubles. Most women experience a term pregnancy during their lives. I'm sure this explains most if not all of the differences in mortality between men and women.


Like I said, that's just the start. And although I haven't provided evidence for many of the things I wrote, that shouldn't matter. These are just-so stories and they're terribly fun to think about. They're nothing close to approximating anything as lovely as Kipling's but they're what we humans do. If you're not a fan of today's post, hey, it's not like it passed peer review!

Tuesday, May 12, 2015

N=1 drug trials: yet another form of legerdemain?

The April 30 issue of Nature has a strange commentary ("Personalized medicine: time for one-person trials," by Nicholas Schork) arguing for a new approach to clinical trials, in which individuals are the focus of entire studies, the idea being that personalized medicine is going to be based on what works for individuals rather than on what works for the average person.  This, we would argue, shows the mental tangles and gyrations being undertaken to salvage something that is, for appropriate reasons, falling far short of expectation, and threatening big business as usual.  The author is a deservedly highly regarded statistical geneticist, and the underlying points are clearly made.

A major issue is that the statistical evidence shows that many important and costly drugs are now known to be effective in only a small fraction of the patients who take them.  That is shown in this figure from Schork's commentary.  For each of 10 important drugs, the blue icons represent people with positive results, and the red icons the relative number of people who do not respond successfully to the drug.


Schork calls this 'imprecision medicine', and asks how we might improve our precision.  The argument is that large-scale sampling is too vague or generic to provide focused results.  So he advocates samples of size N=1!  This seems rather weird, since you can hardly find associations that are interpretable from a single observation; did a drug actually work, or would the person's health have improved despite the drug, e.g.? But the idea is at least somewhat more sensible: it is to measure every possible little thing on one's chosen guinea pig and observe the outcome of treatment.

"N-of-1" sounds great and, like Big Data, is sure to be exploited by countless investigators to glamorize their research, make their grant applications sound deeply insightful and innovative, and draw attention to their profound scientific insights.  There are profound issues here, even if it's too much yet another PR-spinning way to promote one's research.  As Schork points out, major epidemiological research, like drug trials, uses huge samples with only very incomplete data on each subject.  His plea is for far more individually intense measurements on the subjects.  This will lead to more data on those who did or didn't respond.  But wait.....what does it mean to say 'those'?

In fact, it means that we have to pool these sorts of data to get what will amount to population samples.  Schork writes that "if done properly, claims about a person's response to an intervention could be just as well supported by a statistical analysis" as standard population-based studies. However, it boils down to replication-based methods in the end, and that means basically standard statistical assumptions.  You can check the cited reference yourself if you don't agree with our assessment.

That is, even while advocating N-of-1 approaches, the conclusion is that patterns will arise when a collection of such person-trials is looked at jointly.  In a sense, this really boils down to collecting more intensive information on individuals rather than just collecting generic aggregates. It makes sense in that way, but it does not get around the problem of population sampling and the statistical gerrymandering typically needed to find signals that are strong or reliable enough to be important and generalizable.
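To make the pooling point concrete, here is a minimal sketch in Python, entirely our own toy illustration rather than Schork's method or data.  Suppose each hypothetical participant alternates on-drug and off-drug periods, and we summarize each person by the mean difference between their on and off periods; deciding whether the drug 'works' in general then reduces to an ordinary one-sample analysis across people, which is just standard population statistics under another name.

# Toy sketch (not Schork's method or data): pooling hypothetical N-of-1 crossover trials.
# Each simulated person alternates on-drug and off-drug periods; their personal "effect"
# is the mean on-minus-off difference.  Judging the drug in general then means pooling
# those per-person effects across people, i.e., ordinary population-level statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def person_trial(true_effect, n_periods=6, noise=1.0):
    # One hypothetical person's N-of-1 trial: n_periods on the drug, n_periods off it.
    on = rng.normal(true_effect, noise, n_periods)   # outcomes during on-drug periods
    off = rng.normal(0.0, noise, n_periods)          # outcomes during off-drug periods
    return on.mean() - off.mean()                    # that person's estimated effect

# Suppose only a minority of people respond, echoing the 'imprecision medicine' figure.
effects = [person_trial(1.5) for _ in range(10)] + [person_trial(0.0) for _ in range(30)]

# Pooling the single-person results is just a one-sample test across people.
t, p = stats.ttest_1samp(effects, 0.0)
print(f"mean pooled effect = {np.mean(effects):.2f}, t = {t:.2f}, p = {p:.3f}")

The point of the exercise is only that, once you ask whether an effect generalizes, the per-person detail gets funneled back into exactly the kind of aggregate statistics the N-of-1 label seems to promise an escape from.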

Better and more focused data may be an entirely laudable goal, if quality control and so on can in some way be ensured.  But beyond this, N-of-1 seems more like a shell game or an illusion in important ways.  It's a sloganized way to get around the real truth, of causal complexity, that the scientific community (including us, of course) simply has not found adequate ways of understanding; or, if we have, then we've been dishonorably ignoring what we know while making false promises to the public who support our work and who seem to believe what scientists say.

It's a nice idea, or perhaps one should say 'nice try'.  But it really strikes one as more wishful than novel thinking, a way to keep motoring along with the same sorts of approaches, looking for associations without good theoretical or prior functional knowledge.  And, of course, it's another way to get in on the million-genome, 'precision medicine'© gravy train. It's a different sort of plea for the usual view that intensified reductionism, the enumeration of every scrap of data one can find, will lead to an emerging truth.  Sometimes, for sure, but how often is that likely?

We often don't have such knowledge, but whether there is or isn't a conceptually better way, rather than a kind of 'trick' to work around the problem, is the relevant question.  There will always be successes, some lucky and some due to appropriately focused data.  The plea for more detailed knowledge of, and treatment adjustments for, individual patients goes back to Hippocrates and should not be promoted as a new idea.  Medicine is still largely an art and still involves intuition (ask any thoughtful physician if you doubt that).

However, retrospective claims usually stress the successes, even if they are one-off rather than general, while neglecting the overall lack of effectiveness of the approach, as an excuse to avoid facing up fully to the problem of causal complexity.  What we need is not more slogans, but better ideas, better questions, more realistic expectations, and really new thinking.  The best way of generating the latter is to stop kidding ourselves by encouraging investigators, especially young investigators, to dive into the very crowded reductionist pool.

Monday, May 11, 2015

Disenfranchisement by differential death rates

A reader sent us a pdf the other day of a paper with an interesting interpretation of black/white differential mortality, hoping we'd write about it.  The paper, by Rodriguez et al., is called "Black lives matter: Differential mortality and the racial composition of the U.S. electorate, 1970–2004."  The authors analyzed the effects of excess mortality in marginalized populations on the composition of the electorate in the US between 1970 and 2004, and conclude that mortality differentials mean fewer African American voters, and that this can turn elections, as they demonstrate for the election of 2004.

The authors used cause of death files for 73 million US deaths spanning 34 years to calculate:
(1) Total excess deaths among blacks between 1970 and 2004, (2) total hypothetical survivors to 2004, (3) the probability that survivors would have turned out to vote in 2004, (4) total black votes lost in 2004, and (5) total black votes lost by each presidential candidate.
This is straightforward demography: what was the death rate for whites in a given age group in a given year, and how does the black death rate compare?  This allows Rodriguez et al. to estimate excess deaths (relative to whites) at every age, and then, knowing the proportion of every age group that votes historically, and the proportion of those votes that go to each major party, they can estimate how many votes were lost, and how they might have changed election outcomes.  Clever, and important.
We estimate 2.7 million excess black deaths between 1970 and 2004. Of those, 1.9 million would have survived until 2004, of which over 1.7 million would have been of voting-age. We estimate that 1 million black votes were lost in 2004; of these, 900,000 votes were lost by the defeated Democratic presidential nominee. We find that many close state-level elections over the study period would likely have had different outcomes if voting age blacks had the mortality profiles of whites.  US black voting rights are also eroded through felony disenfranchisement laws and other measures that dampen the voice of the US black electorate. Systematic disenfranchisement by population group yields an electorate that is unrepresentative of the full interests of the citizenry and affects the chance that elected officials have mandates to eliminate health inequality.
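To see the shape of that arithmetic, here is a toy sketch in Python with made-up placeholder numbers for a single age group in a single year; it is our own illustration, not the authors' data or life-table code.  The paper's totals come from carrying out this kind of calculation across every age group and every year from 1970 to 2004.

# Toy sketch of the arithmetic described above (not the authors' actual code or data).
# All numbers are made-up placeholders for one age group in one year.
black_pop        = 1_000_000   # black population in this age group, this year
black_death_rate = 0.008       # observed black death rate
white_death_rate = 0.005       # observed white death rate, used as the comparison standard

# (1) Excess deaths: deaths beyond what the white rate would have produced.
excess_deaths = (black_death_rate - white_death_rate) * black_pop

# (2)-(3) Hypothetical survivors to 2004 who would have been of voting age.
p_survive_to_2004 = 0.70       # assumed probability of surviving to 2004
p_voting_age      = 0.90       # assumed fraction of those survivors of voting age
survivors = excess_deaths * p_survive_to_2004 * p_voting_age

# (4)-(5) Votes lost, split by party, using historical turnout and vote shares.
turnout   = 0.56               # assumed turnout for this group in 2004
dem_share = 0.88               # assumed share of the group's vote going to the Democrat
votes_lost     = survivors * turnout
dem_votes_lost = votes_lost * dem_share

print(f"excess deaths: {excess_deaths:,.0f}")
print(f"votes lost in 2004: {votes_lost:,.0f} (Democratic: {dem_votes_lost:,.0f})")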
Throughout the 20th century, they write, black mortality was 60% higher than white mortality, on average.  Why?  "Predominantly black neighborhoods are characterized by higher exposure to pollution, fewer recreational facilities, less pedestrian-friendly streets/sidewalks, higher costs for healthy food, and a higher marketing effort per capita by the tobacco and alcohol industries."  In addition, black neighborhoods have less access to medical facilities, the proportion of the black population that's uninsured is higher than the proportion of whites, and exposure to daily racism takes a toll on health, among other causes.

Age distributions of the deceased by race, Rodriguez et al, 2015

The resulting differential mortality, the authors suggest, influenced many local as well as national elections in 2004.  And, given the deep, insidious effects Rodriguez et al. report, it may well be that excess mortality begets excess mortality: blacks overwhelmingly vote Democratic, and the Republicans who win when black turnout is depressed by differential mortality are the politicians less likely to support, among other things, a role for government in health care.  (To wit, all those serial Republican votes against the Affordable Care Act, and the budget passed just last week by Republicans in the Senate that would do away with the ACA entirely.)  And it has long been known that the uninsured disproportionately die younger, and of diseases for which the insured get medical care.

"Lyndon Johnson and Martin Luther King, Jr. - Voting Rights Act" by Yoichi Okamoto - Lyndon Baines Johnson Library and Museum. Image Serial Number: A1030-17a. Wikipedia

This isn't the only form of disenfranchisement that the African American population is subject to, of course.  Felony disenfranchisement is a major factor, and recent election cycles have seen successful attempts by multiple Republican-led state governments to "control electoral fraud" (the existence of which has been difficult to impossible to prove) by limiting the voting rights of blacks, Hispanics, the young, and the elderly, all groups more likely to vote Democratic.

In the May 21 New York Review of Books, Elizabeth Drew wrote a scathing review of these ongoing attacks.  It's a must-read if you're interested in the systematic, planned, fraudulent co-option of voting rights in the US.  This, coupled with the 2013 Supreme Court decision on the Voting Rights Act, along with the role of big money in politics, is having an impact on democracy in the US.  As Drew wrote,
In 2013 the Supreme Court, by a 5–4 vote, gutted the Voting Rights Act. In the case of Shelby v. Holder, the Court found unconstitutional the sections requiring that states and regions with a history of voting discrimination must submit new voting rights laws to the Justice Department for clearance before the laws could go into effect. Congressman John Lewis called such preclearance “the heart and soul” of the Voting Rights Act. No sooner did the Shelby decision come down than a number of jurisdictions rushed to adopt new restrictive voting laws in time for the 2014 elections—with Texas in the lead.
Republicans can rightly be accused of a lot of planned disenfranchisement of minority, young, and elderly voters, given the extensive gerrymandering of the last 10 years or so and the recent spate of restrictive voter ID laws.  Can they be accused of planning disenfranchisement by differential mortality?  Probably not, though it's pretty likely this report won't spur them into action to address the causes of mortality inequality.  Still, as Rodriguez et al. write:
The current study findings suggest that excess black mortality has contributed to imbalances in political power and representation between blacks and whites. Politics helps determine policy, which subsequently affects the distribution of public goods and services, including those that shape the social determinants of health, which influence disenfranchisement via excess mortality. In the United States, especially after the political realignment of the 1960s, policy prescriptions emanating from government structures and representing ideologically divergent constituencies have influenced the social determinants of health, including those that affect racial disparities. And given the critical role of elected politicians in the policy-making apparatus, the available voter pool is an essential mechanism for the distribution of interests that will ultimately be represented in the policies and programs that affect us all.
It's hard to imagine that the Democrats in power today would champion the kind of Great Society programs Johnson pushed through in the 1960s, but Obama did give us the Affordable Care Act, and whatever you think of it, it's possible that this could have an impact, even if slight, on differential black/white mortality.

As we edge further away from universal enfranchisement in this democracy, for obvious and, as Rodriguez et al. report, less obvious reasons, Ken always points out that we should step back and ask how different the kind of minority rule we've got now (in effect, 1% rule, with wealthy legislators passing laws that favor the wealthy) really is from the minority rule that has been the order of the day throughout history.  It's a dispiriting thought, given the flurry of action in favor of equal rights through the latter half of the 1900s.  But equality is not a major theme in political discourse these days.

One way or another, hierarchies of power and privilege are always established.  The minority at the societal top always finds ways to keep the majority down, and to retain inequitable privilege for itself.  Differential mortality of the kind reported here is just one current way that the privilege hierarchy is maintained.  Even the communists, not to mention Christians, whose formal faiths have been against inequity, have been perpetrators.  Maybe there's room for optimism somewhere, but where isn't obvious. One argument is that, at least, the living conditions of those at the socioeconomic bottom (in developed countries) are better than those of their societal ancestors.  Of course equity is itself a human concept, and not one that commands universal agreement.  But this new study is food for thought about ideal societies, and how difficult they are to achieve.

Friday, May 8, 2015

Captain FitzRoy knew best!

Robert FitzRoy (1805-1865), Captain of HMS Beagle, is a famous personage.  He's known mainly by association with his one-time passenger, the naturalist Charles Darwin, on the famous voyage to map the coastline of South America.  FitzRoy was a prominent Royal Navy officer who held many other positions during his career, but he is most widely remembered for his religious fundamentalism and opposition to Darwin's heretical theory of evolution by natural selection.

Among other things, it was FitzRoy who had gathered a collection of Galapagos finches that, some years after the return to England, Darwin asked to borrow to help flesh out his evolutionary ideas in ways that are now quite famous.  FitzRoy bitterly resented having helped Darwin establish this, to him, terrible idea about life.

FitzRoy has paid the price for his stubborn biblical fundamentalism, a price of ridicule.  But that is quite unfair.  FitzRoy was a sailor, and his hydrographic surveys of South America on the Beagle were an important part of the Navy's effort to understand the world's oceans.  His meticulous up-and-down, back-and-forth fathoming of the ocean floor and coastline gave Darwin months of free time to roam the adjacent territory in South America, doing his own kind of geological surveying, which led him to see the physical and biogeographic patterns that accounted for the nature and origin of life's diversity.  That this was heretical to biblical thought was not Darwin's motive at the time, nor is it reasonable to expect that a believer like FitzRoy would accept such a challenge to the meaning of existence lightly.




But FitzRoy, who went on to other distinguished positions in the British government, made a contribution perhaps more innovative than, and at least as important as, his hydrographic surveys: he was the pioneer of formal meteorology, and of weather forecasting.

This contribution of FitzRoy's is important to me, because I was at one time a professional meteorologist (and, indeed, in Britain).  FitzRoy developed extensive material on proper weather-measuring instrumentation (weather vanes, thermometers, barometers), and ways of collecting and analyzing meteorological data.  His main objective was the safety of sailors and their ability to anticipate and avoid dangerous weather.

FitzRoy's work led to the systematic collection of weather data, to an understanding of the association of storms with pressure and temperature changes, and to a picture of the large-scale flow of air and its implications for weather.  He also considered the nature of the sorts of data that could be collected in the 1800s.  He developed some basic forecasting rules based on those data, and understood the importance of maps plotting comparable observations over large areas, of what happened after the time of the map, and of global wind, weather, and pressure patterns, among other things.

Cover page of FitzRoy's book.  Google digitization from UCalifornia


In 1863, just four years after Darwin had made a big stir with his Origin of Species, the then Rear Admiral FitzRoy wrote a popular book, as such things went at the time, called The Weather Book: A Manual of Practical Meteorology.  This is a very clearly written book that shows the state of this new science at the time.  It was an era of inductionism, the wholesale collection of lots and lots of data, based on the Enlightenment view that from data the laws of Nature would emerge.  But FitzRoy, a meticulous and careful observer and thinker, also noticed something important.  As he wrote in The Weather Book:
Objects of known importance should take precedence of any speculative or merely curious observations. However true may be the principle of accumulating facts in order to deduce laws, a reasonable line of action, a sufficiently apparent cause for accumulation, is surely necessary, lest heaps of chaff or piles of unprofitable figures should overwhelm the grain-seeker, or bewilder any one in his search after undiscovered laws. 
Definite objects, a distinct course, should be kept in mind, lest we should take infinite pains in daily registration of facts scarcely less insignificant for future purposes than our nightly dreams.
Does this perhaps ring any relevant bells about what is going on today in many areas of science, including a central one spawned by Mendel: genetics?  Maybe we should be paying a bit more heed to, rather than ridiculing, another of our Victorian antecedents.

Yes, I'm taking impish advantage of a fortuitous quote that I came across, to make a point.  But it is a valid point in today's terms nonetheless, when anything one chooses to throw in the hopper is considered 'data' and applauded.

In fact, modern meteorology is a field in which huge amounts of data collection are, indeed, quite valuable.  But there are legitimate reasons: first, patterns repeat themselves, and recognizing them greatly aids forecasting, both locally and on a continental scale; second, we have a prior theory, based on hydrodynamics, into which to fit these data; and third, using the theory and the data from the past and present, we can predict specific future states reasonably accurately.  These are advantages and characteristics hardly shared by anything comparably rigorous in genetics and evolution, where, nonetheless, raw and expensive inductionism prevails today.
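To illustrate what having a prior theory buys you, here is a deliberately crude sketch of our own in Python, nothing like a real forecast model: grant the toy 'law' that weather patterns are carried eastward at a known speed, and today's observations along a line of stations, combined with that law, yield a prediction for tomorrow.  Without some such law, the same observations are merely a catalogue.

# Crude illustration (our own, not FitzRoy's method): data plus a prior dynamical rule.
# Toy "law": patterns are advected eastward at a known speed, so tomorrow's value at a
# station equals today's value at the station that far upwind.
import numpy as np

stations_km = np.arange(0, 1000, 100)                    # station positions along an east-west line
pressure_today = 1010 + 5 * np.sin(stations_km / 150)    # today's observed pressure field (toy data)
wind_speed_km_per_day = 300                              # assumed advection speed from the "theory"

def forecast_tomorrow(positions, field, speed):
    # Predict tomorrow's field by looking upwind in today's observations;
    # positions west of the first station are simply held at the boundary value.
    upwind_positions = positions - speed
    return np.interp(upwind_positions, positions, field)

print(forecast_tomorrow(stations_km, pressure_today, wind_speed_km_per_day))

The contrast with pure inductionism is the point: the forecast comes from feeding observations into a dynamical rule, not from the observations alone, however many of them one piles up.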