Absolute and relative measures – what’s the difference?

I’m re-working a paper on risk relativism in response to some reviewer comments, and also preparing a talk on the topic for Friday’s meeting at KCL, “Prediction in Epidemiology and Healthcare”. The paper originates in Chapter 8 of my book, where I identify some possible explanations for “risk relativism” and settle on the one I think is best. Briefly, I suggest that there isn’t really a principled way of distinguishing “absolute” and “relative” measures, and instead explain the popularity of relative risk by its superficial similarity to a law of physics, and its apparent independence of any given population. These appearances are misleading, I suggest.

In the paper I am trying to develop the suggestion a bit into an argument. Two remarks by reviewers point me in the direction of further work I need to do. One is the question of what, exactly, the relation between relative risk (RR) and a law of nature is supposed to be. Exactly what character am I supposing that laws have, or that epidemiologists think laws have, such that RR is more similar to a law-like statement than, say, risk difference, or population attributable fraction?

The other is a reference to a literature I don’t know but certainly should, concerning statistical modelling in the social sciences. I am referred to a monograph by Achen in 1982, and a paper by Jan Vandenbroucke in 1987, both of which suggest – I gather – a deep scepticism about statistical modelling in the social sciences. Particularly thought-provoking is the idea that all such models are “qualitative descriptions of data”. If there is any truth in that, then it is extremely significant, and deserves unearthing in the age of big data, Google Analytics, Nate Silver, and generally the increasing confidence in the possibility of accurately modelling real world situations, and – crucially – generating predictions out of them.

A third question concerns the relation between these two thoughts: (i) the apparent law-likeness of certain measures contrasted with the apparently population-specific, non-general nature of others; and (ii) the limitations claimed for statistical modelling in some quarters contrasted with confidence in others. I wonder whether degree of confidence has anything to do with perceived law-likeness. One’s initial reaction would be to doubt this: when Nate Silver adjusts his odds on a baseball outcome, he surely does not take himself to be basing his prediction on a law-like generalisation. Yet on reflection, he must be basing it on some generalisation, since the move from observed to unobserved is a kind of generalising. What more, then, is there to the notion of a law, than generalisability on the basis of instances? It is surprising how quickly the waters deepen.

A taste of my own medicine

Yesterday I briefed the media on my work and recent book on philosophy of epidemiology, ahead of next week’s launch event at the University of Johannesburg, and today one piece appeared in the Times (here) and two (here and here) in the Star. All the pieces are reasonably fair, and the latter two in particular are more conceptually focused, and thus quite a nice reflection of what I try to do. But it’s interesting for me that what grabbed the most attention were largely empirical claims. A couple of radio stations picked up on the claim that the vitamin supplements industry is a “con”, appearing in the Times piece, and I was interviewed at lunchtime today by Talk 702 and RSG. Both homed in on my claims about vitamins. Talk 702 asked if I expected any defamation actions. I guess this is how the media works – you never quite know which part of what you say is going to be amplified over the rest. That said, I am very grateful to the Times journalist that the context of my “con” claim was included in the piece.

For interest, I thought I would upload the presentation I gave yesterday. Not much about vitamins in there, you will see: 2013-09-10 Media Briefing – Philosophy of Epidemiology

Is being overweight good for you?

There has recently been a dispute about obesity and mortality, briefly as follows. A paper reporting a systematic review was published in the Journal of the American Medical Association (JAMA) earlier this year by Katherine Flegal of the US National Center for Health Statistics, in which Flegal and colleagues suggested that being overweight was associated with lower all-cause mortality than being normal weight. They confirmed that being obese, as opposed to merely overweight, is associated with higher mortality. This provoked a public response from Walter Willett, Chair of Epidemiology and Nutrition at Harvard, who called the paper “a pile of rubbish” and suggested that “no one should waste their time reading it”. The reasons for this reaction that have received the widest circulation are that such results undermine public health efforts, which require simple messages.

There is a thoughtful editorial in Nature here, siding with Flegal against Willett:

http://www.nature.com/news/the-big-fat-truth-1.13039

The argument developed by the editors of Nature is that by simplifying public messages about science, we open the door to easy refutation. They draw an interesting comparison with messages about climate change. The line that climate change is settled science makes the life of opposing lobbyists easier, they suggest, because all those lobbyists need to do is point to any kind of scientific controversy about any detail of climate change science, and they will have demonstrated that the public message is false, and perhaps even known to be false by the very people pushing it. Likewise, the editors argue, over-simple messages on obesity and mortality will be easy to falsify. In both cases, this leads to disputes about details and obscures the big picture, which is clear, even if the exact point at which bodyweight becomes a problem (or the exact rate of rise in sea levels) is still a topic of active research.

A couple of thoughts struck me as I read this editorial, and the coverage thereof. First, most obviously, none of the parties seem that interested in truth. The question seems to be: what is the most effective communication strategy for improving public health? – and not: for communicating the truth of the matter in question? The editors of Nature claim that communicating the truth of the matter in question is the most effective strategy for improving public health, but they do not argue (here) for the intrinsic value of promoting scientific knowledge.

Perhaps that shouldn’t surprise me, but it does. Science derives its claim to a special voice in social and political matters from its impartiality and transparency, and not merely from the fact that it gets the right answers (when indeed it does).

My second thought is that this is, perhaps, a point at which medical and scientific traditions conflict. In science, there is no “patient”, and scientific authority is based on reason, at least in the Enlightenment ideal. (HPS colleagues, you will please forgive my simplification here, but perhaps you will enjoy the associated irony.) There is no analogue of the doctor-patient relationship in science, or between science and society more generally. I wonder if disputes of this kind about public health messages illustrate a conflict between two ways of thinking about how technical information ought to be conveyed: one broadly educational, as a physicist might seek to communicate her findings to a wider audience; the other broadly manipulative, in the (intended-as-neutral) sense of seeking to influence the behaviour of the public, as if it were a patient being treated.

There are of course a number of other things that might be said about this particular study, and which Willett may well have been referring to in his comments. Body-mass index is an outdated and crude measure of adiposity, devised when people were on average slighter, and it puts even mildly athletic persons with little body fat but a solid build in the “overweight” category. Then there are various criticisms of systematic reviews. Finally, there is the fact that a causal inference on the basis of an association like this would be excessively hasty. Of course, the conclusion of the study is not that there is a causal link between being overweight and living longer; but being coy about causal talk is a poor substitute for being clear that the evidence is compatible with a range of very different causal stories.

It is surprising that such high-profile studies remain open to such basic objections. If public health messages are to be more honest and more complicated, then perhaps objections could also be more widely communicated, and in particular causal euphemisms and hints could be replaced by a franker admission of ignorance. Willett is not doing anything special by advocating a simplified message; he is just advocating a more explicit, less euphemistic kind of simplification, and a different emphasis – less kindly, spin – from that adopted by the authors. To this extent, the rebuke in Nature is, I suggest, not entirely fair.

The Myth of Translation

Next week I am part of a symposium at EuroEpi in Porto, Portugal with the title Achieving More Effective Translation of Epidemiologic Findings into Policy when Facts are not the Whole Story.

My presentation is called “The Myth of Translation” and the central thesis is, as you would guess, that talk of “translating” data into policy, discoveries into applications, and so forth is unhelpful and inaccurate. Instead, I am arguing that the major challenge facing epidemiological research is assuring non-epidemiologists who might want to rely on epidemiological results that those results are stable, meaning that they are not likely to be reversed in the near future.

I expect my claim to be provocative in two ways. First, the most obvious reasons I can think of for the popularity of the “translation” metaphor, given its clear inappropriateness (which I have not argued here but which I argue in the presentation), are unpleasant ones: the claiming of scientific authority for dearly held policy objectives; or blaming some sort of translational failing for what are actually shortcomings (or, perhaps, over-ambitious claims) in epidemiological research. This point is not, however, something I intend to emphasize; nor am I sure it is particularly important. Second, the claim that epidemiological results are reasonably regarded by non-epidemiologists as too unstable to be useful might be expected to raise a bit of resistance at an epidemiology conference.

Given the possibility that what I have to say will be provocative, I thought I would try my central positive argument out here.

(1) It is hard to use results which one reasonably suspects might soon be found incorrect.

(2) Often, epidemiological results are such that a prospective user reasonably suspects that they will soon be found incorrect.

(3) Therefore, often, it is hard to use epidemiological results.

I think this argument is valid, or close enough for these purposes. I think that (1) does not need supporting: it is obviously true (or obviously enough for these purposes). The weight is on (2), and my argument for (2) is that from the outside, it is simply too hard to tell whether a given issue – for example, the effect of HRT on heart disease, or the effect of acetaminophen (paracetamol) on asthma – is still part of an ongoing debate, or can reasonably be regarded as settled. The problem infects even results that epidemiologists would widely regard as settled: the credibility of the evidence on the effect of smoking on lung cancer is not helped by reversals over HRT, for example, because from the outside, it is not unreasonable to wonder what the relevant difference is between the pronouncements on HRT and the pronouncements on lung cancer and smoking. There is a difference: my point is that epidemiology lacks a clear framework for saying what it is.
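For what it’s worth, the validity of the argument’s form can be checked mechanically. Here is a hypothetical sketch in Lean, with the quantifier “often” weakened to “some” (which loses some of the argument’s force but preserves its shape): E stands for “is an epidemiological result”, S for “a prospective user reasonably suspects it will soon be found incorrect”, and H for “is hard to use”.

```lean
-- A sketch of the argument's form, with "often" weakened to "some".
-- E : is an epidemiological result
-- S : reasonably suspected of being soon found incorrect
-- H : hard to use
example {α : Type} (E S H : α → Prop)
    (p1 : ∀ x, S x → H x)       -- (1) suspect results are hard to use
    (p2 : ∃ x, E x ∧ S x) :     -- (2) some epidemiological results are suspect
    ∃ x, E x ∧ H x :=           -- (3) some epidemiological results are hard to use
  p2.elim fun x h => ⟨x, h.1, p1 x h.2⟩
```

The sketch confirms that the real weight falls on premise (2), as the text says: the form is unimpeachable, so any resistance must be to the premises.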

My claim, then, is that the main challenge facing the use of epidemiological results is not “translation” in any sense, but stability; and that devising a framework for expressing to non-epidemiologists (“users”, if you like) how stable a given result is, given best available current knowledge, is where efforts currently being directed at “translation” would be better spent.

Comments on this line of thought would be very welcome. I am happy to share the slides for my talk with anyone who might be interested.