Is breast best?

Following my post on the recent debate about being overweight, my attention was drawn to a recent systematic review of the long-term effects of breastfeeding published by the World Health Organisation (WHO). The report raises some related issues, and some that are interesting in their own right.

The review suggests that the long-term benefits of breast-feeding are not especially dramatic. This contrasts with the attitude to breast-feeding in some, though by no means all, developed countries, where it is strongly encouraged. This raises the question of why it is encouraged so strongly, and why parents should feel “guilt” (as one news report puts it) if they do not breast-feed.

Two points strike me about this. First, as in the discussion about the correct message on being overweight, there may be an argument that the simple message is the right one even if it isn’t accurate. On this line of argument – which I don’t endorse – the message “breast is best” ought to be put about even if the reality is more complicated. Maybe high-quality formula made with sterilised water will not be significantly worse for a healthy baby than breast milk (goes this line of thought); however, there are many situations where breast-feeding can make the difference between life and death – where water quality is dubious, or where high-quality formula is too expensive, for example. So better to stick with “breast is best”, and let some conscientious persons endure guilt that is strictly unnecessary. Besides, there is no evidence that breast is not best – only that it may not make as much of a difference as the slogan suggests.

As in the case of the message on being overweight, I think this argument is not a good one. A distinction must be drawn between necessary simplification for communication purposes, which we all do, all the time; and making assertions that go beyond the evidence. Perhaps we all do that too, but we shouldn’t. The worrying possibility that this systematic review raises is that the breast-feeding drive in some countries – the UK is one I know a little about – has gone well beyond the evidence that would justify it, which presumably means it is driven by ideology or conviction more than by evidence. The persistence and dangers of medical convictions, albeit deeply and honestly held, are well-known, most famously associated with the practice of blood-letting, and most recently highlighted by the evidence-based medicine movement.

The second point occurring to me is that there is no study that can settle the question of what is best for a given child. The slogan “breast is best” is ambiguous, because “best” can be read in more than one way. It could mean nutritionally best, in which case, the study seems to confirm that the slogan is true. Or it could mean best for the child, all things considered. In many situations, the two go together, especially in low-resource settings for reasons previously mentioned. But they are not the same. In higher-resource settings, it is easy to imagine breast-feeding having considerable costs for a child, all things considered. A breast-feeding mother may be less able to pursue her career, which may affect household income; and that appears to be an important social determinant of health. The psychological effect on the mother of forsaking professional opportunities might also have an impact on the child and the family as a whole. This will obviously depend on the particular mother and on the particular family. Some mothers might be in a position to balance demands; some may not have or want jobs or careers of a sort that would be impacted. But there are at least some women who see motherhood as forcing them to make sacrifices in their professional lives. Bottle-feeding may reduce the extent of these sacrifices, making it possible for other people to look after the child during the day, and indeed the night. In a nutshell, one size doesn’t fit all, contrary to what the “breast is best” slogan suggests. It could be that bottle, not breast, is best for a particular child, when all the circumstances are considered.

The review raises this possibility because the long-term advantage of breast-feeding appears to be real, but small. For example, there appears to be a positive causal link between breast-feeding and IQ. But the effect is small, and as the report points out, it isn’t clear how much of an advantage a small improvement in performance on intelligence tests really is (or really indicates, perhaps). It is easy to imagine a small advantage in intelligence being outweighed by factors such as having a higher household income, or simply having a happier mother. And of course if we expand the scope of outcomes beyond the strictly health-related then the imaginative task becomes even easier.

These issues are complex. Nonetheless it seems to me that there is a difference between simplifying and distorting. Breast-feeding makes significant demands of a particular individual (the mother), which are in part justified by the health benefits for another (the child). Because these demands are so significant, and because the moral compulsion involved is so strong, it cannot be right to overstate the advantages of breastfeeding relative to nearly-as-good alternatives, where these are available. Nor can it be right to ignore the possibility of socially mediated health effects of breastfeeding (household income, family happiness, etc.) and focus exclusively on nutritional effects. The slogan “breast is best” may be appropriate in areas riddled with cholera and dysentery (I make no judgement about that) but I doubt that it is justified in, for example, the UK, and relevantly similar countries.

Is being overweight good for you?

There has recently been a dispute about obesity and mortality, briefly as follows. Earlier this year, Katherine Flegal of the US National Center for Health Statistics and colleagues published a systematic review in the Journal of the American Medical Association (JAMA) suggesting that being overweight was associated with lower all-cause mortality than being of normal weight. They confirmed that being obese, as opposed to merely overweight, is associated with higher mortality. This provoked a public response from Walter Willett, Chair of Epidemiology and Nutrition at Harvard, who called the paper “a pile of rubbish” and suggested that “no one should waste their time reading it”. The reason for this reaction that has received the widest circulation is that such results undermine public health efforts, which require simple messages.

There is a thoughtful editorial in Nature here, siding with Flegal against Willett:

http://www.nature.com/news/the-big-fat-truth-1.13039

The argument developed by the editors of Nature is that by simplifying public messages about science, we open the door to easy refutation. They draw an interesting comparison with messages about climate change. The line that climate change is settled science makes the life of opposing lobbyists easier, they suggest, because all those lobbyists need to do is point to any kind of scientific controversy about any detail of climate change science, and they will have demonstrated that the public message is false, and perhaps even known to be false by the very people pushing it. Likewise, the editors argue, over-simple messages on obesity and mortality will be easy to falsify. In both cases, this leads to disputes about details and obscures the big picture, which is clear, even if the exact point at which bodyweight becomes a problem (or the exact rate of rise in sea levels) is still a topic of active research.

A couple of thoughts struck me as I read this editorial, and the coverage thereof. First, most obviously, none of the parties seem that interested in truth. The question seems to be: what is the most effective communication strategy for improving public health? – not: what is the most effective strategy for communicating the truth of the matter in question? The editors of Nature claim that communicating the truth of the matter is the most effective strategy for improving public health, but they do not argue (here) for the intrinsic value of promoting scientific knowledge.

Perhaps that shouldn’t surprise me, but it does. Science derives its claim to a special voice in social and political matters from its impartiality and transparency, and not merely from the fact that it gets the right answers (when indeed it does).

My second thought is that this is, perhaps, a point at which medical and scientific traditions conflict. In science, there is no “patient”, and scientific authority is based on reason, at least in the Enlightenment ideal. (HPS colleagues, you will please forgive my simplification here, but perhaps you will enjoy the associated irony.) There is no analogue of the doctor-patient relationship in science, or between science and society more generally. I wonder if disputes of this kind about public health messages illustrate a conflict between two ways of thinking about how technical information ought to be conveyed: one broadly educational, as a physicist might seek to communicate her findings to a wider audience; the other broadly manipulative, in the (intended-as-neutral) sense of seeking to influence the behaviour of the public, as if it were a patient being treated.

There are of course a number of other things that might be said about this particular study, and which Willett may well have been referring to in his comments. Body-mass index is an outdated and crude measure of adiposity, devised when people were on average slighter, and it puts even mildly athletic persons with little body fat but a solid build in the “overweight” category. Then there are various criticisms of systematic reviews. Finally, there is the fact that a causal inference on the basis of an association like this would be excessively hasty. Of course, the conclusion of the study is not that there is a causal link between being overweight and living longer; but being coy about causal talk is a poor substitute for being clear that the evidence is compatible with a range of very different causal stories.
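To make the BMI point concrete: the index is simply weight divided by height squared, so a stocky but lean person can cross the conventional 25 kg/m² line without carrying much fat. Here is a minimal sketch in Python, using the standard WHO cut-offs; the two people in it are invented for illustration.

```python
# Minimal illustration of how crude BMI is as a measure of adiposity.
# The heights and weights below are invented for the example.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def who_category(index: float) -> str:
    """Standard WHO cut-offs: <18.5 underweight, 18.5-25 normal, 25-30 overweight, >=30 obese."""
    if index < 18.5:
        return "underweight"
    if index < 25:
        return "normal"
    if index < 30:
        return "overweight"
    return "obese"

# A solidly built, mildly athletic person with little body fat...
print(round(bmi(88, 1.80), 1), who_category(bmi(88, 1.80)))  # 27.2 overweight
# ...versus a slighter person of the same height.
print(round(bmi(70, 1.80), 1), who_category(bmi(70, 1.80)))  # 21.6 normal
```

The calculation knows nothing about body composition, which is the point: two people with the same BMI can differ greatly in adiposity, and vice versa.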

It is surprising that such high-profile studies remain open to such basic objections. If public health messages are to be more honest and more complicated, then perhaps objections could also be more widely communicated, and in particular causal euphemisms and hints could be replaced by a franker admission of ignorance. Willett is not doing anything special by advocating a simplified message; he is just advocating a more explicit, less euphemistic kind of simplification, and a different emphasis – less kindly, spin – from that adopted by the authors. To this extent, the rebuke in Nature is, I suggest, not entirely fair.

Call for Registration: Evidence in Healthcare Reform

Symposium at the Brocher Foundation, Geneva, 4-5 July 2013

Speakers: Alex Broadbent, Nancy Cartwright, Michael Marmot, Alfredo Morabia, Justin Parkhurst, Anya Plutynski, Jacob Stegenga, and Sridhar Venkatapuram.

Organised by Alex Broadbent (abbroadbent@uj.ac.za) and Sridhar Venkatapuram (svenkatapuram@gmail.com)

Register here: http://www.brocher.ch/en/events/evidence-in-healthcare-reform/

ABOUT THE SYMPOSIUM

Health care financing and provision are undergoing a crisis around the world. In Europe, the costs of medical care are increasing, along with levels of national spending on healthcare. Moreover, the rate of increase exceeds the rate of regional economic growth. Something must be done, but it is far from clear what the right political or social response is. In much of the developing world, on the other hand, the situation is the reverse: increases in prosperity, particularly in the BRICS countries, have not been accompanied by significant healthcare investment; or else significant healthcare investment has benefited only a small portion of society. South Africa, for example, has some of the best medical care in the world, but it is not available to the majority of the population, and preventable morbidity and mortality remain shockingly high. And in North America, there are both high medical costs and highly unequal access, something which the present government has spent considerable political capital attempting to remedy. In short, there is very little apparent agreement on how a healthcare system should be organized in order to be effective, efficient, and equitable, despite a near-universal acceptance that health is both morally and economically important to individual and national wellbeing.

Against this backdrop, this symposium is convened to examine the philosophical underpinnings of effectiveness, efficiency and equity. Public and political debate about healthcare reform inevitably focuses on who should pay and who should provide. This workshop, however, seeks to address the prior question of what works: what healthcare measures are effective for improving population health, how we know they have been effective, and what evidence we need before confidently deploying them in a given sociopolitical setting. Indeed, much of the tumult surrounding health care reform can only be understood when health policy is seen to share important common elements with other public policies. It is not determined only by scientific evidence, nor must it answer only to that evidence. It is also variously influenced by legal rights, bureaucratic norms, political negotiations, and market mechanisms, and it must balance these forces against the scientific evidence for effectiveness. In this workshop we focus on the way scientific evidence fits into this complex sociopolitical setting: how it can, how in fact it does, and how it ought to influence healthcare reform.

In particular, the symposium has the following goals:

  1. To understand the notions of effectiveness, efficiency and equity as they are and ought to be employed in healthcare reform. Especially, to identify the normative implications of the first two, and to clarify the third.
  2. To assess the use of systematic reviews to drive healthcare reform. Especially, to bring together the various criticisms of their use, to identify evidence (if any) for their effectiveness, and to arrive at a clear “best practice” recommendation for the use of evidence in healthcare reform.
  3. To highlight the challenges facing developing countries attempting healthcare reform. Especially, to identify novel ways in which social determinants of health and disease might be managed as part of healthcare reform, and to specify the evidence necessary for such measures.

To register, visit http://www.brocher.ch/en/events/evidence-in-healthcare-reform/

Cancers, Viruses, and the Contrastive Model of Disease

If there is any value in the idea that disease is something other than the mere absence of health then that value must lie in the way that diseases are defined. Once we move from the knowledge that someone lacks health in respect of their fever and diarrhoea to the knowledge that they have cholera, we are able to make a number of inferences about the likely progression of the disease, the effectiveness of various possible treatments, and the relevance or otherwise of various circumstances (e.g. age, sex) to both these. These inferences depend on the way the disease “cholera” has been defined. They hold across most cases of cholera, but they are not reliable for diarrhoea or fever more generally. And they hold across most cases of cholera because cholera has been defined in terms of a cause that is, by definition, shared by every case of cholera, and is such that, by definition, no case of diarrhoea and fever can be a case of cholera without the cause, however symptomatically similar.

That cause is the presence of Vibrio cholerae in the small intestine. K. Codell Carter has done an excellent job of bringing out the extent to which the definition of diseases in terms of infectious agents really was an act of definition – a conceptual feat, not something to be settled by empirical evidence (Carter 2003). But this conceptual exercise has fallen out of fashion. The diseases of primary interest to modern medical science are “multifactorial”, meaning that, for whatever reason, they have not been classified according to particular causes. I have argued that multifactorial thinking has been endorsed too enthusiastically, and that there are merits to the old “monocausal” way of thinking about disease – even if the obsession with identifying just one explanatory cause is wrong. My “contrastive” model of disease is intended to retain the benefits of the monocausal model without the implausible commitment to classification in terms of just one cause (Broadbent 2009; a better-expressed version is forthcoming in Broadbent 2013, Ch 10).

An obvious difficulty for my account is that some kinds of ill health, such as instances of particular cancers, seem to be fruitfully treated as belonging together. Yet on my account they cannot be called instances of a disease unless a classificatory constellation of causes is known or at least suspected. (Of course one might prefer to mark the distinction with a word other than “disease”; but my hope is to get at an important distinction without getting tangled in semantic disputes.) This raises an objection of irrelevance: the objection would be that my account lays down distinctions that are irrelevant both to clinical practice and to scientific understanding.

A particular instance of this difficulty arises in the case of cancers that are caused by viruses. For example, cervical cancer seems to be caused by the human papillomavirus in over 90% of cases (Kumar et al. 2007) – but not 100%. According to the logic of my contrastive model of disease, we ought to say that those cases of cervical cancer caused by the virus form a disease – let’s call it “HPV-cancer” – while the small remainder do not. But is this really a good idea, either from a clinical perspective, or from the perspective of advancing our conceptual grasp on the health-condition(s) in question? Conversely, the circumstances in which the virus is sufficient to produce the cancer are not known: HPV infection does not always result in cervical cancer, and it is not known why. According to my contrastive model this means that the notional disease “HPV-cancer” is not well-defined, and that to call it a disease is to express the hope or conviction that some further causes can be found which are jointly sufficient for the development of cervical cancer. This raises questions, again, as to whether such a hope or conviction is very helpful from either a clinical or a scientific perspective; or whether, rather, we are better off thinking of cervical cancer as a multifactorial disease arising from various constellations of causes, and sometimes failing to arise from the very same constellations of causes.

In short, the question is whether a contrastive approach to disease classification really would help either scientific understanding or medical effectiveness, as I have claimed that it would. Cancers caused by viruses present a good case for studying this question because if the recommendations of the contrastive model can be followed for any multifactorial diseases, these are among the most viable candidates.

I’m starting work on a paper in which I attempt to get to grips with some of these questions and to answer this irrelevance objection to the contrastive model of disease. My gut feeling remains that there is an important distinction to be drawn between disease and mere lack of health (thus my starting point is the opposite of that of Boorse 1975). Yet I am not satisfied that I have made clear exactly what this importance is; nor have I given enough thought to exactly what one is supposed to do with cases that do not fit the model. And for the distinction to be important, as I claim, there must always be cases that do not fit the model – that is, there must be cases of ill health that are not disease, otherwise the distinction between them cannot be important. I hope to get further into these issues and I would welcome any thoughts.

 

Boorse, Christopher. 1975. “On the Distinction Between Disease and Illness.” Philosophy & Public Affairs 5: 49–68.

Broadbent, Alex. 2009. “Causation and Models of Disease in Epidemiology.” Studies in History and Philosophy of Biological and Biomedical Sciences 40: 302–311.

———. 2013. Philosophy of Epidemiology. New Directions in the Philosophy of Science. London and New York: Palgrave Macmillan.

Carter, K. Codell. 2003. The Rise of Causal Concepts of Disease. Aldershot: Ashgate.

Kumar, Vinay, Abul K. Abbas, Nelson Fausto, and Richard Mitchell. 2007. Robbins Basic Pathology. 8th ed. Philadelphia: Saunders Elsevier.

 

Prediction – readings?

I’m working on a paper on prediction. It has a general philosophy of science focus and is not specific to epidemiology, but my interest in prediction was provoked by epidemiology. If anyone knows of existing work on prediction that is philosophical in focus, I would be very interested to hear about it. I realise that there is a statistical literature on the topic, but what strikes me is that there is very little treatment of prediction in the philosophy of science. There seems to be no “standard treatment”, as there is for explanation or confirmation; nor are there clearly articulated competing positions on prediction, as there are concerning causation or laws of nature, for example – or so it seems to me. I would appreciate any confirmation or disconfirmation of these claims.

Synopsis

The final version of my manuscript Philosophy of Epidemiology is off to the publishers, and I hope it counts as self-promotion rather than self-plagiarism if I post the Synopsis here:

The content of the book can be summarised as follows. Causation and causal inference are over-emphasized in epidemiology and in epidemiological methodology, and explanation and prediction deserve greater emphasis. Explanation is a much more useful concept for understanding measures of strength of association (Chapter 3) and the nature of causal inference (Chapters 4 and 5) than causation itself is, because causation is really only part of what we seek to measure and infer respectively. What epidemiologists really seek to do is explain, and their practices are seen much more clearly when described as such. Explanation is also the central notion in an adequate theory of prediction, a topic which has been sadly neglected by both philosophers and epidemiologists (Chapter 6). To predict, one must explain why what one predicts is going to happen, rather than some – but certainly not all – other possible outcomes (Chapter 7). Just like an adequate explanation, a good prediction must be sensitive to the alternative hypotheses it needs to dismiss and those it can safely ignore.

These themes are developed in Chapters 3-7. The remaining chapters tackle more specific problems, and apply the lessons learned where appropriate. Thus Chapter 8 concerns measures of attributability, which are not conceptually straightforward; and the lesson is that an outcome is attributable to an exposure to the extent that it is explained by it. Chapter 9 concerns “risk relativism”, an unfortunate degenerative tendency of thought some epidemiologists have identified in their peers. Risk relativism is diagnosed there as a case of physics envy, exacerbated by a tendency to seek a univocal measure of causal strength, rather than a context-appropriate explanatory measure. Chapter 10 examines “multifactorialism”, another modern epidemiological ailment – though considered by most epidemiologists to be a profitable mutation. Multifactorialism is criticised for dropping the requirement that diseases be defined in relation to explanatory causes. Chapter 11 discusses the various embarrassments that lawyers have inflicted upon themselves in trying to use epidemiological evidence. Again, the lack of attention to explanatory questions in favour of causal ones is partly to blame for the mess. Lawyers reasonably resist the explanation of particular trains of events by general statistical facts; but to refuse to admit those facts as evidence for the truth of particular causal explanations is plainly unreasonable. The theme, then, is that explanation deserves more epidemiological attention, and causation less. We revisit the theme in the concluding Chapter 12.

[Philosophy of Epidemiology is being published by Palgrave Macmillan in the series New Directions in the Philosophy of Science and will appear in 2013.]

Getting Over Hill’s Viewpoints

In 1965, Austin Bradford Hill identified nine “viewpoints” from which an association may be viewed in seeking to decide whether it is causal (Hill 1965). Hill’s nine viewpoints, often wrongly called “criteria”, have been repeated and rehashed very often indeed, with both approval and disapproval. Despite the profusion of developments in technical and non-technical literatures on causal inference since then, Hill’s viewpoints remain a starting point for discussions of causal inference in epidemiology.

There are good reasons for this. Technical advances do not eliminate the need for human judgement, and Hill’s nine viewpoints provide some structure for arriving at these judgements. And it is fair to say that the non-technical literature has not substantially advanced, at least in what it offers for practical purposes. There are other similar lists of guidelines, but it is hard to identify any clear advance, in the non-technical sphere, beyond the basic idea of identifying a few things to bear in mind when trying to decide if an association is causal. For example, Jon Williamson and Federica Russo suggest that, in the health sciences, evidence for causality must come from both population-level studies and experimental studies of underlying mechanisms (Russo and Williamson 2007). This claim may be theoretically interesting (for criticism see Broadbent 2011) but it is clearly intended as a theoretical analysis, and adds little from a practical perspective. Both items in question are covered by Hill’s list; the difference is that Hill does not think any item on his list is necessary, and the claim that evidence concerning underlying mechanisms, in particular, is necessary for a causal inference is highly doubtful in an epidemiological context, and identified as such by Hill. But however that difference is settled, as long as the debate is about what kind of evidence is or is not desired or required for a causal inference, we are not offering anything substantially more useful than what Hill has already offered.

Where should we start, if we wish to move beyond Hill-style lists? Lists of guidelines like Hill’s suffer from notable defects, despite their usefulness. They are open to misinterpretation as criteria, or as primitive algorithms for causal inference. They are a magnet for fruitless debate about exactly what should make the list, what should not, what order they should appear in, what weights to give the various components, and so forth. But most importantly – and this, I think, should be our starting point – they do not provide any clear bar that evidence must clear. The crucial question that making a decision imposes is: is the evidence good enough to act on?

A list of guidelines such as Hill’s has some heuristic value, but it does not tell us, in even the broadest terms, what constitutes enough of each item on the list. The guidelines tell us what the bar is made of, but they do not tell us how high it is. One way we might advance beyond Hill’s viewpoints, then, is to ask how good evidence needs to be before it warrants a causal inference, where that causal inference is important for a practical decision.

 

Broadbent, A. 2011. ‘Causal Inference in Epidemiology: Mechanisms, Black Boxes, and Contrasts’. In Causality in the Sciences, ed. Phyllis McKay Illari, Federica Russo, and Jon Williamson, 45–69. Oxford: Oxford University Press.

Hill, AB. 1965. ‘The Environment and Disease: Association or Causation?’ Proceedings of the Royal Society of Medicine 58: 295–300.

Russo, F and Williamson, J. 2007. ‘Interpreting Causality in the Health Sciences’. International Studies in the Philosophy of Science 21 (2): 157–170.

EuroEpi thoughts

One thing that has struck me listening to talks at the European Congress of Epidemiology is the incredible weight given to the phrase “statistically significant”. This is an old chestnut among theoreticians in the area, so my surprise perhaps says more about my selective contact with epidemiology to date than anything else. It is nonetheless interesting to see the work this strange concept does.

The most striking example was in an interesting talk on risk factors for colorectal cancers. A slide was displayed showing results of a case-control study. For every one of the 8 or so risk factors, exposure was higher among cases than among controls. However, the speaker pointed out that only some of these differences were statistically significant.

This struck me as very strange. The level of statistical significance is more or less arbitrary – perhaps not entirely, but arbitrary in the same way as specifying a certain height for “short”. In this context, that means that the choice of risk factors to ignore was also, in the same sense, arbitrary. Moreover, the fact that the difference ran the same way for all the risk factors (i.e. higher exposure in cases than in controls) also seemed, to my untutored eye, to be the sort of unlikely coincidence one might wish to investigate further.

In a way, that is exactly what came next. One of the “insignificant” factors turned out – and I confess I did not follow how – to interact significantly with another (the two being fibre and calcium intake).

I am not sure that any of this is problematic, but it is certainly puzzling. The pattern is not unique to this talk. I have seen more than one table presented of variables potentially associated with an outcome, with the non-significant ones then being excluded. On many occasions this must surely be a good, quick way to proceed. It seems like a strange exercise, though, if some non-significant differences are studied further anyway. But that is surely an artefact of my lack of understanding.

I am less sure that my lack of understanding is to blame for other doubts, however. Where a number of risk factors are aligned, it seems arbitrary to ignore the ones that fail a certain level of statistical significance. The fact of alignment is itself some evidence of a non-chance phenomenon of some kind. And, of course, the alignment might indicate something important, for example a causal factor no one has yet thought of. The non-significant factors could be as useful as the significant ones in detecting such a factor, by providing further means of triangulation. The sketch below illustrates both points.
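Here is a minimal sketch in Python of the two worries above. The counts are entirely hypothetical (not taken from the talk), and it assumes scipy is available; the point is only that same-direction differences can fall on either side of the conventional p < 0.05 line, and that the alignment of directions is itself improbable under chance.

```python
# Hypothetical illustration only: the exposure counts below are invented.
from scipy.stats import fisher_exact, binomtest

# (exposed, unexposed) among 200 cases and 200 controls for each invented
# risk factor. Every factor shows higher exposure among cases than controls.
factors = {
    "factor A": ((70, 130), (40, 160)),
    "factor B": ((55, 145), (42, 158)),
    "factor C": ((48, 152), (40, 160)),
    "factor D": ((90, 110), (60, 140)),
    "factor E": ((35, 165), (30, 170)),
}

for name, (cases, controls) in factors.items():
    odds_ratio, p = fisher_exact([cases, controls])
    verdict = "kept" if p < 0.05 else "ignored"
    print(f"{name}: OR = {odds_ratio:.2f}, p = {p:.3f} -> {verdict}")

# Separately, a crude sign test on the direction of the differences: if each
# factor were equally likely to go either way under the null, the probability
# of all of them lining up is 0.5 to the power of the number of factors
# (about 0.004 for the eight or so factors in the talk).
n = len(factors)
print(binomtest(n, n=n, p=0.5, alternative="greater").pvalue)  # 0.5**5 here
```

Whatever one makes of the sign test as a piece of statistics, it captures the intuition: the pattern across the whole slide carries information that a factor-by-factor significance filter throws away.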

The Myth of Translation

Next week I am part of a symposium at EuroEpi in Porto, Portugal with the title Achieving More Effective Translation of Epidemiologic Findings into Policy when Facts are not the Whole Story.

My presentation is called “The Myth of Translation” and the central thesis is, as you would guess, that talk of “translating” data into policy, discoveries into applications, and so forth is unhelpful and inaccurate. Instead, I am arguing that the major challenge facing epidemiological research is assuring non-epidemiologists who might want to rely on its results that those results are stable, meaning that they are not likely to be reversed in the near future.

I expect my claim to be provocative in two ways. First, the most obvious reasons I can think of for the popularity of the “translation” metaphor, given its clear inappropriateness (which I have not argued here but which I argue in the presentation), are unpleasant ones: the claiming of scientific authority for dearly-held policy objectives; or the blaming of some sort of translational failing for what are actually shortcomings (or, perhaps, over-ambitious claims) in epidemiological research. This point is not, however, something I intend to emphasize; nor am I sure it is particularly important. Second, the claim that epidemiological results are reasonably regarded by non-epidemiologists as too unstable to be useful might be expected to meet some resistance at an epidemiology conference.

Given the possibility that what I have to say will be provocative, I thought I would try my central positive argument out here.

(1) It is hard to use results which one reasonably suspects might soon be found incorrect.

(2) Often, epidemiological results are such that a prospective user reasonably suspects that they will soon be found incorrect.

(3) Therefore, often, it is hard to use epidemiological results.

I think this argument is valid, or close enough for these purposes. I think that (1) does not need supporting: it is obviously true (or obviously enough for these purposes). The weight is on (2), and my argument for (2) is that from the outside, it is simply too hard to tell whether a given issue – for example, the effect of HRT on heart disease, or the effect of acetaminophen (paracetamol) on asthma – is still part of an ongoing debate, or can reasonably be regarded as settled. The problem infects even results that epidemiologists would widely regard as settled: the credibility of the evidence on the effect of smoking on lung cancer is not helped by reversals over HRT, for example, because from the outside, it is not unreasonable to wonder what the relevant difference is between the pronouncements on HRT and the pronouncements on lung cancer and smoking. There is a difference: my point is that epidemiology lacks a clear framework for saying what it is.

My claim, then, is that the main challenge facing the use of epidemiological results is not “translation” in any sense, but stability; and that devising a framework for expressing to non-epidemiologists (“users”, if you like) how stable a given result is, given best available current knowledge, is where efforts currently being directed at “translation” would be better spent.

Comments on this line of thought would be very welcome. I am happy to share the slides for my talk with anyone who might be interested.