About Alex Broadbent

Director of the Institute for the Future of Knowledge and Professor of Philosophy, University of Johannesburg

Synopsis

The final version of my manuscript Philosophy of Epidemiology is off to the publishers, and I hope it counts as self-promotion rather than self-plagiarism if I post the Synopsis here:

The content of the book can be summarised as follows. Causation and causal inference are over-emphasized in epidemiology and in epidemiological methodology, and explanation and prediction deserve greater emphasis. Explanation is a much more useful concept for understanding measures of strength of association (Chapter 3) and the nature of causal inference (Chapters 4 and 5) than causation itself is, because causation is really only part of what we seek to measure and infer respectively. What epidemiologists really seek to do is explain, and their practices are seen much more clearly when described as such. Explanation is also the central notion to an adequate theory of prediction, a topic which has been sadly neglected by both philosophers and epidemiologists (Chapter 6). To predict, one must explain why what one predicts is going to happen, rather than some – but certainly not all – other possible outcomes (Chapter 7). Just like an adequate explanation, a good prediction must be sensitive to the alternative hypotheses it needs to dismiss and those it can safely ignore.

These themes are developed in Chapters 3-7. The remaining chapters tackle more specific problems, and apply the lessons learned where appropriate. Thus Chapter 8 concerns measures of attributability, which are not conceptually straightforward; the lesson is that an outcome is attributable to an exposure to the extent that it is explained by it. Chapter 9 concerns “risk relativism”, an unfortunate degenerative tendency of thought that some epidemiologists have identified in their peers. Risk relativism is diagnosed as a case of physics envy, exacerbated by a tendency to seek a univocal measure of causal strength, rather than a context-appropriate explanatory measure. Chapter 10 examines “multifactorialism”, another modern epidemiological ailment – though considered by most epidemiologists to be a profitable mutation. Multifactorialism is criticised for dropping the requirement that diseases be defined in relation to explanatory causes. Chapter 11 discusses the various embarrassments that lawyers have inflicted upon themselves in trying to use epidemiological evidence. Again, the lack of attention to explanatory questions in favour of causal ones is partly to blame for the mess. Lawyers reasonably resist the explanation of particular trains of events by general statistical facts; but to refuse to admit those facts as evidence for the truth of particular causal explanations is plainly unreasonable. The theme, then, is that explanation deserves more epidemiological attention, and causation less. We will revisit the theme in the concluding Chapter 12.

[Philosophy of Epidemiology is being published by Palgrave Macmillan in the series New Directions in the Philosophy of Science and will appear in 2013.]

Getting Over Hill’s Viewpoints

In 1965, Austin Bradford Hill identified nine “viewpoints” from which an association may be viewed in seeking to decide whether it is causal (Hill 1965). Hill’s nine viewpoints, often wrongly called “criteria”, have been repeated and rehashed very often indeed, with both approval and disapproval. Despite the profusion of developments in technical and non-technical literatures on causal inference since then, Hill’s viewpoints remain a starting point for discussions of causal inference in epidemiology.

There are good reasons for this. Technical advances do not eliminate the need for human judgement, and Hill’s nine viewpoints provide some structure for arriving at such judgements. And it is fair to say that the non-technical literature has not substantially advanced, at least in what it offers for practical purposes. There are other similar lists of guidelines, but it is hard to identify any clear advance, in the non-technical sphere, beyond the basic idea of identifying a few things to bear in mind when trying to decide whether an association is causal. For example, Federica Russo and Jon Williamson suggest that, in the health sciences, evidence for causality must come from both population-level studies and experimental studies of underlying mechanisms (Russo and Williamson 2007). This claim may be theoretically interesting (for criticism see Broadbent 2011), but it is clearly intended as a theoretical analysis, and adds little from a practical perspective. Both kinds of evidence are covered by Hill’s list; the difference is that Hill does not think any item on his list is necessary, and the claim that evidence concerning underlying mechanisms, in particular, is necessary for a causal inference is highly doubtful in an epidemiological context – and was identified as such by Hill. However that difference is settled, as long as the debate is about what kind of evidence is or is not desired or required for a causal inference, we are not offering anything substantially more useful than what Hill has already offered.

Where should we start, if we wish to move beyond Hill-style lists? Lists of guidelines like Hill’s suffer from notable defects, despite their usefulness. They are open to misinterpretation as criteria, or as primitive algorithms for causal inference. They are a magnet for fruitless debate about exactly what should make the list, what should not, what order they should appear in, what weights to give the various components, and so forth. But most importantly – and this, I think, should be our starting point – they do not provide any clear bar that evidence must clear. The crucial question that making a decision imposes is: is the evidence good enough to act on?

A list of guidelines such as Hill’s has some heuristic value, but it does not tell us, in even the broadest terms, what constitutes enough of each item on the list. The guidelines tell us what the bar is made of, but they do not tell us how high it is. One way we might advance beyond Hill’s viewpoints, then, is to ask how good evidence needs to be before it warrants a causal inference, where that causal inference is important for a practical decision.

 

Broadbent, A. 2011. ‘Causal Inference in Epidemiology: Mechanisms, Black Boxes, and Contrasts’. In Causality in the Sciences, ed. Phyllis McKay Illari, Federica Russo, and Jon Williamson, 45–69. Oxford: Oxford University Press.

Hill, AB. 1965. ‘The Environment and Disease: Association or Causation?’ Proceedings of the Royal Society of Medicine 58: 295–300.

Russo, F and Williamson, J. 2007. ‘Interpreting Causality in the Health Sciences’. International Studies in the Philosophy of Science 21 (2): 157–170.

EuroEpi thoughts

One thing that has struck me listening to talks at the European Congress of Epidemiology is the incredible weight given to the phrase “statistically significant”. This is an old chestnut among theoreticians in the area, so my surprise perhaps says more about my selective contact with epidemiology to date than anything else. It is nonetheless interesting to see the work this strange concept does.

The most striking example was in an interesting talk on risk factors for colorectal cancers. A slide was displayed showing results of a case-control study. For every one of the eight or so risk factors, exposure was more common among cases than among controls. However, the speaker pointed out that only some of these differences were statistically significant.

This struck me as very strange. The level of statistical significance is more or less arbitrary – perhaps not entirely, but arbitrary in the same way as specifying a certain height for “short”. In this context, that means that the choice of risk factors to ignore was also, in the same sense, arbitrary. Moreover, the fact that the difference ran the same way for all the risk factors (i.e. higher exposure among cases than among controls) also seemed, to my untutored eye, the sort of unlikely coincidence one might wish to investigate further.

In a way, that is exactly what came next. One of the “insignificant” factors turned out – and I confess I did not follow how – to interact significantly with another (the two being fibre and calcium intake).

I am not sure that any of this is problematic, but it is certainly puzzling. The pattern is not unique to this talk. I have seen more than one table of variables potentially associated with an outcome presented, with the non-significant ones then being excluded. On many occasions this must surely be a good, quick way to proceed. It seems like a strange exercise, to my untutored eye, if some non-significant differences are studied further anyway. But that is surely an artefact of my lack of understanding.

I am less sure that my lack of understanding is to blame for other doubts, however. Where a number of risk factors are aligned, it seems arbitrary to ignore the ones that fail a certain level of statistical significance. The fact of alignment is itself some evidence of a non-chance phenomenon of some kind. And, of course, the alignment might indicate something important, for example a causal factor not yet thought of. The non-significant factors could be as useful as the significant ones in detecting such a factor, by providing further means of triangulation.
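The alignment point can be made quantitative with a simple sign test. If each of the eight differences were equally likely to run either way – an idealising assumption that the factors are independent, and the count of eight is taken from the talk described above – the chance of all eight running the same way falls well below the conventional 0.05 threshold. A minimal sketch:

```python
# Two-sided sign test for k differences all running in the same direction,
# under the null hypothesis that each direction is equally likely and the
# risk factors are independent (an idealising assumption).

def prob_all_same_direction(k: int) -> float:
    """P(all k differences share a direction) under a fair-coin null."""
    return 2 * 0.5 ** k  # either all 'higher in cases' or all 'lower'

p = prob_all_same_direction(8)
print(f"{p:.4f}")  # prints 0.0078
```

Of course the independence assumption is doubtful for correlated exposures such as fibre and calcium intake, which is precisely why the alignment invites investigation rather than a verdict.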

The Myth of Translation

Next week I am part of a symposium at EuroEpi in Porto, Portugal with the title Achieving More Effective Translation of Epidemiologic Findings into Policy when Facts are not the Whole Story.

My presentation is called “The Myth of Translation” and the central thesis is, as you would guess, that talk of “translating” data into policy, discoveries into applications, and so forth is unhelpful and inaccurate. Instead, I am arguing that the major challenge facing epidemiological research is assuring the non-epidemiologists who might want to rely on its results that those results are stable, meaning that they are not likely to be reversed in the near future.

I expect my claim to be provocative in two ways. First, the most obvious reasons I can think of for the popularity of the “translation” metaphor, given its clear inappropriateness (which I have not argued here but which I argue in the presentation), are unpleasant ones: claiming scientific authority for dearly-held policy objectives; or blaming some sort of translational failing for what are actually shortcomings (or, perhaps, over-ambitious claims) in epidemiological research. This point is not, however, something I intend to emphasize; nor am I sure it is particularly important. Second, the claim that epidemiological results are reasonably regarded by non-epidemiologists as too unstable to be useful might be expected to meet a bit of resistance at an epidemiology conference.

Given the possibility that what I have to say will be provocative, I thought I would try my central positive argument out here.

(1) It is hard to use results which one reasonably suspects might soon be found incorrect.

(2) Often, epidemiological results are such that a prospective user reasonably suspects that they will soon be found incorrect.

(3) Therefore, often, it is hard to use epidemiological results.

I think this argument is valid, or close enough for these purposes. I think that (1) does not need supporting: it is obviously true (or obviously enough for these purposes). The weight is on (2), and my argument for (2) is that from the outside, it is simply too hard to tell whether a given issue – for example, the effect of HRT on heart disease, or the effect of acetaminophen (paracetamol) on asthma – is still part of an ongoing debate, or can reasonably be regarded as settled. The problem infects even results that epidemiologists would widely regard as settled: the credibility of the evidence on the effect of smoking on lung cancer is not helped by reversals over HRT, for example, because from the outside, it is not unreasonable to wonder what the relevant difference is between the pronouncements on HRT and the pronouncements on lung cancer and smoking. There is a difference: my point is that epidemiology lacks a clear framework for saying what it is.

My claim, then, is that the main challenge facing the use of epidemiological results is not “translation” in any sense, but stability; and that devising a framework for expressing to non-epidemiologists (“users”, if you like) how stable a given result is, given best available current knowledge, is where efforts currently being directed at “translation” would be better spent.

Comments on this line of thought would be very welcome. I am happy to share the slides for my talk with anyone who might be interested.

Book manuscript – comments invited

If anyone is interested in looking at a book manuscript on philosophy of epidemiology, or on any parts thereof, please get in touch. The manuscript is under contract and has been delivered to the publisher, so this is a good time for comments and criticism. Table of contents can be accessed here:

2012-06-29 contents

…with apologies for the “undefined bookmarks”. The publishers have already indicated that the title should be “Philosophy of Epidemiology” or something similarly descriptive – opinions on this also welcome.

Apologies for the recent inactivity on this blog – the book is a big part of the explanation.

Musings on laws of nature

I am always puzzled by philosophical talk of laws of nature. The terms “law”, “govern”, and so forth amount to an extended metaphor drawn from human affairs, and thoroughly unnatural ones at that. It is a recurrent philosophical mistake to suppose that the most fundamental thing about the universe can only be treated with philosophical precision through a human metaphor. Without “laws”, and the corresponding but evidently false idea that it is possible to break them, contemporary metaphysics would look quite different. If you ask a philosopher whether it is possible to walk through a wall, she will say “Yes”, because only the laws “prohibit” it – even though everyone knows you can’t walk through a wall.

Post-doc: Philosophy of Science, University of Johannesburg

The University of Johannesburg seeks to appoint a Postdoctoral Research Fellow in Philosophy. Any research area will be considered, but it is hoped that the Fellow will work with Dr Alex Broadbent on issues in the philosophy of science broadly construed, and in particular on issues related to epidemiology, public health, causation, explanation or prediction (although unconceived alternatives will be considered). The Fellow will be asked to teach one Honours course for one term, out of four in our academic calendar. (This is a light commitment: Honours is a small-group (5-10) course for first-year postgraduates, whose content usually reflects the interests of the person teaching it.) Applicants should email two pieces of written work of around 8-10k words (e.g. publications, work under review, thesis chapters), along with a CV and covering letter, to Dr Alex Broadbent at abbroadbent@uj.ac.za by 30 June 2012. The doctorate must be in hand at the time of commencement. The duration is one year in the first instance, with the possibility of a second year, depending on publication performance.

Acetaminophen (paracetamol), Asthma, and the Causal Fallacy

In November 2011, a senior American pediatrician suggested that there was enough evidence to warrant restricting acetaminophen (paracetamol) use among children at risk of asthma, despite inadequate evidence for a causal inference. His argument was based on an ethical principle. However, neither his argument nor the evidence he surveys is sufficient to warrant the recommendation, which therefore has the status not of a sensible precaution but of a stab in the dark. I have written to the editors of Pediatrics to explain why – the link is here:

http://pediatrics.aappublications.org/content/128/6/1181.full/reply#pediatrics_el_53669

The theoretical point underlying this is one under-emphasized in both philosophical and epidemiological thinking, namely, that causal inference is rather different from making a prediction based on the causal knowledge so obtained. It is fallacious to suppose that, because we have a hunch that acetaminophen causes asthma, we thereby have even a hunch about what will happen when we restrict acetaminophen use. It all depends on what consequences the non-use of acetaminophen has, and that in turn depends on the form that non-use takes. The point is familiar from philosophical studies of counterfactuals, but those studies arguably either do not offer much of practical use for epidemiology or else have not received an epidemiological audience. (I favour the former option, although I realise many philosophers will disagree.)

The result is a common fallacy of reasoning which we might call The Causal Fallacy: epidemiologists, policy makers, and probably the public assume that because we have causal knowledge, we have knowledge of what will happen when we manipulate those causes. In practice we do not. (This under-appreciated point has been emphasized by Sander Greenland among epidemiologists and Nancy Cartwright among philosophers, and as I see it tells heavily against the programme of manipulationist or interventionist theories of causation.) Establishing whether an exposure such as acetaminophen is a cause of an outcome such as asthma is not sufficient to predict the outcome of a given recommendation on the use of acetaminophen, for the simple reason that more than one such policy is possible, and each may in principle have a different outcome.
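The point can be illustrated with a toy calculation: the same causal fact (the exposure adds a fixed excess risk) is compatible with a restriction policy raising, lowering, or barely changing population risk, depending on what the non-use of the exposure looks like. Every number below is hypothetical, chosen only to make the logical point; this is a sketch, not a model of acetaminophen and asthma.

```python
# Illustrative sketch of the Causal Fallacy: knowing that an exposure
# causes an outcome does not fix the effect of restricting it, because
# different forms of non-use have different consequences.
# All figures are hypothetical, chosen only to make the logical point.

BASELINE_RISK = 0.05    # hypothetical outcome risk absent any excess
EXPOSURE_EXCESS = 0.02  # hypothetical excess risk caused by the exposure

# Each policy replaces the exposure with a different alternative, and
# each alternative carries its own (hypothetical) excess risk.
POLICIES = {
    "no restriction": EXPOSURE_EXCESS,      # exposure continues unchanged
    "switch to substitute drug": 0.03,      # substitute worse, in this toy world
    "treat without any drug": 0.01,         # untreated condition has its own risk
}

for policy, excess in POLICIES.items():
    print(f"{policy}: risk = {BASELINE_RISK + excess:.2f}")
```

The causal claim about the exposure is held fixed across all three policies, yet the policies differ in their outcomes; establishing the causal claim therefore settles none of the comparisons between them.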