Cancers, Viruses, and the Contrastive Model of Disease

If there is any value in the idea that disease is something other than the mere absence of health then that value must lie in the way that diseases are defined. Once we move from the knowledge that someone lacks health in respect of their fever and diarrhoea to the knowledge that they have cholera, we are able to make a number of inferences about the likely progression of the disease, the effectiveness of various possible treatments, and the relevance or otherwise of various circumstances (e.g. age, sex) to both of these. These inferences depend on the way the disease “cholera” has been defined. They hold across most cases of cholera, but they are not reliable for diarrhoea or fever more generally. And they hold across most cases of cholera because cholera has been defined in terms of a cause that is, by definition, shared by every case of cholera, and is such that, by definition, no case of diarrhoea and fever can be a case of cholera without the cause, however symptomatically similar.

That cause is the presence of Vibrio cholerae in the small intestine. K. Codell Carter has done an excellent job of bringing out the extent to which the definition of diseases in terms of infectious agents really was an act of definition – a conceptual feat, not something to be settled by empirical evidence (Carter 2003). But this conceptual exercise has fallen out of fashion. The diseases of primary interest to modern medical science are “multifactorial”, meaning that, for whatever reason, they have not been classified according to particular causes. I have argued that multifactorial thinking has been endorsed too enthusiastically, and that there are merits to the old “monocausal” way of thinking about disease – even if the obsession with identifying just one explanatory cause is wrong. My “contrastive” model of disease is intended to retain the benefits of the monocausal model without the implausible commitment to classification in terms of just one cause (Broadbent 2009; a better-expressed version is forthcoming in Broadbent 2013, Ch. 10).

An obvious difficulty for my account is that some kinds of ill health, such as instances of particular cancers, seem to be fruitfully treated as belonging together. Yet on my account they cannot be called instances of a disease unless a classificatory constellation of causes is known or at least suspected. (Of course one might prefer to mark the distinction with a word other than “disease”; but my hope is to get at an important distinction without getting tangled in semantic disputes.) This raises an objection of irrelevance: the objection would be that my account lays down distinctions that are irrelevant both to clinical practice and to scientific understanding.

A particular instance of this difficulty arises in the case of cancers that are caused by viruses. For example, cervical cancer seems to be caused by the human papillomavirus in over 90% of cases (Kumar et al. 2007) – but not 100%. According to the logic of my contrastive model of disease, we ought to say that those cases of cervical cancer caused by the virus form a disease – let’s call it “HPV-cancer” – while the small remainder do not. But is this really a good idea, either from a clinical perspective, or from the perspective of advancing our conceptual grasp on the health-condition(s) in question? Conversely, the circumstances in which the virus is sufficient to produce the cancer are not known: HPV infection does not always result in cervical cancer, and it is not known why. According to my contrastive model this means that the notional disease “HPV-cancer” is not well-defined, and that to call it a disease is to express the hope or conviction that some further causes can be found which are jointly sufficient for the development of cervical cancer. This raises questions, again, as to whether such a hope or conviction is very helpful from either a clinical or a scientific perspective; or whether, rather, we are better off thinking of cervical cancer as a multifactorial disease arising from various constellations of causes, and sometimes failing to arise from the very same constellations of causes.

In short, the question is whether a contrastive approach to disease classification really would help either scientific understanding or medical effectiveness, as I have claimed that it would. Cancers caused by viruses present a good case for studying this question because if the recommendations of the contrastive model can be followed for any multifactorial diseases, these are among the most viable candidates.

I’m starting work on a paper in which I attempt to get to grips with some of these questions and to answer this irrelevance objection to the contrastive model of disease. My gut feeling remains that there is an important distinction to be drawn between disease and mere lack of health (thus my starting point is the opposite of that of Boorse 1975). Yet I am not satisfied that I have made clear exactly what this importance is; nor have I given enough thought to exactly what one is supposed to do with cases that do not fit the model. And if the distinction is to be important, as I claim, there must always be cases that do not fit the model – cases of ill health that are not disease; otherwise there is nothing for the distinction to mark. I hope to get further into these issues and I would welcome any thoughts.


Boorse, Christopher. 1975. “On the Distinction Between Disease and Illness.” Philosophy & Public Affairs 5: 49–68.

Broadbent, Alex. 2009. “Causation and Models of Disease in Epidemiology.” Studies in History and Philosophy of Biological and Biomedical Sciences 40: 302–311.

———. 2013. Philosophy of Epidemiology. New Directions in the Philosophy of Science. London and New York: Palgrave Macmillan.

Carter, K. Codell. 2003. The Rise of Causal Concepts of Disease. Aldershot: Ashgate.

Kumar, Vinay, Abul K. Abbas, Nelson Fausto, and Richard Mitchell. 2007. Robbins Basic Pathology. 8th ed. Philadelphia: Saunders Elsevier.


Prediction – readings?

I’m working on a paper on prediction. It has a general philosophy of science focus and is not specific to epidemiology, but my interest in prediction was provoked by epidemiology. If anyone knows of existing work on prediction that is philosophical in focus, I would be very interested to hear about it. I realise that there is a statistical literature on the topic, but what strikes me is that there is very little treatment of prediction in the philosophy of science. There seems to be no “standard treatment”, as there is for explanation or confirmation; nor are there clearly articulated competing positions on prediction, as there are concerning causation or laws of nature, for example – or so it seems to me. I would appreciate any confirmation or disconfirmation of these claims.

Synopsis

The final version of my manuscript Philosophy of Epidemiology is off to the publishers, and I hope it counts as self-promotion rather than self-plagiarism if I post the Synopsis here:

The content of the book can be summarised as follows. Causation and causal inference are over-emphasized in epidemiology and in epidemiological methodology, while explanation and prediction deserve greater emphasis. Explanation is a much more useful concept than causation for understanding measures of strength of association (Chapter 3) and the nature of causal inference (Chapters 4 and 5), because causation is really only part of what we seek to measure and to infer, respectively. What epidemiologists really seek to do is explain, and their practices are seen much more clearly when described as such. Explanation is also central to an adequate theory of prediction, a topic sadly neglected by both philosophers and epidemiologists (Chapter 6). To predict, one must explain why what one predicts is going to happen, rather than some – but certainly not all – other possible outcomes (Chapter 7). Just like an adequate explanation, a good prediction must be sensitive to the alternative hypotheses it needs to dismiss and those it can safely ignore.

These themes are developed in Chapters 3–7. The remaining chapters tackle more specific problems, applying the lessons learned where appropriate. Thus Chapter 8 concerns measures of attributability, which are not conceptually straightforward; the lesson is that an outcome is attributable to an exposure to the extent that it is explained by it. Chapter 9 concerns “risk relativism”, an unfortunate degenerative tendency of thought that some epidemiologists have identified in their peers. Risk relativism is diagnosed there as a case of physics envy, exacerbated by a tendency to seek a univocal measure of causal strength rather than a context-appropriate explanatory measure. Chapter 10 examines “multifactorialism”, another modern epidemiological ailment – though considered by most epidemiologists to be a profitable mutation. Multifactorialism is criticised for dropping the requirement that diseases be defined in relation to explanatory causes. Chapter 11 discusses the various embarrassments that lawyers have inflicted upon themselves in trying to use epidemiological evidence. Again, the lack of attention to explanatory questions, in favour of causal ones, is partly to blame for the mess. Lawyers reasonably resist the explanation of particular trains of events by general statistical facts; but to refuse to admit those facts as evidence for the truth of particular causal explanations is plainly unreasonable. The theme, then, is that explanation deserves more epidemiological attention, and causation less. The theme is revisited in the concluding Chapter 12.

[Philosophy of Epidemiology is being published by Palgrave Macmillan in the series New Directions in the Philosophy of Science and will appear in 2013.]

Getting Over Hill’s Viewpoints

In 1965, Austin Bradford Hill identified nine “viewpoints” from which an association may be viewed in seeking to decide whether it is causal (Hill 1965). Hill’s nine viewpoints, often wrongly called “criteria”, have been repeated and rehashed very often indeed, with both approval and disapproval. Despite the profusion of developments in technical and non-technical literatures on causal inference since then, Hill’s viewpoints remain a starting point for discussions of causal inference in epidemiology.

There are good reasons for this. Technical advances do not eliminate the need for human judgement, and Hill’s nine viewpoints provide some structure for arriving at these judgements. And it is fair to say that the non-technical literature has not substantially advanced, at least in what it offers for practical purposes. There are other similar lists of guidelines, but it is hard to identify any clear advance, in the non-technical sphere, beyond the basic idea of identifying a few things to bear in mind when trying to decide if an association is causal. For example, Federica Russo and Jon Williamson suggest that, in the health sciences, evidence for causality must come from both population-level studies and experimental studies of underlying mechanisms (Russo and Williamson 2007). This claim may be theoretically interesting (for criticism see Broadbent 2011), but it is clearly intended as a theoretical analysis, and adds little from a practical perspective. Both items in question are already covered by Hill’s list; the difference is that Hill does not treat any item on his list as necessary. Moreover, the claim that evidence concerning underlying mechanisms, in particular, is necessary for a causal inference is highly doubtful in an epidemiological context – and was identified as such by Hill. However that difference is settled, as long as the debate is about what kinds of evidence are desired or required for a causal inference, we are not offering anything substantially more useful than what Hill has already offered.

Where should we start, if we wish to move beyond Hill-style lists? Lists of guidelines like Hill’s suffer from notable defects, despite their usefulness. They are open to misinterpretation as criteria, or as primitive algorithms for causal inference. They are a magnet for fruitless debate about exactly what should make the list, what should not, what order they should appear in, what weights to give the various components, and so forth. But most importantly – and this, I think, should be our starting point – they do not provide any clear bar that evidence must clear. The crucial question that making a decision imposes is: is the evidence good enough to act on?

A list of guidelines such as Hill’s has some heuristic value, but it does not tell us, in even the broadest terms, what constitutes enough of each item on the list. The guidelines tell us what the bar is made of, but they do not tell us how high it is. One way we might advance beyond Hill’s viewpoints, then, is to ask how good evidence needs to be before it warrants a causal inference, where that causal inference is important for a practical decision.


Broadbent, A. 2011. ‘Causal Inference in Epidemiology: Mechanisms, Black Boxes, and Contrasts’. In Causality in the Sciences, ed. Phyllis McKay Illari, Federica Russo, and Jon Williamson, 45–69. Oxford: Oxford University Press.

Hill, AB. 1965. ‘The Environment and Disease: Association or Causation?’ Proceedings of the Royal Society of Medicine 58 (5): 295–300.

Russo, F and Williamson, J. 2007. ‘Interpreting Causality in the Health Sciences’. International Studies in the Philosophy of Science 21 (2): 157–170.

EuroEpi thoughts

One thing that has struck me listening to talks at the European Congress of Epidemiology is the incredible weight given to the phrase “statistically significant”. This is an old chestnut among theoreticians in the area, so my surprise perhaps indicates more about my selective contact with epidemiology to date than anything else. It is nonetheless interesting to see the work this strange concept does.

The most striking example was in an interesting talk on risk factors for colorectal cancers. A slide was displayed showing results of a case-control study. For every one of the eight or so risk factors, exposure was more common among cases than among controls. However, the speaker pointed out that only some of these differences were statistically significant.

This struck me as very strange. The level of statistical significance is more or less arbitrary – perhaps not entirely, but arbitrary in the same way as specifying a certain height for “short”. In this context, that means that the choice of risk factors to ignore was also, in the same sense, arbitrary. Moreover, the fact that the difference went in the same direction for all the risk factors (i.e. higher exposure in cases than in controls) also seemed, to my untutored eye, to be the sort of unlikely coincidence one might wish to investigate further.

In a way, that is exactly what came next. One of the “insignificant” factors turned out – and I confess I did not follow how – to interact significantly with another (the two being fibre and calcium intake).

I am not sure that any of this is problematic, but it is certainly puzzling. The pattern is not unique to this talk. I have seen more than one table presented of variables potentially associated with an outcome, with the non-significant ones then being excluded. On many occasions this must surely be a good, quick way to proceed. It seems like a strange exercise, to my untutored eye, if some non-significant differences are studied further anyway. But that is surely an artefact of my lack of understanding.

I am less sure that my lack of understanding is to blame for other doubts, however. Where a number of risk factors are aligned, it seems arbitrary to ignore the ones that fail a certain level of statistical significance. The fact of alignment is itself some evidence of a non-chance phenomenon of some kind. And, of course, the alignment might indicate something important, for example an as-yet-unsuspected causal factor. The non-significant factors could be as useful as the significant ones in detecting such a factor, by providing further means of triangulation.
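The intuition that alignment is itself evidence can be given a rough back-of-envelope form with a sign test. This is my illustration, not anything from the talk, and it leans on a strong simplifying assumption: that under a null of no association the direction of each exposure difference is an independent 50/50 coin flip. On that assumption, eight differences all pointing the same way is itself a “significant” pattern, even if no single difference clears the conventional threshold:

```python
def p_all_same_direction(k: int) -> float:
    """Two-sided probability that k independent 50/50 direction
    'coin flips' all land the same way, under the null of no
    association (an admittedly idealised independence assumption)."""
    return 2 * 0.5 ** k

# Eight risk factors, all with higher exposure in cases than controls:
print(p_all_same_direction(8))  # 0.0078125 -- well below the usual 0.05
```

The point of the sketch is only qualitative: a collection of individually non-significant differences can jointly be very unlikely under chance, which is some reason not to discard the non-significant ones out of hand.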

The Myth of Translation

Next week I am part of a symposium at EuroEpi in Porto, Portugal with the title Achieving More Effective Translation of Epidemiologic Findings into Policy when Facts are not the Whole Story.

My presentation is called “The Myth of Translation” and the central thesis is, as you would guess, that talk of “translating” data into policy, discoveries into applications, and so forth is unhelpful and inaccurate. Instead, I am arguing that the major challenge facing epidemiological research is assuring the non-epidemiologists who might want to rely on its results that those results are stable, meaning that they are not likely to be reversed in the near future.

I expect my claim to be provocative in two ways. First, the most obvious reasons I can think of for the popularity of the “translation” metaphor, given its clear inappropriateness (which I have not argued here but which I argue in the presentation), are unpleasant ones: claiming scientific authority for dearly held policy objectives; or blaming some sort of translational failing for what are actually shortcomings (or, perhaps, over-ambitious claims) in epidemiological research. This point is not, however, something I intend to emphasize; nor am I sure it is particularly important. Second, the claim that epidemiological results are reasonably regarded by non-epidemiologists as too unstable to be useful might be expected to raise a bit of resistance at an epidemiology conference.

Given the possibility that what I have to say will be provocative, I thought I would try my central positive argument out here.

(1) It is hard to use results which one reasonably suspects might soon be found incorrect.

(2) Often, epidemiological results are such that a prospective user reasonably suspects that they will soon be found incorrect.

(3) Therefore, often, it is hard to use epidemiological results.

I think this argument is valid, or close enough for these purposes. I think that (1) does not need supporting: it is obviously true (or obviously enough for these purposes). The weight is on (2), and my argument for (2) is that from the outside, it is simply too hard to tell whether a given issue – for example, the effect of HRT on heart disease, or the effect of acetaminophen (paracetamol) on asthma – is still part of an ongoing debate, or can reasonably be regarded as settled. The problem infects even results that epidemiologists would widely regard as settled: the credibility of the evidence on the effect of smoking on lung cancer is not helped by reversals over HRT, for example, because from the outside, it is not unreasonable to wonder what the relevant difference is between the pronouncements on HRT and the pronouncements on lung cancer and smoking. There is a difference: my point is that epidemiology lacks a clear framework for saying what it is.

My claim, then, is that the main challenge facing the use of epidemiological results is not “translation” in any sense, but stability; and that devising a framework for expressing to non-epidemiologists (“users”, if you like) how stable a given result is, given best available current knowledge, is where efforts currently being directed at “translation” would be better spent.

Comments on this line of thought would be very welcome. I am happy to share the slides for my talk with anyone who might be interested.

Book manuscript – comments invited

If anyone is interested in looking at a book manuscript on philosophy of epidemiology, or on any parts thereof, please get in touch. The manuscript is under contract and has been delivered to the publisher, so this is a good time for comments and criticism. Table of contents can be accessed here:

2012-06-29 contents

…with apologies for the “undefined bookmarks”. The publishers have already indicated that the title should be “Philosophy of Epidemiology” or something similarly descriptive – opinions on this also welcome.

Apologies for the recent inactivity on this blog – the book is a big part of the explanation.

Musings on laws of nature

I am always puzzled by philosophical talk of laws of nature. The terms “law”, “govern”, and so forth amount to an extended metaphor drawn from human affairs, and thoroughly unnatural ones at that. It is a recurrent philosophical mistake to suppose that the most fundamental thing about the universe can only be treated with philosophical precision through a human metaphor. Without “laws”, and the corresponding but evidently false idea that it is possible to break them, contemporary metaphysics would look quite different. If you ask a philosopher whether it is possible to walk through a wall, she will say “Yes”, because only the laws “prohibit” it – even though everyone knows you can’t walk through a wall.

Post-doc: Philosophy of Science, University of Johannesburg

The University of Johannesburg seeks to appoint a Postdoctoral Research Fellow in Philosophy. Any research area will be considered, but it is hoped that the Fellow will work with Dr Alex Broadbent on issues in the philosophy of science broadly construed, and in particular issues related to epidemiology, public health, causation, explanation or prediction (although unconceived alternatives will be considered). The Fellow will be asked to teach one Honours course for one term, out of four in our academic calendar. (This is a light commitment: Honours is a small-group (5–10) course for first-year postgraduates, whose content usually reflects the interests of the person teaching it.) Applicants should email two pieces of written work of around 8–10k words (e.g. publications, work under review, thesis chapters) along with a CV and covering letter to Dr Alex Broadbent at abbroadbent@uj.ac.za by 30 June 2012. The doctorate must be in hand at the time of commencement. Duration is one year in the first instance, with the possibility of another year, depending on publication performance.