
About Alex Broadbent

Director of the Institute for the Future of Knowledge and Professor of Philosophy, University of Johannesburg

Psychiatric Disorders, Multifactorialism, and Kinds

Today I attended and spoke at a great symposium called “What Kinds of Things Are Psychiatric Disorders?”, organised by the University Center of Psychiatry at Groningen. The guest of honour was Kenneth Kendler, who gave a long, wide-ranging, and thoroughly excellent talk about the reality or otherwise of the psychiatric disorders captured by the DSM. It was one of the best talks I have heard in a long time, and philosophically more astute than many talks by self-identifying philosophers (including myself). The position he outlined got me thinking about a number of issues in the philosophy of epidemiology and of medicine.

He begins by arguing that realism about psychiatric disorders cannot be modelled on realism about physical kinds: it cannot be like realism about the periodic table. At best it would need to be like realism about biological species – acknowledging the lack of essential properties, the fuzzy boundaries, and so forth. His own view, however, is that realism of even this moderate kind is not warranted about psychiatric disease kinds, at least not those in the DSM. Part of his argument is a pessimistic induction from past psychiatric diagnoses that are no longer recognised. The other part comes from the massive multifactorialism of psychiatric disorders.

One interesting thing about this view is that it also applies to multifactorial diseases outside psychiatry – cancer, for example. The same considerations apply – both the multifactorialism and the pessimistic view of the durability of our current classifications. The two arguments interact, perhaps: massive multifactorialism means that we do not have a good general explanation of why a given set of symptoms occurs, and this lack of explanation indicates that we do not understand the phenomenon very well, and thus that we are quite likely to be wrong about it in some yet-to-be-discovered ways.

Kendler’s view of illness in general, however, is not constructivist. He combines a sceptical stance towards the kinds in the DSM with a realist-leaning pragmatism about psychiatric disorder, the overarching thing, as opposed to the ways we have cut it up. In other words, he thinks that psychiatric illness is a real thing; but he is sceptical about our way of individuating psychiatric diseases. So, at least, I understood his position.

This view also connects in interesting ways with the debate about health in the philosophy of medicine. In that debate, “realism” is not used – instead, “naturalism” is the closest correlate. It is not an equivalent, however, since a naturalist typically denies that there is any normative component to health facts, while Kendler – in response to an acute question from the audience – thinks that health is fundamentally about human goals and suffering, and thus fundamentally normative. The naturalism/normativism debate elides two dimensions of disagreement, one concerning the objectivity of health facts (on which Kendler is a pragmatist, but towards the objective end of the spectrum, opposed to so-called normativists in the health debate), the other concerning whether health is value-laden (on which Kendler appears to be with the normativists). I have thought for a while that it should be possible to occupy a position like this – agreeing with naturalists on one dimension of disagreement and with normativists on the other – and I’m pleased to have found someone who does, and someone so credible.

The picture, then, is of a spectrum of disorder which is fairly independent of us, and also normative, but which is fairly arbitrarily cut up by us in our attempts to understand it. That’s a crude summary, of course. But it made me wonder whether the exact reverse might also be attractive. Some disease kinds, especially infectious diseases, might suit realism of the kind we might have about biological species. After all, infectious diseases correspond to actual biological species, or something similar enough. Thus we might be (fairly) realist about cholera – it’s what happens when something we are fairly realist about (Vibrio cholerae) gets into the small intestine of something else that we are fairly realist about (us). However, we might be irrealist, or less than realist, about health facts in general – not just psychiatric health and disease, but health and disease in general. We might think that health is not a fundamental category – not a kind, not even in the way that species are kinds – and yet think that certain diseases do count as kinds.

I am not sure where this leads, but my hope would be to develop a notion of health and illness – the mere absence of health – as secondary properties, like colour, which depend in non-arbitrary, non-contingent ways on us (as opposed to a constructivist stance, which makes facts depend on us in contingent ways). Such a view may make room in the picture for the diagnoser as well as the diagnosed, and it may explain why a particular set of objective characteristics seems to us to belong together as “health” or as a given illness, despite the fact that we often find little objective basis for bundling them together. Perhaps it also would answer to Kendler’s pragmatist intuitions, which I also find very compelling.

Whatever the prospects for Kendler’s view or for this hastily-sketched alternative, the lecture really helped me pull together a number of these issues. It left me thinking that the overarching goal for contemporary philosophical work on the nature of health and disease must be to link the debates about the status of health (naturalism, normativism, and all that) with debates about the status of disease kinds and with the literature on natural kinds more generally. Kendler’s excellent talk today embarked on this extremely complex task and went a remarkable way towards offering a comprehensive and plausible position.

Tobacco and epidemiology in Korea: old tricks, new answers?

Today I participated in a seminar hosted by the National Health Insurance Service (NHIS) of Korea, which is roughly the equivalent of the NHS in the UK, although the health systems differ. The seminar concerned a recent lawsuit in which tobacco companies were sued by the NHIS for the costs of treating lung cancer patients. The suit is part of a larger drive to get a grip on smoking in Korea, where over 40% of males smoke, and a packet of 20 cigarettes costs 4500 Korean Won (about USD 4.10 or UKP 2.80). The NHIS recently suffered a blow at the Supreme Court, where the ruling was somewhat lukewarm about a causal link between smoking and lung cancer in general, and moreover argued that such a link would anyway fail to prove anything about the two specific plaintiffs in the case at hand.

I was struck by the familiarity of some of the arguments that are apparently being used by the tobacco companies. For example, the Supreme Court has been convinced that diseases come in two kinds, specific and non-specific, and that since lung cancer is a non-specific disease, it is wrong to seek to apply measures of attributability (excess/attributable fraction, population excess/attributable fraction) at all.

This is reminiscent of the use of non-specificity in the 1950s, when it was seen as a problem for the causal hypothesis that smoking causes lung cancer. It also gives rise to a strategy which is legally sound but dubious from a public health perspective, namely, first going for lung cancer, and leaving other health risks of smoking for later. This is legally sound because lung cancer exhibits the highest relative risk of the smoking-related diseases, and perhaps it is good PR too, because cancer of any kind catches the imagination. But the health burden of lung cancer is low, even in a population where smoking is relatively prevalent, since lung cancer is a rare disease even among smokers.

The health burden of heart disease, at the other end of the spectrum, is very large, and even though smoking less than doubles this risk (RR about 1.7), the base rate of heart disease is so high that this amounts to a very significant public health problem. I do not know what the right response to this complex of problems is: clearly, high-profile court cases have an impact that extends far beyond their outcome, and the reasons that people stop smoking, or accept legislation, need not accurately reflect the true risks in order for those risks to be mitigated. (If you stop smoking to avoid lung cancer, you also avoid heart disease, which is a much better reason to stop smoking from the perspective of a rational individual motivated to avoid fatal disease.) Nonetheless I am struck by the way that legal and health policy objectives interact here.
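The arithmetic behind this point is simple enough to sketch. The incidence figures below are hypothetical round numbers, not real Korean data; they are chosen only to show how a modest relative risk acting on a common disease can produce more excess cases than a large relative risk acting on a rare one.

```python
def excess_incidence(baseline_per_100k, relative_risk):
    """Excess cases per 100,000 person-years among the exposed:
    baseline incidence multiplied by (RR - 1)."""
    return baseline_per_100k * (relative_risk - 1)

# Hypothetical baseline (non-smoker) incidence rates per 100,000 per year.
lung_cancer = excess_incidence(baseline_per_100k=10, relative_risk=9.0)
heart_disease = excess_incidence(baseline_per_100k=500, relative_risk=1.7)

print(lung_cancer)    # 80.0 excess lung cancer cases per 100,000 smokers
print(heart_disease)  # ~350 excess heart disease cases per 100,000 smokers
```

On these invented numbers, the heart disease excess is several times the lung cancer excess, despite the far smaller relative risk.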

I was also interested to hear that the case of McTear was a significant blow to the Korean case because of its findings about causality, which indeed are exactly those of the Korean case. That case is not well regarded in the UK, and is not authoritative (being a first-instance decision), so it is interesting – and unfortunate – that it has had an effect here.

The event was an extremely good-spirited affair, and the other speakers had some interesting things to say. My book, in Korean, received a significant plug, not least, I suspect, because the audience, not understanding much of my talk, were repeatedly referred to it for more detail. The most shocking thing about the event was to hear the same obfuscatory strategies that are now history in Europe and America being used, to good effect, by the very same companies in this part of the world. It is one thing to defend a case on grounds one believes; but no one still reasonably believes that smoking does not cause lung cancer, and yet that seems to be the initial burden of proof on plaintiffs in this sort of case. It is a bit like being asked to begin your case against a scaffolder who dropped a metal bar on your head with a proof of the law of gravity, and then being asked to prove that the general evidence concerning gravity shows that gravity was the cause in this particular case, given that not all downward motions are caused by gravity. Not exactly like that, of course, but not exactly unlike it, either.

On the positive side, I am hoping that a clear explanation of the reasoning behind the PC Inequality that I favour might help with the next stage of the case, although I am unclear what that stage might be.
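To illustrate the kind of reasoning involved: on one standard formulation, the probability that exposure caused disease in an exposed case is bounded below by the excess fraction, PC ≥ (RR − 1)/RR, so a relative risk above 2 lifts the lower bound over the 0.5 “balance of probabilities” threshold. A minimal sketch follows; note that this bound only holds under background assumptions (for instance, that the exposure is not preventive in any subgroup) which are themselves contestable.

```python
def pc_lower_bound(relative_risk):
    """Lower bound on the probability of causation for an exposed case:
    PC >= (RR - 1) / RR. Valid only under contestable assumptions,
    e.g. that the exposure never prevents the disease in any subgroup."""
    return (relative_risk - 1) / relative_risk

print(pc_lower_bound(2.0))   # 0.5: RR = 2 is the 'doubling of risk' threshold
print(pc_lower_bound(9.0))   # ~0.89: a smoking-and-lung-cancer-sized RR
print(pc_lower_bound(1.7))   # ~0.41: below the balance-of-probabilities bar
```

The legally awkward upshot, visible in the last line, is that a genuine and large public health burden (heart disease) can fall below the individual-level threshold that a high-RR, low-burden disease (lung cancer) clears easily.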

Is consistency trivial in randomized controlled trials?

Here are some more thoughts on Hernan and Taubman’s famous 2008 paper, from a chapter I am finalising for the epidemiology entry in a collection on the philosophy of medicine. I realise I have made a similar point in an earlier post on this blog, but I think I am getting closer to a crisp expression. The point concerns the claimed advantage of RCTs for ensuring consistency. Thoughts welcome!

Hernan and Taubman are surely right to warn against too-easy claims about “the effect of obesity on mortality”, when there are multiple ways to reduce obesity, each with different effects on mortality, and perhaps no ethically acceptable way to bring about a sudden change in body mass index from, say, 30 to 22 (Hernán and Taubman 2008, 22). To this extent, their insistence on assessing causal claims as contrasts to well-defined interventions is useful.

On the other hand, they imply some conclusions that are harder to accept. They suggest, for example, that observational studies are inherently more likely to suffer from this sort of difficulty, and that experimental studies (randomized controlled trials) will ensure that interventions are well-specified. They express their point using the technical term “consistency”:

consistency… can be thought of as the condition that the causal contrast involves two or more well-defined interventions. (Hernán and Taubman 2008, S10)

They go on:

…consistency is a trivial condition in randomized experiments. For example, consider a subject who was assigned to the intervention group … in your randomized trial. By definition, it is true that, had he been assigned to the intervention, his counterfactual outcome would have been equal to his observed outcome. But the condition is not so obvious in observational studies. (Hernán and Taubman 2008, S11)

This is a non-sequitur, however, unless we appeal to a background assumption that an intervention—something that an actual human investigator actually does—is necessarily well-defined. Without this assumption, there is nothing to underwrite the claim that “by definition”, if a subject actually assigned to the intervention had been assigned to the intervention, he would have had the outcome that he actually did have.

Consider the intervention in their paper, one hour of strenuous exercise per day. “Strenuous exercise” is not a well-defined intervention. Weightlifting? Karate? Swimming? The assumption behind their paper seems to be that if an investigator “does” an intervention, it is necessarily well-defined; but on reflection this is obviously not true. An investigator needs to have some knowledge of which features of the intervention might affect the outcome (such as what kind of exercise one performs), and thus need to be controlled, and which don’t (such as how far west of Beijing one lives). Even randomization will not protect against confounding arising from preference for a certain type of exercise (perhaps because people with healthy hearts are predisposed both to choose running and to live longer, for example), unless one knows to randomize the assignment of exercise-types and not to leave it to the subjects’ choice.
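A toy simulation (with entirely invented numbers) makes the worry concrete: if subjects with healthy hearts both prefer running and live longer, then letting subjects pick their own exercise type produces a spurious longevity advantage for running, which disappears once the exercise type itself is randomly assigned.

```python
import random

random.seed(0)

def longevity(healthy):
    # In this toy model, lifespan depends only on heart health,
    # never on the type of exercise performed.
    return 70 + (10 if healthy else 0) + random.gauss(0, 5)

n = 20_000
self_selected = {"run": [], "other": []}
randomized = {"run": [], "other": []}

for _ in range(n):
    healthy = random.random() < 0.5
    # Self-selection: healthy subjects are much more likely to choose running.
    choice = "run" if random.random() < (0.8 if healthy else 0.2) else "other"
    self_selected[choice].append(longevity(healthy))
    # Randomized assignment of exercise type breaks the link with health.
    assigned = "run" if random.random() < 0.5 else "other"
    randomized[assigned].append(longevity(healthy))

def mean(xs):
    return sum(xs) / len(xs)

gap_selected = mean(self_selected["run"]) - mean(self_selected["other"])
gap_randomized = mean(randomized["run"]) - mean(randomized["other"])
print(round(gap_selected, 1))    # close to 6: runners look longer-lived
print(round(gap_randomized, 1))  # close to 0: exercise type does nothing
```

The point of the sketch is that randomization only helps if the investigator knows to randomize over exercise *type*; randomizing “one hour of strenuous exercise” while leaving the type to the subject reproduces exactly the confounding structure of the observational study.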

This is exactly the same kind of difficulty that Hernan and Taubman press against observational studies. So the contrast they wish to draw, between “trivial” consistency in randomized trials and a much more problematic situation in observational studies, is a mirage. Both can suffer from failure to define interventions.

Workshop, Helsinki: What do diseases and financial crises have in common?

AID Forum: “Epidemiology: an approach with multidisciplinary applicability”

(Unfamiliar with AID forum? For the very idea and the programme of Agora for Interdisciplinary Debate, see www.helsinki.fi/tint/aid.htm)

DISCUSSED BY:

Mervi Toivanen (economics, Bank of Finland)

Jaakko Kaprio (genetic epidemiology, U of Helsinki)

Alex Broadbent (philosophy of science, U of Johannesburg)

Moderated by Academy professor Uskali Mäki

Session jointly organised by TINT (www.helsinki.fi/tint) and the Finnish Epidemiological Society (www.finepi.org)

TIME AND PLACE:

Monday 9 February, 16:15-18

University Main Building, 3rd Floor, Room 5

http://www.helsinki.fi/teknos/opetustilat/keskusta/f33/ls5.htm

TOPIC: What do diseases and financial crises have in common?

Epidemiology has traditionally been used to model the spreading of diseases in populations at risk. By applying parameters related to agents’ responses to infection and networks of contacts, it helps to study how diseases occur, why they spread, and how one could prevent epidemic outbreaks. For decades, epidemiology has also studied non-communicable diseases, such as cancer, cardiovascular disease, addictions and accidents. Descriptive epidemiology focuses on providing accurate information on the occurrence (incidence, prevalence and survival) of the condition. Etiological epidemiology seeks to identify the determinants, be they infectious agents, environmental or social exposures, or genetic variants. A central goal is to identify determinants amenable to intervention, and hence the prevention of disease.

There is thus a need to consider both reverse causation and confounding as possible alternative explanations to a causal one. Novel designs are providing new tools to address these issues. But epidemiology also provides an approach that has broad applicability to a number of domains covered by multiple disciplines. For example, it is widely and successfully used to explain the propagation of computer viruses, macroeconomic expectations and rumours in a population over time.

As a consequence, epidemiological concepts such as “super-spreader” have found their way also to economic literature that deals with financial stability issues. There is an obvious analogy between the prevention of diseases and the design of economic policies against the threat of financial crises. The purpose of this session is to discuss the applicability of epidemiology across various domains and the possibilities to mutually benefit from common concepts and methods.
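The shared machinery can be illustrated with the simplest textbook compartmental model, the SIR model, which applies equally whether the “infection” is cholera, a computer virus, a rumour, or distress spreading among banks. A minimal discrete-time sketch, with arbitrary parameter values chosen only for illustration:

```python
def sir(n=1000, infected=1, beta=0.3, gamma=0.1, steps=200):
    """Discrete-time SIR model. beta is the transmission rate per contact,
    gamma the recovery (or 'resolution') rate. The population could be
    people, computers, or banks exposed to contagion."""
    s, i, r = n - infected, infected, 0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

h = sir()
peak = max(i for _, i, _ in h)
print(round(peak))      # size of the epidemic peak
print(round(h[-1][0]))  # susceptibles who escape infection entirely
```

With beta/gamma = 3 (an R0 above 1) an outbreak takes off and then burns out; the same threshold logic underlies “super-spreader” talk in both epidemiology and the financial stability literature.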

QUESTIONS:

1. Why is epidemiology so broadly applicable?

2. What similarities and differences prevail between these various disciplinary applications?

3. What can they learn from one another, and could the cooperation between disciplines be enhanced?

4. How could the endorsement of concepts and ideas across disciplines be improved?

5. Can epidemiology help to resolve causality?

READINGS:

Alex Broadbent, Philosophy of Epidemiology (Palgrave Macmillan 2013)

http://www.palgrave.com/page/detail/?sf1=id_product&st1=535877

Alex Broadbent’s blog on the philosophy of epidemiology:

https://philosepi.wordpress.com/

Rothman KJ, Greenland S, Lash TL. Modern Epidemiology, 3rd edition. Lippincott, Philadelphia, 2008.

D’Onofrio BM, Lahey BB, Turkheimer E, Lichtenstein P. Critical need for family-based, quasi-experimental designs in integrating genetic and social science research. Am J Public Health. 2013 Oct;103 Suppl 1:S46-55. doi:10.2105/AJPH.2013.301252.

Taylor AE, Davies NM, Ware JJ, VanderWeele T, Smith GD, Munafò MR. Mendelian randomization in health research: Using appropriate genetic variants and avoiding biased estimates. Economics and Human Biology 2014; 13: 99-106.

Engholm G, Ferlay J, Christensen N, Kejs AMT, Johannesen TB, Khan S, Milter MC, Ólafsdóttir E, Petersen T, Pukkala E, Stenz F, Storm HH. NORDCAN: Cancer Incidence, Mortality, Prevalence and Survival in the Nordic Countries, Version 7.0 (17.12.2014). Association of the Nordic Cancer Registries. Danish Cancer Society. Available from http://www.ancr.nu.

Andrew G. Haldane, Rethinking the Financial Network; speech by Mr Haldane, Executive Director, Financial Stability, Bank of England, at the Financial Student Association, Amsterdam, 28 April 2009: http://www.bis.org/review/r090505e.pdf

Antonios Garas et al., Worldwide spreading of economic crisis: http://iopscience.iop.org/1367-2630/12/11/113043/pdf/1367-2630_12_11_113043.pdf

Christopher D. Carroll, The epidemiology of macroeconomic expectations: http://www.econ2.jhu.edu/people/ccarroll/epidemiologySFI.pdf

Two recent papers

I’ve had two papers come out this month (/year!):

‘Risk Relativism and Physical Law’ in Journal of Epidemiology and Community Health – http://jech.bmj.com/content/69/1/92?etoc

‘Disease as a theoretical concept: The case of HPV-itis’ in Studies in History and Philosophy of Biological and Biomedical Sciences – http://www.sciencedirect.com/science/article/pii/S1369848614000910

Interventionism vs contrastivism about causation

I am trying to summarise (for a paper to be submitted to an epidemiology journal) the differences and similarities between interventionism (a la Woodward, Hitchcock) and contrastivism (a la Schaffer, perhaps Menzies) about causation. Are the following comments fair, and fairly comprehensive?

1) The two approaches share the view that the significance of causal claims is relative to some specified scenario in which some other specified event occurs rather than the cause (unlike pure counterfactual theories, which do not require any such specification).

2) Contrastivism (a la Schaffer) is primarily a semantic thesis about the meaning of “cause” and cognates. Specifically, contrastivism is committed to a descriptive thesis about ordinary talk, namely that “C causes E” implies “C rather than C* causes E rather than E*”, while interventionism is not committed to this (though compatible with it).

3) Interventionism seeks to explicate or shed light on characteristic patterns of counterfactual dependence that surround the instantiation of causes (without fully reducing causation to counterfactual dependence). Contrastivism does not seek to do this (though is compatible with doing so).

4) Both accounts are amenable to “cause” being left ultimately unanalysed, and proponents of each (e.g. Schaffer, Woodward) have expressed pessimism about the prospects of reducing causes to counterfactual dependence.

Comments/thoughts welcome. Thanks.

Is the Methodological Axiom of the Potential Outcomes Approach Circular?

Hernan, VanderWeele, and others argue that causation (or a causal question) is well-defined when interventions are well-specified. I take this to be a sort of methodological axiom of the approach.

But what is a well-specified intervention?

Consider an example from Hernan & Taubman’s influential 2008 paper on obesity. In that paper, BMI is shown to fail to correspond to a well-specified intervention; better-specified interventions include one hour of strenuous physical exercise per day (among others).

But what kind of exercise? One hour of running? Powerlifting? Yoga? Boxing?

It might matter – it might turn out that, say, boxing and running for an hour a day reduce BMI by similar amounts but that one of them is associated with longer life. Or it might turn out not to matter. Either way, it would be a matter of empirical inquiry.

This has two consequences for the mantra that well-defined causal questions require well-specified interventions.

First, as I’ve pointed out before on this blog, it means that experimental studies don’t necessarily guarantee well-specified interventions. Just because you can do it doesn’t mean you know what you are doing. The differences you might think don’t matter might matter: different strains of broccoli might have totally different effects on mortality, etc.

Second, more fundamentally, it means that the whole approach is circular. You need a well-specified intervention for a good empirical inquiry into causes and you need good empirical inquiry into causes to know whether your intervention is well-specified.

To me this seems to be a potentially fatal consequence for the claim that well-defined causal questions require well-specified interventions. For if that were true, we would be trapped in a circle, and could never have any well-specified interventions, and thus no well-defined causal questions either. Therefore either we really are trapped in that circle; or we can have well-defined causal questions, in which case, it is false that these always require well-specified interventions.

This is a line of argument I’m developing at present, inspired in part by Vandenbroucke and Pearce’s critique of the “methodological revolution” at the recent WCE 2014 in Anchorage. I would welcome comments.

Causation, prediction, epidemiology – talks coming up

Perhaps an odd thing to do, but I’m posting the abstracts of my two next talks, which will also become papers. Any offers to discuss/read welcome!

The talks will be at Rhodes on 1 and 3 October. I’ll probably deliver a descendant of one of them at the Cambridge Philosophy of Science Seminar on 3 December, and may also give a very short version of 1 at the World Health Summit in Berlin on 22 Oct.

1. Causation and Prediction in Epidemiology

There is an ongoing “methodological revolution” in epidemiology, according to some commentators. The revolution is prompted by the development of a conceptual framework for thinking about causation called the “potential outcomes approach”, and the mathematical apparatus of directed acyclic graphs that accompanies it. But once the mathematics are stripped away, a number of striking assumptions about causation become evident: that a cause is something that makes a difference; that a cause is something that humans can intervene on; and that epidemiologists need nothing more from a notion of causation than picking out events satisfying those two criteria. This is especially remarkable in a discipline that has variously identified factors such as race and sex as determinants of health. In this talk I seek to explain the significance of this movement in epidemiology, separate its insights from its errors, and draw a general philosophical lesson about confusing causal knowledge with predictive knowledge.

2. Causal Selection, Prediction, and Natural Kinds

Causal judgements are typically, if not invariably, selective. We say that striking the match caused it to light, but we do not mention the presence of oxygen, the ancestry of the striker, the chain of events that led to that particular match being in her hand at that time, and so forth. Philosophers have typically but not universally put this down to the pragmatic difficulty of listing the entire history of the universe every time one wants to make a causal judgement. The selective aspect of causal judgements is typically thought of as picking out causes that are salient for explanatory or moral purposes. A minority, including me, think that selection is more integral than that to the notion of causation. The difficulty with this view is that it seems to make causal facts non-objective, since selective judgements clearly vary with our interests. In this paper I seek to make a case for the inherently selective nature of causal judgements by appealing to two contexts where interest-relativity is clearly inadequate to fully account for selection. Those are the use of causal judgements in formulating predictions, and the relation between causation and natural kinds.

JOB: Post Doc – prediction, philosophy of epidemiology, philosophy of science

The Department of Philosophy at the University of Johannesburg seeks to appoint a postdoctoral research fellow to work under the supervision of Prof Alex Broadbent. In particular, ideas for work on (1) prediction or (2) philosophy of epidemiology are welcome; but any area of speciality within the philosophy of science broadly construed (including the philosophy of medicine) will be considered. Please send a CV, cover letter, and writing sample to abbroadbent@uj.ac.za by 26 September 2014. PhD must be in hand. Start date February 2015. Informal inquiries welcome to the same email address.