Potential Outcomes Approach as “epidemiometrics”

In a review of Jan Tinbergen’s work, Maynard Keynes wrote:

At any rate, Prof. Tinbergen agrees that the main purpose of his method is to discover, in cases where the economist has correctly analysed beforehand the qualitative character of the causal relations, with what strength each of them operates… [1]

Nancy Cartwright cites this passage in the introduction to her Hunting Causes and Using Them, in the context of describing the business of econometrics [2]. Her idea is that econometrics assumes that economics can be an exact science, that economic phenomena are governed by causal laws, and that its business is to quantify those laws. This makes econometrics a fruitful domain for studying the connection between laws and causes.

This helped me articulate an idea that first occurred to me at the 9th Nordic Conference of Epidemiology and Register-Based Health Research: that the potential outcomes approach (POA) to causal inference in epidemiology might be understood as the foundational work of a sub-discipline within epidemiology, related to epidemiology as econometrics is to economics. We might call it epidemiometrics.

This suggestion appears to resonate with Tyler VanderWeele’s contention that:

A distinction should be drawn between under what circumstances it is reasonable to refer to something as a cause and under what circumstances it is reasonable to speak of an estimate of a causal effect… The potential outcomes framework provides a way to quantify causal effects… [3]

The distinction between causal identification and estimation of causal effects does not resolve the various debates around the POA in epidemiology, since the charge against the POA is that as an approach (the A part in POA) it is guilty of overreach. For example, the term “causal inference” is used prominently where “quantitative causal estimation” might be more accurate [4]. 

Maybe there is a lesson here from the history of economics. While the discipline of epidemiology does not pretend to uncover causal laws, as economics does, it nevertheless does seek to uncover causal relationships, at least sometimes. The Bradford Hill viewpoints are for answering a yes/no question: “is there any other way of explaining the facts before us, is there any other answer equally, or more, likely than cause and effect?” [5]. Econometrics answers a quantitative question: what is the magnitude of the causal effect, assuming that there is one? This question deserves its own discipline because, like any quantitative question, it admits of many more precise and non-equivalent formulations, and of the development of mathematical tools. The POA deserves to be recognised not as an approach to epidemiological research, but as a discipline within epidemiology.

Many involved in discussions of the POA (including myself and co-authors) have made the point that the POA is part of a larger toolkit and that this is not always recognised [6,7], while others have argued that causal identification is a goal of epidemiology separate from causal estimation, and that it is at risk of neglect [8]. The emphasised components of these contentions, that the toolkit is not always recognised and that identification is at risk of neglect, do not in fact concern the business of discovering or estimating causality. They are points about the way epidemiology is taught, and about how it is understood by those who practise it. They are points, not about causality, but about epidemiology itself.

A disciplinary distinction between epidemiology and a sub-discipline of epidemiometrics might assist in realising this distinction, one that many are sensitive to, but that does not yet seem to have calmed the waters of discussions of causality. By “realising”, I mean enabling institutional recognition at departmental or research unit level, enabling people to list their research interests on CVs and websites, assisting students in understanding the significance of the methods they are learning, and, most important of all, softening the dynamics between those who “advocate” and those who “oppose” the POA. To advocate econometrics over economics, or vice versa, would be nonsensical, like arguing that linear algebra is more or less important than mathematics. Likewise, to advocate or oppose epidemiometrics would be recognisably wrong-headed. There would remain questions about emphasis, completeness, and the relative distribution of time and resources – but not about which is the right way to achieve the larger goals.

Few people admit to “advocating” or “opposing” the methods themselves, because in any detailed discussion it immediately becomes clear that the methods are neither universally applicable nor wholly inapplicable. A disciplinary distinction – or, more exactly, the recognition of a sub-discipline of study that contributes in a special way to the larger goals of epidemiology – might go a long way to alleviating the tensions that sometimes flare up, occasionally in ways that are unpleasant and to the detriment of the scientific and public health goals of epidemiology as a whole.

[1] J.M. Keynes, ‘Professor Tinbergen’s Method’, Economic Journal, 49, no. 195 (1939), 558-68.

[2] N. Cartwright, Hunting Causes and Using Them (New York: Cambridge University Press, 2007), 15.

[3] T.J. VanderWeele, ‘On causes, causal inference, and potential outcomes’, International Journal of Epidemiology, 45 (2016), 1809.

[4] M.A. Hernán and J.M. Robins, Causal Inference: What If (Boca Raton: Chapman & Hall/CRC, 2020).

[5] A. Bradford Hill, ‘The Environment and Disease: Association or Causation?’, Proceedings of the Royal Society of Medicine, 58 (1965), 299.

[6] J. Vandenbroucke, A. Broadbent, and N. Pearce, ‘Causality and causal inference in epidemiology: the need for a pluralistic approach’, International Journal of Epidemiology, 45 (2016), 1776-86.

[7] A. Broadbent, J. Vandenbroucke, and N. Pearce, ‘Response: Formalism or pluralism? A reply to commentaries on “Causality and causal inference in epidemiology”’, International Journal of Epidemiology, 45 (2016), 1841-51.

[8] S. Schwartz, N.M. Gatto, and U.B. Campbell, ‘Causal identification: a charge of epidemiology in danger of marginalization’, Annals of Epidemiology, 26 (2016), 669-73.

Workshop, Helsinki: What do diseases and financial crises have in common?

AID Forum: “Epidemiology: an approach with multidisciplinary applicability”

(Unfamiliar with AID forum? For the very idea and the programme of Agora for Interdisciplinary Debate, see www.helsinki.fi/tint/aid.htm)

DISCUSSED BY:

Mervi Toivanen (economics, Bank of Finland)

Jaakko Kaprio (genetic epidemiology, U of Helsinki)

Alex Broadbent (philosophy of science, U of Johannesburg)

Moderated by Academy professor Uskali Mäki

Session jointly organised by TINT (www.helsinki.fi/tint) and the Finnish Epidemiological Society (www.finepi.org)

TIME AND PLACE:

Monday 9 February, 16:15-18

University Main Building, 3rd Floor, Room 5

http://www.helsinki.fi/teknos/opetustilat/keskusta/f33/ls5.htm

TOPIC: What do diseases and financial crises have in common?

Epidemiology has traditionally been used to model the spreading of diseases in populations at risk. By applying parameters related to agents’ responses to infection and networks of contacts, it helps to study how diseases occur, why they spread, and how one could prevent epidemic outbreaks. For decades, epidemiology has also studied non-communicable diseases, such as cancer, cardiovascular disease, addictions and accidents. Descriptive epidemiology focuses on providing accurate information on the occurrence (incidence, prevalence and survival) of the condition. Etiological epidemiology seeks to identify the determinants, be they infectious agents, environmental or social exposures, or genetic variants. A central goal is to identify determinants amenable to intervention, and hence the prevention of disease.
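As a minimal illustration of such a model, consider the classic SIR (susceptible-infected-recovered) sketch below, in which beta stands in for the intensity of contact and transmission and gamma for the recovery response; the parameter values and the simple stepping scheme are purely illustrative assumptions:

    # Minimal SIR sketch: beta captures contact/transmission intensity,
    # gamma the recovery response. All values below are illustrative.

    def sir_step(s, i, r, beta, gamma, dt=0.1):
        """Advance susceptible/infected/recovered fractions by one Euler step."""
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        return (s - new_infections,
                i + new_infections - new_recoveries,
                r + new_recoveries)

    # Start with 1% infected; beta/gamma = 3, so an outbreak takes off.
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(2000):
        s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
    print(f"Fraction never infected: {s:.2f}")

Even this toy version shows the policy-relevant structure: lowering beta (fewer contacts) or raising gamma (faster removal) changes how far the epidemic spreads.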

In identifying such determinants, there is thus a need to consider both reverse causation and confounding as possible alternative explanations to a causal one. Novel designs are providing new tools to address these issues. But epidemiology also provides an approach that has broad applicability to a number of domains covered by multiple disciplines. For example, it is widely and successfully used to explain the propagation of computer viruses, macroeconomic expectations, and rumours in a population over time.

As a consequence, epidemiological concepts such as “super-spreader” have also found their way into the economic literature that deals with financial stability issues. There is an obvious analogy between the prevention of diseases and the design of economic policies against the threat of financial crises. The purpose of this session is to discuss the applicability of epidemiology across various domains and the possibilities for mutual benefit from common concepts and methods.

QUESTIONS:

1. Why is epidemiology so broadly applicable?

2. What similarities and differences prevail between these various disciplinary applications?

3. What can they learn from one another, and how could cooperation between disciplines be enhanced?

4. How could the adoption of concepts and ideas across disciplines be improved?

5. Can epidemiology help to resolve causality?

READINGS:

Alex Broadbent, Philosophy of Epidemiology (Palgrave Macmillan, 2013)

http://www.palgrave.com/page/detail/?sf1=id_product&st1=535877

Alex Broadbent’s blog on the philosophy of epidemiology:

https://philosepi.wordpress.com/

Rothman KJ, Greenland S, Lash TL. Modern Epidemiology, 3rd edition. Lippincott, Philadelphia, 2008.

D’Onofrio BM, Lahey BB, Turkheimer E, Lichtenstein P. Critical need for family-based, quasi-experimental designs in integrating genetic and social science research. Am J Public Health. 2013 Oct;103 Suppl 1:S46-55. doi:10.2105/AJPH.2013.301252.

Taylor AE, Davies NM, Ware JJ, VanderWeele T, Smith GD & Munafò MR (2014), ‘Mendelian randomization in health research: Using appropriate genetic variants and avoiding biased estimates’, Economics and Human Biology, vol. 13, pp. 99-106.

Engholm G, Ferlay J, Christensen N, Kejs AMT, Johannesen TB, Khan S, Milter MC, Ólafsdóttir E, Petersen T, Pukkala E, Stenz F, Storm HH. NORDCAN: Cancer Incidence, Mortality, Prevalence and Survival in the Nordic Countries, Version 7.0 (17.12.2014). Association of the Nordic Cancer Registries. Danish Cancer Society. Available from http://www.ancr.nu.

Andrew G. Haldane, Rethinking the Financial Network; speech by Mr Haldane, Executive Director, Financial Stability, Bank of England, at the Financial Student Association, Amsterdam, 28 April 2009: http://www.bis.org/review/r090505e.pdf

Antonios Garas et al., Worldwide spreading of economic crisis: http://iopscience.iop.org/1367-2630/12/11/113043/pdf/1367-2630_12_11_113043.pdf

Christopher D. Carroll, The epidemiology of macroeconomic expectations: http://www.econ2.jhu.edu/people/ccarroll/epidemiologySFI.pdf

Snakes, statistics, and goals for the goal-setters

Cesar Victora gave a very interesting talk earlier today concerning the International Epidemiological Association’s position paper on the UN’s Sustainable Development Goals, which are currently being drafted (to replace the Millennium Development Goals post-2015). Victora is President of the IEA, for a few more hours at least (the new President takes office this evening). Many of his points were reiterated by the next speaker, Theodor Abelin, and in questions from the floor. There were no audible voices of dissent. (The talk reflects a fuller position paper, available here.)

The point that stayed with me most from Victora’s rich talk was the importance of relating goals to appropriate measurement techniques. My own interest in epidemiology has tended to focus on efforts to identify causes (“analytic” epidemiology), since causation is a natural magnet for philosophical interest. But measurement is also a focus of philosophical interest, and Victora nicely pointed out that “descriptive” epidemiology – the business of measuring things like the maternal mortality rate, for example – is extremely important if these Sustainable Development Goals are to be effective. A country cannot be held to a goal that cannot be measured, and it cannot fairly be held to a goal when progress towards that goal is estimated rather than measured.

For example, I was not surprised to learn that in many countries where maternal mortality is high, data on maternal mortality rates (MMRs) are scarce. What did surprise me was hearing about the calculations that some august international organisations perform in the absence of data. A calculation is performed involving GDP per capita, general fertility rate and skilled birth attendance, and MMR is estimated as a function of these and perhaps some other similar variables. This means that if the country goes through a recession, the estimated MMR will automatically go up. Perhaps it really will go up, but it seems strange to think of that calculation as a measurement, at least in the absence of extremely good evidence for the reliability of the estimating equation – evidence which, of course, we don’t have.
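To make the worry concrete, here is a hypothetical sketch of the kind of estimating equation described. The log-linear form and every coefficient are invented for illustration; the point is only that the “measured” MMR is a model output, so a fall in GDP per capita mechanically raises the estimate:

    # Hypothetical estimating equation; functional form and coefficients
    # are invented for illustration, not taken from any agency's model.
    import math

    def estimated_mmr(gdp_per_capita, general_fertility_rate, skilled_birth_attendance):
        """Illustrative log-linear predictor of maternal mortality ratio
        (deaths per 100,000 live births) from proxy covariates."""
        log_mmr = (
            10.0                                  # invented intercept
            - 0.8 * math.log(gdp_per_capita)      # richer country, lower estimate
            + 0.5 * math.log(general_fertility_rate)
            - 1.2 * skilled_birth_attendance      # proportion of attended births, 0-1
        )
        return math.exp(log_mmr)

    before = estimated_mmr(1500, 4.5, 0.4)
    after = estimated_mmr(1200, 4.5, 0.4)   # recession: GDP falls, nothing else changes
    print(f"estimated MMR: {before:.0f} -> {after:.0f} per 100,000 live births")

No maternal death need have occurred for the output to rise; the recession alone moves it.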

MMR is measurable, of course. The problem with MMR is simply a lack of data, and this problem afflicts a large class of conditions. As Victora put it in relation to snakebite: “Where we have snakes, we don’t have statistics, and where we have statistics, we don’t have snakes.”

However, Victora’s most penetrating critique of the SDGs concerned the setting of goals in the absence of clear ideas about how progress towards the goals will be measured. The health-related goal is as follows:

“Goal 3. Ensure healthy lives and promote well-being for all at all ages” (from the Outcome Document)

This overarching goal is broken down into 13 subgoals, some of which are very loosely specified. For instance, how are we to tell whether a country has managed to “strengthen prevention and treatment of substance abuse, including narcotic drug abuse and harmful use of alcohol”? Ironically, those goals that are most clearly specified are wildly unattainable, such as halving global deaths and injuries from road traffic accidents by 2020. Those that are not well specified present measurement challenges for epidemiologists.

This made me wonder whether a body like the IEA could itself set some “goals for the goal-setters” – that is, criteria which any health-related goal must meet if, in the professional opinion of the IEA, it is to be useful. The simplest such criterion would be that outcomes must be specified in terms of a recognised epidemiological measure (mortality, for instance). Another might be to accompany each goal with information (perhaps in a corresponding entry in an appendix) concerning the trend over the past similar period: so if the goal is to halve road traffic deaths in 15 years, or 25, information on the growth of road traffic deaths over the past 15 or 25 years might be included. Goals of this kind will always be political, but there might be agreement on a set of simple rules for setting such goals, and if such rules existed, this might pull epidemiologists closer in to the goal-setting process – a kind of politicking which, as one of the questioners pointed out, is not part of standard epidemiological training.

 

Stability: an epidemiological ingredient in the realism debate?

I’m preparing a talk on stability for the New Thinking in Scientific Realism Conference that opens in Cape Town tomorrow. I introduced the notion of stability in my book, defined like this:

“A result, claim, theory, inference, or other scientific output is stable if and only if

(a) in fact, it is not soon contradicted by good scientific evidence; and

(b) given best current scientific knowledge, it would probably not be soon contradicted by good scientific evidence, if good research were done on the topic.” (Broadbent 2013, 63)

The introduction of this notion was a response to the perceived difficulties around “translating” epidemiological (or, more generally, biomedical) findings into good health policy. At EuroEpi in Porto in 2012, I argued that translation was not the main or only difficulty for using epidemiological results, and that stability – or rather, the lack of it – was important. After all, one cannot comfortably rely on a result if one cannot be confident that the next study won’t completely contradict it, and that seems to happen pretty often in at least some areas of epidemiological investigation.

Thus the reasons for introducing the notion were thoroughly practical. More recently, though, I have been trying to tighten up the philosophical credentials of the notion, and that’s what I’m going to be talking about in Cape Town. Is stability epistemically significant? Can it be shown to be epistemically significant without collapsing into approximate truth? Can it be distinguished from approximate truth without collapsing into empirical adequacy? These are the questions I will seek to answer.

What’s interesting for me is that, as far as I can see, it’s pretty easy to answer these questions affirmatively. If I’m right about that, then this will be a nice case where studying actual science gives rise to new philosophical insights. The desire to make public health policy that will not have to be revised six months down the line is eminently practical; yet the proposal of a status that scientific hypotheses might have, distinct from truth and empirical adequacy and all the rest, is eminently abstract. If stability really is both defensible and novel, then it will illustrate the oft-repeated mantra that philosophers of science would benefit from looking more closely at science. I am personally put on guard when I hear that said, not because I disagree in principle, but because experience has taught me to suspect either lip service, or an excuse for poor philosophy. Perhaps I’m also guilty of one or both of these; I will be interested to see what Cape Town says.

The Myth of Translation

Next week I am part of a symposium at EuroEpi in Porto, Portugal with the title Achieving More Effective Translation of Epidemiologic Findings into Policy when Facts are not the Whole Story.

My presentation is called “The Myth of Translation” and the central thesis is, as you would guess, that talk of “translating” data into policy, discoveries into applications, and so forth is unhelpful and inaccurate. Instead, I am arguing that the major challenge facing epidemiological research is assuring non-epidemiologists who might want to rely on those results that they are stable, meaning that they are not likely to be reversed in the near future.

I expect my claim to be provocative in two ways. First, the most obvious reasons I can think of for the popularity of the “translation” metaphor, given its clear inappropriateness (which I have not argued here but which I argue in the presentation), are unpleasant ones: the claiming of scientific authority for dearly-held policy objectives, or the blaming of some sort of translational failing for what are actually shortcomings (or, perhaps, over-ambitious claims) in epidemiological research. This point is not, however, something I intend to emphasize; nor am I sure it is particularly important. Second, the claim that epidemiological results are reasonably regarded by non-epidemiologists as too unstable to be useful might be expected to raise a bit of resistance at an epidemiology conference.

Given the possibility that what I have to say will be provocative, I thought I would try my central positive argument out here.

(1) It is hard to use results which one reasonably suspects might soon be found incorrect.

(2) Often, epidemiological results are such that a prospective user reasonably suspects that they will soon be found incorrect.

(3) Therefore, often, it is hard to use epidemiological results.

I think this argument is valid, or close enough for these purposes. I think that (1) does not need supporting: it is obviously true (or obviously enough for these purposes). The weight is on (2), and my argument for (2) is that from the outside, it is simply too hard to tell whether a given issue – for example, the effect of HRT on heart disease, or the effect of acetaminophen (paracetamol) on asthma – is still part of an ongoing debate, or can reasonably be regarded as settled. The problem infects even results that epidemiologists would widely regard as settled: the credibility of the evidence on the effect of smoking on lung cancer is not helped by reversals over HRT, for example, because from the outside, it is not unreasonable to wonder what the relevant difference is between the pronouncements on HRT and the pronouncements on lung cancer and smoking. There is a difference: my point is that epidemiology lacks a clear framework for saying what it is.

My claim, then, is that the main challenge facing the use of epidemiological results is not “translation” in any sense, but stability; and that devising a framework for expressing to non-epidemiologists (“users”, if you like) how stable a given result is, given best available current knowledge, is where efforts currently being directed at “translation” would be better spent.

Comments on this line of thought would be very welcome. I am happy to share the slides for my talk with anyone who might be interested.

Taubes’ Tautology

In the once fertile garden of epidemiology, all is not well, according to some commentators. The low-hanging fruit has been plucked, and the epidemiological ladder is not long enough to bring the remainder within reach. Possibly the most famous expression of this dissatisfaction is a report by a journalist writing in Science in 1995 called “Epidemiology Faces Its Limits”. Gary Taubes cites a number of contrary findings, where exposures have been found to be harmful and then safe (or vice versa) in different studies, or harmful in different ways, or harmful when studied using one study design but not when using another. He interviews a number of eminent epidemiologists and reaches a simple diagnosis: epidemiology has spotted the big effects already, and is now scrabbling around trying to identify small ones. These are much harder to distinguish from biases or chance effects. Indeed, he hypothesizes that epidemiological methods may be unable to tell the difference at all, in some cases. In this sense, Taubes suggests, epidemiology is facing its limits.

The epidemiological garden is still growing nearly two decades later. Either the gardeners did not listen, and continued to tend fruitless trees, or Taubes’ diagnosis was wrong. But epidemiologists did listen: the piece is well-known. Moreover, epidemiologists are among the most methodologically reflective and self-critical of scientists, which is evident from the fact that most of Taubes’ criticism is drawn directly from interviews with epidemiologists (and which is one reason epidemiologists are such a pleasure to engage with philosophically). The implication is that Taubes’ low-hanging fruit hypothesis is mistaken.

Taubes’ hypothesis is tempting because it is true that big discoveries lie in the past. It is, however, a fallacy to suppose that this means no big discoveries lie in the future. On inspection, the tempting hypothesis reveals itself as an instance of a very common theme: that we are nearing the end of what inquiry can tell us. This has been said before, most famously in physics shortly before Einstein’s impact. If the history of science tells us anything, it is that this claim is always false. We know more about the past than the future, and so we know what the big discoveries of the past are, but not what the big discoveries of the future will be. If there were low-hanging fruit that epidemiology had not yet plucked, we would not know it, even if it were going to be plucked tomorrow afternoon. More is needed to prove that epidemiology faces its limits than the tautologous claim that its most striking discoveries to date lie in the past.

“The Exposome” – a lab for epidemiology?

In February 2011, Nature ran a journalistic piece on the development of technologies designed to increase the accuracy of measuring exposures, spurred by various dissatisfactions with questionnaires. (Thanks to Thad Metz for pointing me to this.) The “exposome” is presented in that piece as the logical conclusion of improved measurement techniques. It is supposed to be a device (I am imagining an enormous plastic bubble) capable of measuring every exposure of study subjects. A quick hunt around the internet reveals that the idea is capturing at least a few imaginations, including some at the US Centers for Disease Control.

The CDC’s Overview of the exposome defines the exposome like this:

The exposome can be defined as the measure of all the exposures of an individual in a lifetime and how those exposures relate to health.

The idea of the exposome suggested two questions to me.

First, the idea of the exposome puts pressure on the concept of an exposure. In most epidemiological practice, the question “What is an exposure?” is of no practical importance. But if the aim is to measure every exposure, then we must answer the question in order to know whether we have succeeded.

The CDC article contrasts the target of the exposome with genetic risk factors, suggesting that exposures exclude genetic make-up. But the CDC article also suggests that exposures measured by the exposome may begin before birth. (I am imagining babies born in little plastic bags.) So it is not clear exactly what the rationale for excluding genetic make-up from “exposures” would be. If the goal is simply to measure anything that might affect a given health outcome then we should include genetics. We should also include our entire solar system, indeed the galaxy, so as to account for the effects of solar flares, meteorites, and so forth. (The plastic bubble in my imagination is getting very big.)

My first worry, then, about “exposomics” is that it will not get very far without circumscribing the notion of exposure, so as to be something less than what the authors of the CDC overview probably think they mean – that is, something less than all factors potentially affecting health outcomes.

My second question is whether striving for an exposome is a good idea, judged by the goals of epidemiology, which I take to be providing information which can be used to improve public health.

One central point of epidemiology is that it studies people, not in labs, but as they actually live their lives. The exposome is a sort of lab, and striving for it is nothing other than striving for the controlled experiment. Aside from the complete fantasy of ever achieving an exposome (my imaginary bubble just burst), it does not seem helpful even to “study” the exposome, or whatever else it is “exposomists” are supposed to do. (And, incidentally, it does not seem that the exposome is a logical extension of increasing accuracy of measurements of exposure.) Epidemiologists want to know what happens in reality, not in the exposome.

Epidemiology and laboratory sciences complement each other in this way. Tar may be shown to produce cancer in the skin of laboratory rats, but epidemiology tells us what happens when humans smoke cigarettes. Each source of knowledge has its flaws. Causal inference is harder in epidemiology because of the lack of control over potentially relevant variables: exposures, for short. But lab sciences suffer a different inferential limitation: not in making a causal inference, but in inferring that the results obtained in the lab will apply outside it. So it is hard to see how doing away with either source of knowledge could be a good idea, and hard to see what “exposomics” could add to epidemiology, except another buzz word.