
4/3/14

Raffaella Campaner, Philosophy of medicine. Causality, Evidence and Explanation, Bologna, Archetipo Libri, 2012

For more than a decade now, there has been a growing interest in the philosophy of medicine as a scientific discipline. Already in 2005, Raffaella Campaner published a monograph in Italian on causality and explanation in medicine (Spiegazione e cause in medicina: un'indagine epistemologica), showing how philosophy of science could be successfully applied to biomedical research. Throughout this decade, Campaner has published a series of papers in English on the same topics, now compiled in the volume under review. Most of these papers originally appeared in edited collections or journals where medicine was not the central topic, so republishing them together in a single volume makes sense for the interested reader. Moreover, the Italian publisher has produced a decently edited but inexpensive book, so, all in all, philosophers of medicine should welcome it.

Campaner has gathered here nine papers plus an introduction. Their structure is somewhat similar: the author presents different philosophical positions (mostly on causality, but also on explanation) and proceeds to illuminate them with medical case studies, arguing on this basis for her own claims. The reader will thus find an introduction to the following accounts of causality: mechanistic (Salmon, Machamer-Darden-Craver, Glennan), interventionist (Woodward) and manipulative (Price and Menzies), with a brief digression on counterfactuals (Lewis). Despite featuring equally in the book's title, explanation and evidence do not receive introductory surveys of their own. Campaner instead considers plenty of medical explanations and pieces of evidence and examines how they may fit the different philosophical accounts of causality presented. Among her case studies, two of the most detailed concern deep brain stimulation (a therapy for Parkinson's disease) and anti-AIDS treatments. Campaner also deals in several papers with epidemiological and psychiatric causation.

The book puts forward a pluralistic perspective on causation, showing how in actual medical practice we may find all the above-mentioned approaches complementing (rather than competing with) each other. The choice often depends on the methods implemented and the context of implementation. The author does not try to construct a principled argument for causal pluralism: as she acknowledges, “lots of work is still to be done before a plausible and coherent view will be settled on and shared” (p. 60). The strength of her argument is empirical: there is no evidence that a “one size fits all” concept of causation can cope with the diversity of causal approaches at work in medical practice. However, Campaner also draws on a conceptual insight emerging from this diversity: diseases would be multilevel phenomena (ranging from molecules, cells and tissues upwards to the whole organism) and medicine (siding here with Schaffner, p. 11) would be a set of middle-range theories coping with them. Campaner adopts here a sort of meta-philosophical instrumentalism regarding such deeply entrenched methodological divides as the one opposing reductionism to anti-reductionism: as she illustrates in chapter 7, both strategies have been fruitful in medicine and both might make sense contextually. In this respect, I think it is worth noticing how difficult it is to sustain even moderately pluralist stances about medical causality such as the Russo-Williamson thesis –according to which we would need both mechanistic and probabilistic evidence to properly ground causal discoveries. As Campaner argues in chapter 2, medicine has been quite capable of making progress without mechanisms; and when we do have them, we often need manipulative evidence, in addition to statistics, to properly ground them.

Campaner constantly reminds us that her pluralism does not “amount to treat all available methodological options as equal in value” (e.g., p. 133), but the book focuses mostly on cases where there is more complementarity than straightforward competition between the alternatives considered, and all of them are worthy, at this point in time, of scientific consideration. Historically, though, medicine has not been as peaceful as it might seem today. In The Rise of Causal Concepts of Disease (2003), for instance, K. Codell Carter forcefully argued that scientific medicine began with the adoption of the etiological standpoint, the view that every disease has a single cause which is both necessary and sufficient for the disease, showing how this approach was crucial for progress in treatment. Even today, I would say that medicine is not really pluralist when it comes to decision making about new therapies: we still rely on their success in randomized clinical trials as a rule. Of course, trials might be interpreted from different causal stances, but not all of them are equally captured in their design: mechanistic knowledge, for instance, does not currently qualify the value of a trial in most hierarchies of medical evidence. In other words, as of today, philosophical pluralism about causation may faithfully reflect the way medicine is practiced, and methodological diversity may be in itself a fruitful research strategy. But I think Campaner’s claim would have been more balanced had it also considered cases in which there was open disagreement about causality between competing research agendas.

My major qualm with this volume is that the papers were not edited for the compilation. Reading it from cover to cover can therefore feel somewhat repetitive, since the same items are often revisited in different chapters. However, this makes it really suitable for use in undergraduate courses, in particular when teaching philosophy of science to medical students, since most concepts are explained in accessible detail and illustrated with theories they will certainly be familiar with. The name index at the end is particularly helpful in tracing different approaches throughout the book, and keeping the original abstracts at the beginning of each chapter is equally useful in guiding the uninitiated reader. Campaner’s is thus not only a good philosophy-of-science-in-practice book, but also a very accessible book in itself.

{October 2013}
{International Studies in the Philosophy of Science 27.4 (2013), 456-458}

A. Briggle & C. Mitcham, Ethics and Science: an Introduction, Cambridge, Cambridge University Press, 2012

I don’t know if this qualifies as a conflict of interest, but I must admit I volunteered to review this book because I am teaching a course on ethics and science and there are not many comprehensive textbooks available. A strength of Briggle and Mitcham’s volume that immediately caught my eye is that it brings together an updated introduction to the ethics, philosophy and political sociology of science. Chapters 2-7 cover ethical concepts and theories, research ethics (codes, research with human and animal subjects), norms in science and recent naturalist approaches to ethics. Chapters 9-10 deal with the main issues in science policy. Chapter 11 discusses the broader connections between science and culture, and the twelfth and final chapter analyzes ethics and engineering.

The structure of the chapters is clear: they all open with a quick case presentation, followed by several sections and a final summary, plus a closing case study with questions and readings for further reflection. There is a general list of references at the end, together with a complete subject index and addresses for ethics codes and declarations. All in all, Briggle and Mitcham’s volume has everything one would in principle expect from a textbook. However, I am not quite sure about how to make the best of it in class. 

The authors do not take anything for granted, so their presentations are as introductory and accessible as possible. This should make the book particularly accessible to science and engineering students, their most likely audience. To make it even easier to read, the authors often adopt an informal narrative tone, in which each case is presented as it unfolded historically, highlighting landmarks, often in a casual manner. Briggle and Mitcham rarely take an opinionated stance: we find the standard account on most topics. But it is sometimes simplified to a point where I was left wondering what use we can make of it to assess real-world dilemmas. For instance, on page 45, we learn how virtue ethics is relevant for science. First, through an analogy between virtue ethics and virtue epistemology. Then, virtue ethics would also highlight “the importance of training processes and mentor–mentee relationships”, so often neglected in Big Science. Finally, virtue ethics can be used to object against “the wisdom of pursuing certain physical or cognitive enhancements”.

But, in fact, there is no other mention in the book of virtue epistemology, and I guess most uninitiated readers won’t make much sense of its relevance for science from just one paragraph. I was left wondering why training processes are more defensible (or interesting) from virtue ethics than from any other approach, or why virtue ethicists would be more opposed to cognitive enhancement than, for instance, a deontologist. I am not saying that such tenets cannot be defended, but rather that we do not find full-fledged arguments for any of them.

This is just an instance of a problem I found throughout the book: the reader gets acquainted with plenty of interesting topics, but we rarely find a detailed discussion of any of them. This might just be the expression of my own preferences, but I think that the ultimate goal of a course on ethics and science should be to make the student capable of arguing his or her case as thoroughly as possible. For Briggle and Mitcham, I would say, the goal is rather to increase the student’s awareness of every contentious point at the intersection of ethics and science (and these are many). It is interesting to note that chapters do not have exercises targeting their core points directly, but rather sets of questions on the closing case studies. The questions are often very open (e.g., “to what extent was this morally justified?”) and no template is provided to answer them.

I guess that my concern might be shared both by the analytic philosopher and the STS scholar: if the former would care about the arguments not being fully developed, the latter would surely miss the many details that articulate the best case studies in STS. The many vignettes illustrating each chapter are certainly engaging, but I think it would have been instructive to present more thoroughly some cases, showing in detail how we can address them from various perspectives. 

Despite this concern, I think the material provided in this book is so rich and up to date that it may be worth trying in class, at least as a starting point. Given how quickly this field evolves, I guess it is better to make the most of the textbook while it is fresh and test it as thoroughly as we can, to see whether anyone comes up with a better alternative. That will probably not be easy.

{May 2013}

26/12/12


Peter Stone, The Luck of the Draw: The Role of Lotteries in Decision Making, Oxford University Press, 2011, 195pp., $49.95 (hbk), ISBN 9780199756100

Almost everybody is familiar with one or two instances of the use of lotteries for public decision-making: allocating immigration quotas or selecting juries are, perhaps, among the most popular today, but it is indeed a very old practice that has been documented in Greece, Israel, and other ancient cultures. In other words, it is a deeply rooted tradition in Western politics, and yet, until very recently, there has been no systematic account of its normative foundations and applications. The Luck of the Draw is one of very few attempts at finding some conceptual unity in the political use of lotteries. Peter Stone considers here the two main types of decision by lot: the allocation of scarce goods and the assignment of public offices. His take is clearly normative, despite the wealth of evidence on lotteries discussed in the book. Stone investigates when lotteries are desirable, and in particular when they are fair.

Stone’s case for lotteries hinges on the conceptual systematization and clarification of what lotteries are and when it is fair to use them. This is achieved through two normative statements, the lottery principle and the just lottery rule. First, Stone contends that there is an encompassing rationale for every use of randomization in decision-making processes, which he calls the lottery principle. According to this principle (p. 37), we would be justified in requesting that a decision be made by lot if doing so prevents the decision maker from influencing the outcome on the basis of reasons. This is “the sanitizing effect” of lotteries: they screen off some reasons from the decision process, be they good or bad, because the outcome of the decision becomes unpredictable for the decision-maker. But, of course, lotteries seem defensible only in cases where bad reasons may affect decisions, either because there are no good ones or because, if there are, we cannot filter the bad ones out.

Stone then analyses the circumstances under which we would be normatively justified in using lotteries for the allocation of goods. Again, there is a general justifying principle, the just lottery rule (p. 53): “under conditions of indeterminacy, if an agent must allocate a scarce homogeneous lumpy good amongst a group of parties with homogeneous claims, then the agent must do so using a fair lottery”. In other words, if there is a suitable division of a good to which all parties are equally entitled, but there is not enough of it for everybody, a fair lottery is the most impartial tie-breaker: it takes into account only the equal strength of the claims over it (p. 78). This is guaranteed by the sanitizing effect of the lottery principle, since in a random allocation every other reason (merit, desert, nepotism…) is simply left aside. If the lottery gives equal probability to equal rights over the good, the allocation procedure will be fair. According to Stone, the just lottery rule is grounded in Scanlon’s contractarian approach: in cases of indeterminacy, every other alternative allocation rule (whether its outcome is certain or uncertain, as in a weighted lottery) can be reasonably rejected by the involved parties.

Apart from systematizing and clarifying the concept of a fair lottery for the allocation of scarce goods, Stone’s case rests on a critical examination of other normative approaches to lotteries and alternative allocation rules. As to the former, Stone takes issue with justifications of lotteries in terms of consent (Goodwin), equality of opportunities and equality of expectations (Kornhauser and Sager), as well as with alternative contractarian foundations (namely, Rawls and Harsanyi). Stone also considers eight alternative allocation procedures (e.g., desert, queuing, etc.). The discussion is often quick, but insightful too.

Finally, Stone adds a chapter on the assignment of public responsibilities by lot (usually known as sortition). The author contends that there is no unifying principle to justify its many uses documented throughout history. Stone discusses instead three kinds of arguments for sortition: allocative justice –rehearsing the previous arguments–, incentive alignment and descriptive representation. The incentive-alignment argument exploits the capacity of randomization to screen off perverse motivations in politics, e.g., nepotism in the appointment of public officials. The descriptive-representation argument draws on the virtues of lotteries to make assemblies representative in a statistical (descriptive) sense: a random draw from a population with certain relevant characteristics can provide a sample that faithfully represents the distribution of those characteristics in the whole. But, of course, in both cases there is a clear trade-off between the virtues of lotteries and their various side-effects (e.g., the exclusion of qualified candidates), so Stone does not consider any of these arguments conclusive. The Luck of the Draw ends with a long concluding chapter addressing nine loose ends.

Even if the reader disagrees with Stone’s arguments, it must be granted that the two principles provide an excellent thread with which to organize a compact overview of political lotteries. In my view, the author is right to adopt impartiality as the key to his normative analysis: it is open to debate whether lotteries yield (e.g.) equality of opportunity, but there is a clear consensus on the role of randomization as a warrant of impartiality even in purely epistemic contexts: we want a randomized allocation of treatments to patients in a clinical trial, for example, to prevent physicians from giving one particular drug to the patients they feel would benefit more. Humanitarian as this may be, it would spoil a fair comparison between treatments. However, Stone argues as if the normative leverage of lotteries stemmed from the exclusion of reasons alone: in allocating scarce goods, it would not be reasonable to oppose a fair lottery, where reasons play no role. But the superiority of such lotteries (compared to other allocation methods) lies in the fact that they sanitize not just reasons, but conscious and unconscious biases of all sorts. Psychologists have documented at length how people acting with the best reasons unwittingly deviate from them in a systematic fashion: a physician allocating treatments may be sincerely convinced that he treated all patients the same; but the data often reveal otherwise. This is the impartiality bias: we tend to think that we are more neutral than we actually are. Hence, for purely strategic considerations, I will request a randomized procedure if, for whatever reason, I suspect that the allocation of a scarce good may otherwise be biased (e.g., Berry & Kadane 1997), even if the people in charge of the allocation were guided by the best reasons alone. An interest-based contractarianism may provide in this respect a broader foundation for the impartiality of lotteries than Scanlonian reasonability.
But biases and interests are simply left aside in The Luck of the Draw.

Stone does a wonderful job of unifying the discussion of scarce-good lotteries under the umbrella of impartiality. But fair lotteries as allocation mechanisms are often uncontroversial: whatever the reasons we have to accept them, there is experimental evidence showing that we tend to like them (e.g., Bolton et al. 2005). I would have liked to see a more thorough discussion of the limitations of impartiality in the justification of sortition. According to Stone, we would need a general account of political decision-making (p. 123) if we were to construct all-encompassing rules of justice for sortition. But my impression, at least, is that the justice of lotteries depends implicitly on the type of good we are drawing lots for. When treatments are randomized in a clinical trial, the resulting allocation is often corrected because it doesn’t look random enough (e.g., one treatment goes to men and the other to women). Whatever we do, there is a potential source of bias in such cases: maybe the unbalanced allocation is in itself biased, but if we grant ourselves an unrestricted right to correct it, we may end up biasing it anyway. Whereas in scarce-good lotteries only a few allocations may seem unbalanced, the outcomes of sortition are much more open to controversy: as the randomized selection of criminal and civil juries in the United States shows, for instance, there is an endless supply of motives for defense attorneys to strike jurors. The impartiality of the procedure in lotteries of this sort does not always provide a good enough reason to accept the outcome.

In sum, it would have been interesting to explore empirically what makes a lottery acceptable, addressing lotteries as a procedure to control subjective biases and not only as the implementation of general criteria of justice. However, the exploration of the latter is enough to make The Luck of the Draw an excellent read.


Berry, Scott M., and Joseph B. Kadane. 1997. Optimal Bayesian Randomization. Journal of the Royal Statistical Society. Series B (Methodological) 59: 813-819.
Bolton, Gary E., Jordi Brandts, and Axel Ockenfels. 2005. Fair Procedures: Evidence from Games Involving Lotteries. The Economic Journal 115: 1054-1076.

{August 2012}
{Economics and Philosophy 29.1 (2013), 139-142}


Philip Dawid, William Twining, and Mimi Vasilaki, eds., Evidence, Inference and Enquiry, Oxford-N.York, OUP/British Academy, 2011, 504 pages, ISBN 978-0-19-726484-3

This edited volume compiles seventeen papers developed in the framework of the Evidence programme, an interdisciplinary venture funded by the Leverhulme Trust and the ESRC between 2004 and 2007. Two of the editors, the statistician Philip Dawid and the jurist William Twining, led this project at University College London, in parallel to another research project directed by Mary Morgan at the London School of Economics (“How Well Do Facts Travel?”, the proceedings of which were also published in 2011, by Cambridge University Press). During those three years both UCL and the LSE hosted many evidence-related events, bringing together scholars from many different fields, in what may have been –as Dawid suggests in his introduction (pp. 5-6)– the greatest achievement of the project. A survey paper by Jason Davies provides a good summary of those meetings. As to the accomplishment of the over-arching goal stated in the programme title (“Towards an Integrated Science of Evidence”), both Dawid and Twining are understandably modest, but they certainly had an intellectual agenda aiming at a unified treatment of all sorts of evidence. They found their inspiration (p. 4) in the interdisciplinary theory of evidence developed by David Schum, an information scientist from George Mason University. One major flaw of this collection, in my view, is that it does not provide a précis of Schum’s approach, despite the explicit acknowledgement of its limited impact across disciplines (p. 93).

We just get a presentation of Schum’s bi-dimensional classification of evidence, apparently the topic that elicited the most controversy in the project. For Schum (p. 19), the three main properties of evidence are relevance, credibility and inferential force. Information becomes evidence only when it bears upon a hypothesis (relevance), either directly or indirectly. As to credibility, it should be ascertained depending on the type of evidence we are dealing with (the main division being between tangible and testimonial evidence). According to Schum, relevance and credibility would be the two main dimensions for classifying all kinds of evidence, since inferential force –however we measure it– depends on a previous assessment of the other two. Hence, whatever the hypothesis under consideration, we can judge whether a given piece of evidence is, for instance, testimonial and directly relevant to it. The classification would be purely formal (or “substance-blind”, as Schum puts it). Many of the participants in the project found the very possibility of such a general classification of evidence (framed in a scientific fashion) controversial, or so we gather from references in various papers in the collection; sadly, nobody takes issue with it directly.

Another contribution of Schum and Twining to the project is the recovery of the theory of evidence of John Henry Wigmore (1863-1943), an American legal scholar who defended the use of informal logic in law –the analysis of evidence in this tradition is brilliantly summarized in Twining’s paper. Wigmore represented legal arguments in the form of networks, making explicit how evidence contributed to their cogency. In their respective papers, Terence J. Anderson and Peter Tillers, both legal scholars, also use Wigmore charts to analyze the role of generalizations in evidential arguments –the former in a case-based fashion and the latter in a more speculative manner. Schum and Dawid (together with A. Hepler) present a systematic comparison between Wigmore’s networks and Bayesian nets, applying both to analyze the evidential grounds of the conviction of Sacco and Vanzetti. I would have expected here some attempt at integrating the two approaches, but the authors just make the differences explicit, mentioning only in a final footnote the possibility of combining them in a more formal manner. The contentious point might have been the resistance to quantification implicit in, at least part of, the Wigmorean tradition (p. 86). Flexible formal approaches such as the evidential logic presented in John Fox’s paper might have provided a middle way, but Fox engaged with neither Schum nor Wigmore and, unfortunately, his own paper is scarcely cross-referenced elsewhere in the volume.

As I see it, David Lagnado pushes furthest what seems to me the core intuition of the UCL project: evidence features in uncertain reasoning through piecemeal networks, best exemplified in legal reasoning. In “Thinking about Evidence”, Lagnado, a psychologist, presents and defends the following theses. Bayesian networks can successfully model distinctive aspects of legal reasoning. They also capture the qualitative networks underlying our cognitive processes (even if we cannot handle properly the probabilities involved in uncertain knowledge). In particular, they allow us to understand how we deal with fragments of such networks in evidential reasoning and how we adjust them to changes. Lagnado examines alternative approaches and presents two experiments on legal reasoning illustrating the viability of his own claims.

The problem with this volume is that, interesting as some of them are, the remaining eight papers are a collection of independent contributions without explicit ties to any of the above-mentioned topics. The narrative counterpoint within the programme is here represented by J. Russell and T. Greenhalgh, whose case study in discourse analysis shows how evidence is quantitatively framed in a health policy unit –without any explicit exchange with their fellow researchers. Drawing on a recent historiographical controversy, J. Davies ponders the pros and cons of analyzing historical evidence about ancient religions in terms of either ritual or belief. Again on a quantitative note and on legal topics, Tony Gardner-Medwin presents a short but very interesting plea for an explicit incorporation of our levels of uncertainty into our judgments (from students’ answers in exams to legal evidence in trials).

We also find here three papers by philosophers of science. N. Cartwright presents another installment of her theory of evidence for evidence-based policy (co-authored here with J. Stegenga), providing a very accessible presentation of her project. H. Chang and G. Fisher present their case for a contextual appraisal of evidence through an analysis of the ravens paradox. A. Wylie defuses the generalized skepticism about archeological evidence defended by several theorists in the field, applying ideas from Glymour and Hacking to show how different evidential sources in archaeology support each other in a non-circular manner.

From a perhaps too parochial perspective, I would complain about the quality of the two remaining papers in the collection. David Colquhoun, a biostatistician, contributes a paper “in praise of randomization” aimed, by all appearances, mainly at John Worrall. It turns out to be just a nice collection of illustrations of the virtues of this allocation procedure, without explicit engagement with Worrall’s arguments or with any other objection to randomization (e.g., in Bayesian approaches such as Howson and Urbach’s). In “What Would a Scientific Economics Look Like?”, the epidemiologist Michael Joffe voices his skepticism about the scientific status of mainstream economic theory through a comparison with biology. A perfectly legitimate enterprise as such, but one grounded in a very odd collection of references: the methodologists he engages with (e.g., Friedman) are no longer representative of the theoretical approaches Joffe is targeting.

This collection shows that there is much more in the discussion of evidence, particularly in legal theory, than philosophers of science usually take into account –though, in point of fact, Larry Laudan, among others, took notice of this a while ago. The crucial point is to what extent the forms of argument in law (in particular, trials) are comparable to the argumentative patterns in other fields, particularly science. As Twining reminds us, though (p. 92), contested trials are just one step within the legal process, and most of the cases discussed in this compilation are too narrowly focused on it. As a paradigm for an “integrated science of evidence”, I am afraid that this volume will leave many readers unimpressed. And this is partly due to the format chosen for the collection. As the editors warn, the three years of the project were not enough to form a coherent view of its goals among its members, as is evident in the final collection of papers. And I do not see the point today of publishing them all in a single volume, as if there were some added value in binding them together. A special issue in a mainstream journal plus a set of independent publications in specialized outlets, properly cross-referenced and all linked on the project’s webpage, might have been enough for most interested readers.

{July 2012}
{British Journal for Philosophy of Science 64.3 (2013), 665-668}