
5/4/15


Franklin G. Miller, Luana Colloca, Robert A. Crouch, Ted J. Kaptchuk, The Placebo: A Reader, Baltimore, Johns Hopkins University Press, 2013

The "reader" genre is becoming more difficult today than in any previous decade: in the pre-digital repositories age, compilations granted easy access in print to papers, otherwise difficult and expensive to obtain. Nowadays, we will only buy such compilations if they combine an excellence of editorial taste and readability that beats the temptation of downloading our own selection directly from a journal archive.  In this regard, I must admit that the editors of this placebo reader have succeeded in producing a volume worth buying. 
  
The book compiles 52 papers on the placebo effect divided into three sections. Eight articles feature in the first, on the concept and significance of the placebo effect. Thirty more are classified under four headings in the second section, on experimental studies of the placebo effect: pioneering efforts, psychological mechanisms, neurobiological mechanisms and contextual factors. The third and final section, on the ethics of using placebos, presents the last 14 papers, about research and clinical practice. The papers are chosen and presented in such a way that the compilation reads as a historically motivated introduction to our current understanding of the placebo effect.
The uninitiated reader will probably think of the placebo as a treatment without an active ingredient that nonetheless makes (some) patients improve (e.g., sugar pills). As Ted Kaptchuk observes in his introduction, this is a relatively modern concept. Using placebos implies that we are somehow able to differentiate between effective and ineffective treatments, and such a distinction could only be properly quantified with the emergence of clinical trials in the 1940s. An initial enthusiasm about the healing power of placebos gradually vanished with more sophisticated analyses (documented in the anthology with papers from the 1980s and 1990s): the statistical data of most clinically relevant variables tend to regress from extreme values (disease) to the mean (health) of their distribution, and we can only properly take this spontaneous improvement into account if a trial compares a group of patients treated with a placebo against another receiving no treatment at all. On this basis, the mainstream view today is that the only statistically documented placebo effect appears when measuring patient-reported outcomes, such as pain. The papers compiled in the first section document how this view emerged.

It is worth recalling here that clinical trials were introduced in medicine as a yardstick for assessing the effectiveness of treatments without a real causal understanding of why they were effective. For example, the underlying mechanism of the anxiolytic action of benzodiazepines was only understood in the 1970s, almost two decades after the first trials, when Valium was already one of the best-selling drugs of the past century. Trials often guided our investigation of such mechanisms, showing how a drug operated under a range of different circumstances. In this respect, placebos would be like any other treatment: we lack a "robust and comprehensive" (Kaptchuk) theory, but there is a growing body of experiments documenting how placebos work. First, there are psychological mechanisms, among which the most prominent are behavioral conditioning (often unconscious) and expectations (usually conscious) induced verbally or, e.g., through social observation. Through a number of experimental designs we learn how these mechanisms operate and the physiological responses they trigger (e.g., the release of opioids). Then there are neurobiological mechanisms underlying the placebo effect, illustrated in a number of experiments supported by various forms of brain imaging. Finally, there are contextual factors, often related to the interaction between physicians and patients in a given setting.

Half of the third section, on the ethics of placebo, hinges on clinical trials as well. Here the approach is mostly methodological: to what extent do we actually need placebos in order to ground solid experimental designs? More precisely, do placebos provide a real benchmark to gauge the efficacy of a new treatment? The obvious ethical implication is that, were the answer negative, there would be no reason to use a placebo instead of an active treatment (if one existed) as a comparison. Moreover, since placebos partly operate through the patients' expectations, it is an open question to what extent we may deceive them about the (lack of) treatment they are receiving. There is some evidence of placebos working even when they are almost openly presented as such. And there is also evidence, presented in the second half of this section, of physicians prescribing placebos outside trials, along with a discussion of the ethics of such a practice.

The book closes with a paper by Miller and Colloca, two of the editors of the volume (there are 10 papers co-authored by at least one member of the editorial team, so there is no pretence of neutrality in the compilation). Their conclusion summarizes the agenda this volume seems to promote: for certain conditions, placebo might actually be an effective treatment; if there is evidence gathered in well-designed trials and the nature of the treatment is properly disclosed to the patient, it is acceptable to prescribe it. Hence, acupuncture for pain relief in various conditions could be legitimately prescribed on evidence-based grounds. But, for the time being, no other placebo meets such a scientific standard.

On a more general note, I'd say that a major thread in this book is the evidence it contributes to ground quite a paradoxical intuition: the mere act of diagnosing a patient and administering a treatment (at least for a few conditions) has effects that break the equivalence between "placebo" and "lack of treatment". My favorite illustration in the book is a trial with Kaptchuk as first author (pp. 226-32) testing different "treatments" for irritable bowel syndrome: being on a waiting list (measuring patients' response to observation and assessment); sham acupuncture (a placebo); and an augmented placebo, in which the needling was accompanied by a scripted positive interaction with the physician. The trial showed that these three components add up progressively, reaching a maximum with the augmented placebo, which attained a clinically significant effect in the treatment of the condition.

In the traditional approach to trials, the preferences of patients were considered a source of bias, since they could make a difference to the outcome. The evidence gathered in placebo research shows that, except for pain, there is so far no trace of a significant effect of patients' expectations on the treatment outcome. Perhaps this volume should prompt us to reconsider the status of blinding as a debiasing method: its usual justification is precisely that it controls for placebo effects derived from the patients' preferences about treatments. Making the treatments entirely alike breaks any systematic correlation between such preferences and the outcome. But if there is no placebo effect for most conditions, perhaps we should reappraise blinding as a method to enforce the treatment protocol: since some patients would drop out of the trial if they knew they were receiving an unwanted treatment, blinding secures their compliance, independently of the placebo effect. Clinical trials are indeed strategic interactions between agents with different, sometimes conflicting, interests that we should take into account in designing the trial. Controlling for some of these interactions (more than for the placebo effect) might be the real justification of masking devices. In one of the compiled papers, R. Temple and S. Ellenberg contribute another argument in this same vein against active-control equivalence trials (p. 258): if you compare a drug against a placebo, you have every incentive to enforce compliance with the trial protocol, since every deviation will usually reduce the differences between treatment groups, making your drug look equivalent to a placebo. If you make a comparison with a standard treatment in order to prove lack of difference, the incentives to enforce protocol compliance are weaker. In other words, placebos may play a role in making patients and researchers alike play by the rules of the experiment.

Two final positive comments. One surprising feature of this collection is how well it reads: each section is preceded by a short but incisive introduction intended as a road map of the papers to come. These introductions are clear and very accessible to the lay reader. The design of the experiments, their findings and the issues they raise are often so puzzling that the collection becomes engaging: I found myself eager to know whether an experiment had passed the test of replication, whether counter-arguments existed, whether there was a final word on a topic. The papers are selected and ordered in such a way as to elicit this sort of engagement. Another surprising trait is the size of the volume (10.9 x 8.4 x 0.9 inches): it may initially look difficult to handle (in this age of palm-sized readers), but I found it very pleasant to work with.
{January 2014}

4/3/14

Raffaella Campaner, Philosophy of medicine. Causality, Evidence and Explanation, Bologna, Archetipo Libri, 2012

For more than a decade now, there has been a growing interest in the philosophy of medicine as a scientific discipline. Already in 2005, Raffaella Campaner published a monograph in Italian on causality and explanation in medicine (Spiegazione e cause in medicina: un'indagine epistemologica), showing how philosophy of science could be successfully applied to biomedical research. Over the following years, Campaner published a series of papers in English on the same topics that are now compiled in the volume under review. Most of these papers originally appeared in edited collections or journals where medicine was not the central topic, so re-publishing them together in a single volume makes sense for the interested reader. Moreover, the Italian publisher has produced a decently edited but inexpensive book, so all in all philosophers of medicine should welcome it.

Campaner has gathered here nine papers plus an introduction. Their structure is somewhat similar: the author presents different philosophical positions (mostly on causality, but also on explanation) and proceeds to illuminate them with medical case studies, arguing on this basis for her own claims. The reader will thus find an introduction to the following accounts of causality: mechanistic (Salmon, Machamer-Darden-Craver, Glennan), interventionist (Woodward) and manipulative (Price and Menzies), with a brief digression on counterfactuals (Lewis). Despite featuring on an equal footing in the book's title, philosophical theories of explanation and evidence receive no comparable introductory treatment. Campaner considers instead plenty of medical explanations and bodies of evidence, and sees how they may fit into the different philosophical accounts of causality presented. Among her case studies, two of the most detailed are on deep brain stimulation (a therapy for Parkinson's disease) and anti-AIDS treatments. Campaner also deals in several papers with epidemiological and psychiatric causation.

The book puts forward a pluralistic perspective on causation, showing how in actual medical practice we may find all the above-mentioned approaches complementing (rather than competing with) each other. The choice often depends on the methods implemented and the context of implementation. The author does not try to construct a principled argument for causal pluralism: as she acknowledges, "lots of work is still to be done before a plausible and coherent view will be settled on and shared" (p. 60). The strength of her argument is empirical: there is no evidence that a "one size fits all" concept of causation can cope with the diversity of causal approaches at work in medical practice. However, Campaner also draws on a conceptual insight emerging from this diversity: diseases would be multilevel phenomena (ranging from molecules, cells and tissues up to the whole organism) and medicine (siding here with Schaffner, p. 11) would be a set of middle-range theories coping with them. Campaner adopts here a sort of meta-philosophical instrumentalism regarding deeply entrenched methodological divides such as the one opposing reductionism and anti-reductionism: as she illustrates in chapter 7, both strategies have been fruitful in medicine and both might make sense contextually. In this respect, I think it is worth noticing how difficult it is to sustain even moderately pluralist stances about medical causality such as the Russo-Williamson thesis, according to which we would need both mechanistic and probabilistic evidence to properly ground causal discoveries. As Campaner argues in chapter 2, medicine has been quite capable of making progress without mechanisms; and when we do have them, we often need manipulative evidence, in addition to statistics, to properly ground them.

Campaner constantly reminds us that her pluralism does not "amount to treat all available methodological options as equal in value" (e.g., p. 133), but the book focuses mostly on cases where there is more complementarity than straightforward competition between the alternatives considered, and where all of them are worthy, at this point in time, of scientific consideration. Historically, though, medicine has not been as peaceful as it might seem today. In The Rise of Causal Concepts of Disease (2003), for instance, K. Codell Carter has forcefully argued that scientific medicine began with the adoption of the etiological standpoint, the view that every disease has a single cause which is both necessary and sufficient for it, showing how this approach was crucial for progress in treatment. Even today, I would say that medicine is not really pluralist when it comes to decision making about new therapies: we still rely on their success in randomized clinical trials as a rule. Of course, trials might be interpreted from different causal stances, but not all of them are equally captured in their design: mechanistic knowledge, for instance, does not currently qualify the value of a trial in most hierarchies of medical evidence. In other words, as of today, philosophical pluralism about causation may faithfully reflect the way medicine is practiced, and methodological diversity may be in itself a fruitful research strategy. But I think Campaner's claim would have been more balanced had she also considered cases in which there was open disagreement about causality between competing research agendas.

My major qualm with this volume is that the papers were not edited for the compilation. Reading it from cover to cover can be a bit repetitive at times, since the same items are often revisited in different chapters. However, this makes it really suitable for use in undergraduate courses, in particular when teaching philosophy of science to medical students, since most concepts are explained in accessible detail and illustrated with theories they will certainly be familiar with. The name index at the end is particularly helpful in tracing different approaches throughout the book, and keeping the original abstracts at the beginning of each chapter is equally useful to guide the uninitiated reader. Campaner's is thus not only a good philosophy-of-science-in-practice book, but also a very accessible one.

{October 2013}
{International Studies in the Philosophy of Science 27.4 (2013), 456-458}

A. Briggle & C. Mitcham, Ethics and Science: an Introduction, Cambridge, Cambridge University Press, 2012

I don't know if this qualifies as a conflict of interest, but I must admit I volunteered to review this book because I am teaching a course on ethics and science and there are not many comprehensive textbooks available. A strength of Briggle and Mitcham's volume that immediately caught my eye is that it brings together an updated introduction to the ethics, philosophy and political sociology of science. Chapters 2-7 cover ethical concepts and theories, research ethics (codes, research with humans and animals), norms in science and recent naturalist approaches to ethics. Chapters 9-10 deal with the main issues in science policy. Chapter 11 discusses the broader connections between science and culture, and the 12th and final chapter analyzes ethics and engineering.

The structure of the chapters is clear: they all open with a quick case presentation, followed by several sections and a final summary, plus a closing case study with questions and readings for further reflection. There is a general list of references at the end, together with a complete subject index and addresses for ethics codes and declarations. All in all, Briggle and Mitcham's volume has everything one would in principle expect from a textbook. However, I am not quite sure how to make the best use of it in class.

The authors do not take anything for granted, so their presentations are as introductory and accessible as possible. This should make the book particularly suitable for science and engineering students, their most likely target. In order to make it even easier to read, the authors often adopt an informal narrative tone, in which each case is presented as it unfolded historically, highlighting landmarks, often in a casual manner. Briggle and Mitcham rarely take an opinionated stance: we find the standard account on most topics. But it is sometimes simplified to the point that I was left wondering what use we can make of it in order to assess real-world dilemmas. For instance, on page 45, we learn how virtue ethics is relevant for science. First, through an analogy between virtue ethics and virtue epistemology. Then, virtue ethics would also highlight "the importance of training processes and mentor-mentee relationships", so often neglected in Big Science. Finally, virtue ethics can be used to object against "the wisdom of pursuing certain physical or cognitive enhancements".

But, in fact, there is no other mention in the book of virtue epistemology, and I guess most uninitiated readers won't make much sense of its relevance for science in just one paragraph. I was left wondering why training processes are more defensible (or interesting) from virtue ethics than from any other approach, or why virtue ethicists would be more opposed to cognitive enhancement than, for instance, deontologists. I am not saying that such tenets cannot be defended, but rather that we do not find full-fledged arguments for any of them.

This is just an instance of a problem I found throughout the book: the reader gets acquainted with plenty of interesting topics, but we rarely find a detailed discussion of any of them. This might just be the expression of my own preferences, but I think that the ultimate goal of a course on ethics and science should be to make students capable of arguing their case as thoroughly as possible. For Briggle and Mitcham, I would say that the goal is rather to increase students' awareness of every contentious point at the intersection of ethics and science (and these are many). It is interesting to note that the chapters do not have exercises targeting their core points directly, but rather sets of questions on the closing case studies. The questions are often very open (e.g., "to what extent was this morally justified?") and no template is provided to answer them.

I guess that my concern might be shared by both the analytic philosopher and the STS scholar: where the former would object that the arguments are not fully developed, the latter would surely miss the many details that articulate the best case studies in STS. The many vignettes illustrating each chapter are certainly engaging, but I think it would have been instructive to present some cases more thoroughly, showing in detail how we can address them from various perspectives.

Despite this concern, I think that the material provided in this book is so rich and up to date that it may be worth trying in class, at least as a starting point. Given how quickly this field evolves, it is better to make the most of a textbook while it is fresh and test it as thoroughly as we can, to see whether anyone comes up with a better alternative. That will probably not be easy.

{May 2013}

24/3/13


U. Mäki, ed., Philosophy of Economics [Handbook of the Philosophy of Science, vol. 13], Amsterdam, Elsevier, 2012, 903 pp. 

Uskali Mäki's latest edited collection, Philosophy of Economics, appears as the 13th volume in the Handbook of the Philosophy of Science, an Elsevier series presented by its editors as "the most comprehensive review ever provided of the philosophy of science". Three of the seventeen volumes are devoted to social disciplines: apart from economics, there is another for linguistics and a joint volume for sociology and anthropology. As Mäki points out in his general introduction, economic methodology is closer today to "frontline philosophy of science" (p. xv) than ever before in its very short life as an independent field. This volume marks its coming of age, and it should be celebrated as such.
Reviewing a nearly 1,000-page volume in about 1,000 words is not easy, so let me try to grasp its significance through a comparison with its obvious predecessor, The Handbook of Economic Methodology, co-edited by Mäki, John Davis and Wade Hands for Edward Elgar in 1998. The latter was based on short entries, while the 2012 volume consists of regular-size papers. Rather than discussing the content of every one of them (and for the suspicious reader: yes, I spent my Christmas break going through it from cover to cover), I will try to see what this book as a whole reveals about the philosophy of economics as it is cultivated today.
We should notice first that 12 of the 29 authors in the 2012 volume already featured in the previous installment, and 9 of them write on almost the same topics: Mäki on realism, R. Backhouse on Lakatos, M. Morgan on models, K. Hoover on causality, H. Kincaid on economic explanation, W. Hands on the positive-normative dichotomy, C. W. Granger on economic forecasting, A. Spanos on econometrics and V. Vanberg on rational choice and rule following. As the reader may guess, most of these pieces take stock of decades of reflection on each topic and sometimes provide real primers on the author's views. For example, Spanos' 70-page paper on "the philosophy of econometrics" is, in fact, an excellent introduction to the error-statistical approach, with only a final section on the topic announced in the title. Philip Mirowski's "The Unreasonable Efficacy of Mathematics in Economics" is a reconsideration of most Mirowskian themes (physics envy, computers and markets, etc.) from a new angle (what sort of philosophical approach to mathematics would do justice to its uses in economics). Some of these veterans present instead research conducted after the publication of that first Handbook: for example, John Davis on individuals in economics or Marcel Boumans on measurement.
There are topics in the 2012 volume that have grown beyond anyone's expectations fifteen years ago. Alan Nelson's 1998 entry on "Experimental economics" called for further methodological reflection on internal and external validity, now accomplished in Francesco Guala's piece for the 2012 volume, drawing on a decade of his own work. Herbert Simon claimed in 1998 that "behavioural economics is not so much a specific body of economic theory as a critique of neoclassical economic theory and methodology". But Erik Angner and George Loewenstein present it in their 2012 paper as a "bona fide subdiscipline of economics". I could not find in the 1998 index a mention of the ultimatum game, for which there is a 2012 paper by Cristina Bicchieri and Jiji Zhang (about the incorporation of norms of justice into decision models). Dan Hausman writes about the experimental testing of game theory, another virtually absent topic in the 1998 handbook. Summing up, the papers explicitly addressing experiments claim 133 of the 903 pages of the 2012 volume, whereas they barely add up to 10 of the 572 pages in the previous installment.
Other topics have not changed so dramatically. Apart from experiments, it seems as if nothing is radically different in the philosophy of game theory (T. Grüne-Yanoff and A. Lehtinen) and rational choice theory (P. Anand). The methodological debate on disciplines such as evolutionary economics, feminist economics and public choice is mostly the same, if we judge it from the content of the papers by, respectively, Jack Vromen, Kristina Rolin and Hartmut Kliemt. The piece on the economics of science was programmatic in 1998, whereas now Jesús Zamora Bonilla provides a survey of actual contributions. Most results in judgment aggregation date from the last decade, so the short (and helpful) introduction written by Christian List is a real novelty in this volume. Geographical economics is also a new topic, not because the discipline is new but because a philosopher (Caterina Marchionni) has decided to tackle its methodology.
At this point, the reader might be wondering what has been left behind in economic methodology. The most significant loss is the history of economic thought: the 1998 volume, organized in short dictionary-like entries, contained many on particular economists, but in 2012 only Mirowski adopts a full-fledged historical approach. Is this a sign of how the profession is evolving? Whereas many of the senior contributors had a separate career as historians (Backhouse, Hands, Hoover, Morgan, etc.), this is less common among their junior peers (Angner is probably the most significant exception). However, there is an increasing (but still far from consensual) advocacy for an integration of history and philosophy of science among practitioners of the latter discipline (e.g., Hasok Chang). Given how prominent this integration has been for decades in the case of economics, I think a more explicit reflection on the virtues (or flaws) of doing philosophy in such close connection with history could have been useful.
The second, but minor, loss is Marxism. In the 1998 index it was explicitly mentioned on about 30 pages; in 2012 it is mentioned just five times, all in the same paper. In that paper we find, I think, the most original contribution to this volume, as compared to its predecessor. Don Ross' "Economic Theory, Anti-economics and Political Ideology" classifies five principled objections against economics as a purely positive endeavor and proceeds to refute them. Ross argues that economists defending the efficiency of markets are not promoting partisan views, if their arguments are read literally, or, if they are, they are not representative of the profession. "The economic attitude", he claims, "is consistent with policies drawn from anywhere on the left-right spectrum that acknowledges scarcity as fundamental to political and social organization" (p. 280). Ross' strategy is to subvert the popular understanding of economic theories drawing on a combination of conceptual analysis and historical acumen (an integrated HPS, after all?). The reader will find another take on the same approach in his other paper in this compilation, "The Economic Agent: Not Human, But Important", where he defends his well-known thesis that economic agency reliably captures bug-like decision making but is still useful for understanding ours. In other words, economics is a perfectly positive science, provided you have the proper concept of what economics and science are. I was unable to find anyone so bold about it in the 1998 volume.
It is significant that both Don Ross and Harold Kincaid have two papers each in this volume (unlike every other contributor), totaling 130 pages. In a decade of joint work, they have renewed our understanding of the philosophy of the social sciences, putting forward a variety of naturalism that combines their own versions of contextualism (Kincaid) and structural realism (Ross). Following the path of Uskali Mäki, Alex Rosenberg or Dan Hausman in the 1990s, they have addressed economics in connection with broader problems in philosophy of science (in this respect, I guess it would have been fair to include a chapter by or about Nancy Cartwright, the other great contender in this league). This is probably the take-home lesson of this volume for newcomers to the discipline: stay close to actual practice (e.g., experiments) and make your philosophical position as general as possible (as Kincaid and Ross have done).
But speaking of newcomers, I should say something about the audience of this volume. Namely, I do not see very clearly who its readers are. Its size and price (165 EUR) make it clearly a reference work for libraries, but not every paper is suitable for the merely curious reader or the first-year student: all of them are very good, but some are very long, some are narrow in their approach (or very personal) and few suggest to the reader where to go next. In addition, I think that, as an intellectual community, we are about to exhaust the Handbook genre for pure lack of diversity (and I plead guilty myself, since I have contributed to three similar volumes in the last five years). Ross and Kincaid have their fair share of Mäki's compilation, but they have themselves edited two other Handbooks for Oxford, where they extensively present their views (as do Mäki, Vromen, Guala and a few other authors of this volume). I also read Ross, Guala and Knuuttila in another Handbook co-edited by Zamora Bonilla for Sage. This is probably the Matthew effect, but I wonder if our university libraries will really benefit from acquiring all these collections every ten years. Perhaps we should invest more in new editorial formats (like the Stanford Encyclopedia of Philosophy) that perform more or less the same function in a more updated and accessible way.
{December 2012}
{Journal of Economic Methodology 21.1 (2014), 96-98}

26/12/12


Peter Stone, The Luck of the Draw: The Role of Lotteries in Decision Making, Oxford University Press, 2011, 195pp., $49.95 (hbk), ISBN 9780199756100

Almost everybody is familiar with one or two instances of the use of lotteries for public decision-making: allocating immigration quotas or selecting juries are perhaps among the most popular today, but it is indeed a very old practice, documented in Greece, Israel and other ancient cultures. In other words, it is a deeply rooted tradition in Western politics, and yet, until very recently, there was no systematic account of its normative foundations and applications. The Luck of the Draw is one of very few attempts at finding some conceptual unity in the political use of lotteries. Peter Stone considers here the two main types of decision by lot: the allocation of scarce goods and the assignment of public offices. His take is clearly normative, despite the wealth of evidence on lotteries discussed in the book. Stone investigates when lotteries are desirable and, in particular, when they are fair.

Stone's case for lotteries hinges on a conceptual systematization and clarification of what lotteries are and when it is fair to use them. This is achieved through two normative statements, the lottery principle and the just lottery rule. First, Stone contends that there is an encompassing rationale for every use of randomization in decision-making processes, which he calls the lottery principle. According to this principle (p. 37), we would be justified in requesting that a decision be made by lot if it prevents the decision maker from influencing the outcome on the basis of reasons. This is the "sanitizing effect" of lotteries: they screen off some reasons from the decision process, be they good or bad, because the outcome of the decision becomes unpredictable for the decision-maker. But, of course, lotteries seem defensible only in cases where bad reasons may affect decisions, either because there are no good ones or because, if there are, we cannot filter the bad ones out.

Stone then analyses under which circumstances we would be normatively justified in using lotteries in the allocation of goods. Again, there is a general justifying principle, the just lottery rule (p. 53): "under conditions of indeterminacy, if an agent must allocate a scarce homogeneous lumpy good amongst a group of parties with homogeneous claims, then the agent must do so using a fair lottery". In other words, if there is a suitable division of a good which all parties are equally entitled to claim, but there is not enough of it for everybody, a fair lottery is the most impartial tie-breaker: it only takes into account the equal strength of the claims over it (p. 78). This is guaranteed by the sanitizing effect of the lottery principle, since in a random allocation every other reason (merit, desert, nepotism…) is simply left aside. If the lottery gives equal probability to equal rights over the good, the allocation procedure will be fair. According to Stone, the just lottery rule is grounded in Scanlon's contractarian approach: in cases of indeterminacy, every other allocation rule (be its outcome certain or uncertain, as in a weighted lottery) can be reasonably rejected by the involved parties.

Apart from systematizing and clarifying the concept of a fair lottery for the allocation of scarce goods, Stone's case rests on a critical examination of other normative approaches to lotteries and of alternative allocation rules. As to the former, Stone takes issue with justifications of lotteries in terms of consent (Goodwin), equality of opportunities and equality of expectations (Kornhauser and Sager), as well as with alternative contractarian foundations (namely, Rawls and Harsanyi). Stone considers as well eight alternative allocation procedures (desert, queuing, etc.). The discussion is often quick, but insightful too.

Finally, Stone adds a chapter on the assignment of public responsibilities by lot (usually known as sortition). The author contends that there is no unifying principle to justify its many uses documented throughout history. Stone discusses instead three kinds of arguments for sortition: allocative justice (rehearsing the previous arguments), incentive alignment and descriptive representation. The second exploits the capacity of randomization to screen off perverse motivations in politics, e.g., nepotism in the appointment of public officials. The third draws on the virtues of lotteries to make assemblies representative in a statistical (descriptive) sense: a random draw from a population with certain relevant characteristics can provide a sample that faithfully represents the distribution of such characteristics in the whole. But, of course, in both cases there is a clear trade-off between the virtues of lotteries and their various side-effects (e.g., the exclusion of qualified candidates), so Stone does not consider any of these arguments conclusive. The Luck of the Draw ends with a long concluding chapter addressing nine loose ends.

Even if the reader disagrees with Stone's arguments, it must be granted that the two principles provide an excellent thread with which to organize a compact overview of political lotteries. In my view, the author is right in adopting impartiality as the key to his normative analysis: it is open to debate whether lotteries yield (e.g.) equality of opportunity, but there is a clear consensus on the role of randomization as a warrant of impartiality even in purely epistemic contexts: we want a randomized allocation of treatments to patients in a clinical trial, for example, to prevent physicians from giving one particular drug to the patients they feel would benefit most. Humanitarian as this may be, it would spoil a fair comparison between treatments. However, Stone argues as if the normative leverage of lotteries stemmed from the exclusion of reasons alone: in allocating scarce goods, it would not be reasonable to oppose a fair lottery, where reasons play no role. But the superiority of such lotteries (compared to other allocation methods) lies in that they sanitize not just reasons, but conscious and unconscious biases of all sorts. Psychologists have documented at length how people acting with the best reasons unwittingly deviate from them in a systematic fashion: a physician allocating treatments may be sincerely convinced that he treated all patients the same; but the data often reveal otherwise. This is the impartiality bias: we tend to think that we are more neutral than we actually are. Hence, for purely strategic considerations, I will request a randomized procedure if, for whatever reason, I suspect that the allocation of a scarce good may be otherwise biased (e.g., Berry & Kadane 1997), even if the people in charge of the allocation were guided by the best reasons alone. An interest-based contractarianism may provide in this respect a broader foundation for the impartiality of lotteries than Scanlonian reasonableness. But biases and interests are simply left aside in The Luck of the Draw.

Stone does a wonderful job of unifying the discussion of scarce-good lotteries under the umbrella of impartiality. But fair lotteries as allocation mechanisms are often uncontroversial: whatever the reasons we have to accept them, there is experimental evidence showing that we tend to like them (e.g., Bolton et al. 2005). I would have liked to see a more thorough discussion of the limitations of impartiality in the justification of sortition. According to Stone, we would need a general account of political decision-making (p. 123) if we were to construct all-encompassing rules of justice for sortition. But my impression, at least, is that the justice of lotteries depends implicitly on the type of good we are drawing lots for. When treatments are randomized in a clinical trial, the resulting allocation is often corrected because it doesn't look random enough (e.g., one treatment goes to men and the other to women). Whatever we do, there is a potential source of bias in such cases: maybe the unbalanced allocation is in itself biased, but if we grant ourselves an unrestricted right to correct it, we may end up biasing it anyway. Whereas in scarce-good lotteries only a few allocations may seem unbalanced, the outcomes of sortition are much more open to controversy: as the randomized selection of criminal and civil juries in the United States shows, for instance, there is an endless supply of motives for defense attorneys to strike jurors. The impartiality of the procedure in lotteries of this sort does not always provide a good enough reason to accept the outcome.

In sum, it would have been interesting to explore empirically what makes a lottery acceptable, addressing lotteries as a procedure to control subjective biases and not only as the implementation of general criteria of justice. However, the exploration of the latter is enough to make The Luck of the Draw an excellent read.


Berry, Scott M., and Joseph B. Kadane. 1997. Optimal Bayesian randomization. Journal of the Royal Statistical Society, Series B (Methodological) 59: 813-819.
Bolton, Gary E., Jordi Brandts, and Axel Ockenfels. 2005. Fair procedures: evidence from games involving lotteries. The Economic Journal 115: 1054-1076.

{August 2012}
{Economics and Philosophy 29.1 (2013), 139-142}

  


Philip Dawid, William Twining, and Mimi Vasilaki, eds., Evidence, Inference and Enquiry, Oxford-N.York, OUP/British Academy, 2011, 504 pages, ISBN 978-0-19-726484-3

This edited volume compiles seventeen papers developed in the framework of the Evidence programme, an interdisciplinary venture funded by the Leverhulme Trust and the ESRC between 2004 and 2007. Two of the editors, the statistician Philip Dawid and the jurist William Twining, led this project at University College London, in parallel with another research project directed by Mary Morgan at the London School of Economics ("How Well Do Facts Travel?", whose proceedings were also published in 2011, by Cambridge University Press). During those three years both UCL and LSE hosted many evidence-related events, bringing together scholars from many different fields, in what may have been, as Dawid suggests in his introduction (pp. 5-6), the greatest achievement of the project. A survey paper by Jason Davies provides a good summary of those meetings. As to the accomplishment of the overarching goal stated in the programme title ("Towards an Integrated Science of Evidence"), both Dawid and Twining are understandably modest, but they certainly had an intellectual agenda aiming at a unified treatment of all sorts of evidence. They found their inspiration (p. 4) in the interdisciplinary theory of evidence developed by David Schum, an information scientist from George Mason University. One major flaw of this collection, in my view, is that it does not provide a précis of Schum's approach, despite the explicit acknowledgement of its limited impact across disciplines (p. 93).

We get only a presentation of Schum's bi-dimensional classification of evidence, since this was apparently the topic that elicited most controversy in the project. For Schum (p. 19), the three main properties of evidence are relevance, credibility and inferential force. Information becomes evidence only when it bears upon a hypothesis (relevance), either directly or indirectly. As to credibility, it should be ascertained depending on the type of evidence we are dealing with (the main division being between tangible and testimonial). According to Schum, relevance and credibility would be the two main dimensions for classifying all kinds of evidence, since the inferential force, however we measure it, depends on a previous assessment of the former two. Hence, whatever the hypothesis under consideration, we can judge whether a given piece of evidence is, for instance, testimonial and directly relevant to it. The classification would be purely formal (or "substance-blind", as Schum puts it). Many of the participants in the project considered the possibility of such a general classification of evidence (framed in a scientific fashion) controversial, or so we gather from references in various papers in the collection; but, sadly, nobody takes issue with it directly.

Another contribution of Schum and Twining to the project is the recovery of the theory of evidence of John Henry Wigmore (1863-1943), an American legal scholar who defended the use of informal logic in law; the analysis of evidence in this tradition is brilliantly summarized in Twining's paper. Wigmore represented legal arguments in the form of networks, making explicit how evidence contributed to their cogency. In their respective papers, Terence J. Anderson and Peter Tillers, both legal scholars, also use Wigmore charts to analyze the role of generalizations in evidential arguments, the former in a case-based fashion and the latter in a more speculative manner. Schum and Dawid (together with A. Hepler) present a systematic comparison between Wigmore's networks and Bayesian nets, applying both to analyze the evidential grounds of the conviction of Sacco and Vanzetti. I would have expected here some attempt at integrating both approaches, but the authors just make the differences explicit, mentioning in a final footnote the possibility of combining both in a more formal manner. The contentious point might have been the resistance to quantification implicit in at least part of the Wigmorean tradition (p. 86). Flexible formal approaches such as the evidential logic presented in John Fox's paper might have provided a middle way, but Fox engages neither Schum nor Wigmore and, unfortunately, his own paper is scarcely cross-referenced in the volume.

As I see it, David Lagnado pushes furthest what seems to me the core intuition in the UCL project: evidence features in uncertain reasoning through piecemeal networks, best exemplified in legal reasoning. In "Thinking about Evidence", Lagnado, a psychologist, presents and defends the following theses. Bayesian networks can successfully model distinctive aspects of legal reasoning. They also capture the qualitative networks underlying our cognitive processes (even if we cannot properly handle the probabilities in uncertain knowledge). In particular, they allow us to understand how we deal with fragments of such networks in evidential reasoning and how we adjust them to changes. Lagnado examines alternative approaches and presents two experiments on legal reasoning illustrating the viability of his claims.

The problem with this volume is that, interesting as some of them are, the remaining eight papers are a collection of independent contributions without explicit ties to any of the above-mentioned topics. The narrative counterpoint within the programme is here represented by J. Russell and T. Greenhalgh, whose case study in discourse analysis shows how evidence is quantitatively framed in a health policy unit, without any explicit exchange with their fellow researchers. Drawing on a recent historiographical controversy, J. Davies ponders the pros and cons of analyzing historical evidence about ancient religions in terms of either ritual or belief. Again on a quantitative note, and on legal topics, Tony Gardner-Medwin presents a short but very interesting plea for an explicit incorporation of our levels of uncertainty into our judgments (from students' answers in exams to legal evidence in trials).

We also find here three papers by philosophers of science. N. Cartwright presents another installment of her theory of evidence for evidence-based policy (co-authored here with J. Stegenga), offering a very accessible account of her project. H. Chang and G. Fisher present their case for a contextual appraisal of evidence through an analysis of the ravens paradox. A. Wylie defuses the generalized skepticism about archaeological evidence defended by several theorists in the field, applying ideas from Glymour and Hacking to show how different evidential sources in archaeology support each other in a non-circular manner.

From a perhaps too parochial perspective, I would complain about the quality of the two remaining papers in the collection. David Colquhoun, a biostatistician, contributes a paper "in praise of randomization" aimed, by all appearances, mainly against John Worrall. It turns out to be just a nice collection of illustrations of the virtues of this allocation procedure, without an explicit engagement with the arguments of Worrall or any other objection against it (e.g., in Bayesian approaches such as Howson and Urbach's). In "What Would a Scientific Economics Look Like?", the epidemiologist Michael Joffe voices his skepticism about the scientific status of mainstream economic theory through a comparison with biology. The enterprise is perfectly legitimate as such, but it rests on a very odd collection of references: the methodologists he engages with (e.g., Friedman) are no longer representative of the theoretical approaches Joffe is targeting.

This collection shows that there is much more to the discussion of evidence, particularly in legal theory, than philosophers of science usually take into account; though, in point of fact, Larry Laudan, among others, took notice of it a while ago. The crucial point is to what extent the forms of argument in law (in particular, trials) are comparable to the argumentative patterns in other fields, in particular science. As Twining reminds us, though (p. 92), contested trials are just one step within the legal process, and most of the cases discussed in this compilation are too narrowly focused on it. As a paradigm for an "integrated science of evidence", I am afraid that this volume will leave many readers unimpressed. And this is partly due to the format chosen for the collection. As the editors warn, the three years of the project were not enough for its members to form a coherent view of its goals, as the final collection of papers shows. I do not see the point today of publishing them all in a single volume, as if there were some added value in binding them together. A special issue in a mainstream journal plus a set of independent publications in specialized outlets, properly cross-referenced and all linked on the project's webpage, might have been enough for most interested readers.

{July 2012}
{British Journal for Philosophy of Science 64.3 (2013), 665-668}

11/4/12

Catherine Will & Tiago Moreira, eds, Medical Proofs, Social Experiments. Clinical Trials in Shifting Contexts, Farnham, Ashgate, 2010.

More than half a century ago, physicians began to struggle with how to assess the efficacy of treatments. As the late Harry Marks documented at length, around the 1950s the two alternatives considered for making these assessments were the case-based judgment of individual experts and the results of randomized clinical trials (RCTs). Within a few decades, the RCT had reached the apex of the hierarchy of clinical evidence, where it remains despite the objections of a number of dissenting doctors, philosophers and sociologists. The compilation edited by Catherine Will and Tiago Moreira brings us a selection of the most recent sociological literature on medical experiments. It is interesting to note that, as the editors themselves present it, this book constitutes a vindication of case-based reasoning against the purported generality of RCTs. In the latter, we assume that we are dealing with a representative sample of patients and a standardized treatment protocol, allowing us to generalize our conclusions beyond the trial. The case studies compiled in this book question the possibility of such generalization: as the editors conclude, information about how clinical trials are organized and carried out goes beyond the reporting of methods and is crucial for the critical interpretation of evidence. This information should be compiled precisely through case studies, bridging the gap between the agents defined in the research protocol and the communities and contexts where these protocols are implemented.

Unlike other edited collections of case studies, this one aims at constructing a systematic argument. In this respect, Will and Moreira have done a wonderful editorial job, making explicit the threads between the different chapters in their introduction and conclusion and in short prefaces to each of the three parts into which they divide their compilation. In part I, "The Practices of Research," three case studies, by Stefan Timmermans, Ben Heaven and Claes-Fredrik Helgesson, analyze how researchers struggle with trial protocols, either adapting them to their own goals, resisting them when they conflict with those goals, or supplementing the protocols with their own ad hoc methods in order to ensure that trials are completed. The editors present their own papers in part II, "Framing Collective Interpretation". Both deal with the appraisal of trial results by third parties: the medical profession, through its specialized journals, and the State (the British National Institute for Clinical Excellence). In part III, "Testing the Limits for Policy," three more papers discuss the use of trials for policy-making purposes. Again, the analyses focus on the role of contexts in policy-oriented trials: the adverse consequences of bracketing contextual information (briefly discussed by Trudy Dehue regarding depression) or the virtues of making the most of it in the trial (Ann Kelly and Alex Faulkner).

This quick summary obviously says very little about the actual content of the papers. If we list them according to the interventions examined, we find a trial on the use of antidepressants (bupropion) against methamphetamine dependency (Timmermans), a comparison of two lifestyle interventions with medication against a common, chronic condition (Heaven), the controversy on the rosuvastatin trials (Will), the NICE cost-utility analysis of dementia drugs (Moreira), two hybrid trials of an arthritis screening program and of mesh screens against malaria (Kelly), and a recent British prostate cancer detection program (Faulkner).

The general point the editors are trying to make is that the conduct of clinical trials and the interpretation of their results depend not only on the research protocol, but also on the intentions of the many agents who, one way or another, are involved in the process. Generalizing the results of a trial beyond its "context of discovery" is something we can only decide on a case-by-case basis. Indeed, from what they hint in the conclusion (e.g., p. 158), the editors would rather advocate redesigning regulatory trials so that their different stakeholders could have their say.

Emphasizing the ultimate context dependence of RCTs is a point worth making against those philosophers (or perhaps statisticians) who allow no epistemic role for such contextual factors. But do the contextual dependencies discussed in this volume actually interfere with our ability to identify treatments that are efficacious for the general population? Do we have less reliable trials as a result of these out-of-protocol interventions, and should the medical community consider alternatives to the RCT?

Unfortunately, none of the papers in this collection addresses this crucial problem. The one that comes closest is Helgesson's analysis of the practices of out-of-protocol data cleaning in large Swedish RCTs. Helgesson tracks the ways in which data are informally recorded and corrected without leaving a trace in the trial's logbook, from post-it notes to guesses about the misspelling of an entry. In his view, the trial participants who make such corrections do so in good faith, in order to increase the credibility of their results. However, Helgesson explicitly refuses to discuss what sort of errors may thus be introduced into the data, as if "any idiosyncratic shaping of data should be understood as producing biased data and biased results" (p. 52), so we cannot draw any conclusions about the impact of such errors on the interpretation of the study. But, despite Helgesson's contention, psychologists have documented at length how the credibility these practitioners seek is directly connected with confirmation biases: we all tend to accept information that confirms our prior beliefs more easily than disconfirming data. Confirmation biases have been documented in scientific laboratories, for instance, by Kevin Dunbar and his team at Toronto, who have also shown that experimenters rely on bias-correction procedures from which the reliability of the data stems.

Are these informal practices of data recording and correction a threat to the goals of trials as safety and efficacy tests? We know that RCTs do not provide full information about the effects of a drug, as the statistics on adverse effects reported to the FDA show. But, at the same time, regulatory clinical trials have so far been reasonably good at screening the pharmaceutical markets off from toxic and ineffective compounds. If trials were conducted to learn as much as we could about new treatments, perhaps the sort of contextual information provided in these case studies would help. In her chapter, Ann Kelly shows, for instance, how the self-selection of participants in a trial may turn out to be a good thing, if the information gathered about this particular group of patients shows how best to implement a medical intervention. However, most RCTs are conducted just to prove certain effects to a skeptical audience (the regulatory agencies). Given RCTs' track record for regulatory purposes, how would an ethnography of the trial, or any reform of the type Will and Moreira advocate, help the regulator in making a decision? Would it improve our current standards of safety and efficacy? Or would it just make the trials more credible to the public?

At any rate, if case studies are to play a role in this re-shaped regulatory process, we ought to require from them the same warrants of impartiality we require from RCTs. A number of well-documented biases interfere with the conduct of trials, and we try, at least, to prevent them with devices such as blinding and randomization. If a case study on the conduct of a trial is to be taken into account by the regulator, by way of background information, how does the latter know that the report is not biased? Sociologists and anthropologists are presumably as vulnerable to biases as any other researcher involved in a trial, and the case study should incorporate methodological caveats preventing partiality. Will and Moreira do not mention any such safeguards in their conclusions, but if their proposal ever succeeds, I am sure this is a problem they will have to address.

{December 2011}
{Theoretical Medicine and Bioethics 33.5 (2012)}

10/7/11

Paco Calvo & Toni Gomila, Handbook of Cognitive Science. An Embodied Approach, Amsterdam, Elsevier, 2008

“Is cognitive activity more similar to a game of chess than to a game of pool?” This is the opening question of this volume, and every social scientist concerned with the explanation of our decisions should carefully consider the answer. At least, they should if they use standard intentional explanations, where decisions result from a particular combination of beliefs and desires that purportedly captures our folk understanding of action. If we are not uncomfortable with such a foundation, it is mostly thanks to the progress of cognitive science, which shows how our beliefs and desires can be processed, beyond folk psychology, as “a computational manipulation of representational inner states”. If you are already wondering whether there is anything else to a decision, you probably consider cognitive activity akin to a game of chess. The authors in this volume would rather see it as a game of pool, that is, a non-formal game in which you need to take into account real-time physical interactions: in the case of decisions, our sensorimotor interaction with a given environment plus our social interaction with other agents. All this is conceived as a continuous process that should be modeled (and explained) as such, i.e., by describing the range of changes that the agent-cum-environment system experiences over real time. In principle, there is no need to invoke standard mental representations or a global plan of action.
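To fix ideas about what such continuous modeling amounts to, here is a minimal dynamical-systems sketch; the variables and functions are generic placeholders of mine, not a model taken from the volume:

\[
\dot{x}_A = f(x_A, x_E), \qquad \dot{x}_E = g(x_E, x_A)
\]

Here \(x_A\) stands for the agent's state (neural and bodily), \(x_E\) for the state of the environment, and each rate of change depends on both. Explaining a decision then amounts to characterizing the trajectories of this coupled system over real time, rather than a sequence of operations on inner representations.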

This seems to be the explanatory approach emerging in the interdisciplinary field of embodied cognitive science, at least according to the editors of this Handbook (p. 13). Calvo and Gomila are well aware that not every author in their volume would accept such an approach to explanation. The aim of this compilation is precisely to bring the different agendas in this new field to converge on a joint research program (p. 15). Among these agendas, the editors cite: ecological psychology, behavior-based AI, embodied cognition, distributed cognition, perceptual symbol systems, some forms of connectionism, interactivism and dynamical systems theory. Their common thread, according to Calvo and Gomila, is to conceive of cognition and behavior “in terms of the dynamical interaction (coupling) of an embodied system that is embedded in the surrounding environment” (p. 7). The reader is properly warned that many of these terms are still awaiting a more precise definition ―including “embodied” itself (p. 12)―, but Calvo and Gomila believe that the success obtained by this approach in certain particular domains justifies a generalization that would first redefine the research agenda of cognitive science and then eventually expand into every other field in the social sciences where cognition plays an explanatory role.

The structure of the volume somehow reflects the current disunity of this project: it goes through the fields listed above, including several surveys, a number of success stories and a few conceptual discussions of the pros and cons of this emerging approach as opposed to mainstream cognitive science. The main division, for the purposes of this review, is between the analysis of, so to speak, lower and higher cognitive processes. The former are covered in sections 2-4, namely: “Robotics and Autonomous Agents”, “Perceiving and Acting” and “A Dynamic Brain”. These three sections exemplify several tenets that the editors present as distinctive in the embodied approach. For instance, the claim about perception being active and action perceptually guided is explored in chapters dealing with a control system for human avatars (ch. 8), an analysis of the use of inconsistent visual information for the control of our actions (ch. 11), experimental evidence on visual processes guiding sorting tasks (ch. 10) and, finally, a dynamical system model of the interaction of the neural network, the body and the environment of an evolutionary agent featuring visually guided object discrimination (ch. 6).

The evidence presented in these three sections is fascinating, at least for readers like me without any competence in the topics addressed therein. However, it is not presented in the systematic fashion you would expect from a Handbook. It is more a collection of papers representing the diversity of perspectives announced in the Introduction, and they rarely engage with one another’s claims. The editors have a point when they call for an empirical comparison of the different post-cognitive hypotheses in order to ponder their merit within the joint agenda (p. 15). But such a comparison rarely features in the Handbook, which is perhaps an accurate portrait of the state of the art in this field.

Nonetheless, we should grant that the evidence accumulated at these lower levels of cognitive activity is compelling enough to reconsider several traditional tenets about them. E.g., whereas in the traditional approach (both in philosophy and in cognitive science) vision was most often understood as yielding “internal representations for general-purpose use”, the brick-sorting experiment presented in chapter 10 strongly suggests that eye movements are task-oriented instead. The evidence for this hypothesis is provided by an experimental setup in which subjects operate in a virtual environment, wearing a head-mounted display that tracks their eye movements and manipulating a mechanical arm with their hands. Variations in the visual cues of the bricks during the sorting task revealed, for instance, that the subjects retrieved the relevant information either from the scene or from their working memory. An implicit cost function regulating visual attention seems to be at work here, even if we still do not know much about the mechanism implementing it. It probably evaluates such aspects as metabolic cost, cognitive load, temporal urgency, etc. The subjects themselves are certainly unaware of it being at work. According to Calvo and Gomila (p. 12), in this experiment perception seems to be more than building visual representations: it is active and guides action in quite a straightforward manner.
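Purely as an illustration of the kind of trade-off such a cost function might encode (the weights and terms below are hypothetical placeholders of mine, not a model proposed in the chapter), consider a weighted sum over a candidate gaze strategy \(a\), say, re-fixating the scene versus retrieving from working memory:

\[
C(a) = w_m\,\mathrm{Metabolic}(a) + w_c\,\mathrm{Load}(a) + w_t\,\mathrm{Urgency}(a)
\]

On this picture, attention would be allocated to whichever strategy minimizes \(C(a)\), which fits the observation that subjects switched between scene and memory as the experimenters varied the visual cues.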

However, as the authors of chapter 10 (Droll and Hayhoe) point out, the evidence presented is not incompatible with “formal models of executive control in which high-level decision processes [about the relevant visual parameters] affect lower level sensory selection” (p. 202). In other words, these experiments can also be interpreted as speaking for a certain continuity/compatibility between embodied and traditional approaches to cognition. The former may well help us reconsider certain low-level cognitive activities, but at a higher scale the latter may still play a role. This is the problem that the editors dub “scaling-up”: can we explain high-level cognition in an embodied fashion? This is the topic of sections 5-7, which cover “Embodied Meaning”, “Emotion and Social Interaction” and a general discussion of the transition from lower to higher levels of cognition.

In chapter 15, Lotte Meteyard and Gabriela Vigliocco present a wonderful review of the embodied theories of semantic representation. As they recall, in this approach we apprehend linguistic meaning by simulating the sensory-motor information produced by the referent of a word or a proposition. They distinguish between stronger and weaker versions of this approach according to the degree to which semantic content depends on this sensory-motor information (what the authors call their engagement hypothesis), reviewing the available evidence (both behavioral and neurological) for or against each version. The authors conclude that there is a tie between them, but the evidence speaks against those who deny the engagement hypothesis and claim absolute independence between semantic and sensory-motor information. Chapter 16 presents one particular approach to embodied meaning, stemming from the Neural Theory of Language project, taking concept learning as a case in point. The two remaining chapters in this section on embodied meaning deal with mathematics: in the former, Rafael Núñez applies the metaphorical approach to mathematics he developed with Lakoff to the analysis of axiomatic systems; in the latter, Arthur Glenberg draws out the practical implications of this approach for the teaching of mathematics. This Handbook is mainly aimed at practitioners of the cognitive sciences, but, all in all, this is probably the section that impinges most on the main tenets of mainstream analytic philosophy, and I miss a straightforward discussion of the philosophical “paradigm shift” implicit in its claims.

There is quite a contrast between sections 6 and 7. In the former, on “Scaling up”, two of the three papers compiled seem quite deflationary, at least if we measure by the standards of the editors. In chapter 19, Margaret Wilson explores the possible mechanisms by which abstract, de-contextualized thought may have emerged from sensorimotor abilities applied to immediate situations. However, she argues explicitly against reducing human cognition to situated cognition, which, in principle, leaves some room for traditional approaches to the former. In a similar vein, Michael Anderson (ch. 21) analyses brain imaging results showing cognitive overlaps between different areas of the brain and discusses to what extent these images speak unambiguously for embodied cognition. E.g., there is evidence that perceiving objects and object names activates brain regions associated with grasping. But this may be explained as a result of the redeployment of neural circuits across different domains in the evolution of our brain. Some sort of functional inheritance would often ensue as a result, without any further implication about their “embodied” connection.

The tone in the two papers compiled in section 7 is inflationary, by contrast. For instance, Shaun Gallagher (ch. 22) argues for an embodied alternative to standard theories of mind, in which we would not need belief or desire attribution to understand each other’s actions. This understanding would often be primary, originating in body expressions that we apprehend directly through perception, without mental representations. Gallagher’s paper puts forward a worldview different from the sort of empirically informed hypotheses that abound in this volume. However, it is worth reading, even if just to get a flavor of what a fully embodied approach would entail ―even more so for M. Sheets-Johnstone’s final chapter.

The volume ends abruptly; at least, I miss a final overview taking stock of all the evidence compiled and assessing the viability of the research program outlined by the editors in the introduction. The only general discussion can be found at the beginning, in the first two chapters ―therefore written without any explicit reference to the rest of the volume. M. Bickhard presents a conceptual argument against standard models of representation in cognitive science: they cannot account, he claims, for the possibility that the organism detects and corrects its own errors. Following Bickhard, if we instead ground representations on embodied interaction, it is possible to account for errors. Interactions involve a circular causal flow between the system and the environment, according to a range of indicated possibilities; errors will be detected by the system when this range is violated in the interaction. Again, we may wonder how this error-detection model applies to higher-level representations, but the volume is not very rich in suggestions on this particular point.

Hence, the only truly general discussion of the project in the volume is Andy Clark’s paper on “Embodiment in explanation” (ch. 2). Clark defends a somewhat conservative position: mainstream cognitive science should take into account the many findings of the embodied approach, without abandoning its current paradigm. Clark’s argument is based on a review of a significant sample of current research; had he drawn on the evidence compiled in this Handbook instead, his paper would have made an excellent conclusion. His conservatism originates in his skepticism about the possibility of a total identification between an agent’s experience and the underlying sensorimotor exercise, as is often assumed in the most radical versions of the embodied approach ―for instance, the connection between bodily experience and our basic conceptual repertoire, as it is sometimes presented by Lakoff and Johnson.

This Handbook certainly feeds Clark’s skepticism. Despite the editors’ efforts, I cannot discern in the papers compiled the possibility of building a general paradigm for cognitive science, one impinging on the very foundations of our many theories of social interaction. But I may just be short-sighted. Nonetheless, the volume is a good invitation to rethink many deeply rooted assumptions across the social sciences.