The Journal of Things We Like (Lots)
Jens Clausen and Neil Levy (eds.), Handbook of Neuroethics (2015).

The relationship between the mind and the brain is a topic of immense philosophical, scientific, and popular interest.1 The diverse but interacting powers, abilities, and capacities that we associate with the mind and mental life both link humans with other animals and constitute what makes us uniquely human. These powers, abilities, and capacities include perception, sensation, knowledge, memory, belief, imagination, emotion, mood, appetite, intention, and action. The brain, in interaction with other aspects of the nervous system and the rest of the human body, makes these possible.

Obviously, the relationship between the mind and the brain is enormously complicated. It is one thing to say that the mind (or some particular aspect of mental life, for example, pain) “depends on” the brain (compare supervenience—the idea that there can be no change in mental state without an underlying change in physical, i.e., brain, state) and another to say that the mind (or a particular aspect of it) just is the brain, or can be “reduced” to the brain (in the sense that it can be explained, or explained away, in neural terms). Whether it can or cannot will depend on a number of empirical and conceptual issues.

The empirical issues concern the evidential base and the adequacy of the scientific explanations for the phenomena that we associate with the mind and the sensory, affective, cognitive, and cogitative categories that comprise our mental lives. Inquiry into these issues has been aided by an explosion of work in cognitive neuroscience over the past couple of decades, itself aided by an explosion of technology providing detailed information about brain structure and process (most importantly, types of brain imaging). The conceptual issues, by contrast, concern whether the mental categories at work in this research—such as lying—are being applied coherently in the first place.

A good example of the importance of the conceptual/empirical distinction is found in the context of fMRI lie detection. The problem is one of under- and over-inclusion. fMRI studies may be under-inclusive if they are measuring “intent to deceive” rather than lying because some lies do not involve any intent to deceive. More importantly, however, the studies may be over-inclusive in that they count as “lies” acts by subjects that are not in fact acts of lying. If so, then this undermines attempts to draw inferences from neural data about the test subjects to whether actual witnesses are engaged in acts of actual lying.

Here is the problem. Not every utterance that a speaker believes to be false is a lie. For example, when a speaker is telling a joke or reciting a line in a play, a false assertion is not a lie. As Don Fallis notes in an insightful article, what makes “I am the Prince of Denmark” a lie when uttered at a dinner party, but not when delivered on stage in a play, is the set of conversational norms in effect.2 Fallis explores the conceptual contours of lying through numerous examples and presents the following schematic definition:

You lie to X if and only if:

  • You state that p to X.
  • You believe that you make this statement in a context where the following norm of conversation is in effect:
       Do not make statements that you believe to be false.
  • You believe that p is false.3

This definition “capture[s] the normative component of assertion that is necessary for lying.”4

The fMRI studies do not fit. The subjects in the studies are instructed to assert false statements on certain occasions, sometimes with an intent to deceive an audience; however, their false statements are not acts of lying. Even when subjects commit or plan mock “crimes,” they are not in a situation where the following norm is in effect: do not make statements that you believe to be false. Indeed, they are instructed to do precisely the opposite. Thus, the acts being measured, even when they involve deception, appear to be closer to the actions of someone playing a game, joking, or role-playing. If this is so, then the relationship between the neural activity of these subjects and acts of lying is not clear. In the legal context, by contrast, this norm—do not make statements that you believe to be false—is in place, as the crimes of perjury and making false statements make clear. The practical significance of this conceptual issue is obvious: to draw conclusions about whether someone is actually lying based on the fact that his neural activity resembles that of subjects who were not lying (but were mistakenly thought to be) could be a disaster. To draw conclusions about whether someone is actually lying in a legal context, the underlying studies must examine actual lying or at least provide compelling reasons why results from non-lies should inform judgments about lying.

All of the issues mentioned above come together in the interface between neuroscience and law, which has become a burgeoning field. Questions such as the use of fMRI technology in courts, the possibility of lie detection, the role of the brain in memory, cognitive enhancement, and free will have all attracted the deep and abiding interest of legal scholars. Owing to these interests, the collection under review will be of immense value to scholars working in this emerging and exciting subfield.

This three-volume set should be consulted by anyone working in law and neuroscience. An entire section of the work (in Volume 3) is devoted to “Neurolaw,” but many other parts of this treatise will also be of interest to lawyers working in the field. Neuroenhancement—using drugs and technologies to enhance human cognitive skills—is a very hot topic at the moment. The same can be said of free will, the ethics of brain imaging, and neuromarketing, to name just a few of the topics treated in this collection. The articles are by many of the best-known authors in the field. The editors and the publisher have put together an indispensable collection that will be of interest to all scholars working at the law and neuroscience interface.

  1. The full range of issues in the interface between law and neuroscience is covered in M. Pardo and D. Patterson, Minds, Brains and Law (2013).
  2. Don Fallis, What is Lying?, 106 J. Phil. 29, 33-37 (2009).
  3. Id. at 34. Astute readers will notice that Fallis’s definition does not require that the statement actually be false.
  4. Id. at 35.
Cite as: Dennis Patterson, Law, Neuroscience and Neuroethics, JOTWELL (May 19, 2015) (reviewing Jens Clausen and Neil Levy (eds.), Handbook of Neuroethics (2015)),