The Journal of Things We Like (Lots)

Repackaging Normativity

Triantafyllos Gkouvas, The Metric Approach to Legal Normativity, in Unpacking Normativity (Kenneth Einar Himma, Miodrag Jovanovic & Bojan Spaic, eds. 2018).

The subject of legal normativity has attracted a great deal of attention recently. The collection in which Triantafyllos Gkouvas’s chapter appears does much to display the variety of perspectives, themes, and issues that inform the current debate. Or, perhaps, current debates, given that a number of the positions expounded here and in other recent work on normativity tend to fix the debate with a particular character before contributing to it. Gkouvas’s chapter is particularly stimulating in seeking to develop an approach that cuts across different perspectives and joins together different roles of normativity in what he presents as a “standard of normative robustness.” (P. 17.)

This approach is styled the “metric approach” precisely because it can be used to measure the normative robustness of quite different legal theories. It offers to do this by concentrating on “the Nexus space of reason-giving facts,” (P. 18) in which the different roles of action-guidance, evaluation of action, and explanation of action cohere in a single fact (Pp. 18-19). Gkouvas’s notion of Nexus is borrowed from Joseph Raz’s use of the term in From Normativity to Responsibility to indicate the connection between the normative force of a fact and its explanatory potential in a normative/explanatory nexus. Gkouvas amplifies this nexus as covering the three normative roles just mentioned of guidance, evaluation, and explanation, corresponding to “three distinct component functions (metaphysical, evaluative and explanatory).” (P. 18.)

Gkouvas stresses (P. 21) that the use of the Nexus is to measure normative robustness within a theory of law and not to investigate the extent to which legal facts depend for their existence on possessing normative force. By this he means to indicate that the Nexus is neutral as between positivist and non-positivist theories of law. What the Nexus does measure in terms of normative robustness is then the ability of a particular theory to deliver legal facts that can fulfill the three roles, or the three “component functions.” After detailed investigation, he concludes that the theories of Raz and Greenberg do, whereas those of Dworkin and Shapiro amount to “deviant” theories that may nevertheless be measured by the Nexus. (P. 31.)

Within the detailed examination of both compliant and deviant theories, we are given further insights into how Gkouvas understands the three roles/components within the Nexus. Of particular interest are the different ways in which he sees Dworkin and Shapiro deviating in “need[ing] supplementation by extra-legal facts in order to account for the performance of one of the three functions that constitute a Nexus reason.” (P. 31.) Dworkin falls down on the second, in not providing “a robust evaluative role for legal facts,” while Shapiro falls down on the third, in failing to deliver “the explanatory potency of legal plans.” (P. 36.)

As devised and utilized by Gkouvas, the metric approach accordingly provides a novel framework in which to assess different characteristics of legal theories, beyond the familiar oppositions of positivist/non-positivist. It also stimulates further lines of inquiry on legal normativity. The neutrality he claims for his approach is clearly one that is premised on “answer[ability] to the Nexus standards” (P. 31), yet the Nexus itself is regarded as governing theories that relate legal facts to moral and social facts in different ways (P. 17), so the metric approach is extensive in its reach. Its fuller implications merit further reflection. Among them is quite possibly an interesting angle on the sui generis perspective on legal normativity, which has been featured in recent writing. If the satisfaction of normative robustness by a theory is dependent on not requiring supplementation by extra-legal facts, but the legal facts can exist “atop a stratified ontology featuring moral and/or social facts,” (Id.) it seems to follow that a robust legal normativity without the need for supplementation may nevertheless incorporate moral and social facts into a fully legalized normativity. This holds out the prospect of a sui generis understanding of legal normativity enjoying a richer profile than might at first be thought.

Cite as: Andrew Halpin, Repackaging Normativity, JOTWELL (May 10, 2019) (reviewing Triantafyllos Gkouvas, The Metric Approach to Legal Normativity, in Unpacking Normativity (Kenneth Einar Himma, Miodrag Jovanovic & Bojan Spaic, eds. 2018)), https://juris.jotwell.com/repackaging-normativity/.

The Neuroscience of Responsibility

William Hirstein, Katrina Sifferd & Tyler K. Fagan, Responsible Brains: Neuroscience, Law, and Human Culpability (2018).

The interface between law and neuroscience has been a continuing source of interest for lawyers and philosophers. Many scholars have hailed developments in neuroscience as singularly transformative for our understanding of human agency. Further—it is argued—once we understand human agency from the neuronal point of view, we will be forced to alter the ways in which our practices of responsibility—especially law—regulate human conduct.

In the view of some scholars, claims for the transformative impact of neuroscientific developments on law are overblown. Taken to an extreme, those who trumpet the transformative effects of neuroscience on law have sometimes been found to suffer from the malady Stephen Morse labels “Brain Overclaim Syndrome.” Labelling the syndrome a “cognitive pathology,” Morse argues that those in the grip of the pathology make claims that cannot be conceptually or empirically sustained.1

The authors of this provocative and interesting book make strong claims for the importance of neuroscience for our practices of responsibility. Their strongest conceptual claim is one they make often. In fact, the claim is the central thesis of their book. When it comes to responsibility assessment, the authors argue that the brain itself—specifically its executive functions—is “the seat of human responsibility.” (P. viii.)

The first of the book’s eleven chapters begins with a description of three recent high-profile criminal cases in which the state of the defendant’s brain was at issue. The most well-known of the three is that of Anders Breivik. In July 2011, the then 32-year-old Breivik killed 77 people—including many youths on the island of Utøya—using both explosives and automatic weapons. First found to be insane and later sane, Breivik’s case raises the question of the relevance of mental illness to responsibility and, thus, punishment. Mental illness has always been treated as a mitigating factor in both culpability and punishment assessments. The authors want to go beyond conventional approaches to these issues with the argument that “specific facts about the brains of the agents discussed in these cases…strongly inform assessment of their culpability.” (P. 8.)

It is important to be clear about just what the authors are claiming when it comes to the relationship between facts about the brain and normative evaluations of conduct (e.g., the criminal law). No one disputes the fact that neuroscientific assessments can and should impact responsibility assessments. The usual form such assessments take is a plea for mitigation at the sentencing stage of criminal proceedings. While the authors take no issue with these conventional approaches, their central claim is much stronger than one commonly encounters in the literature. They write: “Neuroscience is both relevant to responsibility and consistent with our ordinary ‘folk’ conceptions of it. Evidence from cognitive science and neuroscience can illuminate and inform the nature of responsibility and agency in specific, testable ways.” (P. 12.)

The authors develop their position with an account of “executive functions” (responsibility for action presupposes that the agent in question possesses a minimal working set of executive functions, or MWS). After a review of the current neuroscience of executive function (Chapter 2), the authors integrate executive function with reasons responsiveness (Chapter 3) and with legal theory and criminal law (Chapter 4). Chapters 5-7 are an extended engagement with the work of Neil Levy, who developed a theory of responsibility out of a theory of consciousness. Chapter 8 applies the authors’ theory to the special problems of juvenile responsibility. Chapter 9 considers insanity and Chapter 10 takes up punishment theory. The book ends with a chapter on future work and the issues that remain open for further development.

So what are executive functions and why are they so important? Executive functions are usually thought to be located in the prefrontal cortex of the brain. Recent work in neuroscience suggests that “all executive functions (or at least a core set of them) are accomplished by a single, unified brain network, the frontoparietal cognitive-control network (working together with adjunctive areas, some of which are unique to the particular executive process involved).” (P. 21.) They regulate a variety of human behaviors, most importantly perception, memory, and emotion.

The language of moral assessment—both everyday terms and specialized vocabularies like the criminal law—is disparaged by reductionist hardliners (e.g., Patricia Churchland) as mere “folk psychology.” Reductionist physicalists want to replace folk psychological terms like those employed in the criminal law with the language of science, specifically neuroscience. The motivation for this approach is the belief that folk psychological terms refer to nothing in the world. For the naturalist/reductionist/physicalist, because all behavior is the result of causal forces alone, only the language of science is appropriate in ascribing responsibility for action. Folk psychological terms are dismissed as vacuous and non-referential. They have no traction in the real world.

The authors describe themselves as naturalists and physicalists, but they wear their philosophical positions lightly. None of the arguments in their book turns on adoption of a particular metaphysics of action or responsibility. Nowhere is this more apparent than in their treatment of the relationship between the folk psychological vocabulary of the criminal law and neuroscientific facts about executive functions. Before further discussing this important aspect of their argument, let me deal with a preliminary issue.

Many philosophers claim that the criminal law presupposes free will. The argument is straightforward. The law punishes agents for wrongful acts committed with a guilty mind (mens rea). Agents choose whether to commit criminal acts through the exercise of individual will. The law assumes exercise of will is a matter of free choice. Agents decide (choose) whether to commit bad acts. The law punishes bad acts committed with the requisite mental state. As such, law presupposes free will.

Some philosophers claim that the criminal law rests on a mistake: there is no free will. All behavior is caused. Human action is not the result of individual choice as we are all just nodes in a long causal chain. The experience (i.e., the feeling) that we are in control and making choices is just an epiphenomenal illusion. We are no more in control of our behavior than a robot.

The authors describe themselves as “compatibilists,” meaning “that despite the laws of physics and our increasing ability to understand the mechanistic, causally determined nature of the physical underpinnings of human actions and decisions, we are still responsible for such actions and decisions….” (P. 75.) Nevertheless, the authors acknowledge the philosophical sophistication of some of the arguments of determinists. They reply with a discussion of the case of “Bert.” Bert forgot that he had custody of his kids for the weekend and left for Las Vegas. The kids spent the weekend alone in Bert’s apartment and he was arrested for child neglect. Bert’s executive capacities were all working (he possessed what the authors describe as “diachronic agency”): he had no excuse. He was responsible for the kids and he failed to meet that responsibility. Hard determinists want to argue that Bert had no choice in the matter: his genes, his environment, and his brain all made him act as he did. The authors spend little time with the arguments of hard determinists, likening the position to hardcore skepticism: “Nothing—no causal powers available to persons within our universe—could satisfy the free will skeptics.” (P. 209.)

How do the authors get from facts about the brain (i.e., executive functions) to responsibility assessments? Recall that the unique claim made by these authors is that neuroscientific facts can inform responsibility assessments, not just by providing facts to be taken into account but by setting standards for responsible conduct. In the case of Bert, they argue, he was possessed of all the cognitive capacities necessary to conform his conduct to the law. He could have trained himself to be more aware of his schedule. He could have given himself reminders. It was not that difficult for Bert to habituate himself to be a responsible agent. Bert has no defense to the charge of neglect.

A critical comment on this approach to responsibility is to agree that one cannot be held responsible for action if one lacks the neuronal faculties necessary for proper conduct. This is uncontroversial. But the authors of this engaging book make a further claim. They claim that these very capacities themselves set the standard for responsible action. Such a claim looks dangerously close to committing the is-ought fallacy. Selim Berker—in a sublime article on this point—argued that no scientific facts can generate normative consequences. Neuroscientific facts, he argued, are normatively inert.2 Thus, sentences like this one are worrying: “Evidence from cognitive science and neuroscience can illuminate and inform the nature of responsibility and agency in specific, testable ways.” (P. 12.)

It is one thing to identify cognitive capacities necessary for action and to then use neuroscientific as well as behavioral evidence of their presence or absence to make responsibility judgments. It is quite another thing to suggest that the neuroscientific evidence for those behavioral capacities and neuroscientific evidence generally provide the criteria for assessments of responsible conduct. To the extent that the latter claim is made, such a controversial move requires further argument, lest one attract the criticism of Brain Overclaim Syndrome.

This is a thoroughly engaging and well-written book. The authors survey much of the responsibility literature and provide engaging discussions of the leading positions. Their suggestions for the use of neuroscientific evidence in various contexts (e.g., the assessment of minors) are particularly persuasive. This is a book to be read by anyone with an interest in law and neuroscience, responsibility, criminal law, and ethics.

  1. See Stephen J. Morse, Brain Overclaim Syndrome and Criminal Responsibility: A Diagnostic Note, 3 Ohio St. J. Crim. L. 397 (2006).
  2. See Selim Berker, The Normative Insignificance of Neuroscience, 37 Phil. and Pub. Affairs 293 (2009).
Cite as: Dennis Patterson, The Neuroscience of Responsibility, JOTWELL (April 17, 2019) (reviewing William Hirstein, Katrina Sifferd & Tyler K. Fagan, Responsible Brains: Neuroscience, Law, and Human Culpability (2018)), https://juris.jotwell.com/the-neuroscience-of-responsibility/.

“Who Do You Think I Am?” or What it Means When We Lose Our Privacy

Craig Konnoth, An Expressive Theory of Privacy Intrusions, 102 Iowa L. Rev. 1533 (2017).

In the spring of 2018, we learned that Facebook, the technology company we cannot seem to get away from, allowed a political analytics group to obtain Facebook users’ data. In late 2018, Facebook admitted another, even more egregious intrusion. The New York Times showed us how the technology company gave millions of users’ personal data to other companies. It also allowed other companies to read the content of personal messages made on the platform, messages users assumed to be private. CEO Mark Zuckerberg testified before Congress and Facebook ran an apology ad campaign, including airing an apology video during the NBA playoffs. In a Facebook post, Zuckerberg pledged: “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.” In doing so, Zuckerberg signaled the importance of Facebook’s users, an importance that requires privacy protection. In other words, Facebook acknowledged that when it allowed a privacy violation, it inherently disrespected its users.

In An Expressive Theory of Privacy Intrusions, Craig Konnoth explicitly argues what Zuckerberg implicitly acknowledged: privacy intrusions involve more than what is being taken or how the intruders use that information. Intrusions express something about the breacher and the breachee beyond the material consequences; according to Konnoth, the social meaning of privacy intrusions suggests the victim’s lower social status, a form of “disrespect.”

In this article, Konnoth makes two main contributions that can help us understand the problem of privacy breaches. First, he argues that the very act of information intrusion harms, even when the information is relatively benign, when the intrusion does not stop actors from acting, or when the intruders protect that information against others. “Instead,” Konnoth argues, “the very act of intrusion sends a message about the values society holds dear and the status that particular individuals have in society.” (P. 1535.) He grounds this argument in expressive theories of law, placing privacy intrusions within theories of how state action communicates certain values. For example, he shows how the Supreme Court’s Fourth Amendment jurisprudence acknowledges these expressive purposes of searches: when schools conduct drug tests on student athletes, the search conveys an important value, the “abhorrence of…drug abuse.” (P. 1545.) In addition, privacy intrusions say something about their victims. For example, when a school drug tests athletes, the search expresses a belief that students are inherently immature and unable to make good choices.

Konnoth’s second contribution closely follows the first. He argues that when privacy intrusions affect a particular group, the intruder communicates a belief in the relatively lower social standing of that group vis-à-vis other groups not similarly affected. For equality purposes, Konnoth argues that rather than lower social standing triggering the privacy intrusion, the intrusion itself can also signal to others to regard the group as having lower social standing: “[P]rivacy intrusions are often combined with other forms of status expression…that specifically identify certain groups as undesirable.” (P. 1561.)

Of course, Facebook is not the government, the subject of Konnoth’s piece. Facebook is a private entity engaging in the intrusion; thus, while its actions are disrespectful, it does not have the social-status-generating power that the law does. It cannot grant social status, while the government can. Konnoth argues that if the government legally could do what Facebook did, it would create a unique status problem because when the government disrespects, it marks social status more generally. And it also marks those who intrude with impunity as having higher status.

What to do? Konnoth offers three solutions. The first and third are familiar: end the intrusion and apologize for the intrusion. But it is his second solution that brings the article together: change the privacy norms so that when privacy breaches occur, they don’t feel like intrusions at all. A breach that does not feel like an intrusion likely will not trigger feelings of disrespect. Indeed, so-called “data breaches” are becoming ubiquitous in our modern society, and fears of government spying are widespread. From companies whose job it is to monitor our most personal financial information (Equifax) to technology companies like Facebook, such breaches have already led many of us to feel that there is no such thing as privacy anymore. Furthermore, we have gotten so used to the breaches—what is one or a thousand more?

If Konnoth is right, maybe having more breaches, not fewer, is not such a bad thing.

Cite as: LaToya Baldwin Clark, “Who Do You Think I Am?” or What it Means When We Lose Our Privacy, JOTWELL (March 14, 2019) (reviewing Craig Konnoth, An Expressive Theory of Privacy Intrusions, 102 Iowa L. Rev. 1533 (2017)), https://juris.jotwell.com/who-do-you-think-i-am-or-what-it-means-when-we-lose-our-privacy/.

A New Jurisprudence?

Roger Cotterrell, Sociological Jurisprudence: Juristic Thought and Social Inquiry (2018).

This important and impressive new book by Roger Cotterrell represents a new and original perspective on legal theory, building considerably upon the author’s previous, justly celebrated, work. It calls for a “sociological jurisprudence” (not a mere sociology of law) and for a reorientation of jurisprudential study as a form of social inquiry. The book is not likely to please all jurisprudential scholars, but all should read it and will profit from doing so.

The book is divided into three parts: the first concerning the “juristic point of view”; the second, transnational legal theory; and the third, “legal values.” I will very briefly explore each in turn.

The first part of the book is devoted to the argument that there is not one single idea of “law” that holds true of all forms of legal order at all times and places. Instead, the inquiry must be an empirical one, generalizing a model with which to describe a great array of regulatory systems. This is possible if one begins one’s theorizing from the perspective of the “jurist,” addressing specifically juristic concerns. For the jurist’s responsibilities and characteristic modes of action are likely to be different in different social orders: in a tyranny, a bureaucratic state, a theocracy or a police state. (P. 34.)

It is nevertheless possible to ask, if the instances of regulation are so various and resistant to being described by one single model, how one is to distinguish between one empirically grounded generalization (which will only fit a percentage of actual regulatory systems) and another. Moreover, the book does not, it seems to me, adequately explain what a “jurist” is, beyond referring to it as an ideal-type. (P. 43.) If jurists’ responsibilities differ within different systems of regulation, how, again, can we distinguish jurists from non-jurist officials who govern otherwise than through law? It would be interesting to compare this idea with Finnis’s viewpoint, which also starts by considering the formation of concepts for sociological inquiry and develops (borrowing from Aristotle) the idea of “focal” concepts.1 Perhaps a second edition of the book might challenge this viewpoint.

The second part, on transnational legal theory, argues that many of the systems that have formed to address transnational legal problems do not readily fit within the models of legal order that are prevalent within domestic jurisprudence. Rather than categorizing all such systems as non-legal (for they are often treated as legal by participants in those contexts), Cotterrell argues that it is our definition or understanding of law that should shift and evolve. Indeed, Cotterrell believes that the presence of such systems indicates that there may be no timeless “essence” of law or legality by which we can measure social systems, to determine whether or not they embody distinctively legal governance. Again, many legal philosophers might reject this idea, but Cotterrell’s sophisticated defence of the idea should at least oblige philosophers to look at and understand anew their theories and underpinning assumptions. Indeed, this part of the book will be of intense interest not only to jurisprudential scholars, but also to those working in and around such transnational systems of regulation and “soft law.”

The final part of the book is an extended consideration of “legal values.” Here, the book seems to occupy the same ground as that of legal positivists, for it argues that the values that animate the law (such as justice and security) are not universal but time-bound and place-bound; and that we must therefore resort to what the book calls a “client-orientation” or perspective. Cotterrell writes:

If values are important, it has to be asked how they are important to those whom legal experts address…How do, for example, values of justice and security vary in significance and meaning for different client groups? How does the balance between such values vary in the aspirations and expectations of the legal expert’s clients and audiences—in various public or official perceptions; in certain social groups as contrasted with others? How does it vary in the perspectives of various agencies of the state or in the regulated citizenry, among academic audiences of students and scholars of law, or among diverse popular audiences outside the academy? And how far can any legal expert speak to society at large as the ultimate client or audience? (P. 22.)

Presumably, then, this (empirical) perspective must be value-neutral and descriptive? But this leaves open the question of how we are meant to elicit these values, even supposing particular “client groups” all hold the same values. Furthermore, if the legal expert’s “clients and audiences” are indeed diverse, how (aside from armchair sociology) are we to go about finding out the values of agencies, citizenry, students, scholars, and popular audiences? Not with so blunt an instrument as voting; and even a careful survey requires the surveyor to choose certain questions and not others, and to structure the survey in this way rather than that. But more to the point, if individuals do indeed get their values not from abstract or universal principles, but in the heat of particular contexts and difficulties, how could even the most careful survey provide an accurate account of people’s values?

The final chapters of the book discuss two values in particular that Cotterrell argues are central to at least modern, western states: individualism and social solidarity. The former value has been extensively explored in legal and political theory, so that its inclusion comes as no surprise. But the latter receives virtually no sustained attention, making this the most interesting part of a very interesting book. Clearly individuals will not themselves promote the value of social solidarity in their actions (unless toward family, friends, and perhaps colleagues), so it is up to the law—the jurist—to promote social integration. Cotterrell argues that social solidarity directly arises from economic interdependence (P. 189); but it seems arguable that such extensive and complex economic relationships depend upon prior social cohesion and shared goods such as language.

The book’s discussion of the importance of social solidarity is reminiscent of the work of an earlier positivist, Thomas Hobbes. (In fact Hobbes was also a natural lawyer, but that is a matter for another day.) As Hobbes makes clear, the internal peace and security of the community is more important than, and a prerequisite of, any other form of social good. But the idea of social solidarity is perhaps superior to that of Hobbesian security, for it intimates a realisation that safety and order can only be achieved if they also incorporate at least a degree of justice. I hope this latter idea is one that Cotterrell will return to in future writing.

In all, this is a very interesting, thought-provoking, and beautifully written book. The foregoing scarcely breaks the surface of the ideas involved, and I would encourage anyone who works in jurisprudence to read it carefully and sympathetically.

  1. See John Finnis, Natural Law and Natural Rights (2011), Ch. I.
Cite as: Sean Coyle, A New Jurisprudence?, JOTWELL (February 11, 2019) (reviewing Roger Cotterrell, Sociological Jurisprudence: Juristic Thought and Social Inquiry (2018)), https://juris.jotwell.com/a-new-jurisprudence/.

Layers of Intentions

Marcin Matczak, Three Kinds of Intention in Lawmaking, 36 Law and Philosophy 651 (2017).

“Legislative intention” is one of those concepts that many people use without recognizing the complexity of the underlying idea. The issue of statutory interpretation is frequently characterized as being a disagreement between “intentionalists” and “textualists,” an argument regarding what role, if any, lawmakers’ intentions should be given in determining the meaning and application of statutes. However, even if one starts from the position that legislative intentions are important, there is a further question regarding which intentions we are talking about.

This is where Marcin Matczak’s article, Three Kinds of Intention in Lawmaking, comes in. Matczak analyzes legislative intentions using the analytical structure J. L. Austin offered for talking about the intentions of everyday speech: locutionary intentions, illocutionary intentions, and perlocutionary intentions. The first, locutionary intentions, refers to (“semantic”) meaning—what the speaker was trying to say. The second, illocutionary intentions, refers to the type of speech act intended. Austin was well known for pointing out that utterances sometimes change things in the world—e.g., “I now pronounce you man and wife” can change the legal status of the individuals involved (he called such utterances “performative”). More generally, a set of words can be intended to be a special kind of utterance: e.g., a promise, request, or order. Austin’s third category, perlocutionary intentions, concerns how the person making the utterance hopes to change the world through the words chosen (e.g., getting other people to do things because the speaker has made certain promises, requests, or orders).

Matczak’s article asserts that the debates about statutory interpretation have wrongly emphasized locutionary (semantic) intentions; certainly, many of the commentators described (or self-described) as “intentionalists” emphasize semantic intentions. However, as Matczak points out, legislation is usually drafted by people other than the lawmakers; the semantic intentions thus belong to the drafters, not the legislators. Beyond this, there is no reason to suppose that all the lawmakers who voted for a bill shared identical understandings of its meaning, and there is no obvious way to aggregate differing semantic intentions.

Some commentators described, or self-described, as “intentionalists” (and a few constitutional “originalists”) focus on a different kind, or different aspect, of intention: the proposed effects of a law—intentions about how a legal rule will be applied (in Austin’s and Matczak’s terminology, “perlocutionary intentions”). However, as Matczak points out, some of the same problems arise here as with semantic intentions: it is highly unlikely that all those who support a proposed legal rule have the same views about how it would or should be applied. As Matczak argues (and here, as the article notes, he is agreeing with Joseph Raz), what lawmakers who vote for a bill do share is an intention to make the rule a valid law in that jurisdiction. These are illocutionary intentions: to make a certain kind of utterance—here, passing a law.

However, if we do not obtain the meaning of legislation from the semantic intentions of the drafters (because they are not legal officials) or from the intentions of the lawmakers (because they do not agree), where do we get meaning? Matczak’s response summarizes the views of “anti-intentionalist” theorists of language like Ruth Millikan: “the semantic content of the text does not depend on anyone’s intention or state of mind but rather on the history of language tools (words, sentences, etc.) used in that text.” (P. 661.) What the article advocates is more of an objective, “reasonable person” approach to meaning, rather than the sort of strict intentionalism advocated by Larry Alexander and (in his most recent writings) Stanley Fish.

In general, Matczak’s work is a worthy addition to the literature on legislative intention, joining theorists like Ronald Dworkin, Andrei Marmor, and Joseph Raz, in reminding us that figuring out which (what kind of) intentions can be used in statutory interpretation is not as easy as we generally believe.

Cite as: Brian Bix, Layers of Intentions, JOTWELL (January 14, 2019) (reviewing Marcin Matczak, Three Kinds of Intention in Lawmaking, 36 Law and Philosophy 651 (2017)), https://juris.jotwell.com/layers-of-intentions/.

Should Courts Punish Government Officials for Contempt?

Nicholas R. Parrillo, The Endgame of Administrative Law:  Governmental Disobedience and the Judicial Contempt Power, 131 Harv. L. Rev. 1055 (2018).

What happens when a federal court issues a definitive order to a federal agency and the agency takes a how-many-divisions-does-the-Pope-have position in response? The answer that comes to mind is that the court can find the agency or its officials in civil or criminal contempt. But when is that finding available, how often is it used, what sanctions are attached to it, and what is their effect?

Nicholas Parrillo answers those questions in this comprehensive and carefully reasoned article. He collects (using a methodology described in an online appendix) all the records of federal court opinions “in which contempt against a federal agency was considered at all seriously” and all the records of district court docket sheets “in which a contempt motion was made…against a federal agency.” (P. 696.) After analyzing the results, Professor Parrillo concludes that while district courts are willing to issue contempt findings against federal agencies and officials, appellate courts almost invariably reverse any sanctions attached to such findings. But he also finds that the appellate courts reverse on case-specific grounds that do not challenge the authority of courts to impose sanctions for contempt, and that findings of contempt, even without sanctions, can operate effectively through a shaming mechanism. This article provides unique and valuable documentation about contempt, the “endgame of administrative law,” and an obviously important element of our legal system. In addition, it contains major implications about the nature of the appellate process and about the normative force of law itself.

While Professor Parrillo does not explicitly identify the interpretive theory that appellate courts employ in reviewing trial court imposition of contempt sanctions, he strongly indicates that it is de novo review, followed by a sort of strict scrutiny regarding the conclusion. The reason, his research reveals, is the obverse of the reason why appellate courts review trial court findings of fact with a deferential standard.

The deference standard is based on the recognition that the trial judge has heard the witnesses, examined the physical evidence in detail, and reached her conclusion based on this experiential and intensive interaction with the litigating parties and the facts at issue. The de novo review and strict scrutiny that appellate courts apply to contempt sanctions are based on the sense that the trial judge has had this same experiential and intensive interaction and gotten angry at the agency.

Recognizing the truth of Aesop’s adage that “familiarity breeds contempt,” appellate courts, on the basis of their greater distance from the incompetent or recalcitrant agency, seem to employ stringent review of contempt sanctions to counter the trial judge’s ire. Their opinions indicate that they are concerned about the disruption of the agency’s mission and the impact on the public fisc that would result from imposition of the sanction. This constitutes an important insight into the nature of appellate review, one linked to the emerging literature on law and emotions. In addition to correcting legal errors, the appellate courts make use of their distance from the tumult of trial and of the abstract, discursive character of their own procedures to correct errors of excessive emotional engagement and thus increase the perceived rationality of the law.

Although sanctions for contempt are rarely imposed by trial courts and almost never upheld at the appellate level, Professor Parrillo does not conclude that findings of contempt are without effect. Rather, both the agency and its individual officials regularly make intense and sustained efforts to avoid being subject to such findings. The reason, Professor Parrillo suggests, on the basis of the language in judicial opinions and statements by agency officials, is that contempt has a powerful shaming function. “Federal agency officials,” he writes, “inhabit an overlapping cluster of communities…[that] recognize a strong norm in favor of compliance with court orders.” (P. 777.)

He thus provides, in the somewhat technical context of administrative law, specific confirmation of Max Weber’s sociological insight that government authority is derived from its normative force, an insight echoed in jurisprudence by H.L.A. Hart and in democratic theory by Robert Dahl. Stalin, from a position of absolute power and amoral cynicism, may have thought the Pope’s power resided only in any military force that he possessed, but both the leaders and members of a democratic society must accept and rely upon shared norms of legality in order for such a society to function.

This raises the question of civil disobedience; as Professor Parrillo points out at the end of the article, such disobedience is generally based on a countervailing norm. Federal officials, whose position is defined by law, are not likely to believe in any norm that would justify disobedience to law. A number of President Trump’s actions, however, suggest that he sees himself outside this legal context, not on the basis of a countervailing norm but as a cynical assertion of power. Professor Parrillo’s article serves as a reminder of the crucial role that norms of legality play in our system of government, and the need for all public officials to sustain them absent a convincing and deeply felt countervailing norm that they are willing to assert and defend.

Cite as: Edward Rubin, Should Courts Punish Government Officials for Contempt?, JOTWELL (December 18, 2018) (reviewing Nicholas R. Parrillo, The Endgame of Administrative Law:  Governmental Disobedience and the Judicial Contempt Power, 131 Harv. L. Rev. 1055 (2018)), https://juris.jotwell.com/should-courts-punish-government-officials-for-contempt/.

Disagreement and Adjudication

William Baude and Ryan Doerfler, Arguing with Friends, 117 Mich. L. Rev. 319 (2018).

In the mid-aughts, philosophers began to seriously consider the following question: how should you revise a belief, if at all, upon learning that you disagree with someone you trust? This has come to be known as the problem of peer disagreement. It’s a vexing problem. In the face of disagreement, our inclination is to remain confident. Yet, it is difficult to say why we should: if you think your friend is equally smart, and she reviewed the same information, what reason do you have to think that, in this particular case, you’re right and she’s wrong? On the other hand, if we should become much less confident, this seems, as philosopher Adam Elga puts it, rather spineless. And, while disagreement may prompt you to recheck your math on a split bill, it’s unlikely you’d rethink the morality of abortion. What, if anything, about the cases licenses distinct treatment?

Philosophers have proposed various responses. But, until recently, a search for “peer disagreement” in the legal literature would have yielded few results. Thankfully, a slew of articles has remedied this. Alex Stein writes on tribunals whose members come to the same conclusion, but for different reasons, and, separately, about post-conviction relief in light of conflicting expert testimony. Youngjae Lee writes about disagreement and the standard of proof in criminal trials. And, although they do not explicitly engage with the philosophical literature, Eric Posner and Adrian Vermeule discuss how judges on multimember courts ought to take into account the votes of their colleagues. William Baude and Ryan Doerfler’s article, in part a response to Posner and Vermeule, is required reading for anyone interested in disagreement and adjudication. Baude and Doerfler discuss what judges should do when they find out that other judges, or academics, disagree with them about a case. They land upon a moderate conciliationist position: become less confident when the disagreeing party is a “methodological friend,” and not otherwise.

This is in line with what some philosophers propose. The thought is something like this: if you think, before hearing some case, that a certain colleague on the bench would be as likely as you to get the right answer, then, upon disagreeing with her, it would be irrational (in the strict, philosophical sense) to think that you are right and she is wrong. After all, you share the same interpretative method and you heard the same legal arguments. This is why you thought you’d be equally likely to come to the right answer. When you disagree, it’s surprising. Thus, you ought to count the disagreement with the methodological friend as evidence, but not necessarily decisive evidence, that you’ve erred.

Baude and Doerfler’s view is moderate because it treats the disagreement as evidence that you’ve erred only when the disagreement is with a methodological friend. A disagreement with a non-friend provides no new evidence. Of course your originalist friend disagrees with you if you think originalism is bunk. As Baude and Doerfler put it while discussing the deep disagreement between Justices Scalia and Breyer, “…judges have had ample opportunity to rationally update themselves on the basis of those fundamental disputes. Hearing, one more time, that their colleagues have a different approach tells them nothing new.” (P. 12.)

Baude and Doerfler do a service to the discipline by contributing to a small but seemingly growing literature that attempts to draw applicable lessons from abstract work of contemporary analytic philosophers.

Cite as: Sam Fox Krauss, Disagreement and Adjudication, JOTWELL (December 3, 2018) (reviewing William Baude and Ryan Doerfler, Arguing with Friends, 117 Mich. L. Rev. 319 (2018)), https://juris.jotwell.com/disagreement-and-adjudication/.

Adapting Capabilities Approaches to Domestic Policy Problems

Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018).

Whether by statute or executive order, many agencies are required to produce cost-benefit analyses when proposing significant regulations and to justify their decisions in cost-benefit terms. The reason is not that cost-benefit analysis is perfect. Even its most thoughtful proponents recognize it has limitations. According to Matthew Adler and Eric Posner, for example, “[m]odern textbooks on [cost-benefit analysis] are plentiful, and some of them are optimistic about the usefulness of the procedure, but most of them frankly acknowledge its serious flaws and the inadequacy of standard methods for correcting these flaws.”1

Most proponents of cost-benefit analysis nevertheless suggest that when it comes to agency decision-making, no better and feasible alternative currently exists. Whether that is true depends on what the alternatives are. I have recently found A Reliability-Based Capability Approach useful in this regard. I believe it offers the right building blocks to articulate an alternative, capabilities-based approach to agency decision-making that may prove useful in a wide range of domestic policy contexts.

Capabilities approaches, as pioneered by Amartya Sen and Martha Nussbaum, are by now well known. Though there are many different ways to develop the idea, all begin with the conceptual claim that what is intrinsically valuable for people is not the resources they have, or just any subjective mental states, but rather what people are able to be or do. Whereas orthodox cost-benefit analysis relies heavily on willingness to pay to measure “costs” and “benefits” and thus typically uses market data or surveys to “price” most “costs” and “benefits,” capabilities approaches do not assume that everything of value must be priceable by a market. Capabilities approaches recognize that human welfare can also be multi-dimensional: deficits in one capability need not always be compensable through benefits to another. This means that it is not always useful to present things in terms of one aggregate measure.

Capabilities approaches have proven enormously influential in some contexts. The United Nations, for example, uses a capabilities approach to produce several metrics, like the Human Development Index and the Multi-Dimensional Poverty Index. These metrics have been widely used to guide policy decisions in many development contexts, but capabilities approaches have thus far had much less impact on domestic policy analysis.

What explains this difference in application? One reason relates to liberal concerns for value neutrality. Whatever its limitations, cost-benefit analysis at least has the merit of being sensitive to the changing preferences of a population, insofar as they are reflected in the market. By contrast, once one goes beyond the basic conceptual claims of capabilities approaches mentioned above, their application typically requires some method to settle which capabilities are intrinsically valuable and how to weigh them. This can pose a problem for liberal methods of decision-making because values are contested in free societies.

For some time now, I have thought that some of the conceptual claims made by capabilities approaches are undeniable. I have nevertheless shared the concern that capabilities approaches may not be sufficiently value-neutral for widespread use in domestic policy contexts by federal agencies. A Reliability-Based Capability Approach has prompted me to reexamine that view. The article develops a mathematically rigorous method to quantify the societal impacts of certain hazards, using a capabilities approach. Though the piece is focused on hazards, I believe these methods could be extended to produce a capabilities approach to evaluating legal regulations that avoids the charge of illiberalism.

When assessing liberal concerns with capabilities approaches, it can help to distinguish between two different types of capabilities. There are some capabilities that almost everyone agrees are valuable or even necessary for a good life. I will call these “basic capabilities.” Examples would include the capability to be healthy, to avoid premature mortality, and to have shelter. Then there are other capabilities, which different people in a free society might choose to exercise in different amounts (or sometimes not at all) based on their different conceptions of the good. I will call these “non-basic capabilities.”

I see potentially useful aspects to A Reliability-Based Capability Approach when it comes to measuring the impacts of legal regulations on both basic and non-basic capabilities. The article begins with a mathematical formalism that uses vectors to represent different achieved functionings (which are valuable beings or doings) of individual persons. (A vector is just a quantity in an n-dimensional space that can be represented as an arrow with a direction and magnitude. In this case, the n dimensions reflect the n classes of achieved functionings that will be measured.) These vectors are then transformed into vectors of indices of these achieved functionings. Standard empirical methods can be used to predict the likely outcomes of hazards (or regulations, I suggest, by extension) on these indices.

The article allows for the definition of certain thresholds of “acceptability” and “tolerability” of any component of an index. It then offers a mathematical approach, based in systems analysis, which allows one to calculate the “acceptability” and “tolerability” of a predicted outcome and return a “non-acceptable” or “non-tolerable” conclusion if any predicted functioning for an individual falls below a set threshold for that type of functioning. It should be noted that “functionings,” in the language of capabilities approaches, are achieved beings or doings, whereas “capabilities” are abilities to achieve valuable beings or doings. Functionings can only be presumed to provide good proxies for capabilities when it comes to basic capabilities, which almost no one would fail to pursue if they were capable.
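To make the structure of this formalism concrete, the following is a minimal sketch, in Python, of how functioning vectors, indices, and acceptability/tolerability thresholds might fit together. The functioning categories, the linear rescaling used to build the indices, and the numerical thresholds are my own illustrative assumptions, not the authors’ actual equations.

```python
import numpy as np

# Hypothetical bounds used to rescale raw achieved functionings into [0, 1] indices.
# (The authors' actual index construction may differ; this is an assumed linear rescaling.)
FUNCTIONING_BOUNDS = {
    "health":   (0.0, 100.0),  # e.g., a composite health score
    "shelter":  (0.0, 1.0),    # e.g., probability of adequate housing
    "mobility": (0.0, 50.0),   # e.g., accessible kilometers per day
}

# Assumed per-functioning thresholds, stated on the [0, 1] index scale.
ACCEPTABLE = {"health": 0.6, "shelter": 0.7, "mobility": 0.5}
TOLERABLE = {"health": 0.4, "shelter": 0.5, "mobility": 0.3}


def to_indices(functionings):
    """Rescale one person's raw achieved functionings into [0, 1] indices."""
    indices = {}
    for name, value in functionings.items():
        lo, hi = FUNCTIONING_BOUNDS[name]
        indices[name] = float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))
    return indices


def classify(functionings):
    """Return 'non-tolerable' if any index falls below its tolerability threshold,
    'non-acceptable' if any falls below its acceptability threshold, else 'acceptable'."""
    indices = to_indices(functionings)
    if any(indices[k] < TOLERABLE[k] for k in indices):
        return "non-tolerable"
    if any(indices[k] < ACCEPTABLE[k] for k in indices):
        return "non-acceptable"
    return "acceptable"


# Example: one individual's predicted post-hazard (or, by extension, post-regulation) functionings.
person = {"health": 55.0, "shelter": 0.9, "mobility": 20.0}
print(classify(person))  # "non-acceptable": health (0.55) and mobility (0.40) fall below acceptability
```

The structural point the sketch preserves is that the check is non-compensatory: a shortfall in any single component yields a “non-acceptable” or “non-tolerable” verdict, no matter how high the other indices are.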

The authors suggest using democratic processes to determine what capabilities are valuable and what thresholds should be used to make these determinations. But there is another possibility. With suitable modification, these equations could be used to determine what thresholds of “acceptability” and “tolerability” are implied for each basic capability, within a larger group, by a proposed regulation. This might be done by combining information about the predicted average and standard deviations for each component. When it comes to basic capabilities, which everyone agrees are valuable, I believe it would provide useful information to know whether these implicit thresholds would be increased or lowered by a proposed regulation.
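As a rough illustration of that suggestion, here is one assumed way to extract an “implied floor” for a basic capability from a regulation’s predicted distribution of indices, using the population mean minus a multiple of the standard deviation. Both the summary statistic and the simulated numbers are hypothetical; the point is only to show how two regulations could compare on such a floor.

```python
import numpy as np


def implied_floor(indices, k=2.0):
    """One assumed summary of the level a regulation implicitly treats as acceptable
    for a basic capability: the predicted mean index across the affected population
    minus k standard deviations (roughly, the level below which only the worst-off fall)."""
    indices = np.asarray(indices)
    return float(indices.mean() - k * indices.std())


# Hypothetical predicted health indices for the same population under two regulations.
rng = np.random.default_rng(0)
under_reg_a = np.clip(rng.normal(loc=0.70, scale=0.10, size=10_000), 0.0, 1.0)
under_reg_b = np.clip(rng.normal(loc=0.72, scale=0.20, size=10_000), 0.0, 1.0)

print(f"Implied floor under regulation A: {implied_floor(under_reg_a):.2f}")  # roughly 0.5
print(f"Implied floor under regulation B: {implied_floor(under_reg_b):.2f}")  # roughly 0.3
# Regulation B raises the average slightly but lowers the implicit floor: on this
# (assumed) summary, it treats a worse outcome for the worst-off as acceptable.
```

This is the kind of comparison the QALY example in the next paragraph makes informally.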

Consider, for example, a capabilities-based measure that is similar in spirit and might be integrated into such a framework: quality-adjusted life years (QALYs). An agency that is considering two different regulations, each of which decreases the overall costs of healthcare, might find that both are cost-benefit justified. One regulation might nevertheless be predicted to lower the implied minimal acceptability or tolerability thresholds for quality-adjusted years of life because it decreases the costs of certain luxury health services (i.e., services that some people may decide to purchase but that do not extend QALYs) while making it harder for many other people, who have fewer financial resources, to obtain cheap health services that would greatly extend their quality-adjusted years of life. The other regulation might be predicted to raise these minimum thresholds. All else being equal, the second regulation should be preferred.

Instead of trying to decide in advance how to weigh all these factors, it might be sufficient to render all these facts transparent during the notice and comments period of a proposed regulation. Then more people could know what regulations are actually doing and could respond politically.

By contrast, cost-benefit analysis—at least as it is typically operationalized using willingness to pay to measure the relevant “costs” and “benefits”—tends to obscure some consequences of regulations. There is nothing inherently valuable about willingness to pay. Hence, reliance on this metric only makes sense if differences in willingness to pay are the best available proxies for differences in human welfare. But as the hypothetical example of healthcare in an unregulated market will now show, market prices are often poor indicators of the routes to human welfare.

The problem arises from a combination of cost-benefit analysis with wealth inequality. People who have more resources may be willing to pay relatively large amounts for some health services that do not contribute much at all to QALYs. But many poor people may be unable to afford even some basic healthcare services that are critical for their QALYs. This is not because the capability to be healthy or to avoid premature mortality is less intrinsically valuable to the poor. Nor is it because some lives are more valuable than others. People who are poor must simply make harder choices with their limited financial resources. As a result, orthodox cost-benefit analysis can count small welfare benefits to the rich more heavily than larger welfare benefits to the poor.

That mainstream cost-benefit analysis systematically favors the wealthy is well known among philosophers and economists. The language and formalisms of mainstream cost-benefit analysis nevertheless hide these consequences of regulatory choices from most people. It would be much more transparent if agencies were required to produce not only cost-benefit analyses, when proposing major regulatory changes, but also reports on the likely impacts on the thresholds of acceptability and tolerability for any basic capabilities that may be affected. It is not necessary to decide in advance what the right thresholds should be. Sunshine may often be a sufficient disinfectant.

A different solution is required when it comes to measuring the effects of regulations on non-basic capabilities. These are capabilities that different people may value differently (or not at all) in a free society. I believe that a different idea found in A Reliability-Based Capability Approach may help with this problem as well.

In particular, the article proposes using the standard deviation of indices as a way to measure the variability in achieved functionings that people exhibit with respect to different capabilities. Though the idea would need to be developed, I see in it the embryonic form of an index that could measure people’s effective abilities to choose between different achieved functionings and thus to pursue different conceptions of the good.

An index of this kind would be just as value-neutral as cost-benefit analysis, but it would not systematically favor the wealthy. Use of it would also address another well-known limitation of cost-benefit analysis. Most people value some things—like community, friendship, and faith—that are neither sold on a market nor could maintain their value if they were. Some other goods and services—like domestic labor within a family—contribute great amounts to human welfare but are not well priced by markets because they are often freely given. Regulatory analyses that rely too heavily on cost-benefit analysis tend to overcount values that are commodified (or at least commodifiable) relative to values that are not. That cannot be good for a society, given everything that people actually value. An index that measures people’s capabilities to pursue their personal conceptions of the good, regardless of how much is commodified or commodifiable, would be extremely useful for the law.

  1. Matthew D. Adler & Eric A. Posner, Rethinking Cost-Benefit Analysis, 109 Yale L.J. 165 (1999).
Cite as: Robin Kar, Adapting Capabilities Approaches to Domestic Policy Problems, JOTWELL (October 17, 2018) (reviewing Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018)), https://juris.jotwell.com/adapting-capabilities-approaches-to-domestic-policy-problems/.

Does the Center Want to Hold?

David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN.

The very idea of a meaningful left-center-right political spectrum always seemed suspect to me. Many commentators have warned against conflating cultural and economic “wings.” The cultural left wants to get the state out of the bedroom (so to speak). The economic left wants to get the state into the boardroom. The cultural right wants to inject the state into the bedroom, to regulate sexual and procreative matters. The economic right wants the state out of the boardroom, sweeping away pesky regulations of the workplace and the market.

Plainly, one might be on the economic right but on the cultural left, or vice versa. It would be a mistake to try to cram these different dimensions into one. Would someone who happened to fall simultaneously on the economic left and the cultural right count as…a centrist? An outlier? (Gene Debs called socialism “Christianity in action”—where does that put him?)

Set this worry aside, and assume that correlations with, say, attitudes about immigration serve to validate the use of a one-dimensional spectrum. Extensive surveys have been conducted that ask respondents where they place themselves. Some of these surveys go on to ask about attitudes toward democracy and elections and the importance of having a strong, decisive leader unfettered by a congress or parliament. David Adler, a young researcher who recently moved from London to Athens, has looked at this data and has uncovered what he calls the “Centrist Paradox.” Anyone who is concerned about the direction democracies are taking ought to take a careful look, too.

I had always assumed that if social science placed a representative person on a left-center-right political spectrum and independently measured that person’s attachment to democratic ideals, it would find that people toward the extremes tend to be less attached to the norms of democracy, while people in the middle are more attached. As Adler puts it, “there is an intuition that there is an elective affinity between extreme political views and support for strongman politics to implement them.” (P. 2.) (Lenin for the left, Franco for the right, as it were.) No research, he finds, has bothered to test this assumption. And—shockingly—it turns out that the reverse is likelier to be true. People in the center appear to be (for the most part) the least attached to democracy.

Adler reports his analysis of data representing the U.S., the U.K., and a number of E.U. countries from 2008 and 2010-16. He says his results are robust when controlling for variables such as income, education, and age (which have been suggested as factors tending toward “populism”). He is careful to distinguish support for democratic principles from satisfaction with democratic outcomes. (P. 7.) While the left and right wings may be less happy with outcomes, it is the center—paradoxically—that is the least happy with the process itself.1

The U.S. results are especially striking, and the heaviest gob-smacker of all is that “less than half of the political centrists in the United States view free elections as essential to democracy—over thirty percent less than their center-left neighbors.” (P. 4.) Free elections! This is far more disturbing than polls indicating that the Bill of Rights lacks majority support. Those amendments are meant to constrain majority power, so the majority can be expected to chafe. A Bill of Rights, like a separation of powers, is essential to liberal democracy, but not to democracy per se. But if free elections are not essential to democracy, what is? Even Hungarian Premier Viktor Orbán’s “illiberal democracy”—not to mention a host of sham democracies—is wedded to free elections. Yet Adler’s analysis finds that a majority of self-identified U.S. centrists declines to endorse the almost tautological proposition that free elections are essential to democracy.

Trying to wrap my head around what Adler seems to have uncovered, I ask myself what other commonsense assurances have to be called in for re-examination if he is right. Many assume that, in “our” democracy, the center will tend to check the excesses of any extreme candidate. The landslide losses of “far-right” Barry Goldwater to “centrist” Lyndon Johnson in 1964, and “far-left” George McGovern to “centrist” Richard Nixon in 1972, are the cautionary tales directed at “fringe” insurgencies. A polarizing candidate is supposed to frighten and activate the center, and thus lose. That’s how the system works.

But is there an as-yet untried method by which a polarizing candidate might win over the American center? Perhaps by posturing as uncommonly strong and decisive, even if—especially if!—unfashionably and unapologetically “undemocratic”? If the strong, decisive figure also has an energized base on one extreme, so much the better. (I mean, so much the worse…for our received wisdom.) A strongman with an unshakable base might find polarization to be an effective tactic for exploiting the center’s relative indifference to democratic values.

  1. Cf. Man In Center of Political Spectrum Under Impression He Less Obnoxious, The Onion (Aug. 18, 2017).
Cite as: W.A. Edmundson, Does the Center Want to Hold?, JOTWELL (October 2, 2018) (reviewing David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN), https://juris.jotwell.com/does-the-center-want-to-hold/.

Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?

Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017).

Although in most states and in the federal system, the law’s answer to the title question is “yes,” Youngjae Lee’s answer—with a qualification it will take the rest of this jot to explain—is “no.” To be more precise, his answer, surprisingly, is that it depends on the issue over which jurors disagree. Making certain assumptions, Lee argues that unanimity is the best rule to adopt for juries reaching decisions about empirical facts in criminal cases. In these circumstances, requiring unanimity among jurors is most faithful both to the beyond-a-reasonable-doubt requirement for conviction and to the justification of this requirement. But juries must make decisions on all of the elements of crimes (and sometimes on affirmative defenses, I might add); to do this, juries must often decide issues that are at least partly evaluative. (Lee calls them “moral issues.”) Some of his examples come from the core of criminal law: rape (reasonable belief in consent or a reasonable expectation that the defendant recognize lack of consent) or homicide (depraved-heart murder, reckless homicide, self-defense). For these decisions, Lee argues, unanimity is not the rule to adopt.

He arrives at these conclusions by assuming a principle of rationality that has lately attracted attention from epistemologists: the “equal weight view.” That view says that if there is disagreement among persons with equal cognitive capabilities and equal access to information (“epistemic peers”), each belief is equally reasonable and so has equal weight. Each person should adjust his or her belief in the direction of those with whom he or she disagrees. In a simple case of 11-1 disagreement where eleven have the highest confidence in the defendant’s guilt, the equal weight view requires that they lower their confidence. Under some circumstances, lowering by the eleven results in an insufficient average level of confidence among all the jurors—insufficient to satisfy the requirement of proof beyond a reasonable doubt—so a unanimous verdict of not guilty should be reached. If the sole dissenter is not very confident in his or her case for acquittal, the average belief in the probability of guilt may remain high enough to satisfy the beyond-a-reasonable-doubt standard, and so a unanimous verdict of guilty should be reached. But not if the level of confidence required to satisfy that standard is very stringent: then any amount of dissent from conviction leads, on the equal weight view, to acquittal.
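
To see the arithmetic at work, here is a minimal sketch, in Python, of the simple averaging version of the equal weight view described above. The credence figures, the 0.95 and 0.98 thresholds standing in for proof beyond a reasonable doubt, and the device of pooling by straight averaging are my own illustrative assumptions, not numbers or formulas from Lee’s article.

```python
def pooled_credence(credences):
    """Simple equal-weight pooling: epistemic peers converge on the average credence."""
    return sum(credences) / len(credences)

def verdict(credences, threshold):
    """Unanimous verdict once every juror has adopted the pooled credence."""
    return "guilty" if pooled_credence(credences) >= threshold else "not guilty"

# Eleven jurors who are nearly certain of guilt, plus one dissenter.
confident_dissenter = [0.99] * 11 + [0.10]   # dissenter strongly favors acquittal
tepid_dissenter     = [0.99] * 11 + [0.80]   # dissenter only mildly favors acquittal

for threshold in (0.95, 0.98):               # the second is a stricter reading of the standard
    for label, jury in (("confident dissenter", confident_dissenter),
                        ("tepid dissenter", tepid_dissenter)):
        print(f"threshold {threshold}: {label} -> "
              f"pooled credence {pooled_credence(jury):.3f}, verdict {verdict(jury, threshold)}")
```

On the 0.95 reading, the tepid dissenter leaves the pooled credence high enough to convict, while on the stricter 0.98 reading even mild dissent pulls the average below the line—the pattern the paragraph above describes.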

Lee then adds other assumptions. One is that jurors are likely to fail to apply the equal weight view consistently—i.e., they do not always adjust their confidence levels in the face of disagreement with those they recognize as epistemic peers. When this happens, he shows, assuming the equal weight view is correct, a supermajority voting standard will sometimes result in a false conviction. A unanimity rule would instead lead to either an acquittal or a mistrial due to a hung jury. Something similar happens under Lee’s next assumption: that jurors who are very confident of a defendant’s guilt and who are applying the equal weight rule will likely not recognize dissenters as epistemic peers. In both cases, given the undesirability of convicting the factually innocent, the unanimity rule leads to better results when jurors disagree. It generates decisions that approximate ones that jurors would reach if they were more rational, Lee claims. Plus, it is a way of enforcing the beyond-a-reasonable-doubt requirement.
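
The same toy model, again with numbers of my own choosing rather than Lee’s, can illustrate why the voting rule matters once some jurors fail to update: a 10-of-12 supermajority rule can convict even though equal-weight pooling would counsel acquittal, while a unanimity rule cannot.

```python
BARD = 0.95  # assumed numerical stand-in for proof beyond a reasonable doubt

# Ten very confident jurors who never adjust their credences, plus two dissenters.
# All values are illustrative assumptions, not figures from Lee's article.
credences = [0.99] * 10 + [0.10, 0.10]

rational_pool = sum(credences) / len(credences)        # what equal-weight updating would yield
guilty_votes = sum(1 for c in credences if c >= BARD)  # votes cast on unadjusted credences

print(f"equal-weight pooled credence: {rational_pool:.3f} "
      f"-> {'convict' if rational_pool >= BARD else 'acquit'} if every juror updated")
print("supermajority rule (10 of 12):", "guilty" if guilty_votes >= 10 else "no conviction")
print("unanimity rule (12 of 12):    ", "guilty" if guilty_votes == 12 else "acquittal or hung jury")
```

Relative to the equal-weight benchmark, the supermajority conviction here is the kind of false positive Lee worries about, and the unanimity rule blocks it.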

But only for the finding of factual matters. On most of Lee’s earlier assumptions, in an 11-1 split on moral issues, the equal weight thesis would require acquittal. But moral disagreement is common. Lee thinks many splits among jurors on moral issues, with various numbers of dissenters, would, on the equal weight view, have to end in acquittals.

The mechanism that generates this outcome, however, seems wrong. It is inappropriate for disagreeing jurors to alter their opinions on moral issues in accordance with the equal weight view. Lee contends that doing so conflicts with the justification for the criminal jury: the jury reflects the community morality and is the community “conscience.” Lee takes the latter word seriously and tries to explain why respecting a juror’s conscience conflicts with instructing the juror to revise a moral judgment in the face of controversy. Simply put, in moral disagreement, it is not rational to treat another’s conscience and one’s own as equally reasonable.

I don’t think he convincingly pinpoints why, for reasons too lengthy to explain here. However, the case for the inappropriateness of an alteration-and-unanimity requirement for moral decisions can be strengthened. If there is a moral truth to which the community is committed, and if exposing that commitment requires advanced moral skills, then the alteration requirement is inappropriate; for rarely will there be twelve jurors with equal moral abilities. It is unlikely that disagreeing jurors are epistemic peers, contrary to one of Lee’s assumptions. (Lee has misgivings about this not-epistemic-peers response.) If, on the other hand, the question of community morality is about the application of a social norm, it is likely that the jurors are epistemic peers. However, social norms are indeterminate at points. (Lee remarks that the evaluative terms in question are “vague.”) If the disputed issue falls into this region, a decision must be made: a precisification. One can argue for the appropriateness of a majority, or a supermajority, on democratic grounds, perhaps; however, given that there are always deviants from social norms, there is no reason to require unanimity.

I said that Lee answers “no” to the title question, with a qualification. Lee ends his article by suggesting that if beyond a reasonable doubt requires the equal weight view (recall that he has made assumptions that are merely plausible), then it may turn out that jury decisions on moral issues should not be required to be beyond a reasonable doubt, after all.

Cite as: Barbara Levenbook, Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?, JOTWELL (September 5, 2018) (reviewing Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017)), https://juris.jotwell.com/does-belief-beyond-a-reasonable-doubt-require-unanimity-among-jurors/.