The Journal of Things We Like (Lots)

Yearly Archives: 2018

Should Courts Punish Government Officials for Contempt?

Nicholas R. Parrillo, The Endgame of Administrative Law: Governmental Disobedience and the Judicial Contempt Power, 131 Harv. L. Rev. 1055 (2018).

What happens when a federal court issues a definitive order to a federal agency and the agency takes a how-many-divisions-does-the-Pope-have position in response? The answer that comes to mind is that the court can find the agency or its officials in civil or criminal contempt. But when is that finding available, how often is it used, what sanctions are attached to it, and what is their effect?

Nicholas Parrillo answers those questions in this comprehensive and carefully reasoned article. He collects (using a methodology described in an online appendix) all the records of federal court opinions “in which contempt against a federal agency was considered at all seriously” and all the records of district court docket sheets “in which a contempt motion was made…against a federal agency.” (P. 696.) After analyzing the results, Professor Parrillo concludes that while district courts are willing to issue contempt findings against federal agencies and officials, appellate courts almost invariably reverse any sanctions attached to such findings. But he also finds that the appellate courts reverse on case-specific grounds that do not challenge the authority of courts to impose sanctions for contempt, and that findings of contempt, even without sanctions, can operate effectively through a shaming mechanism. This article provides unique and valuable documentation about contempt, the “endgame of administrative law,” and an obviously important element of our legal system. In addition, it carries major implications for the nature of the appellate process and for the normative force of law itself.

While Professor Parrillo does not explicitly identify the interpretive theory that appellate courts employ in reviewing trial court imposition of contempt sanctions, he strongly indicates that it is de novo review, followed by a sort of strict scrutiny regarding the conclusion. The reason, his research reveals, is the obverse of the reason why appellate courts review trial court findings of fact with a deferential standard.

The deference standard is based on the recognition that the trial judge has heard the witnesses, examined the physical evidence in detail, and reached her conclusion based on this experiential and intensive interaction with the litigating parties and the facts at issue. The de novo review and strict scrutiny that appellate courts apply to contempt sanctions are based on the sense that the trial judge has had this same experiential and intensive interaction and gotten angry at the agency.

Recognizing the truth of Aesop’s adage that “familiarity breeds contempt,” appellate courts, on the basis of their greater distance from the incompetent or recalcitrant agency, seem to employ stringent review of contempt sanctions to counter the trial judge’s ire. Their opinions indicate that they are concerned about the disruption of the agency’s mission and the impact on the public fisc that would result from imposition of the sanction. This constitutes an important insight into the nature of appellate review, one linked to the emerging literature on law and emotions. In addition to correcting legal errors, the appellate courts make use of their distance from the tumult of trial and of the abstract, discursive character of their own procedures to correct errors of excessive emotional engagement and thus increase the perceived rationality of the law.

Although sanctions for contempt are rarely imposed by trial courts and almost never upheld at the appellate level, Professor Parrillo does not conclude that findings of contempt are without effect. Rather, both the agency and its individual officials regularly make intense and sustained efforts to avoid being subject to such findings. The reason, Professor Parrillo suggests, on the basis of the language in judicial opinions and statements by agency officials, is that contempt has a powerful shaming function. “Federal agency officials,” he writes “inhabit an overlapping cluster of communities…[that] recognize a strong norm in favor of compliance with court orders.” (P. 777.)

He thus provides, in the somewhat technical context of administrative law, specific confirmation of Max Weber’s sociological insight that government authority is derived from its normative force, an insight echoed in jurisprudence by H.L.A. Hart and in democratic theory by Robert Dahl. Stalin, from a position of absolute power and amoral cynicism, may have thought the Pope’s power resided only in any military force that he possessed, but both the leaders and members of a democratic society must accept and rely upon shared norms of legality in order for such a society to function.

This raises the question of civil disobedience; as Professor Parrillo points out at the end of the article, such disobedience is generally based on a countervailing norm. Federal officials, whose position is defined by law, are not likely to believe in any norm that would justify disobedience to law. A number of President Trump’s actions, however, suggest that he sees himself outside this legal context, not on the basis of a countervailing norm but as a cynical assertion of power. Professor Parrillo’s article serves as a reminder of the crucial role that norms of legality play in our system of government, and the need for all public officials to sustain them absent a convincing and deeply felt countervailing norm that they are willing to assert and defend.

Cite as: Edward Rubin, Should Courts Punish Government Officials for Contempt?, JOTWELL (December 18, 2018) (reviewing Nicholas R. Parrillo, The Endgame of Administrative Law: Governmental Disobedience and the Judicial Contempt Power, 131 Harv. L. Rev. 1055 (2018)), https://juris.jotwell.com/should-courts-punish-government-officials-for-contempt/.

Disagreement and Adjudication

William Baude and Ryan Doerfler, Arguing with Friends, 117 Mich. L. Rev. 319 (2018).

In the mid-aughts, philosophers began to seriously consider the following question: how should you revise a belief, if at all, upon learning that you disagree with someone you trust? This has come to be known as the problem of peer disagreement. It’s a vexing problem. In the face of disagreement, our inclination is to remain confident. Yet, it is difficult to say why we should: if you think your friend is equally smart, and she reviewed the same information, what reason do you have to think that, in this particular case, you’re right and she’s wrong? On the other hand, if we should become much less confident, this seems, as philosopher Adam Elga puts it, rather spineless. And, while disagreement may prompt you to recheck your math on a split bill, it’s unlikely you’d rethink the morality of abortion. What, if anything, about the cases licenses distinct treatment?

Philosophers have proposed various responses. But, until recently, a search for “peer disagreement” in the legal literature would have yielded few results. Thankfully, a slew of articles has remedied this. Alex Stein writes on tribunals whose members come to the same conclusion, but for different reasons, and, separately, about post-conviction relief in light of conflicting expert testimony. Youngjae Lee writes about disagreement and the standard of proof in criminal trials. And, although they do not explicitly engage with the philosophical literature, Eric Posner and Adrian Vermeule discuss how judges on multimember courts ought to take into account the votes of their colleagues. William Baude and Ryan Doerfler’s article, in part a response to Posner and Vermeule, is required reading for anyone interested in disagreement and adjudication. Baude and Doerfler discuss what judges should do when they find out that other judges, or academics, disagree with them about a case. They land upon a moderate conciliationist position: become less confident when the disagreeing party is a “methodological friend,” and not otherwise.

This is in line with what some philosophers propose. The thought is something like this: if you think, before hearing some case, that a certain colleague on the bench would be as likely as you to get the right answer, then, upon disagreeing with her, it would be irrational (in the strict, philosophical sense) to think that you are right and she is wrong. After all, you share the same interpretative method and you heard the same legal arguments. This is why you thought you’d be equally likely to come to the right answer. When you disagree, it’s surprising. Thus, you ought to count the disagreement with the methodological friend as evidence, but not necessarily decisive evidence, that you’ve erred.

Baude and Doerfler’s view is moderate because it treats the disagreement as evidence that you’ve erred only when the disagreement is with a methodological friend. A disagreement with a non-friend provides no new evidence. Of course your originalist friend disagrees with you if you think originalism is bunk. As Baude and Doerfler put it while discussing the deep disagreement between Justices Scalia and Breyer, “…judges have had ample opportunity to rationally update themselves on the basis of those fundamental disputes. Hearing, one more time, that their colleagues have a different approach tells them nothing new.” (P. 12.)

Baude and Doerfler do a service to the discipline by contributing to a small but seemingly growing literature that attempts to draw applicable lessons from abstract work of contemporary analytic philosophers.

Cite as: Sam Fox Krauss, Disagreement and Adjudication, JOTWELL (December 3, 2018) (reviewing William Baude and Ryan Doerfler, Arguing with Friends, 117 Mich. L. Rev. 319 (2018)), https://juris.jotwell.com/disagreement-and-adjudication/.

Adapting Capabilities Approaches to Domestic Policy Problems

Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018).

Whether by statute or executive order, many agencies are required to produce cost-benefit analyses when proposing significant regulations and to justify their decisions in those terms. The reason is not that cost-benefit analysis is perfect. Even its most thoughtful proponents recognize it has limitations. According to Matthew Adler and Eric Posner, for example, “[m]odern textbooks on [cost-benefit analysis] are plentiful, and some of them are optimistic about the usefulness of the procedure, but most of them frankly acknowledge its serious flaws and the inadequacy of standard methods for correcting these flaws.”

Most proponents of cost-benefit analysis nevertheless suggest that when it comes to agency decision-making, no alternative that is both better and feasible currently exists. Whether that is true depends on what the alternatives are. I have recently found A Reliability-Based Capability Approach useful in this regard. I believe it offers the right building blocks to articulate an alternative, capabilities-based approach to agency decision-making that may prove useful in a wide range of domestic policy contexts.

Capabilities approaches, as pioneered by Amartya Sen and Martha Nussbaum, are by now well known. Though there are many different ways to develop the idea, all begin with the conceptual claim that what is intrinsically valuable for people is not the resources they have, nor merely their subjective mental states, but rather what people are able to be or do. Whereas orthodox cost-benefit analysis relies heavily on willingness to pay to measure “costs” and “benefits” and thus typically uses market data or surveys to “price” most “costs” and “benefits,” capabilities approaches do not assume that everything of value must be priceable by a market. Capabilities approaches also recognize that human welfare can be multi-dimensional: deficits in one capability need not always be compensable through benefits to another. This means that it is not always useful to present things in terms of one aggregate measure.

Capabilities approaches have proven enormously influential in some contexts. The United Nations, for example, uses a capabilities approach to produce several metrics, like the Human Development Index and the Multi-Dimensional Poverty Index. These metrics have been widely used to guide policy decisions in many development contexts, but capabilities approaches have thus far had much less impact on domestic policy analysis.

What explains this difference in application? One reason relates to liberal concerns for value neutrality. Whatever its limitations, cost-benefit analysis at least has the merit of being sensitive to the changing preferences of a population, insofar as they are reflected in the market. By contrast, once one goes beyond the basic conceptual claims of capabilities approaches mentioned above, their application typically requires some method to settle which capabilities are intrinsically valuable and how to weigh them. This can pose a problem for liberal methods of decision-making because values are contested in free societies.

For some time now, I have thought that some of the conceptual claims made by capabilities approaches are undeniable. I have nevertheless shared the concern that capabilities approaches may not be sufficiently value-neutral for widespread use in domestic policy contexts by federal agencies. A Reliability-Based Capability Approach has prompted me to reexamine that view. The article develops a mathematically rigorous method to quantify the societal impacts of certain hazards, using a capabilities approach. Though the piece is focused on hazards, I believe these methods could be extended to produce a capabilities approach to evaluate legal regulations that avoids the charge of illiberalism.

When assessing liberal concerns with capabilities approaches, it can help to distinguish between two different types of capabilities. There are some capabilities that almost everyone agrees are valuable or even necessary for a good life. I will call these “basic capabilities.” Examples would include the capability to be healthy, to avoid premature mortality, and to have shelter. Then there are other capabilities, which different people in a free society might choose to exercise in different amounts (or sometimes not at all) based on their different conceptions of the good. I will call these “non-basic capabilities.”

I see potentially useful aspects to A Reliability-Based Capability Approach when it comes to measuring the impacts of legal regulations on both basic and non-basic capabilities. The article begins with a mathematical formalism that uses vectors to represent different achieved functionings (which are valuable beings or doings) of individual persons. (A vector is just a quantity in an n-dimensional space that can be represented as an arrow with a direction and magnitude. In this case, the n dimensions reflect the n classes of achieved functionings that will be measured.) These vectors are then transformed into vectors of indices of these achieved functionings. Standard empirical methods can be used to predict the likely outcomes of hazards (or regulations, I suggest, by extension) on these indices.

The article allows for the definition of certain thresholds of “acceptability” and “tolerability” of any component of an index. It then offers a mathematical approach, based in systems analysis, which allows one to calculate the “acceptability” and “tolerability” of a predicted outcome and return a “non-acceptable” or “non-tolerable” conclusion if any predicted functioning for an individual falls below a set threshold for that type of functioning. It should be noted that “functionings,” in the language of capabilities approaches, are achieved beings or doings, whereas “capabilities” are abilities to achieve valuable beings or doings. Functionings can only be presumed to provide good proxies for capabilities when it comes to basic capabilities, which almost no one would fail to pursue if they were capable.
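The threshold logic just described lends itself to a compact illustration. Below is a minimal sketch, assuming a simplified version of the article's systems-analysis check in which each individual's predicted functioning indices are compared against per-functioning floors; the function name, the functioning categories, and the numerical thresholds are all hypothetical, invented for illustration rather than drawn from the article:

```python
# Illustrative sketch only: a simplified rendering of the acceptability/
# tolerability check described above. Names and numbers are hypothetical,
# not taken from Tabandeh, Gardoni & Murphy.

def classify_outcome(functionings, acceptable, tolerable):
    """Classify a predicted outcome for one individual.

    functionings: dict mapping a functioning index to its predicted value.
    acceptable / tolerable: dicts of per-functioning thresholds, where
    tolerable[k] <= acceptable[k] for every functioning k.
    """
    if any(functionings[k] < tolerable[k] for k in functionings):
        return "non-tolerable"   # falls below the stricter floor
    if any(functionings[k] < acceptable[k] for k in functionings):
        return "tolerable"       # above the floor, below full acceptability
    return "acceptable"

# Hypothetical indices scaled 0-1 for health, shelter, and longevity.
predicted = {"health": 0.82, "shelter": 0.55, "longevity": 0.90}
acceptable = {"health": 0.70, "shelter": 0.60, "longevity": 0.70}
tolerable = {"health": 0.50, "shelter": 0.40, "longevity": 0.50}

print(classify_outcome(predicted, acceptable, tolerable))  # prints "tolerable"
```

Note how the check captures the non-compensability point made earlier: a single deficit (here, shelter) drives the classification regardless of surpluses in the other dimensions, rather than being averaged away into one aggregate measure.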

The authors suggest using democratic processes to determine what capabilities are valuable and what thresholds should be used to make these determinations. But there is another possibility. With suitable modification, these equations could be used to determine what thresholds of “acceptability” and “tolerability” are implied for each basic capability, within a larger group, by a proposed regulation. This might be done by combining information about the predicted average and standard deviations for each component. When it comes to basic capabilities, which everyone agrees are valuable, I believe it would provide useful information to know whether these implicit thresholds would be increased or lowered by a proposed regulation.

Consider, for example, a capabilities-based measure that is similar in spirit and might be integrated into such a framework: quality-adjusted life years (QALYs). An agency that is considering two different regulations, both of which decrease the overall costs of healthcare, might find that both are cost-benefit justified. One set of regulations might nevertheless be predicted to lower the implied minimal acceptability or tolerability thresholds for quality-adjusted years of life because it decreases the costs for certain luxury health services (i.e., services that some people may decide to purchase but that do not extend QALYs) while making it harder for many other people, who have fewer financial resources, to obtain cheap health services that would greatly extend their quality-adjusted years of life. The other regulation might be predicted to raise these minimum thresholds. All else being equal, the second regulation should be preferred.

Instead of trying to decide in advance how to weigh all these factors, it might be sufficient to render all these facts transparent during the notice-and-comment period of a proposed regulation. Then more people could know what regulations are actually doing and could respond politically.

By contrast, cost-benefit analysis—at least as it is typically operationalized using willingness to pay to measure the relevant “costs” and “benefits”—tends to obscure some consequences of regulations. There is nothing inherently valuable about willingness to pay. Hence, reliance on this metric only makes sense if differences in willingness to pay are the best available proxies for differences in human welfare. But as the hypothetical example of healthcare in an unregulated market will now show, market prices are often poor indicators of the routes to human welfare.

The problem arises from a combination of cost-benefit analysis with wealth inequality. People who have more resources may be willing to pay relatively large amounts for some health services that do not contribute much at all to QALYs. But many poor people may be unable to afford even some basic healthcare services that are critical for their QALYs. This is not because the capability to be healthy or to avoid premature mortality is less intrinsically valuable to the poor. Nor is it because some lives are more valuable than others. People who are poor must simply make harder choices with their limited financial resources. As a result, orthodox cost-benefit analysis can count small welfare benefits to the rich more heavily than larger welfare benefits to the poor.

That mainstream cost-benefit analysis systematically favors the wealthy is well known among philosophers and economists. The language and formalisms of mainstream cost-benefit analysis nevertheless hide these consequences of regulatory choices from most people. It would be much more transparent if agencies were required to produce not only cost-benefit analyses, when proposing major regulatory changes, but also reports on the likely impacts on the thresholds of acceptability and tolerability for any basic capabilities that may be affected. It is not necessary to decide in advance what the right thresholds should be. Sunshine may often be a sufficient disinfectant.

A different solution is required when it comes to measuring the effects of regulations on non-basic capabilities. These are capabilities that different people may value differently (or not at all) in a free society. I believe that a different idea found in A Reliability-Based Capability Approach may help with this problem as well.

In particular, the article proposes using the standard deviation of indices as a way to measure the variability in achieved functionings that people exhibit with respect to different capabilities. Though the idea would need to be developed, I see in it the embryonic form of an index that could measure people’s effective abilities to choose between different achieved functionings and thus pursue different conceptions of the good.

An index of this kind would be just as value-neutral as cost-benefit analysis, but it would not systematically favor the wealthy. Use of it would also address another well-known limitation of cost-benefit analysis. Most people value some things—like community, friendship, and faith—that are neither sold on a market nor could maintain their value if they were. Some other goods and services—like domestic labor within a family—contribute great amounts to human welfare but are not well priced by markets because they are often freely given. Regulations that rely too heavily on cost-benefit analysis tend to privilege values that are commodified (or at least commodifiable) over values that are not. That cannot be good for a society, given everything that people actually value. An index that measures people’s capabilities to pursue their personal conceptions of the good, regardless of how much is commodified or commodifiable, would be extremely useful for the law.

Cite as: Robin Kar, Adapting Capabilities Approaches to Domestic Policy Problems, JOTWELL (October 17, 2018) (reviewing Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018)), https://juris.jotwell.com/adapting-capabilities-approaches-to-domestic-policy-problems/.

Does the Center Want to Hold?

David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN.

The very idea of a meaningful left-center-right political spectrum always seemed suspect to me. Many commentators have warned against conflating cultural and economic “wings.” The cultural left wants to get the state out of the bedroom (so to speak). The economic left wants to get the state into the boardroom. The cultural right wants to inject the state into the bedroom, to regulate sexual and procreative matters. The economic right wants the state out of the boardroom, sweeping away pesky regulations of the workplace and the market.

Plainly, one might be on the economic right but on the cultural left, or vice versa. It would be a mistake to try to cram these different dimensions into one. Would someone who happened to fall simultaneously on the economic left and the cultural right count as…a centrist? An outlier? (Gene Debs called socialism “Christianity in action“—where does that put him?)

Set this worry aside, and assume that correlations with, say, attitudes about immigration serve to validate the use of a one-dimensional spectrum. Extensive surveys have been conducted that ask respondents where they place themselves. Some of these surveys go on to ask about attitudes toward democracy and elections and the importance of having a strong, decisive leader unfettered by a congress or parliament. David Adler, a young researcher who recently moved from London to Athens, has looked at this data and has uncovered what he calls the “Centrist Paradox.” Anyone who is concerned about the direction democracies are taking ought to take a careful look, too.

I had always assumed that if social science placed a representative person on a left-center-right political spectrum, and independently measured that person’s attachment to democratic ideals, it would find that people toward the extremes tend to have a lesser attachment to the norms of democracy, while people in the middle are more attached. As Adler puts it, “there is an intuition that there is an elective affinity between extreme political views and support for strongman politics to implement them.” (P. 2.) (Lenin for the left, Franco for the right, as it were.) No research, he finds, has bothered to test this assumption. And—shockingly—it turns out that the reverse is likelier to be true. People in the center appear to be (for the most part) the least attached to democracy.

Adler reports his analysis of data representing the U.S., the U.K., and a number of E.U. countries from 2008 and 2010-16. He says his results are robust when controlling for variables such as income, education, and age (which have been suggested as factors tending toward “populism”). He is careful to distinguish support for democratic principles from satisfaction with democratic outcomes. (P. 7.) While the left and right wings may be less happy with outcomes, it is the center—paradoxically—that is the least happy with the process itself.

The U.S. results are especially striking, and the heaviest gob-smacker of all is that “less than half of the political centrists in the United States view free elections as essential to democracy—over thirty percent less than their center-left neighbors.” (P. 4.) Free elections! This is far more disturbing than polls that indicate the Bill of Rights lacks majority support. Those amendments are meant to constrain majority power, so the majority can be expected to chafe. A Bill of Rights, like a separation of powers, is essential to liberal democracy, but not to democracy per se. But if free elections are not essential to democracy, what is? Even Hungarian Premier Viktor Orbán’s “illiberal democracy”—not to mention a host of sham democracies—is wedded to free elections. Yet, Adler’s analysis finds that a majority of self-identified U.S. centrists rejects the almost tautological proposition that free elections are the essence of democracy.

Trying to wrap my head around what Adler seems to have uncovered, I ask myself what other commonsense assurances have to be called in for re-examination if he is right. Many assume that, in “our” democracy, the center will tend to check the excesses of any extreme candidate. The landslide losses of “far-right” Barry Goldwater to “centrist” Lyndon Johnson in 1964, and “far-left” George McGovern to “centrist” Richard Nixon in 1972, are the cautionary tales directed at “fringe” insurgencies. A polarizing candidate is supposed to frighten and activate the center, and thus lose. That’s how the system works.

But is there an as-yet untried method by which a polarizing candidate might win over the American center? Perhaps by posturing as uncommonly strong and decisive, even if—especially if!—unfashionably and unapologetically “undemocratic”? If the strong, decisive figure also has an energized base on one extreme, so much the better. (I mean, so much the worse…for our received wisdom.) A strongman with an unshakable base might find polarization to be an effective tactic for exploiting the center’s relative indifference to democratic values.

Cite as: W.A. Edmundson, Does the Center Want to Hold?, JOTWELL (October 2, 2018) (reviewing David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN), https://juris.jotwell.com/does-the-center-want-to-hold/.

Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?

Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017).

Although in most states and in the federal system, the law’s answer to the title question is “yes,” Youngjae Lee’s answer—with a qualification it will take the rest of this jot to explain—is “no.” To be more precise, his answer, surprisingly, is that it depends on the issue that is liable to disagreement. Making certain assumptions, Lee argues that unanimity is the best rule to adopt for juries reaching decisions about empirical facts in criminal cases. In these circumstances, requiring unanimity among jurors is both most faithful to the beyond-the-reasonable-doubt requirement for conviction and most faithful to the justification of this requirement. But juries must make decisions on all of the elements of crimes (and sometimes on affirmative defenses, I might add); to do this, often juries must make decisions on issues that are at least partly evaluative. (Lee calls them “moral issues.”) Some of his examples come from the core of criminal law: rape (reasonable belief in consent or a reasonable expectation that defendant recognize lack of consent) or homicide (depraved-heart murder, reckless homicide, self-defense). For these decisions, Lee argues, unanimity is not the rule to adopt.

He arrives at these conclusions by assuming a principle of rationality that has lately attracted attention from epistemologists: the “equal weight view.” That view says that if there is disagreement among persons with equal cognitive capabilities and equal access to information (“epistemic peers”), each belief is equally reasonable, and so, has equal weight. Each person should adjust his belief in the direction of those with whom he or she disagrees. In a simple case of 11-1 disagreement where eleven have the highest confidence about the defendant’s guilt, the equal weight view requires that they lower their confidence. Under some circumstances, lowering by the eleven results in an insufficient average level of confidence among all the jurors—insufficient to satisfy the requirement of being beyond a reasonable doubt—so a unanimous verdict of not guilty should be reached. If the sole dissenter is not very confident in his opinion for acquittal, the average belief in the probability of guilt may remain high enough to satisfy the standard of beyond a reasonable doubt and so, a unanimous verdict of guilty should be reached. But not if the level of confidence satisfying the beyond-a-reasonable-doubt standard is very stringent. Then any amount of dissent regarding conviction leads on the equal weight view to acquittal.
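The arithmetic driving these outcomes is simple enough to sketch. Here is a minimal illustration, assuming the equal weight view is modeled as replacing each juror's credence in guilt with the simple group average; the credence figures, the function name, and the 0.95 beyond-a-reasonable-doubt threshold are hypothetical, not drawn from Lee's article:

```python
# Hypothetical sketch of the equal-weight averaging described above;
# a simplification for illustration, not Lee's actual model.

def equal_weight_verdict(credences, bard=0.95):
    """Revise every juror's credence in guilt to the group average,
    then convict only if that shared credence meets the BARD threshold."""
    revised = sum(credences) / len(credences)
    return ("guilty" if revised >= bard else "not guilty"), revised

# An 11-1 split: eleven jurors near-certain of guilt, one serious dissenter.
# The average (about 0.93) falls short of 0.95, so all twelve should acquit.
verdict, avg = equal_weight_verdict([0.99] * 11 + [0.30])
print(verdict, avg)

# Same split, but the dissenter is only mildly doubtful: the average
# (about 0.97) clears the threshold, so the revised jury convicts.
verdict, avg = equal_weight_verdict([0.99] * 11 + [0.80])
print(verdict, avg)
```

Raising the threshold toward 1 reproduces Lee's further point: under a sufficiently stringent BARD standard, any dissent at all pulls the averaged credence below the threshold and forces acquittal.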

Lee then adds other assumptions. One is that jurors are likely to fail to apply the equal weight view consistently—i.e., they do not always adjust their confidence levels in the face of disagreement with those they recognize as epistemic peers. When this happens, he shows, assuming the equal weight view is correct, a supermajority voting standard will sometimes result in a false conviction. A unanimity rule would lead to either an acquittal or a mistrial, due to a hung jury. Something similar happens under Lee’s next assumption: that it is likely that jurors who are very confident of a defendant’s guilt and applying the equal weight rule will not recognize dissenters as epistemic peers. In both cases, given the undesirability of convicting the factually innocent, the unanimity rule leads to better results when jurors disagree. It generates decisions that approximate ones that jurors would reach if they were more rational, Lee claims. Plus, it is a way of enforcing the beyond-a-reasonable-doubt requirement.

But only for the finding of factual matters. On most of Lee’s earlier assumptions, in an 11-1 split on moral issues, the equal weight thesis would require acquittal. But moral disagreement is common. Lee thinks many splits among jurors on moral issues, with various numbers of dissenters, would, on the equal weight view, have to end in acquittals.

The mechanism that generates this outcome, however, seems wrong. It is inappropriate for disagreeing jurors to alter their opinions on moral issues in accordance with the equal weight view. Lee contends that doing so conflicts with the justification for the criminal jury: the jury reflects the community morality and is the community “conscience.” Lee takes the latter word seriously and tries to explain why respecting a juror’s conscience conflicts with instructing the juror to revise a moral judgment in the face of controversy. Simply put, in moral disagreement, it is not rational to treat another’s conscience and one’s own as equally reasonable.

I don’t think he convincingly pinpoints why, for reasons too lengthy to explain here. However, the case for the inappropriateness of an alteration-and-unanimity requirement for moral decisions can be strengthened. If there is a moral truth to which the community is committed, and if exposing that commitment requires advanced moral skills, then the alteration requirement is inappropriate; for rarely will there be twelve jurors with equal moral abilities. It is unlikely that disagreeing jurors are epistemic peers, contrary to one of Lee’s assumptions. (Lee has misgivings about this not-epistemic-peers response.) If, on the other hand, the question of community morality is about the application of a social norm, it is likely the jurors are epistemic peers. However, social norms are indeterminate at points. (Lee remarks that the evaluative terms in question are “vague.”) If the disputed issue falls into this region, a decision must be made: a precisification. One can argue for the appropriateness of a majority, or a supermajority, on democratic grounds, perhaps; however, given that there are always deviants from social norms, there is no reason to require unanimity.

I said that Lee answers “no” to the title question, with a qualification. Lee ends his article by suggesting that if beyond a reasonable doubt requires the equal weight view (recall that he has made assumptions that are merely plausible), then it may turn out that jury decisions on moral issues should not be required to be beyond a reasonable doubt, after all.

Cite as: Barbara Levenbook, Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?, JOTWELL (September 5, 2018) (reviewing Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017)), https://juris.jotwell.com/does-belief-beyond-a-reasonable-doubt-require-unanimity-among-jurors/.