The Journal of Things We Like (Lots)

Adapting Capabilities Approaches to Domestic Policy Problems

Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018).

Whether by statute or executive order, many agencies are required to produce cost-benefit analyses when proposing significant regulations and to justify their decisions in cost-benefit terms. The reason is not that cost-benefit analysis is perfect. Even its most thoughtful proponents recognize its limitations. According to Matthew Adler and Eric Posner, for example, “[m]odern textbooks on [cost-benefit analysis] are plentiful, and some of them are optimistic about the usefulness of the procedure, but most of them frankly acknowledge its serious flaws and the inadequacy of standard methods for correcting these flaws.”1

Most proponents of cost-benefit analysis nevertheless suggest that, when it comes to agency decision-making, no alternative currently exists that is both better and feasible. Whether that is true depends on what the alternatives are. I have recently found A Reliability-Based Capability Approach useful in this regard. I believe it offers the right building blocks for an alternative, capabilities-based approach to agency decision-making that may prove useful in a wide range of domestic policy contexts.

Capabilities approaches, as pioneered by Amartya Sen and Martha Nussbaum, are by now well known. Though there are many ways to develop the idea, all begin with the conceptual claim that what is intrinsically valuable for people is not the resources they have, nor their subjective mental states alone, but rather what they are able to be or do. Whereas orthodox cost-benefit analysis relies heavily on willingness to pay to measure “costs” and “benefits,” and thus typically uses market data or surveys to “price” them, capabilities approaches do not assume that everything of value must be priceable by a market. They also recognize that human welfare is multi-dimensional: deficits in one capability need not be compensable through benefits to another, so it is not always useful to present outcomes in terms of a single aggregate measure.

Capabilities approaches have proven enormously influential in some contexts. The United Nations, for example, uses a capabilities approach to produce several metrics, like the Human Development Index and the Multi-Dimensional Poverty Index. These metrics have been widely used to guide policy decisions in many development contexts, but capabilities approaches have thus far had much less impact on domestic policy analysis.

What explains this difference in application? One reason relates to liberal concerns for value neutrality. Whatever its limitations, cost-benefit analysis at least has the merit of being sensitive to the changing preferences of a population, insofar as they are reflected in the market. By contrast, once one goes beyond the basic conceptual claims of capabilities approaches mentioned above, their application typically requires some method to settle which capabilities are intrinsically valuable and how to weigh them. This can pose a problem for liberal methods of decision-making because values are contested in free societies.

For some time now, I have thought that some of the conceptual claims made by capabilities approaches are undeniable. I have nevertheless shared the concern that capabilities approaches may not be sufficiently value-neutral for widespread use in domestic policy contexts by federal agencies. A Reliability-Based Capability Approach has prompted me to reexamine that view. The article develops a mathematically rigorous method to quantify the societal impacts of certain hazards, using a capabilities approach. Though the piece is focused on hazards, I believe its methods could be extended to produce a capabilities approach to evaluating legal regulations that avoids the charge of illiberalism.

When assessing liberal concerns with capabilities approaches, it can help to distinguish between two different types of capabilities. There are some capabilities that almost everyone agrees are valuable or even necessary for a good life. I will call these “basic capabilities.” Examples would include the capability to be healthy, to avoid premature mortality, and to have shelter. Then there are other capabilities, which different people in a free society might choose to exercise in different amounts (or sometimes not at all) based on their different conceptions of the good. I will call these “non-basic capabilities.”

I see potentially useful aspects to A Reliability-Based Capability Approach when it comes to measuring the impacts of legal regulations on both basic and non-basic capabilities. The article begins with a mathematical formalism that uses vectors to represent the different achieved functionings (which are valuable beings or doings) of individual persons. (A vector is just a quantity in an n-dimensional space that can be represented as an arrow with a direction and magnitude. In this case, the n dimensions reflect the n classes of achieved functionings that will be measured.) These vectors are then transformed into vectors of indices of achieved functionings. Standard empirical methods can be used to predict the likely impacts of hazards (or, I suggest by extension, of regulations) on these indices.

The article allows for the definition of certain thresholds of “acceptability” and “tolerability” of any component of an index. It then offers a mathematical approach, based in systems analysis, which allows one to calculate the “acceptability” and “tolerability” of a predicted outcome and return a “non-acceptable” or “non-tolerable” conclusion if any predicted functioning for an individual falls below a set threshold for that type of functioning. It should be noted that “functionings,” in the language of capabilities approaches, are achieved beings or doings, whereas “capabilities” are abilities to achieve valuable beings or doings. Functionings can only be presumed to provide good proxies for capabilities when it comes to basic capabilities, which almost no one would fail to pursue if they were capable.
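To make the structure concrete, here is a minimal sketch of this kind of componentwise threshold check in code. The sketch is mine, not the authors’: the functioning classes, index values, and thresholds are all hypothetical, and the article’s reliability-based computation, which draws on systems analysis, is far richer than this simple test.

```python
import numpy as np

# Hypothetical classes of achieved functionings (the n dimensions).
FUNCTIONINGS = ["health", "longevity", "shelter"]

# Illustrative thresholds: "acceptable" is the stricter bound;
# "tolerable" is the floor below which an outcome is rejected outright.
ACCEPTABLE = np.array([0.7, 0.7, 0.6])
TOLERABLE = np.array([0.5, 0.5, 0.4])

def classify(indices: np.ndarray) -> str:
    """Classify one person's predicted vector of functioning indices.

    The outcome is 'non-tolerable' if any component falls below its
    tolerability threshold, 'non-acceptable' if any falls below its
    acceptability threshold, and 'acceptable' otherwise.
    """
    if np.any(indices < TOLERABLE):
        return "non-tolerable"
    if np.any(indices < ACCEPTABLE):
        return "non-acceptable"
    return "acceptable"

# One individual's predicted indices under a proposed regulation.
person = np.array([0.8, 0.65, 0.9])
print(classify(person))  # -> non-acceptable (longevity index below 0.7)
```

The design choice worth noticing is that the check is not an aggregate: a shortfall in any single functioning suffices to reject the outcome, which is exactly how the approach preserves the multi-dimensionality of welfare.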

The authors suggest using democratic processes to determine which capabilities are valuable and what thresholds should be used to make these determinations. But there is another possibility. With suitable modification, these equations could be used to determine what thresholds of “acceptability” and “tolerability” a proposed regulation implies for each basic capability within a larger group. This might be done by combining information about the predicted average and standard deviation for each component. When it comes to basic capabilities, which everyone agrees are valuable, it would be useful to know whether a proposed regulation would raise or lower these implicit thresholds.
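Purely as an illustration (the formula is mine, not the authors’), the threshold implicitly set by a regulation for the i-th basic functioning might be operationalized as

\[ t_i^{\text{implied}} \approx \mu_i - k\,\sigma_i, \]

where \(\mu_i\) and \(\sigma_i\) are the predicted mean and standard deviation of that functioning’s index across the affected population and \(k\) is a coverage factor. The point is only that the required inputs—predicted averages and standard deviations—are exactly what the article’s framework already produces, so comparing implied thresholds before and after a regulation would be straightforward.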

Consider, for example, a capabilities-based measure that is similar in spirit and might be integrated into such a framework: quality-adjusted life years (QALYs). An agency considering two different regulations, each of which decreases the overall costs of healthcare, might find that both are cost-benefit justified. One regulation might nevertheless be predicted to lower the implied minimal acceptability or tolerability thresholds for quality-adjusted years of life, because it decreases the costs of certain luxury health services (i.e., services that some people may decide to purchase but that do not extend QALYs) while making it harder for many other people, who have fewer financial resources, to obtain cheap health services that would greatly extend their quality-adjusted years of life. The other regulation might be predicted to raise these minimum thresholds. All else being equal, the second regulation should be preferred.

Instead of trying to decide in advance how to weigh all these factors, it might be sufficient to render all these facts transparent during the notice-and-comment period of a proposed regulation. Then more people could know what regulations are actually doing and could respond politically.

By contrast, cost-benefit analysis—at least as it is typically operationalized using willingness to pay to measure the relevant “costs” and “benefits”—tends to obscure some consequences of regulations. There is nothing inherently valuable about willingness to pay. Hence, reliance on this metric only makes sense if differences in willingness to pay are the best available proxies for differences in human welfare. But as the hypothetical example of healthcare in an unregulated market will now show, market prices are often poor indicators of the routes to human welfare.

The problem arises from a combination of cost-benefit analysis with wealth inequality. People who have more resources may be willing to pay relatively large amounts for some health services that do not contribute much at all to QALYs. But many poor people may be unable to afford even some basic healthcare services that are critical for their QALYs. This is not because the capability to be healthy or to avoid premature mortality is less intrinsically valuable to the poor. Nor is it because some lives are more valuable than others. People who are poor must simply make harder choices with their limited financial resources. As a result, orthodox cost-benefit analysis can count small welfare benefits to the rich more heavily than larger welfare benefits to the poor.
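A toy calculation, with invented numbers, makes the mechanism explicit. Suppose a wealthy patient would pay $20,000 for a cosmetic procedure worth 0.05 QALYs to her, while a poor patient can spare only $500 for a treatment worth 2 QALYs to him:

\[ \frac{\$20{,}000}{\$500} = 40, \qquad \frac{0.05\ \text{QALYs}}{2\ \text{QALYs}} = \frac{1}{40}. \]

Measured by willingness to pay, the first “benefit” is forty times larger; measured in quality-adjusted life years, it is forty times smaller.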

That mainstream cost-benefit analysis systematically favors the wealthy is well known among philosophers and economists. The language and formalisms of mainstream cost-benefit analysis nevertheless hide these consequences of regulatory choices from most people. It would be much more transparent if agencies were required to produce not only cost-benefit analyses, when proposing major regulatory changes, but also reports on the likely impacts on the thresholds of acceptability and tolerability for any basic capabilities that may be affected. It is not necessary to decide in advance what the right thresholds should be. Sunshine may often be a sufficient disinfectant.

A different solution is required when it comes to measuring the effects of regulations on non-basic capabilities. These are capabilities that different people may value differently (or not at all) in a free society. I believe that a different idea found in A Reliability-Based Capability Approach may help with this problem as well.

In particular, the article proposes using the standard deviation of indices as a way to measure the variability in achieved functionings that people exhibit with respect to different capabilities. Though the idea would need to be developed, I see in it the embryonic form of an index that could measure people’s effective abilities to choose between different achieved functionings and thus pursue different conceptions of the good.

An index of this kind would be just as value-neutral as cost-benefit analysis, but it would not systematically favor the wealthy. Using it would also address another well-known limitation of cost-benefit analysis. Most people value some things—like community, friendship, and faith—that are neither sold on a market nor could retain their value if they were. Other goods and services—like domestic labor within a family—contribute greatly to human welfare but are not well priced by markets because they are often freely given. Regulatory regimes that rely too heavily on cost-benefit analysis tend to privilege values that are commodified (or at least commodifiable) over values that are not. That cannot be good for a society, given everything that people actually value. An index that measures people’s capabilities to pursue their personal conceptions of the good, regardless of how much of what they value is commodified or commodifiable, would be extremely useful for the law.

  1. Matthew D. Adler & Eric A. Posner, Rethinking Cost-Benefit Analysis, 109 Yale L.J. 165 (1999).
Cite as: Robin Kar, Adapting Capabilities Approaches to Domestic Policy Problems, JOTWELL (October 17, 2018) (reviewing Armin Tabandeh, Paolo Gardoni & Colleen Murphy, A Reliability-Based Capability Approach, 38 Risk Anal. 410 (2018)), https://juris.jotwell.com/adapting-capabilities-approaches-to-domestic-policy-problems/.

Does the Center Want to Hold?

David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN.

The very idea of a meaningful left-center-right political spectrum always seemed suspect to me. Many commentators have warned against conflating cultural and economic “wings.” The cultural left wants to get the state out of the bedroom (so to speak). The economic left wants to get the state into the boardroom. The cultural right wants to inject the state into the bedroom, to regulate sexual and procreative matters. The economic right wants the state out of the boardroom, sweeping away pesky regulations of the workplace and the market.

Plainly, one might be on the economic right but on the cultural left, or vice versa. It would be a mistake to try to cram these different dimensions into one. Would someone who happened to fall simultaneously on the economic left and the cultural right count as…a centrist? An outlier? (Gene Debs called socialism “Christianity in action”—where does that put him?)

Set this worry aside, and assume that correlations with, say, attitudes about immigration serve to validate the use of a one-dimensional spectrum. Extensive surveys have been conducted that ask respondents where they place themselves. Some of these surveys go on to ask about attitudes toward democracy and elections and the importance of having a strong, decisive leader unfettered by a congress or parliament. David Adler, a young researcher who recently moved from London to Athens, has looked at this data and has uncovered what he calls the “Centrist Paradox.” Anyone who is concerned about the direction democracies are taking ought to take a careful look, too.

I had always assumed that if social science placed a representative person on a left-center-right political spectrum, and independently measured that person’s attachment to democratic ideals, it would find that people toward the extremes tend to have a lesser attachment to the norms of democracy, while people in the middle are more attached. As Adler puts it, “there is an intuition that there is an elective affinity between extreme political views and support for strongman politics to implement them.” (P. 2.) (Lenin for the left, Franco for the right, as it were.) No research, he finds, has bothered to test this assumption. And—shockingly—it turns out that the reverse is likelier to be true. People in the center appear to be (for the most part) the least attached to democracy.

Adler reports his analysis of data representing the U.S., the U.K., and a number of E.U. countries from 2008 and 2010-16. He says his results are robust when controlling for variables such as income, education, and age (which have been suggested as factors tending toward “populism”). He is careful to distinguish support for democratic principles from satisfaction with democratic outcomes. (P. 7.) While the left and right wings may be less happy with outcomes, it is the center—paradoxically—that is the least happy with the process itself.1

The U.S. results are especially striking, and the heaviest gob-smacker of all is that “less than half of the political centrists in the United States view free elections as essential to democracy—over thirty percent less than their center-left neighbors.” (P. 4.) Free elections! This is far more disturbing than polls indicating that the Bill of Rights lacks majority support. Those amendments are meant to constrain majority power, so the majority can be expected to chafe. A Bill of Rights, like a separation of powers, is essential to liberal democracy, but not to democracy per se. But if free elections are not essential to democracy, what is? Even Hungarian Premier Viktor Orbán’s “illiberal democracy”—not to mention a host of sham democracies—is wedded to free elections. Yet Adler’s analysis finds that a majority of self-identified U.S. centrists declines to endorse the almost tautological proposition that free elections are the essence of democracy.

Trying to wrap my head around what Adler seems to have uncovered, I ask myself what other commonsense assurances have to be called in for re-examination if he is right. Many assume that, in “our” democracy, the center will tend to check the excesses of any extreme candidate. The landslide losses of “far-right” Barry Goldwater to “centrist” Lyndon Johnson in 1964, and “far-left” George McGovern to “centrist” Richard Nixon in 1972, are the cautionary tales directed at “fringe” insurgencies. A polarizing candidate is supposed to frighten and activate the center, and thus lose. That’s how the system works.

But is there an as-yet untried method by which a polarizing candidate might win over the American center? Perhaps by posturing as uncommonly strong and decisive, even if—especially if!—unfashionably and unapologetically “undemocratic”? If the strong, decisive figure also has an energized base on one extreme, so much the better. (I mean, so much the worse…for our received wisdom.) A strongman with an unshakable base might find polarization to be an effective tactic for exploiting the center’s relative indifference to democratic values.

  1. Cf. Man In Center of Political Spectrum Under Impression He Less Obnoxious, The Onion (Aug. 18, 2017).
Cite as: W.A. Edmundson, Does the Center Want to Hold?, JOTWELL (October 2, 2018) (reviewing David Adler, The Centrist Paradox: Political Correlates of the Democratic Disconnect (May 01, 2018), available at SSRN), https://juris.jotwell.com/does-the-center-want-to-hold/.

Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?

Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017).

Although in most states and in the federal system, the law’s answer to the title question is “yes,” Youngjae Lee’s answer—with a qualification it will take the rest of this jot to explain—is “no.” To be more precise, his answer, surprisingly, is that it depends on the issue that is liable to disagreement. Making certain assumptions, Lee argues that unanimity is the best rule to adopt for juries reaching decisions about empirical facts in criminal cases. In these circumstances, requiring unanimity among jurors is both most faithful to the beyond-the-reasonable-doubt requirement for conviction and most faithful to the justification of this requirement. But juries must make decisions on all of the elements of crimes (and sometimes on affirmative defenses, I might add); to do this, often juries must make decisions on issues that are at least partly evaluative. (Lee calls them “moral issues.”) Some of his examples come from the core of criminal law: rape (reasonable belief in consent or a reasonable expectation that defendant recognize lack of consent) or homicide (depraved-heart murder, reckless homicide, self-defense). For these decisions, Lee argues, unanimity is not the rule to adopt.

He arrives at these conclusions by assuming a principle of rationality that has lately attracted attention from epistemologists: the “equal weight view.” That view says that if there is disagreement among persons with equal cognitive capabilities and equal access to information (“epistemic peers”), each belief is equally reasonable and so has equal weight. Each person should adjust his or her belief in the direction of those with whom he or she disagrees. In a simple case of 11-1 disagreement where eleven have the highest confidence in the defendant’s guilt, the equal weight view requires that they lower their confidence. Under some circumstances, lowering by the eleven results in an insufficient average level of confidence among all the jurors—insufficient to satisfy the requirement of proof beyond a reasonable doubt—so a unanimous verdict of not guilty should be reached. If the sole dissenter is not very confident in his opinion for acquittal, the average belief in the probability of guilt may remain high enough to satisfy the standard of beyond a reasonable doubt, and so a unanimous verdict of guilty should be reached. But not if the level of confidence required by the beyond-a-reasonable-doubt standard is very stringent. Then, on the equal weight view, any amount of dissent regarding conviction leads to acquittal.
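A worked example (the numbers are mine, not Lee’s) shows how the averaging operates. Suppose the beyond-a-reasonable-doubt standard requires a credence of 0.9 in guilt, eleven jurors each hold a credence of 0.95, and the lone dissenter holds 0.3. On the equal weight view, each juror should move toward the pooled credence

\[ \bar{p} = \frac{11 \times 0.95 + 0.30}{12} \approx 0.896 < 0.9, \]

so all twelve should acquit. If the dissenter instead held 0.7, the pooled credence would be about 0.93 and a unanimous conviction would be warranted—unless the standard were set more stringently at, say, 0.95, in which case even that mild dissent pulls the average below the bar.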

Lee then adds other assumptions. One is that jurors are likely to fail to apply the equal weight view consistently—i.e., they do not always adjust their confidence levels in the face of disagreement with those they recognize as epistemic peers. When this happens, he shows, assuming the equal weight view is correct, a supermajority voting standard will sometimes result in a false conviction. A unanimity rule would lead to either an acquittal or a mistrial, due to a hung jury. Something similar happens under Lee’s next assumption: that it is likely that jurors who are very confident of a defendant’s guilt and applying the equal weight rule will not recognize dissenters as epistemic peers. In both cases, given the undesirability of convicting the factually innocent, the unanimity rule leads to better results when jurors disagree. It generates decisions that approximate ones that jurors would reach if they were more rational, Lee claims. Plus, it is a way of enforcing the beyond-a-reasonable-doubt requirement.

But only for the finding of factual matters. On most of Lee’s earlier assumptions, in an 11-1 split on moral issues, the equal weight thesis would require acquittal. But moral disagreement is common. Lee thinks many splits among jurors on moral issues, with various numbers of dissenters, would, on the equal weight view, have to end in acquittals.

The mechanism that generates this outcome, however, seems wrong. It is inappropriate for disagreeing jurors to alter their opinions on moral issues in accordance with the equal weight view. Lee contends that doing so conflicts with the justification for the criminal jury: the jury reflects the community morality and is the community “conscience.” Lee takes the latter word seriously and tries to explain why respecting a juror’s conscience conflicts with instructing the juror to revise a moral judgment in the face of controversy. Simply put, in moral disagreement, it is not rational to treat another’s conscience and one’s own as equally reasonable.

I don’t think he convincingly pinpoints why, for reasons too lengthy to explain here. However, the case for the inappropriateness of an alteration-and-unanimity requirement for moral decisions can be strengthened. If there is a moral truth to which the community is committed, and if exposing that commitment requires advanced moral skills, then the alteration requirement is inappropriate; for rarely will there be twelve jurors with equal moral abilities. It is unlikely that disagreeing jurors are epistemic peers, contrary to one of Lee’s assumptions. (Lee has misgivings about this not-epistemic-peers response.) If, on the other hand, the question of community morality concerns the application of a social norm, it is likely that the jurors are epistemic peers. However, social norms are indeterminate at points. (Lee remarks that the evaluative terms in question are “vague.”) If the disputed issue falls into this region, a decision must be made—a precisification. One can argue for the appropriateness of a majority, or a supermajority, on democratic grounds, perhaps; however, given that there are always deviants from social norms, there is no reason to require unanimity.

I said that Lee answers “no” to the title question, with a qualification. Lee ends his article by suggesting that if beyond a reasonable doubt requires the equal weight view (recall that he has made assumptions that are merely plausible), then it may turn out that jury decisions on moral issues should not be required to be beyond a reasonable doubt, after all.

Cite as: Barbara Levenbook, Does Belief Beyond a Reasonable Doubt Require Unanimity Among Jurors?, JOTWELL (September 5, 2018) (reviewing Youngjae Lee, Reasonable Doubt and Disagreement, 23 Legal Theory 203 (2017)), https://juris.jotwell.com/does-belief-beyond-a-reasonable-doubt-require-unanimity-among-jurors/.

After Legal Positivism

Legal positivism—or one style of doing positivist legal theory—is dead. Of course, there are different types of legal positivists in the world. For example, some legal positivists take a page out of the book of their opposite number, natural law theorists. But natural law theory1—belief in a single right moral answer to legal questions—is going nowhere. To believe otherwise is to evince embarrassingly bad aesthetic judgment. Better to revive/reframe legal positivism. The way to do that is to return to the work of the master, Hans Kelsen, for it is only through a rethinking of Kelsen that legal positivism can be saved from its most ardent supporters in Oxbridge and North America.

This is the opening gambit to one of the most intriguing books in legal theory in recent memory. Alexander Somek—who has written two brilliant books on EU law2 and an equally impressive book on global constitutionalism3—has produced a book every Anglophone legal theorist should read. To be sure, Somek writes in a style most Anglophone legal philosophers will find off-putting; references to Hegel and Fichte abound. Yet I have never read anyone with a comparable command of the secondary literature in analytic legal theory. Somek has read everything (in legal theory, analytic philosophy, German philosophy, and more), and his analysis of the work of contemporary analytic legal theorists is itself ample reward for the time needed to consider his arguments.

The book is composed of six chapters, each of which contains small subsections denominated by themes (many expressed in one or two words). Like his Anglo-American contemporaries, Somek wants to elucidate the nature of law. Eschewing social facts (Hartian and Razian positivism) and constructive interpretation (Dworkin), Somek maintains that “[l]aw is first and foremost a relation among people.” (P. 20.) Somek defends this claim with accounts of legal knowledge and sources of law that can broadly be described as “Kelsenian” in inspiration, if not style.

“Knowing the law is a business.” (P. 1.) Thus begins Somek’s account of the nature of law. Of course, money and power surround law. But law can be free of their undue or corrupt influence. The vehicle for this, Somek avers, is truth: “[o]nly by virtue of truth can legal knowledge emancipate itself from the undue influence of money and power.” (P. 2.) In addition to truth, there is a fact of the matter about what the law “really is.” (Id.) Thus, objectivity about law is possible, but it is attained only if we understand what the law is really about.

Somek believes sources are an important dimension of the nature of law. But his conception of the role of sources in generating a concept of law is rather different from what one usually finds in the literature. Sources of law connect people through creation of a legal relation (e.g., buyer and seller). It is through these relations that agents mediate their presence with the world. Knowledge of the law is subjective (in the sense of individual agency): all sources of law (precedents, statutes, professional commentary) “give rise to law while drawing on other sources.” (P. 7.) While knowledge of the law is law’s knowledge of itself, “[i]ts point is to attain clarity in singular cases.” (Id.) Finally, when we invoke the law we do so through legal relations the categories of which mediate our relationship to others.

Somek describes his approach to legal theory as “constructivist” about law’s objectivity.  He wants to convince us that modern Anglophone positivism errs when it conceives of objectivity as a correct understanding of existing legal materials (think of Raz’s account of law’s authority). Knowledge is knowledge “of the law by the law, that is, of a prior source by a later source.” (P. 80.) This view of sources is misleading, for sources are just “devices that permit us to know what the law is.” (Id.) Recall Dworkin: law (principles) is a matter of “a sense of appropriateness developed in the profession and the public over time.” (P. 4, citing and quoting Ronald Dworkin, Taking Rights Seriously (1978).)

Nevertheless, a science of law is possible. A claim to legal knowledge “bearing the stamp of approval by the law necessarily flows from sources of law.” (P. 89.) But legal sources alone are not law, any more than law is the union of primary and secondary rules (according to Hart).  Law is more than rules: for one thing, sources require elaboration.  Systematic elaboration is as much law as its sources. Elaborations, like cases, require endorsement, specifically the assent of players in the practice: “[j]oined practice is the warrant for the shared belief that is a social fact.”4 (P. 95.) Not surprisingly, Somek sees the common law not as a system of legal knowledge, but “a system of endorsements.” (P. 104.)

The most intriguing chapter of this interesting book is the fourth, on The Legal Relation. Somek’s goal in this chapter is to rethink the relationship between morality and law. Through a synthesis of Hegel and John Mackie—together with a clever hypothetical involving proper behavior at classical music concerts—Somek makes some insightful comments on the nature of reasons for action and how best to understand the role of authority in law. In contrast to Raz, whose widely endorsed “service conception” of authority sees substantive reasons for action displaced by law’s authority, Somek argues that the authority of law “that emerges from the legal relation is an authority of rights.” (P. 125.) I cannot police the poor conduct of fellow concertgoers because the law prohibits such an intervention. As such, law requires that I yield to another’s reasons for action even as I disdain the perspective that gives rise to them. As Darwall (who is cited and quoted) puts it, second-personal authority is authority to have wants respected. Authority, it turns out, is much more complex (morally and politically) than a technocratic (Somek’s word) account of the concept might indicate.

In the space of such a short review, it is difficult to convey the depth of argument in this engaging book. Somek’s sustained treatment of the secondary literature in contemporary analytic legal theory (Late Legal Positivism) is not to be missed. Somek is a hard-core positivist: there is a fact of the matter about what the law is. As always, his commitment to truth about law sits uneasily with his nod to the work of people like Stanley Fish5 and his embrace of a skeptical reading of Wittgenstein on rule-following.6 But these are minor blemishes on an otherwise compelling and engaging work.7

  1. Somek identifies Dworkin as the natural law theorist he has in mind: “Natural law is an extension of moral claims to the domain where we have to decide over questions of coercion.” (P. 4, citing Ronald Dworkin, Law’s Empire (1986).) Of course, there is much more to “natural law theory” than the Dworkinian view. Somek recognizes this and has a long and interesting footnote on the matter. See Somek at p. 3, fn. 6 (“Admittedly, the contours of ‘natural law theory’ as a position in legal philosophy are far from clear.”).
  2. Alexander Somek, Individualism: An Essay on the Authority of the European Union (2008) and Alexander Somek, Engineering Equality: An Essay on European Anti-discrimination Law (2011).
  3. Alexander Somek, The Cosmopolitan Constitution (2014).
  4. Obviously, this sounds a lot like Hart.
  5. Stanley Fish, Doing What Comes Naturally: Change, Rhetoric, and Theory in the Practice of Theory in Literary and Legal Studies (1989).
  6. I am identified as a member of the “Wittgensteinian Right.” P. 39, note 85. As Somek conceives of this group, he is correct in locating me there.
  7. My thanks to Bosko Tripkovic for comments on a draft of this Jot.
Cite as: Dennis Patterson, After Legal Positivism, JOTWELL (July 24, 2018) (reviewing Alexander Somek, The Legal Relation: Legal Theory After Legal Positivism (2017)), https://juris.jotwell.com/after-legal-positivism/.

Hohfeld and Property

Christopher M. Newman, Hohfeld and the Theory of In Rem Rights: An Attempted Mediation in The Legacy of Wesley Hohfeld (forthcoming 2018), available at SSRN.

Rights come in different types, and the failure to distinguish among them can lead one into errors. So argued Wesley Newcomb Hohfeld, who—in two articles published in the Yale Law Journal in 1913 and 1917—offered a highly influential categorization of rights by type. This marvelous collection of essays, edited by Shyam Balganesh, Ted Sichelman and Henry Smith, assesses the Hohfeldian legacy. I’ll largely focus on Christopher Newman’s contribution, which I found particularly helpful. Some property scholars have criticized Hohfeld’s approach as unable to account for the distinctive character of property rights. Newman argues, I think rightly, that the two are compatible.

That Hohfeld was correct to distinguish rights by their type is undisputed. The right that I have to be present on Blackacre by virtue of owning it and the right that I have as a boxer to punch my opponent are clearly different in structure. As Hohfeld would describe it, my right to punch is a privilege only, whereas my right to be on Blackacre includes privileges and claims. X has a privilege with respect to Y that X perform act φ if and only if, by φ-ing, X violates no duty to Y. X has a claim with respect to Y that Y φ if and only if Y has a duty to X to φ. I have a privilege to punch my opponent, because, by punching him, I do him no wrong. But this “right” to punch includes no claim with respect to him: he has no duty to let himself be punched. My right to be on Blackacre, by contrast, includes not only privileges (by being on Blackacre, I violate no duty to you) but also claims (you cannot interfere with my being on Blackacre, for example, by expelling me from it). (For the record, Hohfeld identified two other types of right—powers and immunities—and would say that my rights with respect to Blackacre include them too, but I leave these details aside.)
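These definitions can be put in a compact deontic shorthand (the notation is mine, not Hohfeld’s own symbolism). Writing \(D_{X \to Y}(\varphi)\) for “X has a duty to Y to \(\varphi\)”:

\[ \text{Privilege}_{X/Y}(\varphi) \iff \neg D_{X \to Y}(\neg\varphi), \qquad \text{Claim}_{X/Y}(\varphi) \iff D_{Y \to X}(\varphi). \]

A privilege is a fact about X’s duties; a claim is a fact about Y’s. Nothing in the first formula entails any instance of the second—which is exactly the fallacy discussed next.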

The cardinal Hohfeldian sin is to assume that a privilege is the same as (or necessarily entails) a claim. Courts really do commit this sin sometimes. Consider the reasoning of the Irish Supreme Court in Fleming v Ireland. (I borrow this example from an excellent article on the Hohfeldian framework by Luis Duarte d’Almeida.) The question the Court faced was whether people can be criminally prosecuted for assisting someone in committing suicide. They cannot be prosecuted if they have a right (read privilege) to assist. But the Court wrongly concluded that no such privilege exists, because if it did it would follow that those possessing the privilege would also have a right (read claim) against the government to defense in their exercise of their privilege. The government would not merely be prohibited from punishing them but also obligated to protect them when they assisted someone’s suicide. Since no such claim existed, the Court concluded that no privilege existed either. But that’s like concluding that a boxer cannot have a privilege to punch his opponent because his opponent has no duty to let himself be punched. That the Court made this mistake does not mean that its conclusion that there is no privilege to assist suicide was wrong, of course. But the conclusion must be justified by substantive arguments, not false claims of deontic necessity.

Although most everyone agrees that Hohfeld’s work is an important starting point in thinking clearly about rights, there is plenty of room for criticism. To bring up one that has always bothered me: Should a privilege be defined purely negatively, as the absence of a duty? The negative definition makes it impossible to distinguish between Jane, who has no legal duties with respect to anyone because she is subject to the legislative jurisdiction of no lawmaker, and Martha, who has no legal duties with respect to anyone because she has been privileged by a lawmaker to act however she wants. In addition, the negative definition makes certain normative conflicts impossible. Nothing about the Hohfeldian framework excludes conflicts of duties (and their correlative claims). Joe can have a duty with respect to Fred to wash Fred’s car and a duty with respect to Fred to not wash Fred’s car. (Perhaps a lawmaker obligated Joe to do both.) But the negative definition of privileges makes conflicts of duties and privileges impossible. If Joe has a duty with respect to Fred to wash Fred’s car, it simply can’t be that Joe has a privilege with respect to Fred to not wash Fred’s car, for the privilege is defined as the absence of the duty. But if duties can conflict with other duties, why can’t they conflict with privileges? Why can’t there be a normative conflict because a lawmaker put a duty on Joe to wash Fred’s car and gave Joe a privilege not to?

Property scholars have focused their criticism on two aspects of the Hohfeldian framework. The first is the granularity of property rights for Hohfeld—the fact that property rights can be individuated into countless privileges, claims, powers, and immunities. This appears to give support to the bundle-of-rights approach to property, which is sometimes understood as the view that the bundle is fundamentally arbitrary. The second is the correlativity of rights under Hohfeld’s framework—the idea that a person’s claim is necessarily correlated with another person’s duty, a person’s privilege with another person’s no-claim, and so on. The property scholars argue that this ignores the way that property rights are in rem, that is, focused on everyone’s relationships to things, instead of individuals’ relationships to one another.

Newman argues that the granularity of rights for Hohfeld can be defended if we understand him as not seeking to offer any account of why rights hang together normatively. To say that property is a bundle of rights is not to say that the bundle is arbitrary. There can be a good story about why those rights belong together (indeed, Newman offers such a story).

Newman’s response concerning Hohfeld’s correlativity thesis is similar. It is unquestionably true that Hohfeld was a vocal opponent of the idea that property rights are directed at things. Like all rights, property rights hold only between individual rights-bearers. To speak of a right in rem is simply a misleading way of describing “a large class of fundamentally similar yet separate rights, actual and potential, residing in a single person (or single group of persons) but availing respectively against persons constituting a very large and indefinite class of people.”

But Newman argues that the in rem character of property rights is actually compatible with the Hohfeldian approach, because it concerns the normative grounding of property rights. It is indeed true that property rights serve a distinctive function, different from the in personam rights of contract or tort. It is essential to have legal rules whose existence and content can be decided simply by looking at things rather than individuals’ relationships to one another. Property rights serve this role. Hohfeld can accept this normative grounding while still insisting that the rights it justifies always involve relationships between individuals.

Thanks to Luís Duarte d’Almeida, Christopher Newman, and James Stern for helpful comments.

Cite as: Michael Green, Hohfeld and Property, JOTWELL (June 27, 2018) (reviewing Christopher M. Newman, Hohfeld and the Theory of In Rem Rights: An Attempted Mediation in The Legacy of Wesley Hohfeld (forthcoming 2018), available at SSRN), https://juris.jotwell.com/hohfeld-and-property/.

The Turn to Pluralist Jurisprudence

Nicole Roughan and Andrew Halpin, In Pursuit of Pluralist Jurisprudence (2017).

Jurisprudence usually changes gradually and imperceptibly, with large-scale shifts recognizable only with the benefit of hindsight. Seldom does it occur that a single piece signals a dramatic turn in the field. A prime example of a transformation-signaling piece is Karl Llewellyn’s A Realistic Jurisprudence—The Next Step,1 announcing the emergence of legal realism. Llewellyn’s article did not itself produce the transformation; rather, he identified a generational shift in jurisprudential thought that was already taking place, and he sought to bring attention to this shift and the themes around which it revolved. The article (and its follow-up, Some Realism About Realism: Responding to Dean Pound2) served to crystallize and give a label to what theretofore had been an inchoate development. Following this article, legal realism would be criticized, debated, and elaborated. A new school of jurisprudential thought thus was born.

In Pursuit of Pluralist Jurisprudence (2017), edited by Nicole Roughan and Andrew Halpin, might turn out to be another transformation-signaling piece in jurisprudence, though its impact will not be known until a generation has passed. There are several reasons to think it might achieve this stature.  For one, like Llewellyn’s piece, this book has a catchy descriptive title that dubs the nascent field “pluralist jurisprudence.” Furthermore, the volume contains ambitious original essays by established, as well as rising, jurisprudential figures from different parts of the world: Nicole Roughan and Andrew Halpin (Introduction and The Promises and Pursuits of Pluralist Jurisprudence), Roger Cotterrell (Do Lawyers Need a Theory of Legal Pluralism?), Maksymilian Del Mar (Legal Reasoning in Pluralist Jurisprudence), Cormac Mac Amhlaigh (Pluralising Constitutional Pluralism), Ralf Michaels (Law and Recognition—Toward a Relational Concept of Law), Sanne Taekema (The Many Uses of Law), Joseph Raz (Why the State?), Detlef von Daniels (A Genealogical Perspective on Pluralist Jurisprudence), Stefan Sciaraffa (Two Conceptions of Pluralist Jurisprudence), Neil Walker (The Gap Between Global Law and Global Justice), Margaret Davies (Plural Pluralities of Law), Kirsten Anker (Postcolonial Jurisprudence and the Pluralist Turn), and Martin Krygier (Legal Pluralism and the Value of the Rule of Law). As their titles indicate, the essays cover a range of topics in relation to legal pluralism.

A central organizing theme of the collection, write Roughan and Halpin, is the contrast between monist and pluralist jurisprudence:

[T]raditional jurisprudence is municipal or state-centric jurisprudence.  Even if it touches upon international law, it does so from a state-centric, Westphalian perspective of viewing international law through the agency or authority of states.  It remains, in that sense, monist.  By contrast, pluralist jurisprudence involves the recognition of non-state law in a way that is independent of both the agency and the authority of states.3

Pluralist jurisprudence recognizes the co-existence of multiple legal forms in social arenas with various sorts of relationships to state law and other forms of law, from integration, to mutual recognition, to fully autonomous and independent co-existence, to outright conflict, and further variations.  In addition to state law, these co-existing forms of law mainly include indigenous or customary law, religious law, international law, transnational law, and human rights law.

Another reason to think this volume might signal a transition in jurisprudence is that, like Llewellyn’s piece, it has been preceded by a significant body of jurisprudential work focused on plural legal phenomena. Concepts of Law (2015), edited by Sean P. Donlan and Lukas Heckendorn Urscheler, focuses on legal pluralism from comparative, jurisprudential, and social scientific perspectives. Two recent jurisprudential works that discuss legal pluralism in connection with transnational law are Nicole Roughan’s Authorities: Conflict, Cooperation, and Transnational Legal Theory (2013) and Detlef von Daniels’s The Concept of Law from a Transnational Perspective (2010).  Works on the topic by analytical jurisprudents include Keith Culver and Michael Giudice’s Legality’s Borders: An Essay in General Jurisprudence (2010) and Emmanuel Melissaris’s Legal Theory and the Space for Legal Pluralism (2009).  Early social legal theory works on legal pluralism include William Twining’s Globalisation and Legal Theory (2000) and my book, A General Jurisprudence of Law and Society (2001). To the foregoing list of books can be added several dozen theoretical articles on legal pluralism published in the past three decades.

A further reason to think In Pursuit of Pluralist Jurisprudence is a transformation-marking piece is that, like Llewellyn’s article, the topics taken up within pluralist jurisprudence relate to pressing contemporary legal, political, economic, cultural and social problems and phenomena. Previous generations of Western legal theorists arguably could disregard or overlook religious law, customary law, and indigenous law as marginal legal phenomena not worthy of serious jurisprudential attention—though these are primary forms of law in many parts of the world—but major legal transformations wrought by contemporary globalization can no longer be ignored. A pluralistic jurisprudence is better equipped to deal with the issues of the day than a jurisprudence built exclusively around the state.

Perhaps the most telling indication that this collection reflects a shift in jurisprudence is Joseph Raz’s contribution, Why the State? Raz acknowledges other forms of law, including “international law, or the law of organizations like the European Union, but also Canon Law, Sharia law, the law of native nations, the rules and regulations governing the activities of voluntary associations, or those of legally recognized corporations, and more[.]”4 Jurisprudence heretofore has focused almost exclusively on state law, he asserts, because until recently the state has been “the most extensive law-like system that is independent or free from external constraints.”5 Today, however, state law is increasingly subject to external legal constraints by intrusions from transnational law, international law, and human rights law. Raz concludes: the “exclusive concentration on state law was, it now turns out, never justified, and is even less justified today.”6 This essay represents a remarkable turnaround for Raz, who for decades has advanced a universalistic theory of law built on the state law model.7

Whether this collection marks a genuine shift in jurisprudence cannot be known until some time has passed. The contours of what a pluralist jurisprudence might or should look like remain unclear, and the contributions to this collection raise many complex questions that go unresolved. Upon completing this volume, a reader may well be left with a disorienting sense of the conceptual messiness that recognition of legal pluralism brings, and may long instead for the relative clarity of the focus on state law. Discarding this longstanding dominant focus sets jurisprudents adrift with no obvious replacement or mooring. It may turn out that existing jurisprudential theories (like legal positivism) can account for legal pluralism with relatively minor adjustments, rather than requiring new approaches.8 Or perhaps entirely novel jurisprudential frameworks must be developed. Whatever occurs, this collection leaves little doubt that jurisprudents must now seriously consider and account for coexisting legal forms besides state law.

  1. Karl Llewellyn, A Realistic Jurisprudence—The Next Step, 30 Colum. L. Rev. 431 (1930).
  2. Karl Llewellyn, Some Realism About Realism: Responding to Dean Pound, 44 Harv. L. Rev. 1222 (1931).
  3. Nicole Roughan and Andrew Halpin, Introduction (P. 3).
  4. Joseph Raz, Why the State? (P. 138).
  5. Id. at 147.
  6. Id. at 161.
  7. See Brian Z. Tamanaha, A Realistic Theory of Law Chapter 3 (2017).
  8. For an argument that current legal theory can accommodate legal pluralism, see Cormac Mac Amhlaigh, Does Legal Theory Have a Legal Pluralism Problem?  I articulate an approach to legal pluralism based on legal positivism in Brian Z. Tamanaha, Socio-Legal Positivism and A General Jurisprudence, 21 Ox. J. Leg. Stud. 1 (2001).
Cite as: Brian Tamanaha, The Turn to Pluralist Jurisprudence, JOTWELL (May 30, 2018) (reviewing Nicole Roughan and Andrew Halpin, In Pursuit of Pluralist Jurisprudence (2017)), https://juris.jotwell.com/the-turn-to-pluralist-jurisprudence/.

It Can’t Happen Here, Has It?

Adam Przeworski, Why Bother With Elections? (2018).

This concise and lucid book “is a summary of our current collective understanding of the method by which some societies decide who would govern them.…” (P. VIII.) The author is a professor of politics and economics at NYU, and an esteemed authority in the field of political economy. The book could not be timelier: many of us simply cannot understand how elections got us to where we are now. Bafflement can beget both anger and apathy. Much of the collective social-scientific understanding Przeworski relates will be deflating even for those who have already cast aside illusions. Nonetheless, he urges us to keep on bothering.

The book begins with a reminder that “elections are a modern phenomenon.” (P. 13.) The first national legislative election was held in 1788, to the United States Congress. Since then, elections have become an almost universal norm: today, “all but a handful of countries have legislatures elected by universal [qualified] suffrage and chief executives either elected in popular elections or indirectly by elected parliaments.” (P. 17.) The elections boom was accompanied by, and surely to some degree motivated by, the Rousseauian aspiration to reconcile humankind’s innate freedom with the fact that coercive government is here to stay. This yearning for self-government finds its expression in the rituals of popular elections.

Election rituals vary widely in their forms. Przeworski explains the differences between parliamentary, presidential, and “semi-presidential” electoral systems, and between proportional representation and first-past-the-post methods of composing a legislative assembly. The latter rules tend to determine how many political parties are viable: according to “Duverger’s Law,” first-past-the-post systems tend to be two-party, and proportional-representation systems multi-party. The United States is unusual in using first-past-the-post in a presidential system, and unique among presidential systems in its indirect mode of electing presidents. The upshot is the permanent possibility of a divided government in which a legislative majority opposes the president, and the further possibility of a president who lacks the support of a popular plurality even when elected.

Madison wrote, in Federalist #10, that “the dangers to the holders of property can not be disguised, if they are undefended against a majority without property,” or even against a majority with less property than it thinks fair. This, Przeworski adds, is one thing on which conservatives and socialists have long agreed. Election systems have been devised in various ways to defang the threat while preserving the myth of popular—rather than propertied—sovereignty. The obvious expedient of excluding the unpropertied from the franchise has almost disappeared. Over the two and a half centuries of the electoral era, universal adult suffrage has steadily become the norm all across the globe. But economic inequality persists, and increases, even as formal political equality has become the norm. Why?

Various devices, such as indirect rather than direct election of representatives, tend to entrench incumbents and, with them, the status quo. Counter-majoritarian institutions—such as the presidential veto and judicial review—dampen majority power. The historical trend is toward constitutional judicial review and central-bank independence from the electorate. Another device is a super-majoritarian voting requirement, which bicameralism, in effect, is. Przeworski reports an estimate that congressional legislation needs to have 75% support across the two chambers in order to succeed. (Compare Article V’s requirement that a valid constitutional amendment be ratified by three-quarters of the states.) With caveats, Przeworski offers “a conjecture about the mechanisms that drive this history. Given the extant trenches, those in power make concessions either when they face a foreboding threat from without or when some of them expect to improve their competitive position by finding allies among those currently excluded [and] whenever particular trenches are conquered, the elites find substitutes to protect their interests. These cycles are repeated over and over.” (Pp. 45-46.) Przeworski does not mention mass incarceration, penal disenfranchisement, and voter ID laws as instances of these obstacles, but he does note the curious fact that of the modern industrialized nations only the US holds national elections on a workday.

Incumbents win, on average, four times out of five. Rejecting more benign explanations, Przeworski suggests that the advantages of incumbency itself account for incumbents’ success in getting themselves re-elected. He recounts an experience he had one winter living in Chicago, trying to get authorities to free his car from the ice encasing it. Calling the City got no response, so his wife called the Democratic precinct captain. “He was at our door in minutes, pointing out that we had not voted in the last municipal election.” (P. 58.) Przeworski instantly gave the expected assurances. Problem solved. Obviously, this anecdote does not itself explain the incumbent advantage. Partisan “clientelism” is not so blatant in other US cities; but studies of campaign contributions show that there is an incumbent advantage here, too. Is the reported contribution a “quid” that points to an implicit “quo” the public at large can only guess at?

Przeworski is cautious about the widely held view that money in politics is mainly what stands between us and more representative government: “to date, no consensus has been reached regarding the effectiveness of campaign spending on vote shares.” (P. 65.) Nevertheless, he endorses his colleague Ann Harvey’s finding that “removal of state limits on campaign financing in the United States led to increased Republican vote shares and the election of more conservative candidates.” (P. 66.) But the real problem is structural: “Private ownership of productive resources limits the range of outcomes that can ensue from the democratic process … crucial economic decisions, those affecting employment and investment, are a private prerogative.” (P. 74.) Campaign spending limits don’t touch that.

How do democracies and autocracies compare in terms of economic performance? Autocracies tend to occupy the tails of the bell curve: “while the fastest-growing economies tend to be poor autocracies, so are the economic basket cases.” (P. 101.) Przeworski reports that at any given level of total output, democracies exhibit higher wages, better life-expectancy, and lower economic volatility. But he sees an ominous development in the combination of stagnant median income growth and dramatically increased inequality in the democracies: “the erosion of the belief in intergradational [sic] progress may well be historically unprecedented and its political consequences are ominous.” (P. 99.) How have elections allowed this to occur?

Assume majority rule is in force, and the distribution of wealth is on the voting agenda. If we assume further that everyone votes according to self-interest and nothing else (i.e., no one cares what her relative share is), the so-called “median-voter theorem” tells us that the resulting distribution of wealth will tend to be the one favored by the voter whose wealth is such that half of the electorate has more and half has less. Thus, “the coexistence of universal suffrage with economic inequality is hard to fathom.” (P. 104.) Yet, surprisingly, “at every level of per capita income, the extent of inequality is not lower in democracies than in autocracies.” (P. 105.) And he reports the astonishing further fact that although “democratic governments redistribute more income … as income inequality increases from very low to intermediate levels, …they redistribute less as inequality increases once it is already high.” (P. 105 and Fig. 10.1.) In other words, across a range of different democracies, low levels of inequality tend to call forth mitigating efforts to dampen further inequality, but high inequality leads to ever higher inequality. Dismissing various other explanations, Przeworski concludes that “the main culprit is that people are not politically equal in economically unequal societies”—meaning, in economically unequal “democratic” societies. (P. 106.) Merely formal political equality does not prevent massive inequality of political influence in conditions of high economic inequality; and as substantive political inequality increases, so too does the economic inequality that begat it: “a vicious circle.” (P. 111.)
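To see why the puzzle arises, note that real income distributions are skewed to the right, so the median income lies below the mean. With illustrative numbers of my own: in a five-voter electorate earning

\[ (1,\ 2,\ 3,\ 4,\ 40), \qquad \text{median} = 3, \quad \text{mean} = 10, \]

the decisive median voter stands to gain from any tax-and-transfer scheme that pushes incomes toward the mean, and the gain grows as inequality grows. On the median-voter logic, then, more unequal democracies should redistribute more—yet Przeworski reports the opposite once inequality is already high.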

Elections neither presuppose, nor do they tend to promote, either political or economic equality. As economic inequality increases, elections do less and less to brake it. Nonetheless, elections make civic peace possible. How is this? Przeworski observes that elections facilitate peaceful transitions of power only if the stakes are not too high for the incumbent. “Just imagine that the … incumbent fears that if he loses the forthcoming election, he will exit the office through a window.” (P. 49.) Competitive elections presuppose more than tolerating opposition: they also presuppose that the incumbent may relinquish power without anxiety. Ordinary politics must be low-stakes politics, in this sense. Once the election habit has taken hold, it becomes stronger with each cycle. The “outs” know their turn will come and the “ins” know that if turned out they will be allowed to win themselves back in. So long as the populace also retains its faith in “intergradational progress,” the electoral stakes stay manageably low for all, and the election habit strengthens with repetition.

But, as Przeworski indicates, confidence in intergradational progress is eroding in the western democracies. And his analysis suggests a latent dilemma. The opposition, to be genuine, must give voice to discontent; and, to be effective, it must stoke discontent. But the election habit is based on an unspoken promise of amnesty for the incumbent rascal. Otherwise, an incumbent who faces the prospect of prison for perceived crimes will self-defensively misuse the incumbent's advantage to rig (if not abort) the election process. Where amnesty is the norm, as it must be to keep the stakes low, elections can do little to check top-level official corruption and simony, which can be expected ever to increase unless some breaking point exists. On the other hand, once the amnesty norm is breached, elections cease to be the low-stakes affairs that orderly transitions of power presuppose.

Przeworski, in concluding, reassuringly dismisses the possibility of democracy collapsing in wealthy countries where the election habit has taken hold. But he is concerned about "deterioration": "although we should not be desperate, we should also not be sanguine. Something profound is going on." (P. 132.) Anyone who shares this concern would do well to bother with this illuminating book.

Cite as: W.A. Edmundson, It Can’t Happen Here, Has It?, JOTWELL (April 26, 2018) (reviewing Adam Przeworski, Why Bother With Elections? (2018)), https://juris.jotwell.com/it-cant-happen-here-has-it/.

The Transformation in Kelsen’s Last Works

Stanley L. Paulson, Metamorphosis in Hans Kelsen’s Legal Philosophy, 80 Modern L. Rev. 860 (2017), available at SSRN.

Though Hans Kelsen is arguably the best-known and most influential legal philosopher of the 20th century world-wide, he is not especially well known among American scholars, and when his work is discussed in this country, it is often misunderstood. (See D. A. Jeremy Telman, Hans Kelsen in America – Selected Affinities and the Mysteries of Academic Influence (2016).) One scholar who has worked tirelessly for decades to make Kelsen better known and better understood on these shores is Stanley L. Paulson. He has (with the help of Bonnie Litschewski Paulson) translated Kelsen's works (see, e.g., Hans Kelsen, Introduction to the Problems of Legal Theory (Bonnie Litschewski Paulson & Stanley L. Paulson, trans., 1992)), written numerous articles summarizing and evaluating Kelsen's work, and translated and compiled other significant commentaries on Kelsen (see, e.g., Stanley L. Paulson & Bonnie Litschewski Paulson, Normativity and Norms: Critical Perspectives on Kelsenian Themes (1998)). Paulson's most recent article, "Metamorphosis in Hans Kelsen's Legal Philosophy," (a) explains the neo-Kantian approach of most of Kelsen's works (Pp. 876-880), (b) discerns certain weaknesses in the argument (Pp. 880-881, 893), and (c) investigates when and why Kelsen ultimately abandoned a neo-Kantian approach and also changed his views about the application of logic to (legal) norms (Pp. 861-865, 882-892).

Anglo-American legal scholars are accustomed to a more empirical and pragmatic approach to philosophy in general, and to the study of law in particular, which is why H. L. A. Hart's approach has been well received. (See, e.g., H. L. A. Hart, The Concept of Law (Oxford, 2012).) What has made Kelsen's works so difficult for us is that his best-known writings are grounded in a very different approach, one based on Kant's transcendental argument. As Paulson explains, Kelsen's neo-Kantian argument runs along the following lines: we need to ask what follows from the fact that we (or "legal science") view the acts of officials as valid legal norms. The mystery is grounded in the fact that the actions of officials are in the empirical realm (facts about what legislators, judges, administrators, and other officials have done or said), while legal rules are in the normative (non-empirical) realm. A standard philosophical view is that normative conclusions cannot be derived from strictly empirical premises.

Kelsen died in 1973; a lengthy manuscript that he left unfinished was published posthumously in 1979, with an English translation appearing in 1991. (Hans Kelsen, General Theory of Norms (Michael Hartney, trans., 1991).) That work caused a sensation among legal theorists, because it involved a sharp departure from Kelsen's longstanding neo-Kantian approach to understanding law. In that final work, Kelsen presented his "Basic Norm" as a fiction, in the spirit of Hans Vaihinger's work (see Hans Vaihinger, Philosophy of "As If" (C. K. Ogden, trans., 1965)): "The acts of will of legal organs are to be treated as if they could be understood normatively, not empirically." (P. 884, footnote omitted.) And he offered an approach based on skeptical empiricism, in the spirit of David Hume.

In the present article, Paulson shows that Kelsen's break in fact occurred in 1960, many years before his death, and, more surprisingly, that the break had a precursor in some of Kelsen's writings of 1939-1940 (Pp. 885-892). Paulson argues that Kelsen's switch from Kant to Hume may have been tied to a different dispute, regarding whether it makes sense to apply logic to norms.

Paulson's article ultimately makes a great deal about the transformations in Kelsen's views clearer, but, in the process, it creates new mysteries. For example, the switches away from neo-Kantian views in 1939 and 1960 are made clearer than the return to that approach in 1941 and its continued use until 1960. Paulson speculates (P. 891) that Kelsen's return to his neo-Kantian approach in 1941 may have been due to his practical circumstances (in exile from Europe, looking for a permanent position), but more may be needed to explain his persistence with that approach for two more decades.

For those who wish to study Kelsen (with or without the post-1960 works), Paulson’s publications, including the present article, are the best places to start.

Cite as: Brian Bix, The Transformation in Kelsen’s Last Works, JOTWELL (March 28, 2018) (reviewing Stanley L. Paulson, Metamorphosis in Hans Kelsen’s Legal Philosophy, 80 Modern L. Rev. 860 (2017), available at SSRN), https://juris.jotwell.com/the-transformation-in-kelsens-last-works/.

A Story of Jurisprudence and True Philosophy

This is a shaggy dog jot. It starts in the conventional way by identifying a recent piece of work that is recommended for the attention of the reader. However, in the course of justifying that recommendation there is a series of diversions to other works, which distracts from any sustained understanding of where exactly the virtue of the piece is to be found. But, like a shaggy dog story which finally reveals its point, there does eventually emerge a point to the recommendation; although, as with the simile, not the point that might have been anticipated.

The work cited is a response by Gerald Postema to critics in a symposium on the volume covering common-law legal philosophy in the twentieth century that he contributed to Springer's multi-volume Treatise of Legal Philosophy and General Jurisprudence. Within his coverage of that subject matter, Postema cites an article as a resource for unpacking the notion of true philosophy (vera philosophia), which he then utilizes in his own assessment of the achievements of common-law legal philosophy. This article, by Donald Kelley in the 1976 Journal of the History of Philosophy,1 has not, as far as I am aware, received much attention in mainstream Jurisprudence; and I have failed to uncover any real enthusiasm for it within more exclusive, niche jurisprudential concerns. It is, nevertheless, central to our present concerns.

The recent availability of Postema's volume in a far more affordable paperback version appears to have provoked discussion, in the review pages, of the core understanding of legal philosophy expounded by Postema in that work. Notably, Kevin Walton has challenged the basis of Postema's evaluation of the failings of common-law legal philosophy, and has in turn offered his own account of at least a virtuous legal philosophy (if not an account of true philosophy) by which the efforts of legal philosophers can be measured.2 Postema has responded to Walton's criticism (Pp. 611-12), providing us with further thoughts on where the value of legal philosophy is to be found. The disagreement between Postema and Walton turns on how, and the extent to which, legal philosophy has attained the status of a "sociable science". This alternative depiction of what Jurisprudence can be expected to achieve, picked up by Walton, had been amplified in an article by Postema in the 2015 Virginia Law Review, subsequent to the initial publication of his Treatise volume.

There is a close connection in Postema's writing between a sociable science and vera philosophia, as signalling the appropriate aspiration for Jurisprudence. However described, the aspiration finds fulfillment in engagement with the matters that are core to understanding the human social condition. The debate between Postema and Walton fixes on which of these matters have been taken into account by twentieth-century anglophone legal philosophy, and the level of emphasis they have received. Their discussion also encompasses the extent to which legal philosophy has been receptive to the contributions of related disciplines in constructing a full and informative picture of sociability.

At this point, the prized accolade of vera philosophia appears attainable in different ways by Jurisprudence: either by displaying an adequate grasp of its appropriate subject matter (sociability), or through a willingness to borrow from adjoining disciplines their coverage of that subject matter. Seen in this way, the prize might equally be offered to other disciplines, if only they provide adequate coverage, or form suitable alliances, to deal effectively with their own particular subject matters.

On returning to Kelley's article, we find a different criterion for awarding the honour of vera philosophia to Jurisprudence: not effective coverage, but the provision of singular insight. Postema cites Kelley's article in advancing a noble philosophical vision for Jurisprudence, contrasted with the narrow perspective on law found in blinkered professionalism – taking this to be the perspective of the iurisperiti (those merely learned in the law, or unthinking lawyers) in the denunciation from Budé that Kelley quotes. But Kelley's article does more than that in distinguishing the true calling of vera philosophia. Kelley also quotes the words of Louis le Caron (at 270), to make the point that it is to be distinguished from mere philosophy as well: "true philosophy is contained in the books of law and not in the useless and inarticulate libraries of philosophers, who in effect are men of great learning but incompetent in public affairs".

On this account, Jurisprudence is not prized because it displays virtues that any branch of learning might attain. Far from it. Jurisprudence is distinct from general philosophy precisely because it can offer the insights that philosophy misses. The idea emerging from Kelley’s survey of renaissance jurisprudence that Jurisprudence might have a unique perspective to offer between mere practice and mere philosophy is a tantalizing one. It seems plausible to suggest, on the one hand, that some kind of deeper theoretical reflection on the practice of law in regulating the social condition might yield more illumination than unthinking adherence to practice, while suggesting on the other hand that lofty philosophical speculation uninformed by a practical grounding is likely to be limited in the insight it can deliver.

It is one thing to state this as an aspiration for Jurisprudence; it is quite another thing to realize it. Kelley’s article, to be blunt, does not get us beyond reporting an aspiration that culminated in the sixteenth century. There is no evidence for supposing that such an exalted aspiration for Jurisprudence was maintained over succeeding centuries, either in the common-law world or in the civil-law world, as the relevant volumes of the Treatise reveal.3 Instead of providing the clarity of vera philosophia to inform us of the human social condition, Jurisprudence or legal philosophy has fragmented into discordant voices. In his general editorial preface to volumes 9 and 10 of the Treatise, Enrico Pattaro suggests that one can detect different versions of legal philosophy, which he attributes to general philosophers, jurists, and legal philosophers in a narrow sense. More recently, quite bitter rivalries have surfaced between those who see Jurisprudence as, or not (just) as, legal philosophy.

One possible clue to explaining this state of affairs is furnished by another of Donald Kelley's works. In The Problem of Knowledge and the Concept of Discipline,4 Kelley suggests that the real factors shaping the substance of a discipline are to be found in the attitudes of those who exhibit mastery of the discipline and influence "disciples". Different masters, different disciples, divided disciplines. If that line of argument is accepted, then one might reasonably question the value of noting efforts by its masters of a bygone age to elevate the discipline of Jurisprudence to a status it never actually achieved; and, at the same time, question the emphasis given in a jot to an article that notes this very thing.

On further reflection, however, if we do take seriously Kelley's thesis in his later piece, concerning the responsibility of the masters of a discipline for shaping the ambition and scope of that discipline, then it might after all be relevant to have our attention drawn to an era when those leading the discipline had a far more ambitious vision for Jurisprudence than it enjoys today.

  1. Donald Kelley, Vera Philosophia: The Philosophical Significance of Renaissance Jurisprudence, Journal of the History of Philosophy 267 (1976).
  2. Kevin Walton, Gerald Postema on Genuinely Philosophical Jurisprudence, 8 Jurisprudence 604 (2017). My own combined review of Postema’s volume and Lobban’s volume on the common-law legal philosophy of the preceding centuries can be found at the Singapore Journal of Legal Studies 387 (2017).
  3. Volumes 8, 9, 11 & 12 cover both common-law and civil-law worlds from 1600 to the twentieth century, with volume 10 dealing with "The Philosophers' Philosophy of Law" for the same period.
  4. The opening chapter of his edited collection, History and the Disciplines: The Reclassification of Knowledge in Early Modern Europe (1997).
Cite as: Andrew Halpin, A Story of Jurisprudence and True Philosophy, JOTWELL (March 7, 2018) (reviewing Gerald J. Postema, The Perils and Prospects of Critical History: Comments on Bernal, Naffine, Vatter, and Walton, 8 Jurisprudence 609 (2017)), https://juris.jotwell.com/story-jurisprudence-true-philosophy/.

A New Blackstone

William Blackstone, Commentaries on the Laws of England (Ruth Paley & Wilfrid Prest eds., Oxford University Press, 2016).

William Blackstone was for a long time one of the central figures of both British and American legal thought. His Commentaries on the Laws of England was the text by which many learned law in England. In the United States, Blackstone was equally authoritative, though often read with additional commentary (e.g., by St. George Tucker1).

Blackstone's Commentaries has also played a significant role within legal theory—especially for theorists critical of certain features of the approach to adjudication and judicial reasoning that he espoused, features that remain large parts of the Anglo-American tradition. Criticism of Blackstone and his Commentaries is, for example, integral to much of Jeremy Bentham's writings on law.2 Bentham was an opponent of judicial law-making in general and the common law approach in particular. In a small piece called "Truth versus Ashurst," he compared the way that common law judges make law to the way people make laws for their dogs: "When your dog does anything you want to break him of, you wait till he does it, and then beat him for it."3 This attack on the unpredictability and retroactivity of common law decision-making remains important to the present day.

Bentham’s student, John Austin, also made a famous attack on natural law theory, using a passage in the Commentaries as his stalking horse. Blackstone stated that “no human laws are of any validity if contrary to [the law of nature].”4 Austin countered that “to say that human laws which conflict with the Divine Law are not binding, that is to say, are not laws, is to talk stark nonsense. The most pernicious laws, and therefore those which are most opposed to the will of God, have been and are continually enforced by judicial tribunals.”5 To Austin, Blackstone’s approach to legal reasoning—and hence many aspects of Anglo-American judicial tradition—conflated what the law is with what it ought to be.

In more recent work, Duncan Kennedy, a critical legal theorist, offered an extended discussion of Blackstone as the occasion for propounding his idea of “the fundamental contradiction.”6 This is a contradiction he perceives between individual liberty and the community coercion that both safeguards and threatens it. Kennedy used details from the Commentaries to support the conclusion that there is this ongoing “contradiction” in law and government.

As these examples show, Blackstone’s work has provided both a guidepost for a great amount of Anglo-American judicial practice and an object of criticism for a wide array of jurisprudential thinkers concerned with aspects of those traditions.

Until recently, those who wanted to study or cite to Blackstone’s Commentaries usually made use of a very good 1979 University of Chicago Press edition (Stanley N. Katz, ed.), which was a facsimile of the (1765-1769) first edition of the Commentaries. While this version of the Commentaries remains good enough for many purposes, the new Oxford University Press edition has significant advantages for those who want to dive deeper into Blackstone’s text, with particular attention to historical context and how Blackstone varied his text over the various editions up to the ninth (and first posthumous) edition (1783). Each volume of the new OUP Commentaries contains a helpful introduction by a different historical expert in the field (David Lemmings for Book I — Of the Rights of Persons; Simon Stern for Book II — Of the Rights of Things; Thomas P. Gallanis for Book III — Of Private Wrongs; and Ruth Paley for Book IV — Of Public Wrongs), along with the varia, and helpful tables of statutes and cases mentioned in Blackstone’s text, at the end of each volume.

Blackstone’s Commentaries remain a central source for understanding both historical and modern Anglo-American law – as well as many debates in Anglo-American legal philosophy – and Oxford University Press’s new edition of the Commentaries offers a valuable new resource for studying them.

  1. See, e.g., Robert M. Cover, Book Review, 70 Colum. L. Rev. 1475 (1970).
  2. Jeremy Bentham, A Comment on the Commentaries (1776).
  3. Jeremy Bentham, Truth versus Ashurst, The Works of Jeremy Bentham, Vol. V (1843), P. 235.
  4. William Blackstone, Commentaries on the Laws of England, Vol. 1 (1768), P. 41.
  5. John Austin, The Province of Jurisprudence Determined (Wilfrid Rumble, ed., 1995) [1832], Lecture V, P. 158.
  6. Duncan Kennedy, The Structure of Blackstone’s Commentaries, 28 Buffalo L. Rev. 205 (1979).
Cite as: Brian Bix, A New Blackstone, JOTWELL (January 16, 2018) (reviewing William Blackstone, Commentaries on the Laws of England (Ruth Paley & Wilfrid Prest eds., Oxford University Press, 2016)), https://juris.jotwell.com/a-new-blackstone/.