
    The Law and Economics of Entrenchment

    Should law respond readily to society’s evolving views, or should it remain fixed? This is the question of entrenchment, meaning the insulation of law from change through supermajority rules and other mechanisms. Entrenchment stabilizes law, which promotes reliance and predictability, but it also frustrates democratic majorities. Legal scholars have long studied this tension but made little progress in resolving it. This Article considers the problem from the perspective of law and economics. Three arguments follow. First, majority rule can systematically harm society—even when voters are rational (i.e., not passionate) and no intense minority is present. This is because of a collective action problem created by transition costs. Second, entrenchment is unnecessary when bargaining is easy, but it offers a second-best solution when bargaining is hard. This helps explain why some laws are entrenched but not others. Third, the optimal degree of entrenchment depends on a distinction existing scholarship ignores: whether the transition costs associated with a change in law are variable or fixed. Given variable costs, the argument for entrenchment is even stronger than scholars realize. But given fixed costs, the argument weakens. To overcome fixed costs, outdated laws require major change, but entrenchment encourages only minor change. This mismatch relates to an age-old question: when, if ever, should judges update entrenched law through interpretation? In one sense, judges can beneficially update in a way that democracy cannot. These ideas cast doubt on work by originalists, living constitutionalists, and others. They have implications for legal design and constitutional law, and they plant seeds for a new and fruitful field: the law and economics of entrenchment.

    Interpreting Initiatives


    Judicial Independence and Social Welfare

    Judicial independence is a cornerstone of American constitutionalism. It empowers judges to check the other branches of government and resolve cases impartially and in accordance with law. Yet independence comes with a hazard. Precisely because they are independent, judges can ignore law and pursue private agendas. For two centuries, scholars have debated those ideas and the underlying tradeoff: independence versus accountability. They have achieved little consensus, in part because independence raises difficult antecedent questions. We cannot decide how independent to make a judge until we agree on what a judge is supposed to do. That depends on one’s views about complicated issues like minority rights, the determinacy of law, and the nature of legalism itself. These complications have paralyzed the debate. This Article presents a way forward. It reduces the debate about independence to a small set of intuitive parameters and shows how they interact. The result is a framework for identifying the optimal degree of judicial independence. The framework transcends the thorny issues bogging down the debate by allowing scholars with diverse views and methodologies to input whatever assumptions they like and get an answer to the question “how independent should judges be?” This framework generates important insights. It shows that independence can implicate a new and fundamental trade-off. Independent judges make some nonlegalistic decisions, and each such decision imposes a high cost on society. Dependent judges make more nonlegalistic decisions, but each imposes a low cost on society. The framework also shows that society may prefer a dependent judge to adjudicate minority rights and that the determinacy of law can be irrelevant to the choice between an independent and a dependent judge. Finally, it shows that the debate rests on deep and contestable assumptions about the value of law. The question is not whether the legalistic decisions that independence is supposed to facilitate are better than nonlegalistic alternatives. The question is “how much better?”
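
    The core trade-off described above can be made concrete with a toy expected-cost comparison. The sketch below is not the Article's framework; the decision rates and per-decision costs are invented purely for illustration.

```python
# Toy comparison (illustrative numbers only, not taken from the Article):
# an independent judge strays from law rarely but each deviation is costly;
# a dependent judge strays often but each deviation costs society less.

def expected_nonlegalistic_cost(rate: float, cost_per_decision: float) -> float:
    """Expected social cost = share of nonlegalistic decisions x cost of each."""
    return rate * cost_per_decision

independent = expected_nonlegalistic_cost(rate=0.05, cost_per_decision=100.0)
dependent = expected_nonlegalistic_cost(rate=0.40, cost_per_decision=10.0)

print(f"independent judge: expected cost {independent:.1f}")  # 5.0
print(f"dependent judge:   expected cost {dependent:.1f}")    # 4.0
```

    Under these hypothetical numbers the dependent judge imposes the lower expected cost, which illustrates the abstract's closing point: the choice turns on how much better legalistic decisions are, not merely on whether they are better.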

    Aggregate Corruption


    Truth Bounties: A Market Solution to Fake News

    False information poses a threat to individuals, groups, and society. Many people struggle to judge the veracity of the information around them, whether that information travels through newspapers, talk radio, TV, or social media. Concerned with the spread of misinformation and harmful falsehoods, much of the policy, popular, and scholarly conversation today revolves around proposals to expand the regulation of individuals, platforms, and the media. While more regulation may seem inevitable, it faces constitutional and political hurdles. Furthermore, regulation can have undesirable side effects and be ripe for abuse by powerful actors, public and private. This Article presents an alternative for fighting misinformation that avoids many pitfalls of regulation: truth bounties. We develop a contractual mechanism that would enable individuals, media, and others to pledge money to support the credibility of their communications. Any person could claim the bounty by presenting evidence of the falsity of the communication before a dedicated body of private arbitrators. Under the system we envision, anyone consuming information on the internet would know immediately if a given communication had a bounty attached, whether the communication had been challenged, and whether the challenge succeeded or failed. As John Stuart Mill recognized, we can trust our grasp of the truth only by putting it to the fire of challenge. Truth bounties open the challenge to all.
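
    As a rough illustration of the lifecycle the abstract describes, the sketch below models a bounty's observable states (attached, challenged, challenge succeeded or failed). The class, field names, and payout rule are hypothetical conveniences, not the contractual or arbitration design proposed in the Article.

```python
# Minimal sketch of a truth-bounty lifecycle (hypothetical data model).

from dataclasses import dataclass
from enum import Enum

class BountyStatus(Enum):
    ACTIVE = "active"                            # bounty attached, no challenge yet
    CHALLENGED = "challenged"                    # evidence of falsity submitted to arbitrators
    CHALLENGE_FAILED = "challenge_failed"        # arbitrators upheld the communication
    CHALLENGE_SUCCEEDED = "challenge_succeeded"  # arbitrators found it false; bounty paid out

@dataclass
class TruthBounty:
    communication_id: str
    publisher: str
    pledge_amount: float
    status: BountyStatus = BountyStatus.ACTIVE
    challenger: str = ""

    def challenge(self, challenger: str) -> None:
        """A challenger presents evidence of falsity before the arbitrators."""
        if self.status is not BountyStatus.ACTIVE:
            raise ValueError("communication is not open to challenge")
        self.status = BountyStatus.CHALLENGED
        self.challenger = challenger

    def resolve(self, arbitrators_found_false: bool) -> float:
        """Record the arbitrators' decision; return the payout owed to the challenger."""
        if self.status is not BountyStatus.CHALLENGED:
            raise ValueError("no pending challenge to resolve")
        if arbitrators_found_false:
            self.status = BountyStatus.CHALLENGE_SUCCEEDED
            return self.pledge_amount
        self.status = BountyStatus.CHALLENGE_FAILED
        return 0.0

# What a reader consuming the communication would see: the pledge amount,
# whether it was challenged, and how the challenge came out.
bounty = TruthBounty("article-123", "Example News", pledge_amount=10_000.0)
bounty.challenge("fact-checker-7")
payout = bounty.resolve(arbitrators_found_false=False)
print(bounty.status.value, payout)  # challenge_failed 0.0
```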

    Active Virtues

    Constitutional theory has long been influenced by the idea that the Supreme Court exercises “passive virtues,” avoiding politically divisive cases that threaten its legitimacy. This Article inverts that logic. Supreme Court Justices (and other judges too) do more than avoid divisive cases that could weaken the Court. They seek “unity” cases—meaning cases where law and politics align—that could strengthen the Court. When judges seek unity cases to enhance their legitimacy, they exercise active virtues. We develop the theory of active virtues and demonstrate its use. Our case studies come from the U.S. Supreme Court and tribunals worldwide, and they involve issues like voting, piracy, and police. Following the case studies, we situate active virtues in a broader theory of judicial power. According to our theory, courts balance divisive and unity cases as investors balance stocks and bonds. This portfolio theory of judicial power illuminates a range of topics, including docket control, activism, the counter-majoritarian difficulty, and the rule of law. Recognizing active virtues may have implications for today’s Supreme Court, which faces a legitimacy crisis.

    Do unbalanced data have a negative effect on LDA?

    For two-class discrimination, Xie and Qiu [The effect of imbalanced data sets on LDA: a theoretical and empirical analysis, Pattern Recognition 40 (2) (2007) 557–562] claimed that, when the covariance matrices of the two classes were unequal, a (class) unbalanced data set had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 real-world data sets, Xie and Qiu (2007) provided empirical evidence to support the claim, using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, that no solid theoretical analysis is presented in Xie and Qiu (2007), and that AUC can lead to a conclusion about the discrimination performance of LDA on unbalanced data sets quite different from that reached using the misclassification error rate (ER). Our empirical and simulation studies suggest that, for LDA, the increase in the median of AUC (and thus the improvement in the performance of LDA) from re-balancing is relatively small, while, in contrast, the increase in the median of ER (and thus the decline in the performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for data sets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between the original and re-balanced data.
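
    The divergence between the two metrics is easy to see in a small simulation, because AUC is threshold-free while ER depends on the class prior the classifier estimates from the training data. The sketch below is only in the spirit of the study: the Gaussian parameters, the 10:1 imbalance, and the undersampling scheme are arbitrary choices, not the paper's 10 real-world data sets.

```python
# Compare AUC and misclassification error rate (ER) for LDA trained on an
# unbalanced sample versus a re-balanced (undersampled) sample.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def sample(n_major, n_minor):
    """Two Gaussian classes with unequal covariance matrices (illustrative values)."""
    x0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n_major)
    x1 = rng.multivariate_normal([1.5, 1.0], [[2.0, -0.4], [-0.4, 0.5]], size=n_minor)
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n_major), np.ones(n_minor)])
    return X, y

def evaluate(X_train, y_train, X_test, y_test):
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    scores = lda.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, scores)
    er = np.mean(lda.predict(X_test) != y_test)  # misclassification error rate
    return auc, er

# Unbalanced training set (10:1) and a test set with the same class ratio.
X_train, y_train = sample(2000, 200)
X_test, y_test = sample(20000, 2000)

# Re-balance by undersampling the majority class to the minority-class size.
idx_major = np.flatnonzero(y_train == 0)
idx_minor = np.flatnonzero(y_train == 1)
keep = np.concatenate([rng.choice(idx_major, size=idx_minor.size, replace=False), idx_minor])

auc_orig, er_orig = evaluate(X_train, y_train, X_test, y_test)
auc_bal, er_bal = evaluate(X_train[keep], y_train[keep], X_test, y_test)

print(f"original:    AUC={auc_orig:.3f}  ER={er_orig:.3f}")
print(f"re-balanced: AUC={auc_bal:.3f}  ER={er_bal:.3f}")
```

    In a toy setting like this one, AUC typically changes little after re-balancing while ER on the (still unbalanced) test sample typically worsens, since undersampling shifts LDA's estimated prior toward 0.5; this mirrors the abstract's point that the two metrics can support different conclusions.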

    Aryl Phosphoramidates of 5-Phospho Erythronohydroxamic Acid, A New Class of Potent Trypanocidal Compounds

    RNAi and enzymatic studies have shown the importance of 6-phosphogluconate dehydrogenase (6-PGDH) in Trypanosoma brucei for parasite survival, making it an attractive drug target for the development of new treatments against human African trypanosomiasis. 2,3-O-Isopropylidene-4-erythrono hydroxamate is a potent inhibitor of T. brucei 6-PGDH, the third enzyme of the pentose phosphate pathway. However, this compound does not have trypanocidal activity due to its poor membrane permeability. Consequently, we previously reported a prodrug approach to improve the antiparasitic activity of this inhibitor by converting the phosphate group into a less charged phosphate prodrug. The activity of the prodrugs appeared to depend on their stability in phosphate buffer. Here we have further extended the development of aryl phosphoramidate prodrugs of 2,3-O-isopropylidene-4-erythrono hydroxamate by synthesizing a small library of phosphoramidates and evaluating their biological activity and stability in a variety of assays. Some of the compounds showed high trypanocidal activity, and their activity correlated well with their stability in fresh mouse blood.