
    The Utility of Text: The Case of Amicus Briefs and the Supreme Court

    We explore the idea that authoring a piece of text is an act of maximizing one's expected utility. To make this idea concrete, we consider the societally important decisions of the Supreme Court of the United States. Extensive past work in quantitative political science provides a framework for empirically modeling the decisions of justices and how those decisions relate to text. We incorporate into such a model texts authored by amici curiae ("friends of the court," separate from the litigants) who seek to weigh in on the decision, then explicitly model their goals in a random utility model. We demonstrate the benefits of this approach through improved vote prediction and the ability to perform counterfactual analysis. Comment: working draft.

    Selected Problems of Valuation and Classification of Historic Buildings Using Rough Sets

    The paper presents the problems associated with multicriteria evaluation of historic buildings. The possibilities of modeling a historic building so that the Rough Sets approach can be used for its evaluation are presented. The problems of selecting evaluation criteria and of accounting for the structure of the object are discussed, as well as the problem of discretization and its impact on the generation of rules.
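The core of the Rough Sets approach mentioned above is to describe a target set of objects (here, buildings judged "valuable") by its lower and upper approximations over a partition of objects that are indiscernible on the chosen criteria. The following is a minimal sketch of those two approximations; the partition and the target set are hypothetical illustrative data, not taken from the paper.

```python
def rough_approximations(equiv_classes, target):
    """Compute the rough-set lower and upper approximations of `target`.

    equiv_classes: partition of the universe into sets of objects that are
                   indiscernible on the evaluation criteria.
    target:        the set to approximate (e.g. buildings rated "valuable").
    """
    lower, upper = set(), set()
    for block in equiv_classes:
        if block <= target:       # block certainly inside the target
            lower |= block
        if block & target:        # block possibly inside the target
            upper |= block
    return lower, upper


# Hypothetical partition of five buildings by indiscernible attribute values
blocks = [{1, 2}, {3}, {4, 5}]
valuable = {1, 2, 4}
lo, up = rough_approximations(blocks, valuable)
print(lo)  # {1, 2}
print(up)  # {1, 2, 4, 5}
```

Objects in the upper but not the lower approximation (here, buildings 4 and 5) form the boundary region, where the criteria cannot decide the classification; decision rules are then generated over these regions.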

    Voting margin: A scheme for error-tolerant k nearest neighbors classifiers for machine learning

    Machine learning (ML) techniques such as classifiers are used in many applications, some of them safety-related or critical systems. In such cases, correct processing is a strict requirement, so ML algorithms (such as classifiers) must be error tolerant. A naive way to implement error-tolerant classifiers is to resort to general protection techniques such as modular redundancy. However, modular redundancy incurs large overheads in metrics such as hardware utilization and power consumption, which may not be acceptable in applications running on embedded or battery-powered systems. Another option is to exploit the algorithmic properties of the classifier to provide protection and error tolerance at a lower cost. This paper explores this approach for a widely used classifier, k Nearest Neighbors (kNN), and proposes an efficient scheme to protect it against errors. The proposed technique is based on time-based modular redundancy (TBMR) and exploits the intrinsic redundancy of kNN to drastically reduce the number of re-computations needed to detect errors. The key observation is that when the vote among the k nearest neighbors has a large majority, an error in one of the voters cannot change the result; hence the name voting margin (VM). This observation is refined and extended in the proposed VM scheme to also avoid re-computations in some cases in which the majority vote is tight. The VM scheme has been implemented and evaluated with publicly available data sets that cover a wide range of applications and settings. The results show that, by exploiting the intrinsic redundancy of the classifier, the proposed scheme reduces the cost compared to modular redundancy by more than 60 percent in all configurations evaluated. Pedro Reviriego and José Alberto Hernández would like to acknowledge the support of the TEXEO project TEC2016-80339-R, funded by the Spanish Ministry of Economy and Competitiveness, and of the Madrid Community research project TAPIR-CM, Grant no. P2018/TCS-4496.
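The basic voting-margin observation can be sketched as follows: a single erroneous voter moves at most one vote from the winning class to the runner-up, so when the winner leads by more than two votes the outcome cannot flip and re-computation can be skipped. This is a minimal illustration of that single-error test only; the paper's full VM scheme (including its handling of tight votes) is more involved, and the function name and vote encoding here are illustrative assumptions.

```python
from collections import Counter

def needs_recheck(votes):
    """Return True if a single erroneous voter could change the
    majority outcome, in which case the kNN vote must be re-computed."""
    counts = Counter(votes).most_common()
    top = counts[0][1]
    runner_up = counts[1][1] if len(counts) > 1 else 0
    # Worst case: one vote moves from the winner to the runner-up,
    # giving (top - 1) vs (runner_up + 1). Safe only when the margin
    # exceeds two votes.
    return top - runner_up <= 2


print(needs_recheck(["a", "a", "a", "a", "b"]))  # False: margin is safe
print(needs_recheck(["a", "a", "b", "b", "a"]))  # True: vote is tight
```

With k = 5 and a 4-to-1 vote, no single flipped voter can overturn the result, so the redundant re-computation of the TBMR scheme is unnecessary; a 3-to-2 vote is within reach of a single error and must be checked.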

    Finding Order in the Morass: The Three Real Justifications for Piercing the Corporate Veil

    Few doctrines are more shrouded in mystery or litigated more often than piercing the corporate veil. We develop a new theoretical framework that posits that veil piercing is done to achieve three discrete public policy goals, each of which is consistent with economic efficiency: (1) achieving the purpose of an existing statute or regulation; (2) preventing shareholders from obtaining credit by misrepresentation; and (3) promoting the bankruptcy values of achieving the orderly, efficient resolution of a bankrupt's estate. We analyze the facts of veil-piercing cases to show how the outcomes are explained by our taxonomy. We demonstrate that a supposed justification for veil piercing -- undercapitalization -- in fact rarely, if ever, provides an independent basis for piercing the corporate veil. Finally, we employ modern quantitative machine learning methods never before utilized in legal scholarship to analyze the full text of 9,380 judicial opinions. We demonstrate that our theories systematically predict veil-piercing outcomes, that the widely invoked rationale of undercapitalization poorly explains these cases, and that our theories most closely reflect the actual textual structure of the opinions.