
    The Minimum Description Length Principle for Pattern Mining: A Survey

    This is about the Minimum Description Length (MDL) principle applied to pattern mining. The length of this description is kept to the minimum. Mining patterns is a core task in data analysis and, beyond issues of efficient enumeration, the selection of patterns constitutes a major challenge. The MDL principle, a model selection method grounded in information theory, has been applied to pattern mining with the aim of obtaining compact, high-quality sets of patterns. After giving an outline of relevant concepts from information theory and coding, as well as of work on the theory behind the MDL and similar principles, we review MDL-based methods for mining various types of data and patterns. Finally, we open a discussion on some issues regarding these methods, and highlight currently active related data analysis problems.
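The two-part compression idea behind these methods can be sketched on a toy transaction dataset. Everything below (the data, the candidate pattern, and the simplified cost that ignores the model-side cost L(M)) is invented for illustration and is not from the survey:

```python
import math

def code_length(usages):
    """Total Shannon-optimal code length, in bits, for a usage profile:
    each pattern's code costs -log2(usage / total_usage) bits per use."""
    total = sum(usages)
    return sum(u * -math.log2(u / total) for u in usages if u > 0)

# Toy transactions over items {a, b, c}
transactions = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"c"}]

# Baseline model: encode every item occurrence with singleton codes.
singleton_usages = [sum(item in t for t in transactions) for item in "abc"]
baseline = code_length(singleton_usages)  # usages: a=3, b=3, c=2

# Candidate model: one pattern {a, b} (covering 3 transactions) plus singleton c (2 uses).
candidate = code_length([3, 2])

# MDL prefers the model minimizing L(M) + L(D|M); here we compare only L(D|M),
# and the pattern {a, b} already compresses the data better than singletons do.
print(f"baseline: {baseline:.2f} bits, candidate: {candidate:.2f} bits")
```

Full MDL-based miners also charge for describing the model (e.g., the code table) itself, so a pattern is kept only when its compression gain outweighs its model cost.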

    Reconceptualizing Non-Article III Tribunals

    The Supreme Court’s Article III doctrine is built upon an explicit assumption that Article III must accommodate non-Article III tribunals in order to allow Congress to “innovate” by creating new procedural structures to further its substantive regulatory goals. In this Article, I challenge that fundamental assumption. I argue that each of the types of non-Article III innovation and the underlying procedural goals cited by the Court can be obtained through our Article III courts. The Article then demonstrates that these are not theoretical or hypothetical solutions, but instead are existing structures already in place within Article III. Demonstrating that the foundation of our existing Article III doctrine cannot stand does not necessarily require the invalidation of all non-Article III tribunals. Instead, it requires a new generation of theory, built upon a more accurate conception of the forms of adjudication. This Article proposes two pillars upon which this new jurisprudence may rest. It undertakes to build the first piece of that foundation by demonstrating that tribunals have unique institutional capacities only when fulfilling an executive or legislative function — not when fulfilling purely adjudicative roles. This observation comports with the intuition of the early Article III doctrine. While this early intuition was abandoned to accommodate the modern administrative state, the Article reveals that these intuitions not only can be restored without undermining the modern administrative state, but would better satisfy the normative goals identified by the modern Court. This robustness suggests that this approach may provide a solid foundation for Article III doctrine, consonant not only with the existing architecture but also with the innovation and evolution of law to come.
But, equally important, correcting these mistaken assumptions reshapes many of the leading Article III theories in ways that provide answers to heretofore-unanswered critiques, as these insights have the capacity to demonstrate the feasibility of stricter constitutional approaches, while providing a constitutional basis for pragmatic doctrines. The second component of the foundation lies in the exception to Article III — consent of the parties. The consent of the parties to a non-Article III structure has become a foundational premise in our jurisprudence, invoked as recently as last Term by the Court. But this Article argues that this doctrine is undertheorized and seeks to establish that mere consent is not sufficient to protect Article III’s individual rights and structural role. Specifically, the Article explores the ways in which Congress has utilized its constitutional power both to create law and to structure the courts in ways that devalue substantive rights or litigation outcomes, pressuring individuals to consent to the non-Article III determination of state and common law claims. Viewed through this lens, permitting non-Article III adjudication based on party consent may incentivize precisely the types of exertions of power by Congress to undermine the constitutional courts that Article III sought to preclude. This Article suggests that the doctrine must take a harder look at consent if it is to protect not only the structural role of Article III, but also the individual’s Article III rights, from encroachment by Congress.

    The Elastics of Snap Removal: An Empirical Case Study of Textualism

    This article reports the findings of an empirical study of textualism as applied by federal judges interpreting the statute that permits removal of diversity cases from state to federal court. The “snap removal” provision in the statute is particularly interesting because its application forces judges into one of two interpretive camps—which are fairly extreme versions of textualism and purposivism, respectively. We studied characteristics of cases and judges to find predictors of textualist outcomes. In this article we offer a narrative discussion of key variables and we detail the results of our logistic regression analysis. The most salient predictive variable was the party of the president who appointed the judge. Female judges and younger judges were also more likely to reach textualist outcomes. Cases involving torts were substantially more likely to be removed even though the statute raises a pure legal question upon which the subject matter of the case should have no bearing. Our most surprising finding was the impact of a judge’s undergraduate and legal education: the eliteness of the educational institution was positively correlated with removal for judges appointed by Republicans, but negatively correlated for judges appointed by Democrats. This disordinal interaction was especially striking since there was no party effect among judges who attended non-elite institutions. In addition to the aforementioned variables, which were significant, several variables that were not predictive are also discussed; these include race, seniority, state court experience, and the prospect of multi-district case consolidations.
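A disordinal interaction of the kind reported above (eliteness cutting in opposite directions depending on the party of the appointing president) can be illustrated with a minimal logistic-regression sketch. The data-generating process, effect sizes, and variable coding below are entirely hypothetical and are not the study's data or model:

```python
import math
import random

random.seed(0)

def simulate(n=400):
    """Hypothetical judges: party (1 = Republican appointee), elite-education flag.
    By construction, eliteness raises removal odds for one party and lowers them
    for the other (invented effect sizes of +/-1.2 on the log-odds scale)."""
    X, y = [], []
    for _ in range(n):
        party = random.randint(0, 1)
        elite = random.randint(0, 1)
        logit = 1.2 * elite if party else -1.2 * elite
        p = 1 / (1 + math.exp(-logit))
        X.append((1.0, party, elite, party * elite))  # intercept, mains, interaction
        y.append(1 if random.random() < p else 0)
    return X, y

def fit_logistic(X, y, lr=1.0, epochs=1000):
    """Plain full-batch gradient descent on the logistic log-loss."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

X, y = simulate()
w = fit_logistic(X, y)
# A disordinal interaction shows up as an elite main effect (w[2]) and an
# interaction coefficient (w[3]) of opposite sign, with w[2] + w[3] crossing zero:
# eliteness predicts the outcome in opposite directions for the two parties.
print([round(v, 2) for v in w])
```

The interaction column `party * elite` is what lets a single model express the sign flip; without it, the regression would average the two opposing slopes toward zero.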

    On the correspondence between dream content and target material under laboratory conditions: a meta-analysis of dream-ESP studies, 1966-2016

    In order to further our understanding about the limits of human consciousness and the dream state, we report meta-analytic results on experimental dream-ESP studies for the period 1966 to 2016. Dream-ESP can be defined as a form of extra-sensory perception (ESP) in which a dreaming perceiver ostensibly gains information about a randomly selected target without using the normal sensory modalities or logical inference. Studies fell into two categories: the Maimonides Dream Lab (MDL) studies (n = 14), and independent (non-MDL) studies (n = 36). The MDL dataset yielded mean ES = .33 (SD = 0.37); the non-MDL studies yielded mean ES = .14 (SD = 0.27). The difference between the two mean values was not significant. A homogeneous dataset (N = 50) yielded a mean z of 0.75 (ES = .20, SD = 0.31), with corresponding significant Stouffer Z = 5.32, p = 5.19 × 10⁻⁸, suggesting that dream content can be used to identify target materials correctly and more often than would be expected by chance. No significant differences were found between: (a) three modes of ESP (telepathy, clairvoyance, precognition), (b) senders, (c) perceivers, or (d) REM/non-REM monitoring. The ES difference between dynamic targets (e.g., movie-film) and static (e.g., photographs) targets was not significant. We also found that significant improvements in the quality of the studies were not related to ES, but ES did decline over the 51-year period. Bayesian analysis of the same homogeneous dataset yielded results supporting the ‘frequentist’ finding that the null hypothesis should be rejected. We conclude that the dream-ESP paradigm in parapsychology is worthy of continued investigation, but we recommend design improvements.
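The reported Stouffer Z is consistent with the other summary statistics in the abstract; a quick arithmetic check (the function names here are ours, not the paper's):

```python
import math

def stouffer_z(z_scores):
    # Stouffer's method: Z = sum(z_i) / sqrt(k) for k independent studies
    return sum(z_scores) / math.sqrt(len(z_scores))

def upper_tail_p(z):
    # One-tailed p-value from the standard normal survival function
    return 0.5 * math.erfc(z / math.sqrt(2))

# 50 studies with a mean z of 0.75 give Z = 0.75 * sqrt(50) ≈ 5.30, matching the
# reported Stouffer Z = 5.32 up to rounding of the mean z, and the tail
# probability at Z = 5.32 matches the reported p ≈ 5.19 × 10⁻⁸.
Z = stouffer_z([0.75] * 50)
print(round(Z, 2), upper_tail_p(5.32))
```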

    Class Conflicts

    The approach of the twentieth anniversary of the Supreme Court’s landmark decision in Amchem Products, Inc. v. Windsor provides the opportunity to reflect on the collapse of the framework it announced for managing intra-class conflicts. That framework, reinforced two years later in Ortiz v. Fibreboard Corp., was bold, in that it broadly defined actionable conflicts to include divergent interests with regard to settlement allocation; market-based, in that it sought to regulate such conflicts by harnessing competing subclass counsel’s financial incentives; and committed to intrinsic process values, insofar as, to assure structural fairness, the Court was willing to upend a settlement that would have solved the asbestos litigation crisis. Since the 1990s, the lower federal courts have chipped away at the foundation of that conflicts management regime by limiting Amchem and Ortiz to their facts, narrowly defining the kinds of conflicts that warrant subclassing, and turning to alternative assurances of fairness that do not involve fostering competition among subclass counsel. A new model of managing class conflicts is emerging from the trenches of federal trial courts. It is modest, insofar as it has a high tolerance for allocation conflicts; regulatory, rather than market or incentive-based, in that it relies on judicial officers to police conflicts; and utilitarian, because settlement outcomes provide convincing evidence of structurally fair procedures. In short, the new model is fundamentally the mirror image of the conflicts management framework the Court created at the end of the last century. This Article provides an institutional account of this transformation, examining how changes in the way mass tort and other large-scale wrongs are litigated make it inconvenient to adhere to the Supreme Court’s twentieth century conflicts management blueprint.
There is a lesson here: a jurisprudential edifice built without regard to the practical realities of resolving large-scale litigation cannot stand.

    The Visible Trial: Judicial Assessment as Adjudication

    Only a small fraction of lawsuits ends in trial—a phenomenon termed the “vanishing trial.” Critics of the declining trial rate see a remote, increasingly regressive judicial system. Defenders see a system that allows parties to resolve disputes independently. Analyzing criminal and civil filings in federal district court for the forty-year period from 1980 to 2019, we confirm a steady decline in the absolute and relative number of trials. We find, however, that this emphasis on trial rate obscures courts’ vital role and ignores parties’ goals. Judges adjudicate disputes directly by ruling or effectively through other assessments of the parties’ cases. Even as their absolute and relative numbers decrease, trials remain the most visible event in trial courts. The visible trial serves effectively as a guide star. Our findings warrant a fundamental reconceptualization of litigation as primarily about educating parties rather than about trying cases. The assessment theory proposed here views adjudication as a continuous, information-disclosing process that is guided by but not destined for trial. Our evaluation and expectations of the modern justice system should be focused on the effectiveness of judges as teachers.

    Judging Aggregate Settlement

    While courts historically have taken a hands-off approach to settlement, judges across the legal spectrum have begun to intervene actively in “aggregate settlements”—repeated settlements between the same parties or institutions that resolve large groups of claims in a lockstep manner. In large-scale litigation, for example, courts have invented, without express authority, new “quasi-class action” doctrines to review the adequacy of massive settlements brokered by similar groups of attorneys. In recent and prominent agency settlements, including ones involving the SEC and EPA, courts have scrutinized the underlying merits to ensure settlements adequately reflect the interests of victims and the public at large. Even in criminal law, which has lagged behind other legal systems in acknowledging the primacy of negotiated outcomes, judges have taken additional steps to review iterant settlement decisions routinely made by criminal defense attorneys and prosecutors. Increasingly, courts intervene in settlements out of a fear commonly associated with class action negotiations—that the “aggregate” nature of the settlement process undermines the courts’ ability to promote legitimacy, loyalty, accuracy and the development of substantive law. Unfortunately, when courts step in to review the substance of settlements on their own, they may frustrate the parties’ interests, upset the separation of powers, or stretch the limits of their ability. The phenomenon of aggregate settlement thus challenges the judiciary’s duty to preserve the integrity of the civil, administrative, and criminal justice systems. This Article maps the new and critical role that courts must play in policing aggregate settlements. We argue that judicial review should exist to alert and press other institutions—private associations of attorneys, government lawyers, and the coordinate branches of government—to reform bureaucratic approaches to settling cases. 
Such review would not mean interfering with the final outcome of any given settlement. Rather, judicial review would mean demanding more information about the parties’ competing interests in settlement, more participation by outside stakeholders, and more reasoned explanations for the trade-offs made by counsel on behalf of similarly situated parties. In so doing, courts can provide an important failsafe that helps protect the procedural, substantive, and rule-of-law values threatened by aggregate settlements.