    Panel Assignment in the Federal Courts of Appeals

    It is common knowledge that the federal courts of appeals typically hear cases in panels of three judges and that the composition of the panel can have significant consequences for case outcomes and for legal doctrine more generally. Yet neither legal scholars nor social scientists have focused on the question of how judges are selected for their panels. Instead, a substantial body of scholarship simply assumes that panel assignment is random. This Article provides what, up until this point, has been a missing account of panel assignment. Drawing on a multiyear qualitative study of five circuit courts, including in-depth interviews with thirty-five judges and senior administrators, I show that strictly random selection is a myth, and an improbable one at that—in many instances, it would have been impossible as a practical matter for the courts studied here to create their panels by random draw. Although the courts generally tried to “mix up” the judges, the chief judges and clerks responsible for setting the calendar also took into account various other factors, from collegiality to efficiency-based considerations. Notably, those factors differed from one court to the next; no two courts approached the challenge of panel assignment in precisely the same way. These findings pose an important challenge to the widespread assumption of panel randomness and reveal key normative questions that have been largely ignored in the literature. Although randomness is regarded as the default selection method across much of judicial administration, there is little exposition of why it is valuable. What, exactly, is desirable about having judges brought together randomly in the first place? What, if anything, is problematic about nonrandom methods of selection? This Article sets out to clarify both the costs and benefits of randomness, arguing that there can be valid reasons to depart from it. As such, it provides a framework for assessing different panel assignment practices and the myriad other court practices that rely, to some extent, on randomness.

    Institutions and economic research: a case of location externalities on agricultural resource allocation in the Kat River basin, South Africa

    The Physical Externality Model is used to illustrate the potential limitations of blindly adopting formal models for economic investigation and explanation in varied geographical contexts; as institutional economists have argued for the last hundred years, this practice limits the value and relevance of much general economic inquiry. The model postulates that the geographical location of farmers along a given watercourse, in which water is diverted individually, leads to structural inefficiencies that negatively affect the whole farming community. These effects are felt more severely at downstream sites and produce a status quo in which upstream farmers hold relative economic and political advantages over their counterparts elsewhere. In the study of the Kat River basin, these predictions appear to hold only insofar as they relate to legal and political allocations and uses of water resources. In terms of lawful uses of land resources aimed at expanding citrus production, the model’s predictions are not met. The status quo is, however, fully explained by upstream farmers having adopted formal water scheduling rights, as well as by other geographical factors. Hence, the case for investigating the effects of important institutions within general economic research is strengthened. Keywords: institutions, water allocation, physical externality, Kat River Valley

    SLO-aware Colocation of Data Center Tasks Based on Instantaneous Processor Requirements

    In a cloud data center, a single physical machine simultaneously executes dozens of highly heterogeneous tasks. Such colocation results in more efficient utilization of machines, but when tasks' requirements exceed the available resources, some of the tasks may be throttled down or preempted. We analyze version 2.1 of the Google cluster trace, which reports short-term (1-second) task CPU usage. Contrary to the assumptions made in many theoretical studies, we demonstrate that the empirical distributions do not follow any single standard distribution. However, high percentiles of the total processor usage (summed over at least 10 tasks) can be reasonably estimated by a Gaussian distribution. We use this result in a probabilistic fit test, called the Gaussian Percentile Approximation (GPA), for standard bin-packing algorithms. To check whether a new task will fit on a machine, GPA tests whether the percentile of the resulting distribution corresponding to the requested service level objective (SLO) is still below the machine's capacity. In our simulation experiments, GPA produced colocations that exceeded the machines' capacity with a frequency similar to the requested SLO. Comment: Author's version of a paper published in ACM SoCC'1
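
    As a rough illustration of the GPA fit test described above, the sketch below sums the mean and variance of the colocated tasks' CPU usage, approximates the total by a Gaussian, and admits a new task only if the Gaussian's SLO percentile stays below the machine's capacity. All names (Task, gpa_fits, machine_capacity, slo) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a GPA-style fit test: approximate the total CPU usage of
# colocated tasks by a Gaussian and compare its SLO percentile with capacity.
import math
from dataclasses import dataclass
from statistics import NormalDist

@dataclass
class Task:
    mean_cpu: float  # mean short-term CPU usage (normalized cores)
    var_cpu: float   # variance of short-term CPU usage

def gpa_fits(running: list[Task], new_task: Task,
             machine_capacity: float, slo: float) -> bool:
    """Accept the new task only if the SLO percentile of the approximated
    total usage stays below the machine's capacity."""
    tasks = running + [new_task]
    # Means and variances add (independence assumed); for ~10+ tasks the
    # total is roughly Gaussian, matching the paper's empirical observation.
    total_mean = sum(t.mean_cpu for t in tasks)
    total_std = math.sqrt(sum(t.var_cpu for t in tasks))
    if total_std == 0.0:
        return total_mean <= machine_capacity
    percentile = NormalDist(total_mean, total_std).inv_cdf(slo)
    return percentile <= machine_capacity

# Example: machine capacity of 1.0 normalized cores and a 99% SLO.
running = [Task(0.05, 0.0004) for _ in range(12)]
print(gpa_fits(running, Task(0.10, 0.001), machine_capacity=1.0, slo=0.99))
```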

    Risk Limiting Dispatch with Ramping Constraints

    Reliable operation in power systems is becoming more difficult as the penetration of random renewable resources increases. In particular, operators face the risk of not having scheduled enough traditional generators at times when renewable output is lower than expected. In this paper we study the optimal trade-off between system risk and the cost of scheduling reserve generators, explicitly modeling the ramping constraints on the generators. We formulate the problem as a multi-period stochastic control problem and characterize the structure of the optimal dispatch. We then show how to compute the dispatch efficiently using two methods: i) solving a surrogate chance-constrained program, and ii) an MPC-type look-ahead controller. Using real-world data, we show that the chance-constrained dispatch outperforms the MPC controller and is also robust to changes in the probability distribution of the renewables. Comment: Shorter version submitted to smartgrid comm 201
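
    As a simplified illustration of the chance-constrained idea (not the paper's multi-period formulation, which also models ramping), the sketch below schedules just enough conventional generation for a single period so that the probability of a shortfall stays below a risk level epsilon, assuming a Gaussian renewable forecast error; all parameter names are hypothetical.

```python
# Single-period chance-constrained dispatch sketch: choose generation g with
# P(demand - actual_renewable > g) <= epsilon, under a Gaussian error model.
from statistics import NormalDist

def chance_constrained_dispatch(demand: float,
                                renewable_forecast: float,
                                forecast_std: float,
                                epsilon: float) -> float:
    """Return the conventional generation level that keeps the shortfall
    probability at or below epsilon."""
    # Cover the (1 - epsilon) quantile of the renewable shortfall error.
    error_quantile = NormalDist(0.0, forecast_std).inv_cdf(1.0 - epsilon)
    return max(0.0, demand - renewable_forecast + error_quantile)

# Example: 100 MW demand, 30 MW wind forecast with 8 MW std, 5% risk level.
print(chance_constrained_dispatch(100.0, 30.0, 8.0, 0.05))
```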

    An Empirical Approach to Temporal Reference Resolution

    This paper presents the results of an empirical investigation of temporal reference resolution in scheduling dialogs. The algorithm adopted is primarily a linear-recency-based approach that does not include a model of global focus. A fully automatic system has been developed and evaluated on unseen test data with good results. Specifically, the paper presents an intercoder reliability study, a model of temporal reference resolution that supports linear recency and has very good coverage, the system's results on unseen test data, and a detailed analysis of the dialogs assessing the viability of the approach. Comment: 13 pages, latex using aclap.st
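
    Purely for illustration, the sketch below shows one way a linear-recency strategy for temporal reference resolution can be realized: unspecified components of a new temporal expression are filled in from the most recent prior time mentions, with no model of global focus. The TimeMention structure and its fields are hypothetical, not the paper's representation.

```python
# Hedged sketch of linear-recency temporal reference resolution: walk the
# dialog history backwards and fill in whatever the new expression leaves
# unspecified.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimeMention:
    weekday: Optional[str] = None  # e.g. "Tuesday"
    hour: Optional[int] = None     # e.g. 14 for 2 pm

def resolve(partial: TimeMention, history: list[TimeMention]) -> TimeMention:
    """Fill the unspecified fields of a new temporal expression from the most
    recent prior mentions (linear recency, no global focus)."""
    resolved = TimeMention(weekday=partial.weekday, hour=partial.hour)
    for antecedent in reversed(history):
        if resolved.weekday is None:
            resolved.weekday = antecedent.weekday
        if resolved.hour is None:
            resolved.hour = antecedent.hour
        if resolved.weekday is not None and resolved.hour is not None:
            break  # fully specified; no need to look further back
    return resolved

# "How about Tuesday at 2?" ... "Let's make it 3 instead."
history = [TimeMention(weekday="Tuesday", hour=14)]
print(resolve(TimeMention(hour=15), history))  # -> weekday='Tuesday', hour=15
```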

    Power efficient job scheduling by predicting the impact of processor manufacturing variability

    Modern CPUs suffer from performance and power-consumption variability due to the manufacturing process. Systems that do not account for this variability suffer performance degradation and waste power, so users and system administrators must actively counteract it. In this work we show that parallel systems benefit from taking manufacturing variability into account when making scheduling decisions at the job-scheduler level. We also show that the impact of this variability on specific applications can be predicted using variability-aware power prediction models. Based on these power models, we propose two job scheduling policies that consider the effects of manufacturing variability for each application and ensure that power consumption stays under a system-wide power budget. We evaluate our policies under different power budgets and traffic scenarios, consisting of both single- and multi-node parallel applications and utilizing up to 4096 cores in total. We demonstrate that, compared to contemporary scheduling policies used on production clusters, they decrease job turnaround time by up to 31% while saving up to 5.5% energy. Postprint (author's final draft)
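
    To make the scheduling idea concrete, here is a minimal hypothetical sketch of a variability-aware placement check under a system-wide power budget: each node carries a manufacturing-variability power factor, a job's power draw is predicted from that factor, and the job is started on the most power-efficient free nodes only if the predicted total stays within the budget. The linear power model and all names are illustrative assumptions, not the paper's prediction models or policies.

```python
# Hedged sketch: variability-aware job placement under a global power budget.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    power_factor: float  # manufacturing variability: multiplier vs. nominal power
    busy: bool = False

@dataclass
class Job:
    name: str
    nodes_needed: int
    nominal_power_per_node: float  # watts per node on a nominal chip

def predicted_power(job: Job, nodes: list[Node]) -> float:
    # Variability-aware prediction: scale nominal power by each node's factor.
    return sum(job.nominal_power_per_node * n.power_factor for n in nodes)

def try_schedule(job: Job, cluster: list[Node],
                 current_power: float, power_budget: float) -> list[Node]:
    """Pick the most power-efficient free nodes and start the job only if the
    predicted total power stays within the system-wide budget."""
    free = sorted((n for n in cluster if not n.busy), key=lambda n: n.power_factor)
    chosen = free[:job.nodes_needed]
    if len(chosen) < job.nodes_needed:
        return []  # not enough free nodes
    if current_power + predicted_power(job, chosen) > power_budget:
        return []  # starting the job would exceed the power budget
    for n in chosen:
        n.busy = True
    return chosen

# Example: 4-node cluster, 300 W already drawn, 800 W budget.
cluster = [Node("n0", 0.95), Node("n1", 1.08), Node("n2", 1.00), Node("n3", 1.12)]
job = Job("lu", nodes_needed=2, nominal_power_per_node=200.0)
print([n.name for n in try_schedule(job, cluster, current_power=300.0, power_budget=800.0)])
```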