
    Negatively Biased Relevant Subsets Induced by the Most-Powerful One-Sided Upper Confidence Limits for a Bounded Physical Parameter

    Suppose an observable x is the measured value (negative or non-negative) of a true mean mu (physically non-negative) in an experiment with a Gaussian resolution function with known fixed rms deviation s. The most powerful one-sided upper confidence limit at 95% C.L. is UL = x+1.64s, which I refer to as the "original diagonal line". Perceived problems in HEP with small or non-physical upper limits for x<0 historically led, for example, to substitution of max(0,x) for x, and eventually to abandonment, in the Particle Data Group's Review of Particle Physics, of this diagonal-line relationship between UL and x. Recently Cowan, Cranmer, Gross, and Vitells (CCGV) have advocated a concept of "power constraint" that, when applied to this problem, yields variants of the diagonal line, including UL = max(-1,x)+1.64s. Thus it is timely to consider again what is problematic about the original diagonal line, and whether or not modifications cure these defects. In a 2002 Comment, statistician Leon Jay Gleser pointed to the literature on recognizable and relevant subsets. For upper limits given by the original diagonal line, the sample space for x has recognizable relevant subsets in which the quoted 95% C.L. is known to be negatively biased (anti-conservative) by a finite amount for all values of mu. This issue was at the heart of a dispute between Jerzy Neyman and Sir Ronald Fisher over fifty years ago, the crux of which is the relevance of pre-data coverage probabilities when making post-data inferences. The literature describes illuminating connections to Bayesian statistics as well. Methods such as that advocated by CCGV have 100% unconditional coverage for certain values of mu and hence formally evade the traditional criteria for negatively biased relevant subsets; I argue that concerns remain. Comparison with frequentist intervals advocated by Feldman and Cousins also sheds light on the issues. Comment: 22 pages, 7 figures
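    As a minimal numerical illustration of the two rules quoted in this abstract (my sketch, not code from the paper), the following compares the original diagonal line UL = x + 1.64s with the CCGV-style power-constrained variant UL = max(-1,x) + 1.64s; reading the -1 as -1*s, i.e. x expressed in units of s, is my assumption:

        Z95 = 1.64  # one-sided 95% C.L. Gaussian quantile, as in the abstract

        def ul_diagonal(x: float, s: float) -> float:
            """Original diagonal line: UL = x + 1.64*s."""
            return x + Z95 * s

        def ul_power_constrained(x: float, s: float) -> float:
            """CCGV-style variant quoted in the abstract: UL = max(-1, x) + 1.64*s,
            reading the -1 as -1*s (an assumption on my part)."""
            return max(-1.0 * s, x) + Z95 * s

        # With s = 1, a strongly non-physical measurement x = -2 yields a
        # negative upper limit under the original rule but not under the variant.
        for x in (-2.0, -1.0, 0.0, 1.0):
            print(x, ul_diagonal(x, 1.0), ul_power_constrained(x, 1.0))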

    Choice of Law in Online Legal Ethics: Changing a Vague Standard for Attorney Advertising on the Internet


    Prototype system for supporting the incremental modelling of vague geometric configurations

    In this paper the need for Intelligent Computer Aided Design (Int.CAD) to jointly support design and learning assistance is introduced. The paper focuses on presenting and exploring the possibility of realizing learning assistance in Int.CAD by introducing a new concept called Shared Learning. Shared Learning is proposed to empower CAD tools with more useful learning capabilities than those currently available, and thereby provide a stronger interaction of learning between a designer and a computer. Controlled computational learning is proposed as a means whereby the Shared Learning concept can be realized. The viability of this new concept is explored using a system called PERSPECT, a preliminary numerical design tool aimed at supporting the effective utilization of numerical experiential knowledge in design. After a detailed discussion of PERSPECT's numerical design support, the paper presents the results of an evaluation that focuses on PERSPECT's implementation of controlled computational learning and its ability to support a designer's need to learn. The paper then discusses PERSPECT's potential as a tool for supporting the Shared Learning concept by explaining how a designer and PERSPECT can jointly learn. There is still much work to be done before the full potential of Shared Learning can be realized, but the authors believe that the concept of Shared Learning may hold the key to truly empowering learning in Int.CAD.

    From Barcelona Process to Neighbourhood Policy: Assessments and Open Issues. CEPS Working Documents No. 220, 1 March 2005

    The Barcelona process has so far been a valuable systemic/institutional advance in Euro-Med relations and a confidence-building measure on a large scale, but it has not been a sufficient driving force to create momentum for economic, political and social advance in the partner states. It is therefore quite plausible that the EU should seek some new advance – through the European Neighbourhood Policy (ENP) – to build on the positive features of Barcelona and introduce some new driving force. The Action Plans currently being adopted seek to make the often vague intentions of the Association Agreements of the Barcelona process more operational by linking them either to domestic policy programmes of the partner state or to EU policy norms and standards as an external anchor. In this paper we first crystallise alternative approaches for the ENP to become a real driving force under the headings of ‘conditionality’ and ‘socialisation’. The conditionality concept means that the EU sets out i) what incentives it offers, and ii) the conditions on which these incentives would be delivered. The socialisation concept relies essentially on a learning process arising from the extensive interaction between actors in the partner states and the EU, which induces the partner states to engage in policy reforms that are to a degree modelled on EU norms or derive some inspiration from them. For the EU to become a driving force for reform in the region, it must also not face an uphill struggle against negative tendencies, for example the widening and deepening of radical Islam – and here coherence between the approaches of the EU and the US is paramount.

    Should There Be a Specialized Ethics Code for Death-Penalty Defense Lawyers

    State ethics codes based on the ABA Model Rules of Professional Conduct address lawyers' work in advocacy but do not target lawyers' work in particular areas of advocacy or in other specialized practice areas. For more than forty years, critics have asserted that existing ethics rules are too superficial and should be supplemented by specialized rules. This article examines the utility of specialized ethics rules for one particular sub-specialty: death-penalty defense practice. After identifying arguments for and against a specialized ethics code for death-penalty cases, the article analyzes the arguments in the context of a particular ethics dilemma that some death-penalty defense lawyers have encountered, namely whether to pursue post-conviction relief on behalf of an ambivalent or unexpressive mentally-ill death-row inmate. The article finds persuasive reasons for courts to develop specialized rules that would provide death-penalty defense lawyers with more clarity in how to address this and other ethics dilemmas. Recognizing that courts will likely remain indifferent to the idea of developing specialized ethics rules, however, the article concludes by identifying other ways for courts to mitigate the uncertainties that specialized rules would address.

    A statistical inference method for stochastic reachability analysis

    The main contribution of this paper is the characterization of the reachability problem associated with stochastic hybrid systems in terms of imprecise probabilities. This provides the connection between the reachability problem and Bayesian statistics. Using generalised Bayesian statistical inference, a new concept of conditional reach set probabilities is defined. Possible algorithms to compute these reach set probabilities are then derived.
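    As a much simpler illustration of what a reach set probability is (my sketch; the paper's algorithms rest on generalised Bayesian inference over imprecise probabilities, not on plain Monte Carlo), one can estimate the probability that a discrete-time stochastic system enters a target set within a finite horizon:

        import random

        def reach_probability(step, x0, target, horizon, n_paths=10_000, seed=0):
            """Monte Carlo estimate of P(trajectory enters the target set within
            `horizon` steps). `step` advances the stochastic dynamics one step;
            `target` is a predicate on the state. Illustration only."""
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_paths):
                x = x0
                for _ in range(horizon):
                    x = step(x, rng)
                    if target(x):
                        hits += 1
                        break
            return hits / n_paths

        # Toy example: scalar Gaussian random walk; reach set = states above 1.0.
        p = reach_probability(lambda x, rng: x + rng.gauss(0.0, 0.1),
                              x0=0.0, target=lambda x: x > 1.0, horizon=100)
        print(p)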

    Originalism: A Critical Introduction

    The theory of originalism is now well into its second wave. Originalism first came to prominence in the 1970s and 1980s as conservative critics reacted to the decisions of the Warren Court, and the Reagan Administration embraced originalism as a check on judicial activism. A second wave of originalism has emerged since the late 1990s, responding to earlier criticisms and reconsidering earlier assumptions and conclusions. This Article assesses where originalist theory currently stands. It outlines the points of agreement and disagreement within the recent originalist literature and highlights the primary areas of continuing disagreement between originalists and their critics.

    Fuzzy Logic and Its Uses in Finance: A Systematic Review Exploring Its Potential to Deal with Banking Crises

    The major success of fuzzy logic in the field of remote control opened the door to its application in many other fields, including finance. However, there has been no updated, comprehensive literature review of the uses of fuzzy logic in finance. For that reason, this study critically examines fuzzy logic as an effective, useful method for financial research and, in particular, for the management of banking crises. The data sources were Web of Science and Scopus; records were assessed against pre-established criteria and the information was organized along two main axes: financial markets and corporate finance. A major finding of this analysis is that fuzzy logic has not yet been used to address banking crises, or as an alternative means of ensuring the resolvability of banks while minimizing the impact on the real economy. This article is therefore relevant for supervisory and regulatory bodies, as well as for banks and academic researchers, since it opens the door to several new research axes on banking crisis analysis using artificial intelligence techniques.
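    For readers unfamiliar with the basic machinery this review surveys, here is a minimal, hypothetical sketch of a fuzzy membership function, the building block of fuzzy-logic systems; the capital-ratio framing and the breakpoints are mine, not the review's:

        def triangular(x: float, a: float, b: float, c: float) -> float:
            """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at b."""
            if x <= a or x >= c:
                return 0.0
            if x <= b:
                return (x - a) / (b - a)
            return (c - x) / (c - b)

        # Illustrative only: degree to which a bank's capital ratio (%) is
        # "adequate"; the breakpoints 4, 8, 12 are hypothetical.
        for ratio in (3.0, 6.0, 8.0, 10.0):
            print(ratio, triangular(ratio, 4.0, 8.0, 12.0))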

    Bayesian learning of models for estimating uncertainty in alert systems: application to air traffic conflict avoidance

    Alert systems detect critical events that can happen in the short term. Uncertainties in the data and in the models used for detection cause alert errors. In the case of air traffic control systems such as Short-Term Conflict Alert (STCA), uncertainty increases errors in alerts of separation loss. Statistical methods that rest on analytical assumptions can provide biased estimates of uncertainties. More accurate analysis can be achieved with Bayesian Model Averaging, which provides estimates of the posterior probability distribution of a prediction. We propose a new approach to estimating the prediction uncertainty, based on the observation that uncertainty can be quantified by the variance of predicted outcomes. In our approach, predictions for which the variances of posterior probabilities are above a given threshold are deemed uncertain. To verify our approach we calculate a probability of alert based on extrapolation of the closest point of approach. Using Heathrow airport flight data, we found that alerts are often generated under differing conditions, variations in which lead to alert detection errors. Achieving 82.1% accuracy in modelling the STCA system, a necessary condition for evaluating the uncertainty in prediction, we found that the proposed method is capable of reducing the uncertain component. Comparison with a bootstrap aggregation method demonstrated a significant reduction of uncertainty in predictions. Realistic estimates of uncertainties will open up new approaches to improving the performance of alert systems.
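    A minimal sketch of the variance-threshold rule described in this abstract (the small ensemble standing in for Bayesian Model Averaging, the threshold value and all names are my assumptions, not the paper's implementation):

        import statistics

        def flag_uncertain(ensemble_probs, threshold):
            """Given each case's posterior alert probabilities from several
            models (a crude stand-in for Bayesian Model Averaging), deem a
            prediction uncertain when the across-model variance exceeds
            `threshold`. Returns (alert?, uncertain?) per case."""
            results = []
            for probs in ensemble_probs:
                mean_p = statistics.fmean(probs)
                var_p = statistics.pvariance(probs)
                results.append((mean_p >= 0.5, var_p > threshold))
            return results

        # Three cases scored by four models: a confident alert, a confident
        # non-alert, and a disagreement that gets flagged as uncertain.
        cases = [[0.92, 0.95, 0.90, 0.93],
                 [0.05, 0.08, 0.04, 0.06],
                 [0.20, 0.90, 0.40, 0.70]]
        print(flag_uncertain(cases, threshold=0.02))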