
    Imprecise Probability and Chance

    Understanding probabilities as something other than point values (e.g., as intervals) has often been motivated by the need to find more realistic models for degree of belief, and in particular by the idea that degree of belief should have an objective basis in “statistical knowledge of the world.” I offer here another motivation, growing out of efforts to understand how chance evolves as a function of time. If the world is “chancy” in that there are non-trivial, objective, physical probabilities at the macro-level, then the chance of an event e that happens at a given time traces out a trajectory as time passes, reaching one when e occurs; whether the chance of e goes to one continuously or not is left open. Discontinuities in such chance trajectories can have surprising and troubling consequences for probabilistic analyses of causation and accounts of how events occur in time. This, coupled with the compelling evidence for quantum discontinuities in chance’s evolution, gives rise to a “(dis)continuity bind” with respect to chance probability trajectories. I argue that a viable option for circumventing the (dis)continuity bind is to understand the probabilities “imprecisely,” that is, as intervals rather than point values. I then develop and motivate an alternative kind of continuity appropriate for interval-valued chance probability trajectories.
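    A toy sketch may help fix ideas. Below, a point-valued chance trajectory that jumps at a time t0 is replaced by an interval-valued one whose values overlap across the jump. The widening rule and the overlap criterion are illustrative assumptions, not the paper's actual definition of interval continuity.

```python
# Illustrative only: the widening rule and overlap test are assumptions
# standing in for the paper's notion of continuity for interval-valued
# chance trajectories.

def point_chance(t, t0=0.5):
    """A point-valued chance trajectory with a quantum-style jump at t0."""
    return 0.2 if t < t0 else 0.9

def interval_chance(t, t0=0.5, eps=0.05):
    """An interval-valued trajectory that widens to span the jump."""
    if abs(t - t0) < eps:
        return (0.2, 0.9)
    return (0.2, 0.2) if t < t0 else (0.9, 0.9)

def overlaps(a, b):
    """Two closed intervals overlap iff neither lies wholly above the other."""
    return a[0] <= b[1] and b[0] <= a[1]

ts = [0.40, 0.48, 0.52, 0.60]
# The point trajectory jumps (0.2 -> 0.9); the interval trajectory has
# overlapping values across the jump, one way to avoid a "gap" at t0.
print([point_chance(t) for t in ts])
print(all(overlaps(interval_chance(s), interval_chance(t))
          for s, t in zip(ts, ts[1:])))
```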

    Robust Classification for Imprecise Environments

    In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems.
    Comment: 24 pages, 12 figures. To be published in Machine Learning Journal. For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH
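    The ROCCH construction the abstract refers to can be sketched in a few lines: plot each classifier as a (false positive rate, true positive rate) point and keep only the upper convex hull; every point below the hull is dominated under all class distributions and cost assignments. A minimal sketch with illustrative data (not from the paper):

```python
# Minimal ROC convex hull sketch. The trivial classifiers (0,0) and
# (1,1) are always included as hull endpoints.

def rocch(points):
    """Upper convex hull of (fpr, tpr) points, from (0, 0) to (1, 1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop hull[-1] while it lies on or below the chord from
        # hull[-2] to p, i.e. while it would make the hull concave.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Classifier at (0.3, 0.5) is dominated by the hull through (0.1, 0.6)
# and (0.5, 0.9), so it is suboptimal for every target condition.
print(rocch([(0.1, 0.6), (0.3, 0.5), (0.5, 0.9)]))
```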

    Causation, Probability, and the Continuity Bind

    Analyses of singular (token-level) causation often make use of the idea that a cause increases the probability of its effect. Of particular salience in such accounts are the values of the probability function of the effect, conditional on the presence and absence of the putative cause, analysed around the times of the events in question: causes are characterized by the effect’s probability function being greater when conditionalized upon them. Put this way, it becomes clearer that the ‘behaviour’ (continuity) of probability functions in small intervals about the times in question ought to be of concern. In this article, I make an extended case that causal theorists employing the ‘probability raising’ idea should pay attention to the continuity question. Specifically, if the probability functions are ‘jumping about’ in ways typical of discontinuous functions, then the stability of the relevant probability increase is called into question. The rub, however, is that sweeping requirements for either continuity or discontinuity are problematic and, as I argue, this constitutes a ‘continuity bind’. Hence more subtle considerations and constraints are needed, two of which I consider: (1) utilizing discontinuous first derivatives of continuous probability functions, and (2) abandoning point probability for imprecise (interval) probability.
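    In symbols, the condition at issue can be put as follows; the notation is assumed for illustration and may differ from the paper's own formalism:

```latex
% C raises the probability of E around the time t_C of the putative cause:
P_{t}(E \mid C) \;>\; P_{t}(E \mid \neg C)
\quad \text{for all } t \in (t_C - \epsilon,\ t_C + \epsilon).
% The continuity worry: if t \mapsto P_t(E \mid C) is discontinuous near
% t_C, this inequality may fail to hold stably on any such interval.
```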

    Bayesian Learning for a Class of Priors with Prescribed Marginals

    We present Bayesian updating of an imprecise probability measure, represented by a class of precise multidimensional probability measures. The choice and analysis of our class are motivated by expert interviews that we conducted with modelers in the context of climatic change. From the interviews we deduce that, generically, experts hold a much more informed opinion on the marginals of uncertain parameters than on their correlations. Accordingly, we specify the class by prescribing precise measures for the marginals while leaving the correlation structure subject to complete ignorance. For the sake of transparency, our discussion focuses on the tutorial example of a linear two-dimensional Gaussian model. We operationalize Bayesian learning for that class by various updating rules, starting with (a modified version of) the generalized Bayes’ rule and the maximum likelihood update rule (after Gilboa and Schmeidler). Over a large range of potential observations, the generalized Bayes’ rule would provide non-informative results. We restrict this counter-intuitive and unnecessary growth of uncertainty by two means, the discussion of which applies to any kind of imprecise model, not only to our class. First, we find our class of priors too inclusive and hence require certain additional properties of prior measures in terms of smoothness of probability density functions. Second, we argue that both updating rules are unsatisfying: the generalized Bayes’ rule is too conservative, i.e., too inclusive, while the maximum likelihood rule is too exclusive. Instead, we introduce two new ways of Bayesian updating of imprecise probabilities: a “weighted maximum likelihood method” and a “semi-classical method.” The former bases Bayesian updating on the whole set of priors, but with weighted influence of its members. By referring to the whole set, the weighted maximum likelihood method allows for more robust inferences than the standard maximum likelihood method and hence is easier to justify than the latter. Furthermore, the semi-classical method is more objective than the weighted maximum likelihood method, as it does not require the subjective definition of a weighting function. Both new methods yield much more informative results than the generalized Bayes’ rule, which we demonstrate for the example of a stylized insurance model.
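    The mechanics of the generalized Bayes' rule on a class of this kind can be sketched briefly: bivariate normal priors with fixed standard normal marginals and a free correlation are updated prior-by-prior, and the answer is the interval spanned by the resulting posteriors. The model and numbers below are illustrative assumptions, not the paper's exact setup.

```python
# Generalized Bayes' rule sketch: sweep the unknown correlation rho,
# apply ordinary conjugate Bayesian updating for each prior in the
# class, and report the interval of posterior answers.

import numpy as np

sigma2 = 1.0   # assumed known observation-noise variance
y = 2.0        # one noisy observation of s = theta1 + theta2

post_means = []
for rho in np.linspace(-0.99, 0.99, 199):
    tau2 = 2.0 + 2.0 * rho                 # prior variance of s under rho
    post_means.append(tau2 / (tau2 + sigma2) * y)   # conjugate update

# The imprecise posterior for E[s | y] is an interval, not a point.
print(f"E[s|y] in [{min(post_means):.3f}, {max(post_means):.3f}]")
```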

    A robust Bayesian analysis of the impact of policy decisions on crop rotations.

    We analyse the impact of a policy decision on crop rotations, using the imprecise land use model that was developed by the authors in earlier work. A specific challenge in crop rotation models is that farmers’ crop choices are driven by both policy changes and external non-stationary factors, such as rainfall, temperature, and agricultural input and output prices. Such dynamics can be modelled by a non-stationary stochastic process in which crop transition probabilities are multinomial logistic functions of these external factors. We use a robust Bayesian approach to estimate the parameters of our model, and validate it by comparing the model response with a non-parametric estimate, as well as by cross-validation. Finally, we use the resulting predictions to solve a hypothetical yet realistic policy problem.
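    The transition model described here has a direct sketch: each row of the crop transition matrix is a multinomial logistic (softmax) function of the external covariates. The covariates and coefficients below are hypothetical placeholders, not the paper's estimates.

```python
# Multinomial logistic transition probabilities for one origin crop.

import numpy as np

def transition_probs(x, beta):
    """Transition probabilities from one origin crop to K target crops.

    x    : covariate vector, shape (d,), e.g. [1, rainfall, price]
    beta : coefficient matrix, shape (K, d); one row may be fixed at
           zero as the reference category
    """
    scores = beta @ x
    scores = scores - scores.max()   # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

# Intercept plus two covariates, three target crops (made-up values).
x = np.array([1.0, 0.4, -0.2])
beta = np.array([[0.0, 0.0, 0.0],    # reference crop
                 [0.5, 1.2, -0.3],
                 [-0.2, 0.1, 0.8]])
print(transition_probs(x, beta))     # nonnegative, sums to 1
```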

    An Empirical Test of the Reder Hypothesis

    A firm that faces an insufficient supply of labor can increase the wage offer to attract more applicants, reduce the hiring standard to enlarge the pool of potential employees, or do both. This simultaneous adjustment of wages and hiring standards in response to changes in market conditions was emphasized in a classical contribution by Reder and implies that wage reactions to employment changes can be expected to be more pronounced for low-wage workers than for high-wage workers. This is the ‘Reder Hypothesis’. The present contribution sets out to test this hypothesis using German employment register data and a censored panel quantile regression approach. Our findings support the Reder Hypothesis, suggesting that market clearing in labor markets is achieved by a combination of wage adjustments and changes in hiring standards.
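    A toy version of the test, using simulated data and plain quantile regression rather than the paper's censored panel estimator, shows the pattern the Reder Hypothesis predicts: the wage response to employment changes shrinks as one moves up the conditional wage distribution.

```python
# Simulated illustration only: plain quantile regression on made-up
# data, not the censored panel quantile estimator used in the paper.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
emp_change = rng.uniform(0.0, 1.5, size=n)  # positive employment growth
u = rng.uniform(size=n)                     # position in the wage distribution
# Built-in Reder pattern: the response (0.8 - 0.6*u) falls with u.
log_wage = 2.0 + u + (0.8 - 0.6 * u) * emp_change

X = sm.add_constant(emp_change)
for q in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(log_wage, X).fit(q=q)
    print(f"q={q}: wage response = {fit.params[1]:.2f}")  # falls as q rises
```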

    Bayesian Learning in Financial Markets: Testing for the Relevance of Information Precision in Price Discovery

    An important claim of Bayesian learning, and a standard assumption in price discovery models, is that the strength of the price impact of unanticipated information depends on the precision of the news. In this paper, we test this assumption by analyzing intra-day price responses of CBOT T-bond futures to U.S. employment announcements. By employing additional detailed information besides the widely used headline figures, we extract release-specific precision measures which allow us to test for the claim of Bayesian updating. We find that the price impact of more precise information is significantly stronger. The results remain stable even after controlling for an asymmetric price response to ‘good’ and ‘bad’ news.
    Keywords: Bayesian learning; information precision; macroeconomic announcements; asymmetric price response; financial markets; high-frequency data
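    The claim being tested reduces to textbook normal-normal updating: the posterior mean puts more weight on more precise signals, so the price reaction to a given surprise should scale with the precision of the news. A minimal sketch with made-up numbers:

```python
# Normal-normal Bayesian updating: the price (posterior mean) moves by
# the surprise times a precision weight, so more precise news moves the
# price more. All numbers are illustrative.

prior_mean, prior_prec = 100.0, 1.0       # precision = 1 / variance

def price_after_news(signal, signal_prec):
    """Posterior mean after observing a signal of given precision."""
    w = signal_prec / (prior_prec + signal_prec)   # weight on the news
    return prior_mean + w * (signal - prior_mean)

surprise = 104.0                          # announcement above expectations
print(price_after_news(surprise, 0.5))    # imprecise news: ~101.33
print(price_after_news(surprise, 4.0))    # precise news:   ~103.20
```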