
    Disagreement, equal weight and commutativity

    How should we respond to cases of disagreement where two epistemic agents have the same evidence but come to different conclusions? Adam Elga has provided a Bayesian framework for addressing this question. In this paper, I shall highlight two unfortunate consequences of this framework, which Elga does not anticipate. Both problems derive from a failure of commutativity between application of the equal weight view and updating in the light of other evidence.
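
    The failure of commutativity is easy to exhibit numerically. Below is a minimal sketch (not from the paper; the credences and likelihoods are made up for illustration) showing that splitting the difference between two agents' credences and then conditioning on further evidence generally gives a different result from conditioning first and then splitting the difference:

```python
def bayes_update(prior, like_h, like_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Illustrative numbers only: two agents' prior credences in H,
# and the likelihoods of some further piece of evidence E.
p1, p2 = 0.2, 0.8
like_h, like_not_h = 0.9, 0.3

# Order 1: apply the equal weight view first, then update on E.
split_then_update = bayes_update((p1 + p2) / 2, like_h, like_not_h)

# Order 2: update each agent on E first, then apply the equal weight view.
update_then_split = (bayes_update(p1, like_h, like_not_h)
                     + bayes_update(p2, like_h, like_not_h)) / 2

print(split_then_update)  # 0.75
print(update_then_split)  # ~0.676 -- the two orders disagree
```

    Because Bayesian conditioning is nonlinear in the prior, averaging and updating commute only in degenerate cases; this is the structural fact behind the problems the paper raises.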

    Learning to Select Pre-Trained Deep Representations with Bayesian Evidence Framework

    We propose a Bayesian evidence framework to facilitate transfer learning from pre-trained deep convolutional neural networks (CNNs). Our framework is formulated on top of a least squares SVM (LS-SVM) classifier, which is simple and fast in both training and testing, and achieves competitive performance in practice. The regularization parameters of the LS-SVM are estimated automatically, without grid search or cross-validation, by maximizing the evidence, which is a useful measure for selecting the best-performing CNN out of multiple candidates for transfer learning; the evidence is optimized efficiently by employing Aitken's delta-squared process, which accelerates the convergence of the fixed-point update. The proposed Bayesian evidence framework also provides a good solution for identifying the best ensemble of heterogeneous CNNs through a greedy algorithm. Our Bayesian evidence framework for transfer learning is tested on 12 visual recognition datasets and consistently demonstrates state-of-the-art performance in terms of prediction accuracy and modeling efficiency. Comment: Appearing in CVPR 2016 (oral presentation).
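
    Aitken's delta-squared process mentioned above is a general-purpose accelerator for fixed-point iterations. The sketch below (a simplified, generic illustration, not the paper's actual hyperparameter update) shows the mechanism applied to an arbitrary fixed-point map:

```python
import math

def aitken_fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Solve x = g(x), accelerating the plain iteration with
    Aitken's delta-squared extrapolation (Steffensen-style loop)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2 * x1 + x
        if abs(denom) < 1e-15:              # iteration has converged
            return x2
        x_acc = x - (x1 - x) ** 2 / denom   # Aitken extrapolation
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x

# Example: x = cos(x) converges slowly under plain iteration;
# the accelerated loop reaches the fixed point in a few steps.
print(aitken_fixed_point(math.cos, 1.0))    # ~0.7390851332
```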

    A practical Bayesian framework for backpropagation networks

    A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
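
    Item (4), the effective number of well-determined parameters, has a simple closed form in this framework: gamma = sum_i lambda_i / (lambda_i + alpha), where the lambda_i are eigenvalues of the data-misfit Hessian and alpha is the weight-decay coefficient. A minimal sketch with illustrative eigenvalues (not from the paper):

```python
import numpy as np

def effective_num_params(hessian_eigs, alpha):
    """gamma = sum_i lam_i / (lam_i + alpha): the number of parameters
    the data determine well, relative to the weight-decay term alpha."""
    lam = np.asarray(hessian_eigs, dtype=float)
    return float(np.sum(lam / (lam + alpha)))

# Two directions strongly constrained by the data, two dominated
# by the regularizer: roughly two well-determined parameters.
print(effective_num_params([100.0, 50.0, 0.1, 0.01], alpha=1.0))  # ~2.07
```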

    Structural change in the forward discount: a Bayesian analysis of forward rate unbiasedness hypothesis

    Using Bayesian methods, we reexamine the empirical evidence from Sakoulis et al. (2010) regarding structural breaks in the forward discount for G-7 countries. Our Bayesian framework allows the number and pattern of structural changes in level and variance to be endogenously determined. We find different locations of breakpoints for each currency; mostly, fewer breaks are present. We find little evidence of moving toward stationarity in the forward discount after accounting for structural change. Our findings suggest that the existence of structural change is not a viable justification for the forward discount anomaly.

    Keywords: Bayesian method, structural change, forward discount anomaly, Gibbs sampling
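
    The mechanics of letting the data determine breaks can be illustrated with a toy change-point comparison (a deliberately simplified sketch, not the paper's Gibbs sampler: known variance, a single candidate break in the level only, and maximization over the break point where a full treatment would put a prior on break configurations and average over them):

```python
import numpy as np

def log_marglik(y, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of y_i ~ N(mu, sigma2) with mu ~ N(0, tau2),
    integrating the level mu out analytically."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    a = n / sigma2 + 1.0 / tau2        # posterior precision of mu
    b = y.sum() / sigma2
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.log(tau2 * a)
            - 0.5 * y @ y / sigma2
            + 0.5 * b * b / a)

def best_single_break(y):
    """Compare 'no break' against 'one break in level' at each split."""
    no_break = log_marglik(y)
    splits = [(k, log_marglik(y[:k]) + log_marglik(y[k:]))
              for k in range(5, len(y) - 5)]
    k, one_break = max(splits, key=lambda s: s[1])
    return k, one_break - no_break     # approximate log Bayes factor

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1, 60), rng.normal(1.5, 1, 60)])
print(best_single_break(y))            # break located near index 60
```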

    Bayesian interpolation

    Although Bayesian analysis has been in use since Laplace, the Bayesian method of model-comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model-comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
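
    The re-estimation of a regularizing constant from its posterior can be sketched for a linear-in-parameters model (an illustrative ridge-regression analogue with an assumed known noise level, not the paper's interpolation models): the evidence-maximizing update is alpha <- gamma / ||w||^2, with gamma the effective number of parameters.

```python
import numpy as np

def evidence_alpha(Phi, y, beta=25.0, alpha=1.0, iters=50):
    """Set the regularizer alpha by evidence maximization for
    y = Phi @ w + Gaussian noise of known precision beta."""
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)    # data-term eigenvalues
    for _ in range(iters):
        A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
        w = beta * np.linalg.solve(A, Phi.T @ y)    # posterior mean weights
        gamma = np.sum(lam / (lam + alpha))         # well-determined params
        alpha = gamma / (w @ w)                     # re-estimate alpha
    return alpha, w

# Noisy samples of a smooth curve, fitted with a polynomial basis.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noise std 0.2
Phi = np.vander(x, 6, increasing=True)
alpha, w = evidence_alpha(Phi, y)       # beta = 1/0.2**2 = 25 assumed known
print(alpha)
```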

    On Probability and Cosmology: Inference Beyond Data?

    Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This prompts questions about the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state, this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.

    Bayesian model selection and isocurvature perturbations

    Present cosmological data are well explained assuming purely adiabatic perturbations, but an admixture of isocurvature perturbations is also permitted. We use a Bayesian framework to compare the performance of cosmological models including isocurvature modes with the purely adiabatic case; this framework automatically and consistently penalizes models which use more parameters to fit the data. We compute the Bayesian evidence for fits to a data set comprising WMAP and other microwave anisotropy data, the galaxy power spectrum from 2dFGRS and SDSS, and Type Ia supernovae luminosity distances. We find that Bayesian model selection favors the purely adiabatic models, but so far only at low significance.
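
    The automatic penalty on extra parameters can be seen in a one-dimensional toy comparison (an illustrative sketch, not the cosmological computation: a Gaussian likelihood whose extra parameter is given a deliberately wide flat prior):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.3, 1.0, 50)     # small true offset from zero

def loglike(mu):
    """Gaussian log likelihood with unit variance, vectorized over mu."""
    mu = np.atleast_1d(mu)[:, None]
    return (-0.5 * ((data - mu) ** 2).sum(axis=1)
            - 0.5 * len(data) * np.log(2 * np.pi))

def log_evidence(lo, hi, n=2001):
    """log integral of L(mu) under a flat prior on [lo, hi]."""
    mu = np.linspace(lo, hi, n)
    logp = loglike(mu) - np.log(hi - lo)
    m = logp.max()
    return m + np.log(np.trapz(np.exp(logp - m), mu))

log_z0 = loglike(0.0)[0]            # simpler model: mu fixed at 0
log_z1 = log_evidence(-5, 5)        # extra parameter, wide prior
print(log_z1 - log_z0)              # log Bayes factor: the free parameter's
                                    # better fit is weighed against the
                                    # Occam penalty of its wide prior
```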

    Testing and Estimating Persistence in Canadian Unemployment.

    A vital implication of unemployment persistence concerns the Bank of Canada's disinflation policies, since persistence adversely influences unemployment and considerably lengthens recessions. This paper tests for persistence in Canadian sectoral unemployment, using the modified rescaled-range test. Our results show evidence of persistence in sectoral unemployment that translates into persistence in aggregate unemployment. To quantify this aggregate-level persistence, we estimate it within the framework of the Bayesian ARFIMA class of models. The results indicate that Canadian unemployment exhibits persistence in the short and intermediate run.

    Keywords: ARFIMA, fractional integration, Bayesian, unemployment persistence, Canada, rescaled-range statistic
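
    For reference, the classical rescaled-range statistic underlying the test is simple to compute (a simplified sketch: the modified version of Lo, used in tests of this kind, replaces the plain standard deviation with a long-run variance estimate robust to short-memory dependence):

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the cumulated, demeaned series
    over its standard deviation. Lo's modified test replaces the plain
    standard deviation with a long-run (autocovariance-weighted) estimate."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

rng = np.random.default_rng(3)
n = 1000
iid = rng.normal(size=n)                                 # no persistence
persistent = 0.05 * np.cumsum(rng.normal(size=n)) + rng.normal(size=n)

# Normalized by sqrt(n); persistence inflates the statistic.
print(rescaled_range(iid) / np.sqrt(n))
print(rescaled_range(persistent) / np.sqrt(n))
```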

    Probing dynamics of dark energy with latest observations

    We examine the validity of the ΛCDM model, and probe for the dynamics of dark energy, using the latest astronomical observations. Using the Om(z) diagnostic, we find that different kinds of observational data are in tension within the ΛCDM framework. We then allow for dynamics of dark energy and investigate the constraints on dark energy parameters. We find that for two different parametrisations of the equation-of-state parameter w, a combination of current data mildly favours an evolving w, although the significance is not sufficient for it to be supported by the Bayesian evidence. A forecast for the DESI survey shows that the dynamics of dark energy could be detected at the 7σ confidence level, and would be decisively supported by the Bayesian evidence, if the best-fit model of w derived from current data is the true model. Comment: 4.5 pages, 3 figures, 1 table; references added.
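
    The Om(z) diagnostic used above is Om(z) = (H²(z)/H₀² − 1) / ((1+z)³ − 1), which is constant, and equal to Ωm, exactly when the expansion history is ΛCDM; any drift with redshift signals dynamics in the dark energy. A minimal sketch (the CPL form w(z) = w0 + wa·z/(1+z) is assumed here as one common parametrisation; the abstract does not say which two the paper uses):

```python
import numpy as np

def E2(z, om=0.3, w0=-1.0, wa=0.0):
    """H^2/H0^2 for a flat universe with CPL dark energy,
    w(z) = w0 + wa * z / (1 + z)."""
    de = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return om * (1 + z) ** 3 + (1 - om) * de

def Om_diagnostic(z, **kw):
    """Om(z) = (E^2(z) - 1) / ((1+z)^3 - 1); flat in z iff LambdaCDM."""
    return (E2(z, **kw) - 1) / ((1 + z) ** 3 - 1)

z = np.array([0.5, 1.0, 2.0])
print(Om_diagnostic(z))                    # LambdaCDM: constant 0.3
print(Om_diagnostic(z, w0=-0.9, wa=0.3))   # evolving w: drifts with z
```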