
    Algorithmic Identification of Probabilities

    The problem is to identify a probability associated with a set of natural numbers, given an infinite data sequence of elements from the set. If the given sequence is drawn i.i.d. and the probability mass function involved (the target) belongs to a computably enumerable (c.e.) or co-computably enumerable (co-c.e.) set of computable probability mass functions, then there is an algorithm that almost surely identifies the target in the limit. The technical tool is the strong law of large numbers. If the set is finite and the elements of the sequence are dependent, while the sequence is typical in the sense of Martin-Löf for at least one measure belonging to a c.e. or co-c.e. set of computable measures, then there is an algorithm to identify in the limit a computable measure for which the sequence is typical (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We give the algorithms and consider the associated predictions.
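
    As a rough illustration of the first result, the following Python sketch implements the frequency-matching idea: output the index of the first enumerated probability mass function that is consistent with the empirical frequencies up to a tolerance shrinking with the sample size. The finite hypothesis list and the tolerance schedule are illustrative assumptions standing in for the paper's c.e. enumeration, not its actual construction.

        from collections import Counter

        # Hypothetical finite hypothesis class standing in for a c.e. set
        # of computable pmfs over the alphabet {0, 1, 2}.
        HYPOTHESES = [
            {0: 0.50, 1: 0.25, 2: 0.25},
            {0: 1/3, 1: 1/3, 2: 1/3},
            {0: 1/6, 1: 1/3, 2: 1/2},
        ]

        def guess_after(prefix):
            """Index of the first pmf matching the empirical frequencies
            of `prefix` within a tolerance shrinking with n."""
            n = len(prefix)
            tol = n ** -0.25   # shrinks more slowly than the SLLN fluctuation ~ n**-0.5
            freq = Counter(prefix)
            for k, pmf in enumerate(HYPOTHESES):
                if all(abs(freq[x] / n - p) <= tol for x, p in pmf.items()):
                    return k
            return None        # no hypothesis consistent yet

        # By the strong law of large numbers the empirical frequencies converge
        # almost surely to the target pmf, so the guesses eventually stabilize
        # on (an index of) the target whenever the target is in the class.
        sample = [0, 1, 0, 2, 0, 0, 1, 2, 0, 1, 0, 0]
        print([guess_after(sample[:n]) for n in range(4, len(sample) + 1)])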

    Identification of probabilities

    Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, that it infers a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
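
    For the dependent case, the notion of typicality at work can be written down explicitly. By the Levin-Schnorr characterization (a standard fact of algorithmic randomness, stated here for orientation rather than quoted from the paper), an infinite sequence x is Martin-Löf typical for a computable measure \mu exactly when its prefixes are incompressible relative to \mu:

        \exists c \;\forall n:\quad K(x_{1:n}) \;\ge\; -\log_2 \mu(x_{1:n}) - c,

    where K denotes prefix Kolmogorov complexity and \mu(x_{1:n}) is the measure of the cylinder set of the first n symbols.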

    Quantum theory from four of Hardy's axioms

    In a recent paper [e-print quant-ph/0101012], Hardy has given a derivation of "quantum theory from five reasonable axioms." Here we show that Hardy's first axiom, which identifies probability with limiting frequency in an ensemble, is not necessary for his derivation. By reformulating Hardy's assumptions, and modifying a part of his proof, in terms of Bayesian probabilities, we show that his work can easily be reconciled with a Bayesian interpretation of quantum probability.

    Wave functions for arbitrary operator ordering in the de Sitter minisuperspace approximation

    We derive exact series solutions of the Wheeler-DeWitt equation for a spatially closed Friedmann-Robertson-Walker universe with a cosmological constant, for arbitrary operator ordering of the scale factor of the universe. The resulting wave functions are those relevant to the approximation that has been widely used in two-dimensional minisuperspace models with an inflationary scalar field for the purpose of predicting the period of inflation resulting from competing boundary condition proposals for the wave function of the universe. The problem that Vilenkin's tunneling wave function is not normalizable for general operator orderings is shown to persist for other values of the spatial curvature and when additional matter degrees of freedom, such as radiation, are included.
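
    For orientation, in a common convention the equation being solved reads (the factor-ordering parameter p and the units here are assumptions of this sketch and may differ from the paper's conventions):

        \left[ \frac{1}{a^{p}} \frac{d}{da}\!\left( a^{p} \frac{d}{da} \right) - a^{2} + \frac{\Lambda}{3}\, a^{4} \right] \psi(a) = 0,

    where a is the scale factor, \Lambda the cosmological constant, and p labels the operator ordering of a and d/da.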

    Process reconstruction from incomplete and/or inconsistent data

    We analyze how the action of a qubit channel (map) can be estimated from measured data that are incomplete or even inconsistent, that is, situations in which the measurement statistics are insufficient to determine consistent probability distributions. As a consequence, the estimation (reconstruction) of the channel either fails completely or yields an unphysical channel (i.e., the corresponding map is not completely positive). We present a regularization procedure that allows us to derive physically reasonable estimates (approximations) of quantum channels. We illustrate the procedure on specific examples and show that it can also be used to derive optimal approximations of operations forbidden by the laws of quantum mechanics (e.g., the universal NOT gate).
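
    The spirit of such a regularization can be conveyed by a toy projection in Python. This is a generic recipe, not the paper's specific procedure: hermitize the estimated Choi matrix of the channel, clip negative eigenvalues to restore complete positivity, and renormalize the trace; a faithful treatment would also enforce trace preservation as a constraint.

        import numpy as np

        def regularize_choi(choi_est):
            """Project a (possibly unphysical) estimated Choi matrix onto
            the positive-semidefinite matrices with the correct trace.
            Enforces complete positivity and the overall trace only."""
            d = int(round(np.sqrt(choi_est.shape[0])))   # qubit channel: d = 2
            herm = 0.5 * (choi_est + choi_est.conj().T)  # hermitize the estimate
            vals, vecs = np.linalg.eigh(herm)
            vals = np.clip(vals, 0.0, None)              # clip negative eigenvalues
            cp = (vecs * vals) @ vecs.conj().T
            return cp * (d / np.trace(cp).real)          # restore trace = d

        # Toy usage: a slightly unphysical estimate of the identity channel.
        choi = np.diag([1.05, 0.0, -0.05, 1.0]).astype(complex)
        choi[0, 3] = choi[3, 0] = 1.0
        print(np.linalg.eigvalsh(regularize_choi(choi)))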

    Multicriteria analysis under uncertainty with IANUS - method and empirical results

    IANUS is a method for aiding public decision-making that supports efforts towards sustainable development and has a wide range of applications. IANUS stands for Integrated Assessment of Decisions uNder Uncertainty for Sustainable Development. This paper introduces the main features of IANUS and illustrates the method using the results of a case study in the Torgau region (eastern Germany). IANUS structures the decision process into four steps: scenario derivation, criteria selection, modeling, and evaluation. Its overall aim is to extract the information needed for a sound, responsible decision in a clear, transparent manner. The method is designed for use in conflict situations where environmental and socioeconomic effects need to be considered, so that an interdisciplinary approach is required. Special emphasis is placed on a broad perception and consideration of uncertainty. Three types of uncertainty are explicitly taken into account by IANUS: development uncertainty (uncertainty about the social, economic and other developments that affect the consequences of a decision), model uncertainty (uncertainty associated with predicting the effects of decisions), and weight uncertainty (uncertainty about the appropriate weighting of the criteria). The backbone of IANUS is a multicriteria method with the ability to process uncertain information. In the case study the multicriteria method PROMETHEE is used. Since PROMETHEE in its basic versions is not able to process uncertain information, an extension of the method is developed here and described in detail.
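
    For readers unfamiliar with the underlying multicriteria method, the deterministic core of PROMETHEE II can be sketched in a few lines of Python; the uncertainty extension developed in the paper is not reproduced here, and the preference function and data below are illustrative assumptions.

        import numpy as np

        def promethee_ii(scores, weights, pref=lambda d: 1.0 if d > 0 else 0.0):
            """Basic PROMETHEE II net outranking flows.
            scores:  (n_alternatives, n_criteria) array, higher is better
            weights: criterion weights summing to one
            pref:    preference function on pairwise differences
                     (here the 'usual criterion'; other shapes are common)."""
            n, m = scores.shape
            pi = np.zeros((n, n))                     # aggregated preference indices
            for j in range(m):
                diff = scores[:, j][:, None] - scores[:, j][None, :]
                pi += weights[j] * np.vectorize(pref)(diff)
            phi_plus = pi.sum(axis=1) / (n - 1)       # positive outranking flow
            phi_minus = pi.sum(axis=0) / (n - 1)      # negative outranking flow
            return phi_plus - phi_minus               # net flow; rank in descending order

        # Toy example: three alternatives, two criteria weighted 0.6 / 0.4.
        scores = np.array([[3.0, 5.0], [4.0, 4.0], [5.0, 2.0]])
        print(promethee_ii(scores, np.array([0.6, 0.4])))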