
    Effective Theories for Circuits and Automata

    Abstracting an effective theory from a complicated process is central to the study of complexity. Even when the underlying mechanisms are understood, or at least measurable, the presence of dissipation and irreversibility in biological, computational, and social systems makes the problem harder. Here we demonstrate the construction of effective theories in the presence of both irreversibility and noise, in a dynamical model with underlying feedback. We use the Krohn-Rhodes theorem to show how the composition of underlying mechanisms can lead to innovations in the emergent effective theory. We show how dissipation and irreversibility fundamentally limit the lifetimes of these emergent structures, even though, on short timescales, their group properties may be enriched compared to those of their noiseless counterparts.
    Comment: 11 pages, 9 figures
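    A minimal illustrative sketch of the central claim, in Python (this is not the authors' model; the automaton, noise rate, and lifetime statistic are all invented for the example): a three-state automaton realizing the cyclic group Z_3 is driven with a small per-step probability of an irreversible noise event, and we measure how long the emergent group identity g^3 = e survives.

```python
import random

# Toy illustration (not the paper's model): a 3-state automaton whose
# transition s -> (s + 1) mod 3 realizes the cyclic group Z_3.  With
# probability EPS per step the state is randomized, mimicking
# dissipation/irreversibility.
EPS = 0.02
STATES = 3

def step(s, rng):
    """One noisy application of the Z_3 generator."""
    if rng.random() < EPS:
        return rng.randrange(STATES)   # irreversible noise event
    return (s + 1) % STATES            # deterministic group action

def survival_time(rng, max_rounds=10_000):
    """Rounds of g^3 = e (three steps returning to the start) before a
    violation of the emergent group identity is first observed."""
    s = 0
    for r in range(max_rounds):
        for _ in range(3):
            s = step(s, rng)
        if s != 0:
            return r
    return max_rounds

rng = random.Random(0)
times = [survival_time(rng) for _ in range(2000)]
print("mean lifetime (rounds):", sum(times) / len(times))
# The mean lifetime shrinks as EPS grows: noise sets a finite horizon for
# the emergent group structure, echoing the abstract's claim.
```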

    Probabilistic Methodology and Techniques for Artefact Conception and Development

    The purpose of this paper is to survey the state of the art in probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide, and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies and techniques. The review is organized in three parts: first, probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, inference techniques; and finally, methodologies for learning, model acquisition, and model comparison. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
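    As a concrete, hedged illustration of the kind of computation such a formalism organizes, the Python sketch below fuses two noisy sensors via Bayes' rule under a conditional-independence assumption; the state space, sensor names, and probability tables are all made up for the example.

```python
# Illustrative probabilistic sensor fusion in the spirit of Bayesian
# Programming: P(state | z1, z2) is proportional to
# P(state) * P(z1 | state) * P(z2 | state).
# Every distribution below is invented for the example.

states = ["free", "obstacle"]
prior = {"free": 0.7, "obstacle": 0.3}

# Likelihoods of each reading given the state (sensors assumed
# conditionally independent given the state, i.e. naive fusion).
p_sonar = {"free": {"near": 0.1, "far": 0.9},
           "obstacle": {"near": 0.8, "far": 0.2}}
p_lidar = {"free": {"near": 0.05, "far": 0.95},
           "obstacle": {"near": 0.9, "far": 0.1}}

def fuse(sonar_reading, lidar_reading):
    """Posterior over the hidden state from two noisy, incomplete sensors."""
    unnorm = {s: prior[s] * p_sonar[s][sonar_reading] * p_lidar[s][lidar_reading]
              for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(fuse("near", "near"))   # strongly favours "obstacle"
print(fuse("near", "far"))    # conflicting evidence -> uncertain posterior
```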

    Doubly Optimized Calibrated Support Vector Machine (DOC-SVM): an algorithm for joint optimization of discrimination and calibration.

    Historically, probabilistic models for decision support have focused on discrimination, e.g., minimizing the ranking error of predicted outcomes. Unfortunately, these models ignore another important aspect, calibration, which indicates the magnitude of correctness of model predictions. Using discrimination and calibration simultaneously can be helpful for many clinical decisions. We investigated tradeoffs between these goals and developed a unified maximum-margin method to handle them jointly. Our approach, called the Doubly Optimized Calibrated Support Vector Machine (DOC-SVM), concurrently optimizes two loss functions: the ridge regression loss and the hinge loss. Experiments using three breast cancer gene-expression datasets (i.e., GSE2034, GSE2990, and Chanrion's datasets) showed that our model generated more calibrated outputs when compared to other state-of-the-art models such as the Support Vector Machine (p=0.03, p=0.13, and p<0.001) and Logistic Regression (p=0.006, p=0.008, and p<0.001). DOC-SVM also demonstrated better discrimination (i.e., higher AUCs) when compared to the Support Vector Machine (p=0.38, p=0.29, and p=0.047) and Logistic Regression (p=0.38, p=0.04, and p<0.0001). DOC-SVM produced a model that was better calibrated without sacrificing discrimination, and hence may be helpful in clinical decision making.
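    A minimal sketch of the joint-objective idea, assuming a simple convex combination of a hinge term (discrimination) and a squared calibration term on a sigmoid output, trained by gradient descent; the weighting LAM, the learning rate, and the sigmoid link are assumptions made for illustration, not the paper's exact optimization.

```python
import numpy as np

# Sketch of a DOC-SVM-style joint objective (illustrative; not the paper's
# exact formulation): a hinge term drives discrimination, a squared error
# between a sigmoid output and the 0/1 label drives calibration, LAM trades
# them off, and C is an L2 penalty on the weights.
LAM, C, LR, EPOCHS = 0.5, 1e-2, 0.1, 500

def fit(X, y01):
    """X: (n, d) features; y01: labels in {0, 1}. Returns (weights, bias)."""
    n, d = X.shape
    y_pm = 2 * y01 - 1                        # {-1, +1} labels for the hinge
    w, b = np.zeros(d), 0.0
    for _ in range(EPOCHS):
        m = X @ w + b                         # raw margins
        p = 1.0 / (1.0 + np.exp(-m))          # sigmoid "probability" outputs
        # Subgradient of the mean hinge loss max(0, 1 - y*m).
        active = (y_pm * m < 1).astype(float)
        g_hinge = -(active * y_pm) @ X / n
        gb_hinge = -(active * y_pm).mean()
        # Gradient of the mean squared calibration loss (p - y)^2.
        dcal = 2 * (p - y01) * p * (1 - p)
        g_cal = dcal @ X / n
        gb_cal = dcal.mean()
        w -= LR * ((1 - LAM) * g_hinge + LAM * g_cal + C * w)
        b -= LR * ((1 - LAM) * gb_hinge + LAM * gb_cal)
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = fit(X, y)
print("training accuracy:", ((X @ w + b > 0) == y.astype(bool)).mean())
```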

    Tractability through Exchangeability: A New Perspective on Efficient Probabilistic Inference

    Exchangeability is a central notion in statistics and probability theory. The assumption that an infinite sequence of data points is exchangeable is at the core of Bayesian statistics. However, finite exchangeability as a statistical property that renders probabilistic inference tractable is less well understood. We develop a theory of finite exchangeability and its relation to tractable probabilistic inference. The theory is complementary to that of independence and conditional independence. We show that tractable inference in probabilistic models with high treewidth and millions of variables can be understood using the notion of finite (partial) exchangeability. We also show that existing lifted inference algorithms implicitly utilize a combination of conditional independence and partial exchangeability.
    Comment: In Proceedings of the 28th AAAI Conference on Artificial Intelligence
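    A hedged sketch of why (partial) exchangeability buys tractability, not an implementation of lifted inference: if a joint over n binary variables depends on them only through the count of ones k via a potential phi(k) (invented here), then sums over 2^n configurations collapse to sums over the n+1 possible counts.

```python
from math import lgamma, exp, log

# Sketch of tractability via exchangeability (not a lifted-inference
# implementation): the joint over n binary variables is assumed to depend on
# them only through k = number of ones, via an invented potential phi(k).
# Sums over 2^n configurations then collapse to sums over n + 1 counts.

n = 1_000_000                                  # "millions of variables"
log_phi = lambda k: -abs(k - n // 2) * 1e-3    # made-up exchangeable potential

def log_comb(n, k):
    """Log of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

# log Z = log sum_k C(n, k) phi(k), stably in log space: O(n) work, not 2^n.
terms = [log_comb(n, k) + log_phi(k) for k in range(n + 1)]
m = max(terms)
log_Z = m + log(sum(exp(t - m) for t in terms))

# Marginal P(X1 = 1) = sum_k (k / n) C(n, k) phi(k) / Z -- the same O(n)
# trick, since a k/n fraction of the C(n, k) states with count k have X1 = 1.
num = [log(k / n) + t for k, t in enumerate(terms) if k > 0]
m2 = max(num)
print("P(X1 = 1) =", exp(m2 + log(sum(exp(t - m2) for t in num)) - log_Z))
# Prints ~0.5, as symmetry demands, without ever touching the 2^n state space.
```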

    Units of rotational information

    Entanglement in angular momentum degrees of freedom is a precious resource for quantum metrology and control. Here we study the conversions of this resource, focusing on Bell pairs of spin-J particles, where one particle is used to probe unknown rotations and the other is used as a reference. When a large number of pairs is given, we show that every rotated spin-J Bell state can be reversibly converted into an equivalent number of rotated spin one-half Bell states, at a rate determined by the quantum Fisher information. This result provides the foundation for the definition of an elementary unit of information about rotations in space, which we call the Cartesian refbit. In the finite-copy scenario, we design machines that approximately break down Bell states of higher spins into Cartesian refbits, as well as machines that approximately implement the inverse process. In addition, we establish a quantitative link between the conversion of Bell states and the simulation of unitary gates, showing that the fidelity of probabilistic state conversion provides upper and lower bounds on the fidelity of deterministic gate simulation. The result holds not only for rotation gates but also for all sets of gates that form finite-dimensional representations of compact groups. For rotation gates, we show how rotations on a system of a given spin can simulate rotations on a system of a different spin.
    Comment: 25 pages + appendix, 7 figures, new results added
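    For orientation, the standard quantum-Fisher-information machinery behind the quoted rate is sketched below in LaTeX; the general formulas are textbook material, while the specific values for rotated spin-J Bell states are derived in the paper and not reproduced here.

```latex
% Standard background (not the paper's derivation): a pure probe state
% acquiring a rotation angle \theta generated by J_n has quantum Fisher
% information equal to four times the generator's variance.
\[
  |\psi_\theta\rangle = e^{-i\theta J_{\vec n}}\,|\psi\rangle ,
  \qquad
  F_Q = 4\left(\langle\psi| J_{\vec n}^2 |\psi\rangle
             - \langle\psi| J_{\vec n} |\psi\rangle^2\right).
\]
% Asymptotically reversible conversion between resource states then runs at
% a rate set by the ratio of their QFIs, e.g. spin-J Bell pairs into
% spin-1/2 Bell pairs ("Cartesian refbits") at rate
\[
  R = \frac{F_Q^{(J)}}{F_Q^{(1/2)}} .
\]
```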

    Probabilistic Models over Ordered Partitions with Application in Learning to Rank

    This paper addresses the general problem of modelling and learning rank data with ties. We propose a probabilistic generative model that treats the process as permutations over partitions. This results in a super-exponential combinatorial state space with an unknown number of partitions and an unknown ordering among them. We approach the problem from discrete choice theory, where subsets are chosen in a stagewise manner, significantly reducing the state space at each stage. Further, we show that with suitable parameterisation, we can still learn the models in linear time. We evaluate the proposed models on the problem of learning to rank with data from the recently held Yahoo! challenge, and demonstrate that the models are competitive against well-known rivals.
    Comment: 19 pages, 2 figures
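    A hedged sketch of the stagewise-choice idea, using one simple parameterisation invented for illustration (not necessarily the paper's): at each stage, every remaining item independently joins the current tied group with probability sigmoid(w_i), conditioned on the group being non-empty, so the likelihood of an ordered partition factors over stages.

```python
import numpy as np

# Illustrative stagewise subset-choice model over ordered partitions; the
# independent-join parameterisation below is an assumption for the example,
# not necessarily the paper's model.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood(ordered_partition, w):
    """ordered_partition: list of lists of item ids, best tied group first.
    w: per-item score array.  Each stage costs time linear in the items
    still remaining, so no super-exponential enumeration is needed."""
    remaining = {i for grp in ordered_partition for i in grp}
    ll = 0.0
    for group in ordered_partition:
        p = {i: sigmoid(w[i]) for i in remaining}
        # log P(group | remaining), conditioned on at least one item chosen.
        log_sel = sum(np.log(p[i]) for i in group) + \
                  sum(np.log1p(-p[i]) for i in remaining - set(group))
        log_none = sum(np.log1p(-p[i]) for i in remaining)
        ll += log_sel - np.log1p(-np.exp(log_none))
        remaining -= set(group)
    return ll

w = np.array([2.0, 1.5, 0.0, -1.0])       # made-up relevance scores
ranking_with_ties = [[0, 1], [2], [3]]    # items 0 and 1 tied at the top
print("log-likelihood:", log_likelihood(ranking_with_ties, w))
```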