36,855 research outputs found

    Analysing and Comparing Encodability Criteria

    Encodings, or the proof of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exists a wide range of different criteria, and of different variants of these criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes. Comment: In Proceedings EXPRESS/SOS 2015, arXiv:1508.06347. The Isabelle/HOL source files, and a full proof document, are available in the Archive of Formal Proofs, at http://afp.sourceforge.net/entries/Encodability_Process_Calculi.shtm
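
    For orientation, a common Gorla-style formulation of operational correspondence relates source reductions to target reductions up to an equivalence on target terms; the paper studies which simulation relations such requirements induce. A minimal sketch in LaTeX, using illustrative notation rather than the paper's exact definitions:

        % Operational correspondence for an encoding [[.]], Gorla-style sketch.
        % Completeness: every source reduction is mimicked by the target term.
        S \Longrightarrow S' \ \text{implies}\ \exists T.\ \llbracket S \rrbracket \Longrightarrow T \ \wedge\ T \asymp \llbracket S' \rrbracket
        % Soundness: every target reduction reflects some source reduction.
        \llbracket S \rrbracket \Longrightarrow T \ \text{implies}\ \exists S', T'.\ S \Longrightarrow S' \ \wedge\ T \Longrightarrow T' \ \wedge\ T' \asymp \llbracket S' \rrbracket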

    Learning Generative Models across Incomparable Spaces

    Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning. Comment: International Conference on Machine Learning (ICML
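
    A minimal numerical sketch of the relational comparison underlying the Gromov-Wasserstein distance (the paper's GAN training procedure is not reproduced here; the function name and the toy data are illustrative assumptions):

        # Gromov-Wasserstein compares pairwise distances *within* each space,
        # so the two spaces may have different, incomparable dimensions.
        import numpy as np

        def gw_objective(C1, C2, coupling):
            """Evaluate the squared-loss Gromov-Wasserstein objective.

            C1: (n, n) pairwise distance matrix of the first sample
            C2: (m, m) pairwise distance matrix of the second sample
            coupling: (n, m) joint probability matrix with the samples' marginals
            """
            # sum_{i,k,j,l} |C1[i,k] - C2[j,l]|^2 * coupling[i,j] * coupling[k,l]
            diff = C1[:, :, None, None] - C2[None, None, :, :]   # shape (n, n, m, m)
            return np.einsum("ikjl,ij,kl->", diff ** 2, coupling, coupling)

        # Tiny usage example: 3 points in 2-D vs 4 points in 5-D, uniform coupling.
        rng = np.random.default_rng(0)
        X, Y = rng.normal(size=(3, 2)), rng.normal(size=(4, 5))
        C1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        C2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        pi = np.full((3, 4), 1.0 / 12)
        print(gw_objective(C1, C2, pi))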

    A Comparison of Well-Quasi Orders on Trees

    Well-quasi orders such as homeomorphic embedding are commonly used to ensure termination of program analysis and program transformation, in particular supercompilation. We compare eight well-quasi orders with respect to how discriminative they are and their computational complexity. The studied well-quasi orders comprise two very simple examples, two examples from the literature on supercompilation, and four new ones proposed by the author. We also discuss combining several well-quasi orders to obtain well-quasi orders of higher discriminative power. This adds 19 more well-quasi orders to the list. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
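
    As a concrete example, a minimal sketch of the homeomorphic embedding relation mentioned above, for trees represented as (label, children) pairs; this is the textbook relation, not one of the author's new orders:

        def embeds(s, t):
            """Return True if tree s is homeomorphically embedded in tree t."""
            s_label, s_kids = s
            t_label, t_kids = t
            # Diving: s embeds into some child of t.
            if any(embeds(s, c) for c in t_kids):
                return True
            # Coupling: labels and arities match, and children embed pairwise.
            return (s_label == t_label
                    and len(s_kids) == len(t_kids)
                    and all(embeds(sc, tc) for sc, tc in zip(s_kids, t_kids)))

        # f(a) embeds in f(g(a)): couple at f, then dive through g.
        a = ("a", [])
        print(embeds(("f", [a]), ("f", [("g", [a])])))   # True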

    Decoy Bandits Dueling on a Poset

    We address the problem of dueling bandits defined on partially ordered sets, or posets. In this setting, arms may not be comparable, and there may be several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits, that efficiently finds the set of optimal arms of any poset, even when pairs of comparable arms cannot be distinguished from pairs of incomparable arms, under a minimal set of assumptions. This algorithm relies on the concept of decoys, which stems from social psychology. For the easier case where incomparability information is accessible, we propose a second algorithm, SlicingBandits, which takes advantage of this information and achieves a significant performance gain compared to UnchainedBandits. We provide theoretical guarantees and an experimental evaluation for both algorithms.
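
    A minimal sketch of the setting only (not of UnchainedBandits or SlicingBandits): duels between comparable arms are biased towards the greater arm, while duels between incomparable arms look like fair coin flips, which is why the two cases are hard to tell apart. The arm names, order, and gap value below are illustrative assumptions:

        import random

        PARTIAL_ORDER = {("a", "b"), ("c", "d")}   # a > b and c > d; other pairs incomparable
        GAP = 0.2                                  # advantage of the greater arm in a duel

        def duel(x, y):
            """Return the winner of one noisy comparison between arms x and y."""
            if (x, y) in PARTIAL_ORDER:
                p_x = 0.5 + GAP
            elif (y, x) in PARTIAL_ORDER:
                p_x = 0.5 - GAP
            else:
                p_x = 0.5          # incomparable pair: indistinguishable from a tie
            return x if random.random() < p_x else y

        wins = sum(duel("a", "c") == "a" for _ in range(10_000))
        print(wins / 10_000)       # close to 0.5: "a" and "c" are incomparable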

    A Structural Equation Approach to Spatial Dependence Models

    A strong increase in the availability of space-time data has occurred during the past decades. This has led to the development of a substantial literature dealing with the two particular problems inherent to this kind of data, i.e. serial dependence between the observations on each spatial unit over time, and spatial dependence between the observations on the spatial units at each point in time (e.g. Elhorst, 2001, 2003). A typical feature of spatial panel data models is that the causal direction cannot be based on instantaneous relationships between simultaneously measured variables. Instead, so-called cross-lagged panel design studies compare the effects of variables on each other across time. Although they circumvent the difficult problem of assessing causal direction in cross-sectional research, cross-lagged panel design studies are usually performed in discrete time (Oud, 2002). Because of different discrete time observation intervals within and between studies, outcomes are often incomparable or appear to be contradictory (Gollob & Reichardt, 1987). This paper describes the problems of cross-lagged space-time models in discrete time and proposes how these problems can be solved through a continuous time approach. In this regard, special attention is paid to structural equation modelling (SEM). In addition, we describe how space-time dependence can be handled in an SEM framework.
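
    A minimal sketch of why discrete-time results depend on the observation interval, assuming a simple linear continuous-time model dx/dt = A x(t): the implied autoregressive matrix for an interval dt is expm(A*dt), so studies using different intervals estimate different cross-lagged coefficients from the same underlying process. The drift matrix below is an illustrative assumption, not taken from the paper:

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[-0.5,  0.2],     # continuous-time drift; off-diagonal = cross effects
                      [ 0.3, -0.4]])

        for dt in (0.5, 1.0, 2.0):
            Phi = expm(A * dt)          # discrete-time autoregressive matrix for this interval
            print(f"interval {dt}: cross-lagged effects {Phi[0, 1]:.3f}, {Phi[1, 0]:.3f}")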

    Accessibility of physical states and non-uniqueness of entanglement measure

    Ordering physical states is the key to quantifying some physical property of the states uniquely. Bipartite pure entangled states are totally ordered under local operations and classical communication (LOCC) in the asymptotic limit and are uniquely quantified by the well-known entropy of entanglement. However, we show that mixed entangled states are only partially ordered under LOCC even in the asymptotic limit. Therefore, the non-uniqueness of entanglement measures can be understood on the basis of an operational notion of asymptotic convertibility. Comment: 8 pages, 1 figure. v2: main result unchanged but presentation extensively changed. v3: figure added, minor correction
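
    For reference, the entropy of entanglement mentioned above is the von Neumann entropy of either reduced state of a bipartite pure state (a standard definition, not specific to this paper):

        % Entropy of entanglement of a bipartite pure state:
        % the von Neumann entropy of either reduced density matrix.
        E\bigl(|\psi\rangle_{AB}\bigr) = S(\rho_A) = -\operatorname{Tr}\bigl(\rho_A \log \rho_A\bigr),
        \qquad \rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi|_{AB}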

    The Spectrum of Strong Behavioral Equivalences for Nondeterministic and Probabilistic Processes

    We present a spectrum of trace-based, testing, and bisimulation equivalences for nondeterministic and probabilistic processes whose activities are all observable. For every equivalence under study, we examine the discriminating power of three variants stemming from three approaches that differ in the way probabilities of events are compared when nondeterministic choices are resolved via deterministic schedulers. We show that the first approach - which compares two resolutions with respect to the probability distributions of all considered events - results in a fragment of the spectrum compatible with the spectrum of behavioral equivalences for fully probabilistic processes. In contrast, the second approach - which compares the probabilities of the events of a resolution with the probabilities of the same events in possibly different resolutions - gives rise to another fragment composed of coarser equivalences that exhibits several analogies with the spectrum of behavioral equivalences for fully nondeterministic processes. Finally, the third approach - which only compares the extremal probabilities of each event stemming from the different resolutions - yields even coarser equivalences that, however, give rise to a hierarchy similar to that stemming from the second approach. Comment: In Proceedings QAPL 2013, arXiv:1306.241
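
    A hedged symbolic restatement of the three comparison schemes, in illustrative notation rather than the paper's definitions: write pr(Z, E) for the probability of event E in a resolution Z, with Z_1 ranging over resolutions of the first process and Z_2 over resolutions of the second:

        % (1) whole-distribution matching: one resolution matches all events at once
        \forall Z_1\, \exists Z_2.\ \forall E.\ pr(Z_1, E) = pr(Z_2, E)
        % (2) per-event matching: the matching resolution may differ from event to event
        \forall Z_1\, \forall E\, \exists Z_2.\ pr(Z_1, E) = pr(Z_2, E)
        % (3) extremal matching: only the best and worst probabilities of each event are compared
        \forall E.\ \max_{Z_1} pr(Z_1, E) = \max_{Z_2} pr(Z_2, E)\ \wedge\ \min_{Z_1} pr(Z_1, E) = \min_{Z_2} pr(Z_2, E)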

    Partial Preferences for Mediated Bargaining

    In this work we generalize standard Decision Theory by assuming that two outcomes can also be incomparable. Two motivating scenarios show how incomparability may be helpful to represent situations where, due to lack of information, the decision maker would like to keep different options open and defer the final decision. In particular, a new axiomatization is given which turns out to be a weakening of the classical set of axioms used in Decision Theory. Preliminary results show how preferences involving complex distributions are related to judgments on single alternatives. Comment: In Proceedings SR 2014, arXiv:1404.041
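
    A minimal sketch of the kind of weakening involved (illustrative; not the paper's exact axiomatization): the classical completeness axiom on the preference relation is dropped while a preorder structure is kept, so incomparable outcomes become admissible:

        % Classical Decision Theory assumes completeness of the preference relation:
        \forall x, y.\ x \succsim y \ \lor\ y \succsim x
        % With partial preferences, \succsim is only required to be a preorder
        % (reflexive and transitive); for some pairs neither x \succsim y nor
        % y \succsim x holds, i.e. x and y are incomparable.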

    Flexibility of Choice Versus Reduction of Ambiguity

    This paper explores the problem of a social planner who wishes to improve the welfare of individuals who are unable to compare all available alternatives. The optimal decision trades off the individuals' desire for flexibility against their aversion to ambiguous choice situations. We introduce an axiom system that formalizes this idea. Our main result characterizes the preference-maximizing opportunity set. It is a maximal set that consists of mutually comparable alternatives. It also has the property that it maximizes the sum of the distances between its ordered elements for some appropriate metric imposed on the set of possible choices. Keywords: Incomplete preferences, ambiguity, flexibility of choice, opportunity sets, uncertainty
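
    A minimal sketch of the characterization described above, under toy assumptions (the alternatives, the incomparability relation, and the metric are all illustrative): among maximal sets of mutually comparable alternatives, pick one that maximizes the sum of distances between its ordered elements:

        from itertools import combinations

        ALTERNATIVES = [1, 2, 3, 5, 8]
        INCOMPARABLE = {frozenset({2, 3}), frozenset({5, 8})}   # pairs the agent cannot rank

        def comparable(x, y):
            return frozenset({x, y}) not in INCOMPARABLE

        def is_chain(subset):
            """A set of mutually comparable alternatives."""
            return all(comparable(x, y) for x, y in combinations(subset, 2))

        chains = [set(s) for r in range(1, len(ALTERNATIVES) + 1)
                  for s in combinations(ALTERNATIVES, r) if is_chain(s)]
        maximal_chains = [c for c in chains if not any(c < d for d in chains)]

        def spread(chain):               # sum of gaps between ordered elements
            xs = sorted(chain)
            return sum(abs(b - a) for a, b in zip(xs, xs[1:]))

        print(max(maximal_chains, key=spread))   # the preference-maximizing opportunity set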