
    Explicit fairness in testing semantics

    In this paper we investigate fair computations in the pi-calculus. Following Costa and Stirling's approach for CCS-like languages, we consider a method of labelling process actions in order to filter out unfair computations. We contrast the existing fair-testing notion with those that naturally arise by imposing weak and strong fairness. This comparison provides insight into the expressiveness of the various 'fair' testing semantics and into their discriminating power.
    Comment: 27 pages, 1 figure, appeared in LMC
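    As a point of reference for the fairness notions being contrasted, consider the standard CCS-style recursive process (an illustrative example, not taken from the paper):

        X \stackrel{\mathrm{def}}{=} a.X + b.\mathbf{0}

    The infinite computation that always resolves the choice in favour of a is ruled out by weak fairness, because b is continuously enabled yet never performed; strong fairness is the stricter requirement that an action enabled infinitely often, even if not continuously, must eventually be performed.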

    Field-aware Calibration: A Simple and Empirically Strong Method for Reliable Probabilistic Predictions

    It is often observed that the probabilistic predictions given by a machine learning model can disagree with the averaged actual outcomes on specific subsets of the data, an issue known as miscalibration, which undermines the reliability of practical machine learning systems. For example, in online advertising, an ad can receive a click-through rate prediction of 0.1 over some population of users while its actual click rate is 0.15. In such cases, the probabilistic predictions have to be fixed before the system can be deployed. In this paper, we first introduce a new evaluation metric named field-level calibration error that measures the bias in predictions over the sensitive input field that the decision-maker is concerned with. We show that existing post-hoc calibration methods yield limited improvement on the new field-level metric and on other non-calibration metrics such as the AUC score. To address this, we propose Neural Calibration, a simple yet powerful post-hoc calibration method that learns to calibrate by making full use of the field-aware information over the validation set. We present extensive experiments on five large-scale datasets. The results show that Neural Calibration significantly improves over uncalibrated predictions on common metrics such as the negative log-likelihood, the Brier score and the AUC, as well as on the proposed field-level calibration error.
    Comment: WWW 202
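    A minimal sketch of how such a field-level calibration error could be computed, assuming it is the size-weighted absolute gap between average prediction and average outcome within each value of the chosen field (the paper's exact definition may differ):

        import numpy as np

        def field_level_calibration_error(y_true, y_prob, field_values):
            """Size-weighted |average prediction - average outcome| per field value (illustrative)."""
            y_true = np.asarray(y_true, dtype=float)
            y_prob = np.asarray(y_prob, dtype=float)
            field_values = np.asarray(field_values)
            total = len(y_true)
            error = 0.0
            for value in np.unique(field_values):
                mask = field_values == value
                # Gap between predicted and observed positive rates for this field value,
                # weighted by the fraction of examples that carry the value.
                gap = abs(y_prob[mask].mean() - y_true[mask].mean())
                error += (mask.sum() / total) * gap
            return error

    In the advertising example above, a user segment predicted at 0.1 but clicking at 0.15 contributes a gap of 0.05, weighted by that segment's share of the traffic.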

    Making Random Choices Invisible to the Scheduler

    When dealing with process calculi and automata which express both nondeterministic and probabilistic behavior, it is customary to introduce the notion of a scheduler to resolve the nondeterminism. It has been observed that for certain applications, notably those in security, the scheduler needs to be restricted so as not to reveal the outcome of the protocol's random choices, as otherwise the adversary model would be too strong even for "obviously correct" protocols. We propose a process-algebraic framework in which the control over the scheduler can be specified in syntactic terms, and we show how to apply it to solve the problem mentioned above. We also consider the definition of (probabilistic) may and must preorders, and we show that they are precongruences with respect to the restricted schedulers. Furthermore, we show that all the operators of the language, except replication, distribute over probabilistic summation, which is a useful property for verification.
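    A schematic instance of such a distribution law for the parallel operator, writing \oplus_p for probabilistic summation (the notation here is assumed, not taken from the paper):

        P \mid (Q \oplus_p R) \;\simeq\; (P \mid Q) \oplus_p (P \mid R)

    Laws of this shape allow probabilistic choices to be pulled outward past the other operators of the language during verification.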

    Testing axioms for Quantum Mechanics on Probabilistic toy-theories

    In Ref. [1] one of the authors proposed postulates for axiomatizing Quantum Mechanics as a "fair operational framework", namely regarding the theory as a set of rules that allow the experimenter to predict future events on the basis of suitable tests, having local control and low experimental complexity. In addition to causality, the following postulates have been considered: PFAITH (existence of a pure preparationally faithful state) and FAITHE (existence of a faithful effect). These postulates have exhibited an unexpected theoretical power, excluding all known nonquantum probabilistic theories. Later, in Ref. [2], in addition to causality and PFAITH, the postulates LDISCR (local discriminability) and PURIFY (purifiability of all states) were considered, narrowing the probabilistic theory to something very close to Quantum Mechanics. In the present paper we test the above postulates on some nonquantum probabilistic models. The first model, "the two-box world", is an extension of the Popescu-Rohrlich model, which achieves the greatest violation of the CHSH inequality compatible with the no-signaling principle. The second model, "the two-clock world", is actually a full class of models, all having a disk as the convex set of states of the local system. One of them corresponds to "the two-rebit world", namely qubits with a real Hilbert space. The third model, "the spin-factor", is a sort of n-dimensional generalization of the clock. Finally, the last model is "the classical probabilistic theory". We see how each model violates some of the proposed postulates, and when and how teleportation can be achieved, and we analyze other interesting connections between these postulate violations, along with deep relations between the local and the non-local structures of the probabilistic theory.
    Comment: Submitted to QIP Special Issue on Foundations of Quantum Information
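    For reference, the CHSH quantity mentioned above, for dichotomic observables a, a' on one side and b, b' on the other, with E denoting the correlation, is

        S = E(a,b) + E(a,b') + E(a',b) - E(a',b')

    Classical (local hidden-variable) theories obey |S| \le 2, quantum mechanics reaches at most 2\sqrt{2} (the Tsirelson bound), while the Popescu-Rohrlich box attains the algebraic maximum |S| = 4, the largest value compatible with no-signaling.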