
    Fair Π

    In this paper, we define fair computations in the π-calculus [Milner, R., Parrow, J. & Walker, D., A Calculus of Mobile Processes, Part I and II, Information and Computation 100 (1992) 1–78]. We follow Costa and Stirling's approach for CCS-like languages [Costa, G. & Stirling, C., A Fair Calculus of Communicating Systems, Acta Informatica 21 (1984) 417–441; Costa, G. & Stirling, C., Weak and Strong Fairness in CCS, Information and Computation 73 (1987) 207–244] but exploit a more natural labeling method of process actions to filter out unfair process executions. The new labeling allows us to prove all the significant properties of the original one, such as unicity, persistence and disappearance of labels. It also turns out that the labeled π-calculus is a conservative extension of the standard one. We contrast the existing fair testing [Brinksma, E., Rensink, A. & Vogler, W., Fair Testing, Proc. of CONCUR '95, LNCS 962 (1995) 313–327; Natarajan, V. & Cleaveland, R., Divergence and Fair Testing, Proc. of ICALP '95, LNCS 944 (1995) 648–659] with those that naturally arise by imposing weak and strong fairness as defined by Costa and Stirling. This comparison clarifies the expressiveness of the various fair testing-based semantics and emphasizes the discriminating power of the one already proposed in the literature.
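
    To make the fairness notion concrete, here is a minimal sketch (in Python, and deliberately independent of the paper's labeling method) of how one might filter out weakly unfair executions of a labeled transition system: a finite run is flagged as unfair if some action label stays enabled from some point onward but is never performed again. The Step record and is_weakly_fair function are illustrative assumptions, not definitions from the paper.

    # A minimal sketch (not the paper's labeling): checking a finite run of a
    # labeled transition system for weak unfairness.  A run is a sequence of
    # steps; each step records which action labels were enabled and which one
    # was actually performed.  The run is weakly unfair if some label stays
    # enabled from some point onward yet is never performed again.
    from dataclasses import dataclass

    @dataclass
    class Step:
        enabled: frozenset[str]   # labels of actions enabled at this step
        taken: str                # label of the action actually performed

    def is_weakly_fair(run: list[Step]) -> bool:
        """Return False if some label is continuously enabled on a suffix of
        the run but never taken on that suffix (a weakly unfair execution)."""
        labels = set().union(*(s.enabled for s in run)) if run else set()
        for lab in labels:
            # last position at which `lab` was taken (-1 if never)
            last_taken = max((i for i, s in enumerate(run) if s.taken == lab),
                             default=-1)
            suffix = run[last_taken + 1:]
            if suffix and all(lab in s.enabled for s in suffix):
                return False      # enabled forever after, never taken
        return True

    # Example: `b` is enabled at every step but never performed -> unfair.
    run = [Step(frozenset({"a", "b"}), "a"),
           Step(frozenset({"a", "b"}), "a")]
    assert not is_weakly_fair(run)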

    Fair Testing

    In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner's observation congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a property long sought after; it is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows one to use the proof techniques of observation congruence, but it also supports additional laws and techniques. Moreover, we show decidability of should-testing (on the basis of the denotational characterisation). Finally, we demonstrate its advantages by applying it to a number of examples, including a scheduling problem, a version of the Alternating Bit protocol, and a fair lossy communication channel.
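
    As an illustration of the built-in fairness assumption, the following is a minimal sketch of the should-testing criterion on a finite transition system: a test is passed if, from every state reachable from the initial state, a success state remains reachable, so divergences such as internal loops do not by themselves cause failure. The graph encoding and the should_pass function are assumptions made for illustration, not the paper's operational or denotational characterisations.

    # A minimal sketch of the should-testing criterion on a finite composed
    # transition system: the test succeeds if, from every state reachable
    # from the start state, a success state remains reachable.
    from collections import deque

    def reachable(graph: dict, start) -> set:
        """States reachable from `start` via the successor relation `graph`."""
        seen, frontier = {start}, deque([start])
        while frontier:
            s = frontier.popleft()
            for t in graph.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return seen

    def should_pass(graph: dict, start, success: set) -> bool:
        """Every reachable state can still reach some success state."""
        return all(success & reachable(graph, s) for s in reachable(graph, start))

    # A process that may loop internally but can always still succeed passes:
    graph = {"s0": ["s0", "ok"], "ok": []}
    assert should_pass(graph, "s0", {"ok"})
    # Once a divergence escapes success forever, the test fails:
    graph2 = {"s0": ["loop", "ok"], "loop": ["loop"], "ok": []}
    assert not should_pass(graph2, "s0", {"ok"})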

    On the Decreasing Power of Kernel and Distance based Nonparametric Hypothesis Tests in High Dimensions

    This paper is about two related decision-theoretic problems, nonparametric two-sample testing and independence testing. There is a belief that two recently proposed solutions, based on kernels and distances between pairs of points, behave well in high-dimensional settings. We identify different sources of misconception that give rise to this belief. Specifically, we differentiate the hardness of estimation of test statistics from the hardness of testing whether these statistics are zero or not, and explicitly discuss a notion of "fair" alternative hypotheses for these problems as dimension increases. We then demonstrate that the power of these tests actually drops polynomially with increasing dimension against fair alternatives. We end with some theoretical insights and shed light on the median heuristic for kernel bandwidth selection. Our work advances the current understanding of the power of modern nonparametric hypothesis tests in high dimensions. Comment: 19 pages, 9 figures, published in AAAI-15: The 29th AAAI Conference on Artificial Intelligence (with author order reversed from arXiv).
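
    For readers unfamiliar with the statistics in question, the following is a minimal sketch of the unbiased squared-MMD kernel two-sample statistic with a Gaussian kernel whose bandwidth is set by the median heuristic mentioned in the abstract. The sample sizes, dimension and seed are arbitrary illustrative choices, not the paper's experimental setup.

    # A minimal sketch of the kernel two-sample statistic (unbiased MMD^2 with
    # a Gaussian kernel) and the median heuristic for the bandwidth.
    import numpy as np

    def median_heuristic_bandwidth(Z: np.ndarray) -> float:
        """Bandwidth = median pairwise Euclidean distance of the pooled data."""
        d = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
        return np.median(d[np.triu_indices_from(d, k=1)])

    def mmd2_unbiased(X: np.ndarray, Y: np.ndarray) -> float:
        """Unbiased estimate of the squared MMD between samples X and Y."""
        sigma = median_heuristic_bandwidth(np.vstack([X, Y]))
        k = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                                / (2 * sigma ** 2))
        Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
        n, m = len(X), len(Y)
        term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
        term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
        return term_xx + term_yy - 2 * Kxy.mean()

    # Same distribution in moderately high dimension: the statistic is near zero.
    rng = np.random.default_rng(0)
    X, Y = rng.normal(size=(200, 50)), rng.normal(size=(200, 50))
    print(mmd2_unbiased(X, Y))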

    Predictive hypothesis identification

    While statistics focuses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete instantiations we recover well-known methods, variations thereof, and new ones. PHI nicely justifies, reconciles, and blends (a reparametrization-invariant variation of) MAP, ML, MDL, and moment estimation. One particular feature of PHI is that it can genuinely deal with nested hypotheses.

    Stepwise refinement of processes

    Industry is looking to create a market in reliable "plug-and-play" components. To model components in a modular style it would be useful to combine event-based and state-based reasoning. One of the first steps in building an event-based model is to decide upon a set of atomic actions. This choice will depend on the formalism used, and may restrict in quite unexpected ways what we are able to formalise. In this paper we illustrate some limits to developing real-world processes using existing formalisms, and we define a new notion of refinement, vertical refinement, which addresses some of these limitations. We show that using vertical refinement we can rewrite a specification into a different formalism, allowing us to move between handshake processes, broadcast processes and abstract data types.

    Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

    We present a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results. For a given search or recommendation task, our algorithms seek to achieve a desired distribution of top-ranked results with respect to one or more protected attributes. We show that such a framework can be tailored to achieve fairness criteria such as equality of opportunity and demographic parity, depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations over different parameter choices, and study the effect of fairness-aware ranking on both bias and utility measures. We finally present online A/B testing results from applying our framework to representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice. Our approach resulted in a tremendous improvement in the fairness metrics (nearly a three-fold increase in the number of search queries with representative results) without affecting the business metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with potential positive impact for more than 630M LinkedIn members. Comment: This paper has been accepted for publication at ACM KDD 2019.
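
    The re-ranking idea can be illustrated with a simplified greedy sketch: at each rank, if some protected-attribute value has fewer members so far than its target share of the prefix requires, the highest-scoring remaining candidate with such a value is chosen; otherwise the highest-scoring candidate overall is chosen. This is an illustrative approximation of the general approach, not the exact algorithm deployed in LinkedIn Talent Search, and all names and numbers below are made up.

    # A simplified sketch of greedy fairness-aware re-ranking: keep every
    # protected-attribute value at or above floor(target_share * rank) in each
    # prefix of the ranking, otherwise rank purely by score.
    import math
    from collections import defaultdict

    def fair_rerank(candidates, target, k):
        """candidates: list of (id, score, attribute_value); target: dict
        mapping attribute value -> desired share of the top-k; returns ids."""
        remaining = sorted(candidates, key=lambda c: -c[1])
        counts, ranking = defaultdict(int), []
        for pos in range(1, k + 1):
            # attribute values currently below their minimum required count
            behind = [a for a, share in target.items()
                      if counts[a] < math.floor(share * pos)
                      and any(c[2] == a for c in remaining)]
            pool = [c for c in remaining if c[2] in behind] if behind else remaining
            if not pool:
                break
            best = pool[0]                  # highest score in the pool
            ranking.append(best[0])
            counts[best[2]] += 1
            remaining.remove(best)
        return ranking

    # Toy usage: enforce a 50/50 split over a binary attribute in the top 4.
    cands = [("u1", .9, "A"), ("u2", .8, "A"), ("u3", .7, "B"),
             ("u4", .6, "A"), ("u5", .5, "B")]
    print(fair_rerank(cands, {"A": 0.5, "B": 0.5}, k=4))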