25,108 research outputs found

    Fairness of Exposure in Rankings

    Rankings are ubiquitous in the online world today. As we have transitioned from finding books in libraries to ranking products, jobs, job applicants, opinions, and potential romantic partners, there is substantial precedent that ranking systems have a responsibility not only to their users but also to the items being ranked. To address these often conflicting responsibilities, we propose a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation. As part of this framework, we develop efficient algorithms for finding rankings that maximize utility for the user while provably satisfying a specifiable notion of fairness. Since fairness goals can be application specific, we show how a broad range of fairness constraints can be implemented using our framework, including forms of demographic parity, disparate treatment, and disparate impact constraints. We illustrate the effect of these constraints by providing empirical results on two ranking problems.
    Comment: In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 2018
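
    As a rough illustration of the exposure idea (a minimal sketch, not the paper's actual linear program over doubly stochastic ranking matrices), the snippet below computes average per-group exposure under an assumed logarithmic position-bias model, along with a demographic-parity-style exposure gap; the bias model and all names are illustrative assumptions.

    import math

    def exposure(ranking, groups):
        """Average position-bias exposure per group for one ranking.

        ranking: list of item ids, best rank first.
        groups:  dict mapping item id -> group label.
        Position bias is assumed to be 1 / log2(1 + rank); the
        paper's framework allows any exposure model.
        """
        totals, counts = {}, {}
        for rank, item in enumerate(ranking, start=1):
            g = groups[item]
            totals[g] = totals.get(g, 0.0) + 1.0 / math.log2(1 + rank)
            counts[g] = counts.get(g, 0) + 1
        return {g: totals[g] / counts[g] for g in totals}

    def parity_gap(ranking, groups):
        """Demographic-parity-style gap: spread between the largest and
        smallest average group exposure (0 means equal exposure)."""
        vals = sorted(exposure(ranking, groups).values())
        return vals[-1] - vals[0]

    # Swapping adjacent items trades user utility against the gap:
    groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
    print(parity_gap(["a", "b", "c", "d"], groups))  # G1 stacked on top
    print(parity_gap(["a", "c", "b", "d"], groups))  # interleaved, smaller gap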

    Causal inference using the algorithmic Markov condition

    Inferring the causal structure that links n observables is usually based on detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present. We develop a theory of how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information, and we describe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov-equivalent causal graphs. This insight provides a theoretical foundation for a heuristic principle proposed in earlier work. We also discuss how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests based on implicit or explicit assumptions about the underlying distribution.
    Comment: 16 figures
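
    For reference, the algorithmic mutual information invoked here is standardly defined via prefix Kolmogorov complexity K, with equality up to additive constants (the paper's precise statement conditions on shortest programs rather than raw strings, a subtlety glossed over in this sketch):

    I(x : y) \overset{+}{=} K(x) + K(y) - K(x, y)
    I(x : y \mid z) \overset{+}{=} K(x \mid z) + K(y \mid z) - K(x, y \mid z)

    The algorithmic causal Markov condition then requires that, given its parents in the causal graph, every node has vanishing conditional algorithmic mutual information with its non-descendants.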

    An Algorithmic Approach to Information and Meaning

    I will survey some matters of relevance to a philosophical discussion of information, taking into account developments in algorithmic information theory (AIT). I will propose that meaning is deep in the sense of Bennett's logical depth, and that algorithmic probability may provide the stability needed for a robust algorithmic definition of meaning, one that takes into consideration the interpretation and the recipient's own knowledge encoded in the story attached to a message.
    Comment: preprint reviewed version closer to the version accepted by the journal
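
    Of the two AIT notions the abstract leans on, algorithmic (Solomonoff-Levin) probability has a one-line standard definition worth recalling; for a universal prefix machine U it is the total weight of the programs p that output x (Bennett's logical depth, roughly the running time of the near-shortest such programs, needs a more technical statement omitted here):

    m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}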

    Algorithmic Randomness as Foundation of Inductive Reasoning and Artificial Intelligence

    This article is a brief personal account of the past, present, and future of algorithmic randomness, emphasizing its role in inductive inference and artificial intelligence. It is written for a general audience interested in science and philosophy. Intuitively, randomness is a lack of order or predictability. If randomness is the opposite of determinism, then algorithmic randomness is the opposite of computability. Besides many other things, these concepts have been used to quantify Ockham's razor, solve the induction problem, and define intelligence.
    Comment: 9 LaTeX pages

    Uncovering missing links with cold ends

    To evaluate the performance of missing-link prediction, the known data are randomly divided into two parts, a training set and a probe set. We argue that this straightforward and standard method may introduce serious bias, since in real biological and information networks missing links are more likely to connect low-degree nodes. We therefore study how to uncover missing links with low-degree endpoints, namely probe sets whose links have lower degree products than a random sample. Experimental analysis of ten local similarity indices on four disparate real networks reveals the surprising result that the Leicht-Holme-Newman index [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)] performs best, although it is known to be one of the worst indices when the probe set is a random sample of all links. We further propose a parameter-dependent index that considerably improves the prediction accuracy. Finally, we show the relevance of the proposed index under three real sampling methods.
    Comment: 16 pages, 5 figures, 6 tables
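
    For concreteness, here is a minimal sketch of the Leicht-Holme-Newman local index under its usual definition (number of common neighbors normalized by the product of the endpoint degrees); the graph representation and names are illustrative:

    def lhn_index(adj, x, y):
        """Leicht-Holme-Newman similarity of nodes x and y:
        |common neighbors| / (degree(x) * degree(y)).

        adj: dict mapping each node to the set of its neighbors.
        """
        kx, ky = len(adj[x]), len(adj[y])
        if kx == 0 or ky == 0:
            return 0.0
        return len(adj[x] & adj[y]) / (kx * ky)

    # Toy graph: with one common neighbor each, the low-degree pair
    # (d, e) outscores the higher-degree pair (a, b).
    adj = {
        "a": {"b", "c", "d"},
        "b": {"a", "c", "e"},
        "c": {"a", "b", "d", "e"},
        "d": {"a", "c"},
        "e": {"b", "c"},
    }
    print(lhn_index(adj, "a", "b"))  # 1 / (3 * 3) ~ 0.111
    print(lhn_index(adj, "d", "e"))  # 1 / (2 * 2) = 0.25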

    Compressibility, laws of nature, initial conditions and complexity

    We critically analyse the view that laws of nature are just a means to compress data. Discussing some basic notions of dynamical systems and information theory, we show that the idea that analysing large amounts of data with a compression algorithm is equivalent to the knowledge one can gain from scientific laws is rather naive. In particular, we discuss the subtle conceptual issue of the initial conditions of phenomena, which are generally incompressible. Starting from this point, we argue that laws of nature represent more than a pure compression of data, and that the availability of large amounts of data is, in general, not particularly useful for understanding the behaviour of complex phenomena.
    Comment: 19 pages, no figures, published in Foundations of Physics