
    On the Aggregation of Subjective Inputs from Multiple Sources

    When we have a population of individuals or artificially intelligent agents possessing diverse subjective inputs (e.g., predictions or opinions) about a common topic, how should we collect and combine them into a single judgment or estimate? This has long been a fundamental question across disciplines concerned with forecasting and decision-making, and has attracted the attention of computer scientists, particularly on account of the proliferation of online platforms for electronic commerce and the harnessing of collective intelligence. In this dissertation, I study this problem through the lens of computational social science in three main parts. (1) Incentives in information aggregation: In this segment, I analyze mechanisms for eliciting and combining private information from strategic participants, particularly crowdsourced forecasting tools called prediction markets. I show that (a) when a prediction market implemented with a widely used family of algorithms called market scoring rules (MSRs) interacts with myopic risk-averse traders, the price process behaves like an opinion pool, a classical family of belief combination rules, and (b) in an MSR-based game-theoretic model of prediction markets where participants can influence the predicted outcome but some of them have a non-zero probability of being non-strategic, the equilibrium is one of two types depending on this probability: either collusive and uninformative, or partially revealing. (2) Aggregation with non-strategic agents: In this part, I set aside incentive issues and focus on algorithms that uncover the ground truth from a sequence of noisy versions of it. In particular, I present the design and analysis of an approximately Bayesian algorithm for learning a real-valued target given access only to censored Gaussian signals, which performs asymptotically almost as well as if the signals were uncensored. (3) Market making in practice: This component, although tied to the two previous themes, deals more directly with practical aspects of aggregation mechanisms. Here, I develop an adaptation of an MSR to a financial market setting called a continuous double auction, and document its experimental evaluation in a simulated market ecosystem.
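The MSR family referenced above includes Hanson's logarithmic market scoring rule (LMSR) as its best-known member. A minimal sketch of an LMSR market maker, using the standard cost function C(q) = b * log(sum_i exp(q_i / b)) with liquidity parameter b (function names here are illustrative, not the dissertation's implementation):

```python
import math

def lmsr_cost(q, b):
    # LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    # computed with a max-shift for numerical stability.
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_prices(q, b):
    # Instantaneous prices p_i = exp(q_i / b) / sum_j exp(q_j / b);
    # these sum to 1 and act as the market's probability estimate.
    m = max(x / b for x in q)
    e = [math.exp(x / b - m) for x in q]
    s = sum(e)
    return [x / s for x in e]

def trade_cost(q, delta, b):
    # A trader buying the share bundle delta pays C(q + delta) - C(q).
    q_new = [a + d for a, d in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b), q_new
```

With outstanding share vector q = [0, 0] the prices start at [0.5, 0.5]; a purchase of shares in one outcome raises that outcome's price, so the evolving price vector can be read as the market's aggregate belief.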

    Near-Optimal Target Learning With Stochastic Binary Signals

    We study learning in a noisy bisection model: specifically, Bayesian algorithms to learn a target value V given access only to noisy realizations of whether V is less than or greater than a threshold theta. At step t = 0, 1, 2, ..., the learner sets a threshold theta_t and observes a noisy realization of sign(V - theta_t). After T steps, the goal is to output an estimate V_hat that is within an eta-tolerance of V. This problem has been studied predominantly in environments with a fixed error probability q < 1/2 for the noisy realization of sign(V - theta_t). In practice, q can often approach 1/2, especially as theta_t -> V, and little is known about this regime. We give a pseudo-Bayesian algorithm which provably converges to V. When the true prior matches our algorithm's Gaussian prior, we show near-optimal expected performance. Our methods extend to the general multiple-threshold setting, where the observation noisily indicates which of k >= 2 regions V belongs to.
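For the fixed-error regime q < 1/2 that the abstract contrasts with, the classic approach is probabilistic bisection: maintain a posterior over V, query at the posterior median, and reweight by the known flip probability. A rough sketch under illustrative assumptions (a discretized uniform prior on [0, 1] in place of the paper's Gaussian prior, and a fixed q = 0.3):

```python
import numpy as np

rng = np.random.default_rng(0)

V = 0.37            # hidden target, used only to simulate observations
q = 0.3             # fixed probability that the sign observation is flipped
grid = np.linspace(0.0, 1.0, 1001)      # discretized support for the posterior
post = np.ones_like(grid) / grid.size   # uniform prior over the grid

for t in range(500):
    # Query at the posterior median (the probabilistic bisection rule).
    theta = grid[np.searchsorted(np.cumsum(post), 0.5)]
    s = 1.0 if V > theta else -1.0
    if rng.random() < q:
        s = -s       # observation is flipped with probability q
    # Bayesian update: a grid point consistent with the observed sign
    # has likelihood 1 - q; an inconsistent one has likelihood q.
    like = np.where(np.sign(grid - theta) * s > 0, 1 - q, q)
    post = post * like
    post /= post.sum()

V_hat = float(np.sum(grid * post))  # posterior-mean estimate of V
```

The paper's pseudo-Bayesian algorithm addresses the harder case where q approaches 1/2 as theta_t -> V, which this fixed-q sketch does not handle.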

    A Study of the Grunwald-Letnikov Definition for Minimizing the Effects of Random Noise on Fractional Order Differential Equations

    Of the many definitions of the fractional order differintegral, the Grunwald-Letnikov definition is arguably the most important one. The necessity of this definition for the description and analysis of fractional order systems cannot be overstated. Unfortunately, the Fractional Order Differential Equation (FODE) describing such a system is, in its original form, highly sensitive to the effects of the random noise components inevitable in a natural environment. Thus, direct application of the definition to a real-life problem can yield erroneous results. In this article, we perform an in-depth mathematical analysis of the Grunwald-Letnikov definition and, to the best of our knowledge, are the first to do so. Based on our analysis, we present a transformation scheme that allows us to accurately analyze generalized fractional order systems in the presence of significant quantities of random errors. Finally, by a simple experiment, we demonstrate the high degree of robustness to noise offered by the said transformation and thus validate our scheme.
    Comment: 4th IEEE International Conference on Information and Automation for Sustainability, 200
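For reference, the Grunwald-Letnikov definition approximates the alpha-th order differintegral by a weighted backward sum, D^alpha f(t) = lim_{h->0} h^(-alpha) * sum_{j>=0} (-1)^j * binom(alpha, j) * f(t - j*h). A minimal sketch of this baseline definition (not the paper's noise-robust transformation, which the abstract does not specify):

```python
def gl_fractional_derivative(f, alpha, t, h, N=None):
    # Grunwald-Letnikov approximation:
    #   D^alpha f(t) ~= h^(-alpha) * sum_{j=0}^{N} w_j * f(t - j*h),
    # where w_j = (-1)^j * binom(alpha, j), generated by the recurrence
    #   w_0 = 1,  w_j = w_{j-1} * (j - 1 - alpha) / j.
    if N is None:
        N = int(t / h)   # truncate the sum at the lower terminal t = 0
    w = 1.0
    acc = w * f(t)
    for j in range(1, N + 1):
        w *= (j - 1 - alpha) / j
        acc += w * f(t - j * h)
    return acc / h ** alpha
```

For alpha = 1 the weights reduce to w_0 = 1, w_1 = -1, and zero thereafter, recovering the backward difference (f(t) - f(t - h)) / h, which is a quick sanity check on the recurrence.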