
    Betting and Belief: Prediction Markets and Attribution of Climate Change

    Full text link
    Despite much scientific evidence, a large fraction of the American public doubts that greenhouse gases are causing global warming. We present a simulation model as a computational test-bed for climate prediction markets. Traders adapt their beliefs about future temperatures based on the profits of other traders in their social network. We simulate two alternative climate futures, in which global temperatures are primarily driven either by carbon dioxide or by solar irradiance. These represent, respectively, the scientific consensus and a hypothesis advanced by prominent skeptics. We conduct sensitivity analyses to determine how a variety of factors describing both the market and the physical climate may affect traders' beliefs about the cause of global climate change. Market participation causes most traders to converge quickly toward believing the "true" climate model, suggesting that a climate market could be useful for building public consensus.
    Comment: All code and data for the model are available at http://johnjnay.com/predMarket/. Forthcoming in Proceedings of the 2016 Winter Simulation Conference, IEEE Press.
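    As a toy illustration of the mechanism the abstract describes, the Python sketch below has traders on a ring network imitate the belief of their most profitable neighbour, while bets placed under the "true" model pay off more often. All names, parameters, and the network shape are illustrative assumptions, not the authors' implementation (their full code is at the URL above).

        import random

        # Toy sketch of profit-driven belief adaptation in a prediction market.
        # Parameters and network structure are illustrative assumptions.
        N_TRADERS, N_ROUNDS = 50, 20
        TRUE_MODEL, MODELS = "co2", ["co2", "solar"]

        random.seed(1)
        traders = [{"belief": random.choice(MODELS), "profit": 0.0}
                   for _ in range(N_TRADERS)]

        def neighbours(i):
            # Ring-shaped social network: each trader observes two neighbours.
            return [(i - 1) % N_TRADERS, (i + 1) % N_TRADERS]

        for _ in range(N_ROUNDS):
            # Bets placed under the true model win more often than they lose.
            for t in traders:
                edge = 0.6 if t["belief"] == TRUE_MODEL else 0.4
                t["profit"] += 1.0 if random.random() < edge else -1.0
            # Each trader adopts the belief of a more profitable neighbour.
            for i, t in enumerate(traders):
                best = max(neighbours(i), key=lambda j: traders[j]["profit"])
                if traders[best]["profit"] > t["profit"]:
                    t["belief"] = traders[best]["belief"]

        share = sum(t["belief"] == TRUE_MODEL for t in traders) / N_TRADERS
        print(f"fraction holding the true-model belief: {share:.2f}")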

    Machine Learning for Ad Publishers in Real Time Bidding

    Get PDF

    FORETELL: Aggregating Distributed, Heterogeneous Information from Diverse Sources Using Market-based Techniques

    Get PDF
    Predicting the outcome of uncertain future events is a task humans frequently undertake when making critical decisions. The process underlying this prediction and decision making is called information aggregation, which deals with collating the opinions of different people, over time, about a future event's possible outcome. The information aggregation problem is non-trivial: information related to future events is distributed spatially and temporally, it changes dynamically as related events happen, and people's opinions about events' outcomes depend on the information they have access to and the mechanism they use to form opinions from that information. This thesis addresses the problem of distributed information aggregation by building computational models and algorithms for different aspects of information aggregation, so that the most likely outcome of future events can be predicted as accurately as possible. We employ a commonly used market-based framework called a prediction market to formally analyze the process of information aggregation. The behavior of humans performing information aggregation within a prediction market is implemented using software agents that employ sophisticated algorithms to perform complex calculations on the humans' behalf and aggregate information efficiently. We consider five crucial problems related to information aggregation: (i) the effect of variations in the parameters of the information being aggregated, such as its reliability, availability, and accessibility, on the predicted outcome of the event; (ii) improving prediction accuracy by having each human (software agent) build a more accurate model of other humans' behavior in the prediction market; (iii) identifying how various market parameters affect the market's dynamics and accuracy; (iv) applying information aggregation to the domain of distributed sensor information fusion; and (v) aggregating information on an event while considering dissimilar but closely related events in different prediction markets. We verify all of our proposed techniques through analytical results and experiments, using commercially available data from real prediction markets within a simulated, multi-agent prediction market. Our results show that our proposed techniques perform more efficiently than, or comparably to, existing techniques for information aggregation using prediction markets.
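    To make the market-based aggregation concrete, here is a minimal Python sketch using a logarithmic market scoring rule (LMSR) market maker, one standard prediction-market mechanism; the thesis's agents and market rules may differ, and all beliefs and parameters below are illustrative assumptions.

        import math

        def lmsr_price(q_yes, q_no, b=10.0):
            # Instantaneous YES price under LMSR with liquidity parameter b.
            e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
            return e_yes / (e_yes + e_no)

        # Five informants with private probability estimates trade one share
        # at a time toward their beliefs (budgets ignored for simplicity).
        beliefs = [0.9, 0.7, 0.8, 0.3, 0.85]
        q_yes = q_no = 0.0
        for p in beliefs:
            if p > lmsr_price(q_yes, q_no):
                q_yes += 1.0          # agent thinks YES is underpriced
            else:
                q_no += 1.0           # agent thinks NO is underpriced

        print(f"market's aggregated probability: {lmsr_price(q_yes, q_no):.2f}")

    In a real market, agents would trade repeatedly and size their positions by confidence; the single fixed-size pass here only demonstrates how prices fold individual estimates into one aggregate.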

    False Consensus, Information Theory, and Prediction Markets

    Get PDF

    Informational Substitutes

    Full text link
    We propose definitions of substitutes and complements for pieces of information ("signals") in the context of a decision or optimization problem, with game-theoretic and algorithmic applications. In a game-theoretic context, substitutes capture diminishing marginal value of information to a rational decision maker. We use the definitions to address the question of how and when information is aggregated in prediction markets. Substitutes characterize "best-possible" equilibria with immediate information aggregation, while complements characterize "worst-possible", delayed aggregation. Game-theoretic applications also include settings such as crowdsourcing contests and Q&A forums. In an algorithmic context, where substitutes capture diminishing marginal improvement of information to an optimization problem, substitutes imply efficient approximation algorithms for a very general class of (adaptive) information acquisition problems. In tandem with these broad applications, we examine the structure and design of informational substitutes and complements. They have equivalent, intuitive definitions from disparate perspectives: submodularity, geometry, and information theory. We also consider the design of scoring rules or optimization problems so as to encourage substitutability or complementarity, with positive and negative results. Taken as a whole, the results give some evidence that, in parallel with substitutable items, informational substitutes play a natural conceptual and formal role in game theory and algorithms.
    Comment: Full version of FOCS 2016 paper. Single-column, 61 pages (48 main text, 13 references and appendix).
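    The abstract's submodularity perspective admits a compact statement; in our paraphrased notation (the paper's formal definitions are more general), signals are substitutes when the value of information is submodular:

        V(A \cup \{s\}) - V(A) \;\ge\; V(B \cup \{s\}) - V(B)
        \quad \text{for all signal sets } A \subseteq B \text{ and any signal } s,

    where V(S) denotes the decision maker's maximum expected utility after observing the signals in S: each additional signal is worth at least as much when less is already known. Complements reverse the inequality, capturing increasing marginal value and, per the abstract, delayed aggregation in markets.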

    Superhuman science: How artificial intelligence may impact innovation

    Get PDF
    New product innovation in fields like drug discovery and materials science can be characterized as combinatorial search over a vast range of possibilities. Modeling innovation as a costly multi-stage search process, we explore how improvements in Artificial Intelligence (AI) could affect the productivity of the discovery pipeline by allowing improved prioritization of innovations that flow through that pipeline. We show how AI-aided prediction can increase the expected value of innovation and can increase or decrease the demand for downstream testing, depending on the type of innovation, and examine how AI can reduce costs associated with well-defined bottlenecks in the discovery pipeline. Finally, we discuss the critical role that policy can play in mitigating potential market failures associated with access to and provision of data, as well as the provision of training, necessary to more closely approach the socially optimal level of productivity-enhancing innovations enabled by this technology.
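    A toy expected-value calculation makes the testing-demand effect concrete; the numbers and the Bayes-rule screen below are illustrative assumptions, not the paper's model.

        # Illustrative only: how an AI screen can flip the sign of the
        # expected value of sending a candidate to costly downstream testing.

        def value_of_testing(p_success, payoff, test_cost):
            # Expected value of testing one candidate.
            return p_success * payoff - test_cost

        base_rate = 0.01                  # assumed true success rate
        payoff, test_cost = 1000.0, 50.0  # assumed payoff and testing cost

        # Without AI, candidates succeed at the base rate: testing loses money.
        print(value_of_testing(base_rate, payoff, test_cost))   # -40.0

        # An AI screen with assumed sensitivity/specificity concentrates
        # success probability among flagged candidates (Bayes' rule).
        sens, spec = 0.8, 0.95
        p_flag = sens * base_rate / (sens * base_rate + (1 - spec) * (1 - base_rate))
        print(value_of_testing(p_flag, payoff, test_cost))      # ~ +89.1

    Flagged candidates become worth testing while unflagged ones are not, so total demand for testing can rise or fall depending on how many candidates the screen flags.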

    Institutional Forecasting: The Performance of Thin Virtual Stock Markets

    Get PDF
    We study the performance of Virtual Stock Markets (VSMs) in an institutional forecasting environment. We compare VSMs to the Combined Judgmental Forecast (CJF) and the Key Informant (KI) approach. We find that VSMs can be effectively applied in an environment with a small number of knowledgeable informants, i.e., in thin markets. Our results show that the three approaches do not differ in forecasting accuracy in a low knowledge-heterogeneity environment. However, where there is high knowledge-heterogeneity, the VSM approach outperforms the CJF approach, which in turn outperforms the KI approach. Hence, our results provide useful insight into when each of the three approaches might be most effectively applied.
    Keywords: Forecasting; Electronic Markets; Information Markets; Virtual Stock Markets
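    For readers unfamiliar with the two judgmental baselines, a minimal sketch (with made-up numbers, not the study's data) of how CJF and KI forecasts are formed:

        # Illustrative forecasts from four informants (unit sales); the key
        # informant is whoever the institution designates as most expert.
        forecasts = {"alice": 120, "bob": 95, "carol": 140, "dave": 105}
        key_informant = "alice"

        cjf = sum(forecasts.values()) / len(forecasts)  # CJF: simple average
        ki = forecasts[key_informant]                   # KI: one expert's call

        print(f"CJF forecast: {cjf:.1f}, KI forecast: {ki}")

    A VSM would instead let the same informants trade a virtual stock whose price aggregates their information.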