
    Information Aggregation in Exponential Family Markets

    We consider the design of prediction market mechanisms known as automated market makers. We show that these mechanisms can be designed in the mold of exponential family distributions, a popular and well-studied family of probability distributions in statistics. We give a full development of this relationship and explore a range of benefits. We draw connections between the information aggregation performed by market prices and the belief aggregation of learning agents that rely on exponential family distributions. We develop a natural analysis of market behavior and of the price equilibrium under the assumption that traders are risk averse with exponential utility. We also consider similar questions under alternative models, such as when traders are budget constrained.
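    A minimal sketch of the cost-function view suggested by this connection (an illustration under stated assumptions, not the paper's full construction): for a finite outcome space with indicator sufficient statistics, taking the market maker's cost function to be the log-partition function of the categorical exponential family yields an LMSR-style market maker whose prices are the family's mean parameters. The class name and liquidity parameter below are illustrative assumptions.

    # Sketch: cost-function market maker built from the log-partition function of a
    # discrete (categorical) exponential family; with indicator sufficient statistics
    # this coincides with Hanson's LMSR. The liquidity parameter b is an assumption.
    import numpy as np

    class ExpFamilyMarketMaker:
        def __init__(self, n_outcomes, b=10.0):
            self.b = b                            # liquidity parameter (assumed value)
            self.theta = np.zeros(n_outcomes)     # natural parameter = shares sold / b

        def cost(self, theta):
            # Log-partition function of the categorical family, scaled by b.
            return self.b * np.log(np.sum(np.exp(theta)))

        def prices(self):
            # Prices = gradient of the cost = mean parameters (a probability vector).
            e = np.exp(self.theta)
            return e / e.sum()

        def buy(self, shares):
            # Charge for buying a vector of shares over outcomes.
            new_theta = self.theta + np.asarray(shares) / self.b
            payment = self.cost(new_theta) - self.cost(self.theta)
            self.theta = new_theta
            return payment

    mm = ExpFamilyMarketMaker(n_outcomes=3)
    print(mm.prices())               # uniform initial prices [1/3, 1/3, 1/3]
    print(mm.buy([5.0, 0.0, 0.0]))   # cost of 5 shares on outcome 0
    print(mm.prices())               # prices shift toward outcome 0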

    Delay in Strategic Information Aggregation

    We study a model of collective decision making in which agents vote on the decision repeatedly until they agree, receiving no exogenous new information between voting rounds but incurring a delay cost. Although preference conflict between the agents makes information aggregation impossible in a single round of voting, in the equilibrium of the repeated voting game agents become increasingly willing to vote their private information after each disagreement. Information is efficiently aggregated within a finite number of rounds. As delay becomes less costly, agents are less willing to vote their private information, and efficient information aggregation takes longer. Even as the delay cost converges to zero, agents are strictly better off in the repeated voting game than in any single-round game for moderate degrees of initial conflict.

    Keywords: repeated voting; gradual concessions; small delay cost

    Information Aggregation Under Strategic Delay

    In this paper, we show that consumers delay their purchases in order to learn the unknown quality of a product. Agents receive imperfect but informative signals about the unknown quality. Each agent then simultaneously decides whether or not to buy the product in one of two periods. Consumers with moderate tastes strategically delay their purchase to the second period even when they receive a good signal: they infer the true quality by observing the mass of first-period buyers. We avoid the equilibrium non-existence problem by using agents with different private values.

    Keywords: intertemporal price discrimination

    Econometric Modeling as Information Aggregation

    A forecast produced by an econometric model is a weighted aggregate of the predetermined variables in the model. In many models the number of predetermined variables is very large, often exceeding the number of observations. This paper proposes a method for testing an econometric model as an aggregator of the information in these predetermined variables relative to a specified subset of them. The test, called the "information aggregation" (IA) test, examines whether the model makes effective use of the information in the predetermined variables or whether a smaller information set carries as much information. The method can also be used to test one model against another. The method is used to test the Fair model as an information aggregator. The Fair model is also tested against two relatively non-theoretical models: a VAR model and an "autoregressive components" (AC) model. The AC model, which is new in this paper, estimates an autoregressive equation for each component of real GNP, with real GNP identically determined as the sum of the components. The results show that the AC model dominates the VAR model, although both are dominated by the Fair model. The results also show that the Fair model appears to be a good information aggregator.
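    A minimal sketch of the AC idea described above, under stated assumptions: fit a separate AR(p) equation to each GNP component and obtain the GNP forecast identically as the sum of the component forecasts. The series, lag order, and names are placeholders, not the paper's data or specification.

    # Sketch of the "autoregressive components" (AC) idea: one AR(p) equation per
    # GNP component, with the GNP forecast identically the sum of component forecasts.
    import numpy as np

    def fit_ar(y, p):
        # OLS fit of y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}; returns coefficients.
        Y = y[p:]
        X = np.column_stack([np.ones(len(Y))] + [y[p - k:-k] for k in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return coef

    def ar_forecast(y, coef, p):
        # One-step-ahead forecast from the fitted AR(p) equation.
        lags = y[-1:-p - 1:-1]           # y_T, y_{T-1}, ..., y_{T-p+1}
        return coef[0] + coef[1:] @ lags

    rng = np.random.default_rng(0)
    p = 2
    # Hypothetical quarterly series for four GNP components (placeholder data).
    components = [np.cumsum(rng.normal(1.0, 0.5, size=120)) for _ in range(4)]

    component_forecasts = [ar_forecast(c, fit_ar(c, p), p) for c in components]
    gnp_forecast = sum(component_forecasts)   # GNP is identically the sum of components
    print(np.round(component_forecasts, 2), round(float(gnp_forecast), 2))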

    Market microstructure, information aggregation and equilibrium uniqueness in a global game

    Speculators contemplating an attack (e.g., on a currency peg) must guess the beliefs of other speculators, which they can do by looking at the stock market. As shown in earlier work, this information-gathering process may be destabilising by creating multiple equilibria. This paper studies the role played by the microstructure of the asset market in the emergence of multiple equilibria driven by information aggregation. To do so, we study the outcome of a two-stage global game wherein an asset price determined at the trading stage of the game provides an endogenous public signal about the fundamental that affects traders' decision to attack in the coordination stage of the game. In the trading stage, placing a full demand schedule (i.e., a continuum of limit orders) is costly, but traders may use riskier (and cheaper) market orders, i.e., orders to sell or buy a fixed quantity of assets unconditional on the execution price. Price execution risk reduces traders' aggressiveness and hence slows down information aggregation, which ultimately makes multiple equilibria in the coordination stage less likely. In this sense, microstructure frictions that lead to greater individual exposure (to price execution risk) may reduce aggregate uncertainty (by pinning down a unique equilibrium outcome).
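    A minimal numerical sketch of the mechanism at work, under stated assumptions: in a textbook regime-change global game where each speculator sees a private signal of precision beta and a public signal of precision alpha (a stand-in for the informativeness of the asset price), counting monotone-equilibrium thresholds shows that a less informative public signal, i.e., slower information aggregation, makes multiplicity less likely. The parameter values are illustrative, and this is not the paper's two-stage model.

    # Numerical sketch (standard regime-change global game, not the paper's model):
    # theta ~ N(y, 1/alpha) given the public signal y; each agent sees x = theta + eps,
    # eps ~ N(0, 1/beta), attacks iff Pr(regime falls | x, y) >= cost, and the regime
    # falls iff the mass of attackers exceeds theta. Monotone equilibria are thresholds
    # theta* with G(theta*) = 0 below; counting roots shows how a more informative
    # public signal (higher alpha) can create multiplicity.
    import numpy as np
    from scipy.stats import norm

    def equilibrium_thresholds(alpha, beta, y, cost=0.5, grid=200_000):
        theta = np.linspace(1e-6, 1 - 1e-6, grid)
        # Critical-mass condition: x* = theta* + Phi^{-1}(theta*)/sqrt(beta).
        # Substituting into the marginal attacker's indifference condition
        # Pr(theta <= theta* | x*, y) = cost gives one equation in theta*.
        arg = (alpha * (theta - y) - np.sqrt(beta) * norm.ppf(theta)) / np.sqrt(alpha + beta)
        g = norm.cdf(arg) - cost
        crossings = np.nonzero(np.diff(np.sign(g)))[0]
        return theta[crossings]

    beta = 4.0                      # private-signal precision (illustrative)
    for alpha in (1.0, 12.0):       # public (price) precision: low vs. high
        roots = equilibrium_thresholds(alpha, beta, y=0.5)
        print(f"alpha={alpha:4.1f}: {len(roots)} equilibrium threshold(s) near {np.round(roots, 3)}")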

    Information Aggregation, Investment, and Managerial Incentives

    We study the interplay of share prices and firm decisions when share prices aggregate and convey noisy information about fundamentals to investors and managers. First, we show that the informational feedback between the firm's share price and its investment decisions leads to a systematic premium in the firm's share price relative to expected dividends. Noisy information aggregation leads to excess price volatility, overvaluation of shares in response to good news, and undervaluation in response to bad news. By optimally increasing its exposure to fundamental risks when the market price conveys good news, the firm shifts its dividend risk to the upside, which amplifies the overvaluation and explains the premium. Second, we argue that explicitly linking managerial compensation to share prices gives managers an incentive to manipulate the firm's decisions to their own benefit. Managers take advantage of shareholders by taking excessive investment risks when the market is optimistic and investing too little when the market is pessimistic. The amplified upside exposure is rewarded by the market through a higher share price, but is inefficient from the perspective of dividend value.

    Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems

    Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as setting workers' bonuses, fair prices and time limits for the tasks, require knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion time, the tasks' duration and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labelling, such as spammers, bots or lazy labellers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labelling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.
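    A much-simplified sketch of this style of time-aware aggregation, under stated assumptions: judgments submitted far outside a crude per-task time window are down-weighted, and true labels together with per-worker confusion matrices are then estimated with a Dawid-Skene-style EM loop. This uses EM rather than the paper's message-passing inference, and the names, smoothing and window rule are illustrative assumptions rather than the BCCTime model.

    # Sketch only: Dawid-Skene-style EM with a crude time-window down-weighting,
    # standing in for (and much simpler than) BCCTime's message-passing model.
    import numpy as np

    def aggregate(labels, times, n_classes, n_iter=50):
        """labels, times: (tasks, workers) arrays; label -1 means 'not judged'."""
        n_tasks, n_workers = labels.shape
        observed = labels >= 0

        # Crude per-task time window: judgments far outside the interquartile range
        # (suspiciously fast or slow) get a smaller weight. (Assumed rule.)
        t = np.where(observed, times, np.nan)
        q1 = np.nanpercentile(t, 25, axis=1, keepdims=True)
        q3 = np.nanpercentile(t, 75, axis=1, keepdims=True)
        in_window = (times >= q1 - 1.5 * (q3 - q1)) & (times <= q3 + 1.5 * (q3 - q1))
        weight = np.where(in_window, 1.0, 0.2) * observed

        # Initialise the posterior over true labels with a weighted vote.
        post = np.ones((n_tasks, n_classes))
        for c in range(n_classes):
            post[:, c] += ((labels == c) * weight).sum(axis=1)
        post /= post.sum(axis=1, keepdims=True)

        for _ in range(n_iter):
            # M-step: per-worker confusion matrices conf[j, true, said] and class priors.
            conf = np.full((n_workers, n_classes, n_classes), 0.1)   # additive smoothing
            for c in range(n_classes):
                said_c = (labels == c) * weight
                conf[:, :, c] += (said_c[:, :, None] * post[:, None, :]).sum(axis=0)
            conf /= conf.sum(axis=2, keepdims=True)
            prior = post.mean(axis=0)

            # E-step: posterior over each task's true label given all weighted judgments.
            log_post = np.tile(np.log(prior), (n_tasks, 1))
            for j in range(n_workers):
                seen = observed[:, j]
                log_post[seen] += weight[seen, j][:, None] * np.log(conf[j][:, labels[seen, j]].T)
            post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)

        return post.argmax(axis=1), post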