
    The Value of Information Concealment

    We consider a revenue-optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue can be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer, the gap between the two is bounded by a multiplicative factor of 3, if the value distribution conditioning on each signal is regular. We give examples showing that both conditions are necessary for a constant bound to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if each bidder's value conditioning on the others' is drawn according to a regular distribution. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.

    Value of Information in Feedback Control

    In this article, we investigate the impact of information on networked control systems, and illustrate how to quantify a fundamental property of stochastic processes that can enrich our understanding of such systems. To that end, we develop a theoretical framework for the joint design of an event trigger and a controller in optimal event-triggered control. We cover two distinct information patterns: perfect information and imperfect information. In both cases, observations are available at the event trigger instantly, but are transmitted to the controller sporadically with one-step delay. For each information pattern, we characterize the optimal triggering policy and optimal control policy such that the corresponding policy profile represents a Nash equilibrium. Accordingly, we quantify the value of information $\operatorname{VoI}_k$ as the variation in the cost-to-go of the system given an observation at time $k$. Finally, we provide an algorithm for approximation of the value of information, and synthesize a closed-form suboptimal triggering policy with a performance guarantee that can readily be implemented.
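    The core quantity in this abstract can be sketched in a few lines: the value of information at time $k$ is the change in cost-to-go induced by an observation, and a transmission decision can be made by comparing it to a communication cost. The threshold rule and all numbers below are illustrative assumptions, not the paper's actual (Nash-equilibrium) policies:

```python
def value_of_information(cost_to_go_with_obs, cost_to_go_without_obs):
    """VoI_k as described in the abstract: the variation in the
    cost-to-go of the system given an observation at time k.
    Sign convention assumed here: positive VoI means the
    observation reduces expected cost."""
    return cost_to_go_without_obs - cost_to_go_with_obs

def triggering_policy(voi_k, communication_cost):
    """Hypothetical threshold rule (not the paper's policy):
    transmit only when the observation is worth more than the
    cost of using the channel."""
    return voi_k > communication_cost

# Toy numbers, purely illustrative.
voi = value_of_information(cost_to_go_with_obs=4.0, cost_to_go_without_obs=7.5)
print(voi)                          # → 3.5
print(triggering_policy(voi, 1.0))  # → True
```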

    Rational Value of Information Estimation for Measurement Selection

    Computing value of information (VOI) is a crucial task in various aspects of decision-making under uncertainty, such as in meta-reasoning for search; in selecting measurements to make, prior to choosing a course of action; and in managing the exploration vs. exploitation tradeoff. Since such applications typically require numerous VOI computations during a single run, it is essential that VOI be computed efficiently. We examine the issue of anytime estimation of VOI, as frequently it suffices to get a crude estimate of the VOI, thus saving considerable computational resources. As a case study, we examine VOI estimation in the measurement selection problem. Empirical evaluation of the proposed scheme in this domain shows that computational resources can indeed be significantly reduced, at little cost in expected rewards achieved in the overall decision problem. Comment: 7 pages, 2 figures, presented at URPDM2010; plots fixed.
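    As a hedged illustration of the quantity being estimated (the textbook exact VOI of a single discrete measurement, not the paper's anytime scheme), VOI compares the expected utility of the best action after observing the measurement with the best action under the prior alone. All distributions and payoffs below are invented for illustration:

```python
def voi_of_measurement(prior, likelihood, utility):
    """Exact VOI of one discrete measurement:
      VOI = E_obs[ max_a E[u(a,s) | obs] ] - max_a E[u(a,s)].
    prior:      dict state -> probability
    likelihood: dict state -> dict obs -> probability
    utility:    dict (action, state) -> payoff
    """
    actions = {a for (a, _) in utility}
    # Utility of acting on the prior alone (no measurement).
    base = max(sum(prior[s] * utility[(a, s)] for s in prior) for a in actions)
    # Expected utility of acting after seeing the measurement outcome.
    obs_set = {o for s in prior for o in likelihood[s]}
    after = 0.0
    for o in obs_set:
        # max over actions of the joint-weighted utility equals
        # P(o) * max_a E[u(a,s) | o], so summing these gives E_obs[...].
        after += max(sum(prior[s] * likelihood[s].get(o, 0.0) * utility[(a, s)]
                         for s in prior) for a in actions)
    return after - base

# Illustrative two-state, two-action problem with a noisy binary test.
prior = {"good": 0.5, "bad": 0.5}
likelihood = {"good": {"pos": 0.9, "neg": 0.1},
              "bad":  {"pos": 0.1, "neg": 0.9}}
utility = {("act", "good"): 10.0, ("act", "bad"): -10.0,
           ("skip", "good"): 0.0, ("skip", "bad"): 0.0}
print(voi_of_measurement(prior, likelihood, utility))  # → 4.0
```

A crude anytime estimate, in the spirit of the abstract, would replace the exact sums with a small sample of outcomes and refine it as more computation time is granted.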

    Speculative Trade and the Value of Public Information

    In environments with expected utility, it has long been established that speculative trade cannot occur (Milgrom and Stokey [1982]), and that the value of public information is negative in economies with risk-sharing and no aggregate uncertainty (Hirshleifer [1971], Schlee [2001]). We show that these results are still true even if we relax expected utility, so that either Dynamic Consistency (DC) or Consequentialism is violated. We characterise no speculative trade in terms of a weakening of DC and find that Consequentialism is not required. Moreover, we show that a weakening of both DC and Consequentialism is sufficient for the value of public information to be negative. We therefore generalise these important results to convex preferences, which contain several classes of ambiguity-averse preferences.

    The Value-of-Information in Matching with Queues

    We consider the problem of optimal matching with queues in dynamic systems and investigate the value-of-information. In such systems, the operators match tasks and resources stored in queues, with the objective of maximizing the system utility of the matching reward profile, minus the average matching cost. This problem appears in many practical systems and the main challenges are the no-underflow constraints, and the lack of matching-reward information and system dynamics statistics. We develop two online matching algorithms: Learning-aided Reward optimAl Matching ($\mathtt{LRAM}$) and Dual-$\mathtt{LRAM}$ ($\mathtt{DRAM}$) to effectively resolve both challenges. Both algorithms are equipped with a learning module for estimating the matching-reward information, while $\mathtt{DRAM}$ incorporates an additional module for learning the system dynamics. We show that both algorithms achieve an $O(\epsilon+\delta_r)$ close-to-optimal utility performance for any $\epsilon>0$, while $\mathtt{DRAM}$ achieves a faster convergence speed and a better delay compared to $\mathtt{LRAM}$, i.e., $O(\delta_{z}/\epsilon + \log^2(1/\epsilon))$ delay and $O(\delta_z/\epsilon)$ convergence under $\mathtt{DRAM}$ compared to $O(1/\epsilon)$ delay and convergence under $\mathtt{LRAM}$ ($\delta_r$ and $\delta_z$ are maximum estimation errors for reward and system dynamics). Our results reveal that information of different system components can play very different roles in algorithm performance and provide a systematic way for designing joint learning-control algorithms for dynamic systems.

    The Value of Public Information in Monopoly

    The logic of the linkage principle of Milgrom and Weber (1982) extends to price discrimination. A non-linear pricing monopolist who sells to a single buyer always prefers to commit to publicly reveal information affiliated to the valuation of the buyer.

    Economic Value of Information: Wheat Protein Measurement

    In this paper we study U.S. wheat farmers' willingness to pay (WTP) for a near-infrared (NIR) sensor that can segregate wheat grains according to their protein concentration. We first develop a microeconomic optimization model of wheat farmers' segregating and commingling decisions. Then we use U.S. wheat prices and stocks to estimate a wheat protein stock demand system. This allows us to establish the effects of changes in the protein profile of wheat stocks on protein premiums. The paper's simulation section combines the results from the microeconomic optimization model and from the econometric estimations to simulate wheat farmers' WTP for the sorting technology. Preliminary findings from the simulation show that a typical hard red winter (hard red spring) wheat farmer's WTP for the sorting technology is 5.6 (4.8) cents per bushel.
    Keywords: information, economic value, wheat, protein, market structure, Crop Production/Industries, Production Economics; JEL codes: Q12, Q16, D81.

    Multiphase sampling using expected value of information

    This paper explores multiphase or infill sampling to reduce uncertainty after an initial sample has been taken and analysed to produce a map of the probability of some hazard. New observations are iteratively added by maximising the global expected value of information of the points. This is equivalent to minimisation of global misclassification costs. The method accounts for measurement error and different costs of type I and type II errors. Constraints imposed by a mobile sensor web can be accommodated using cost distances rather than Euclidean distances to decide which sensor moves to the next sample location. Calculations become demanding when multiple sensors move simultaneously. In that case, a genetic algorithm can be used to find sets of suitable new measurement locations. The method was implemented using the R software for statistical computing and contributed libraries, and is demonstrated using a synthetic data set.
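    The selection criterion described above — maximizing expected value of information, equivalently minimizing expected misclassification cost with asymmetric type I/II costs — can be sketched as follows. This is a simplified illustration, not the paper's algorithm (the paper works in R and additionally models measurement error and sensor travel costs); all names and numbers are assumptions:

```python
def expected_misclassification_cost(p_hazard, cost_false_alarm, cost_miss):
    """Expected cost of the best classification at one location given
    its current hazard probability: declaring 'hazard' risks a false
    alarm (type I), declaring 'safe' risks a miss (type II)."""
    return min((1 - p_hazard) * cost_false_alarm, p_hazard * cost_miss)

def next_sample_location(prob_map, cost_false_alarm, cost_miss):
    """Greedy infill step: sample the cell whose current expected
    misclassification cost is largest, a simple proxy for the point
    whose observation has the highest expected value of information."""
    return max(prob_map,
               key=lambda loc: expected_misclassification_cost(
                   prob_map[loc], cost_false_alarm, cost_miss))

# Toy probability map over a 2x2 grid; misses cost twice as much
# as false alarms, so mid-range probabilities are most valuable.
prob_map = {(0, 0): 0.05, (0, 1): 0.45, (1, 0): 0.9, (1, 1): 0.5}
print(next_sample_location(prob_map, cost_false_alarm=1.0, cost_miss=2.0))
# → (0, 1)
```

Note how the near-certain cells (0.05 and 0.9) are cheap to classify and thus poor sampling candidates, which is the intuition behind equating expected value of information with reduction in misclassification cost.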