
    The Complexity of User Retention

    This paper studies families of distributions T that are amenable to retentive learning, meaning that an expert can retain users who seek to predict their future, assuming user attributes are sampled from T and exposed gradually over time. Limited attention span is the main problem experts face in our model. We make two contributions. First, we formally define the notions of retentively learnable distributions and properties. Along the way, we define a retention complexity measure of distributions and a natural class of retentive scoring rules that model the way users evaluate experts they interact with. These rules are shown to be tightly connected to the truth-eliciting "proper scoring rules" studied in Decision Theory since the 1950's [McCarthy, PNAS 1956]. Second, we take a first step towards relating retention complexity to other measures of significance in computational complexity. In particular, we show that linear properties (over the binary field) are retentively learnable, whereas random Low Density Parity Check (LDPC) codes have, with high probability, maximal retention complexity. Intriguingly, these results resemble known results from the field of property testing and suggest that deeper connections between retentive distributions and locally testable properties may exist.
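    As background for the scoring-rule connection mentioned above, the sketch below illustrates the logarithmic scoring rule, the classic example of a proper scoring rule: an expert maximizes expected score by reporting the true distribution. This is generic background rather than the paper's retentive rules; the function names and the example distributions are illustrative assumptions.

```python
import numpy as np

def log_score(report, outcome):
    """Logarithmic scoring rule: reward the log-probability the report assigns to the realized outcome."""
    return np.log(report[outcome])

def expected_score(true_dist, report):
    """Expected score under the true distribution when the expert reports `report`."""
    return sum(p * log_score(report, i) for i, p in enumerate(true_dist))

true_dist = np.array([0.7, 0.2, 0.1])
honest = expected_score(true_dist, true_dist)
shaded = expected_score(true_dist, np.array([0.5, 0.3, 0.2]))
print(f"honest report: {honest:.4f}")  # about -0.802, the maximum achievable expected score
print(f"shaded report: {shaded:.4f}")  # about -0.887, strictly worse, so truthful reporting is optimal
```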

    Eliciting density ratio classes

    The probability distributions of uncertain quantities needed for predictive modelling and decision support are frequently elicited from subject matter experts. However, experts are often uncertain about quantifying their beliefs using precise probability distributions. Therefore, it seems natural to describe their uncertain beliefs using sets of probability distributions. There are various possible structures, or classes, for defining set membership of continuous random variables. The Density Ratio Class has desirable properties, but there is no established procedure for eliciting this class. Thus, we propose a method for constructing Density Ratio Classes that builds on conventional quantile or probability elicitation, but allows the expert to state intervals for these quantities. Parametric shape functions, ideally also suggested by the expert, are then used to bound the nonparametric set of shapes of densities that belong to the class and are compatible with the stated intervals. This leads to a natural metric for the size of the class based on the ratio of the total areas under upper and lower bounding shape functions. This ratio will be determined by the characteristics of the shape functions, the scatter of the elicited values, and the explicit expert imprecision, as characterized by the width of the stated intervals. We provide some examples, both didactic and real, and conclude with recommendations for the further development and application of the Density Ratio Class.
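    To make the proposed size metric concrete, the sketch below computes the area ratio for one hypothetical pair of bounding shape functions. The specific normal shapes and their parameters are illustrative assumptions, not elicited values or functions from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical bounding shape functions for a density ratio class:
# a lower shape l(x) and an upper shape u(x) with l(x) <= u(x) everywhere.
def lower_shape(x):
    return norm.pdf(x, loc=10.0, scale=2.0)

def upper_shape(x):
    return 1.8 * norm.pdf(x, loc=10.0, scale=2.5)

# Size metric of the class: ratio of the total areas under the upper and
# lower bounding shape functions (equal to 1 only for a single precise density).
area_lower, _ = quad(lower_shape, -np.inf, np.inf)   # = 1.0 (a normalized density)
area_upper, _ = quad(upper_shape, -np.inf, np.inf)   # = 1.8 (scaled, wider shape)
print(f"class size (area ratio): {area_upper / area_lower:.2f}")
```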

    Information Aggregation in Exponential Family Markets

    We consider the design of prediction market mechanisms known as automated market makers. We show that we can design these mechanisms in the mold of \emph{exponential family distributions}, a popular and well-studied probability distribution template used in statistics. We give a full development of this relationship and explore a range of benefits. We draw connections between the information aggregation of market prices and the belief aggregation of learning agents that rely on exponential family distributions. We develop a natural analysis of the market behavior as well as the price equilibrium under the assumption that the traders exhibit risk aversion according to exponential utility. We also consider similar aspects under alternative models, such as when traders are budget-constrained.
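    For intuition on the cost-function market makers this connection concerns, the sketch below uses the standard Logarithmic Market Scoring Rule (LMSR), whose cost function is the log-partition function of a categorical exponential family, so that instantaneous prices are the market's probability estimates. The liquidity parameter b and the share vectors are illustrative assumptions; this is generic background, not the paper's construction.

```python
import numpy as np

def lmsr_cost(q, b=1.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)): the log-partition
    function of a categorical exponential family over market outcomes."""
    return b * np.log(np.sum(np.exp(q / b)))

def lmsr_prices(q, b=1.0):
    """Instantaneous prices = gradient of C = softmax(q / b), i.e. the market's
    current probability estimate for each outcome."""
    z = np.exp(q / b)
    return z / z.sum()

q = np.zeros(3)                     # outstanding shares for each of 3 outcomes
print(lmsr_prices(q))               # uniform prior: [1/3, 1/3, 1/3]

trade = np.array([2.0, 0.0, 0.0])   # a trader buys 2 shares of outcome 0
cost = lmsr_cost(q + trade) - lmsr_cost(q)
q += trade
print(f"trade cost: {cost:.3f}")    # what the trader pays the market maker
print(lmsr_prices(q))               # price (probability) of outcome 0 rises
```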

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288], with a rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).