3,453 research outputs found

    Learning in the Repeated Secretary Problem

    In the classical secretary problem, one attempts to find the maximum of an unknown and unlearnable distribution through sequential search. In many real-world searches, however, distributions are not entirely unknown and can be learned through experience. To investigate learning in such a repeated secretary problem, we conduct a large-scale behavioral experiment in which people search repeatedly from fixed distributions. In contrast to prior investigations that find no evidence for learning in the classical scenario, in the repeated setting we observe substantial learning resulting in near-optimal stopping behavior. We conduct a Bayesian comparison of multiple behavioral models, which shows that participants' behavior is best described by a class of threshold-based models that contains the theoretically optimal strategy. Fitting such a threshold-based model to data reveals players' estimated thresholds to be surprisingly close to the optimal thresholds after only a small number of games.
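
    As an illustration of the threshold idea, the following is a minimal sketch of backward-induction acceptance thresholds for a simplified full-information variant in which the searcher knows the distribution (here Uniform(0,1)) and maximizes the expected value of the accepted draw; the payoff structure and the behavioral models fitted in the paper may differ.

```python
# Minimal sketch (illustrative assumptions): acceptance thresholds, computed
# by backward induction, for sequential search over n i.i.d. Uniform(0,1)
# draws when the goal is to maximize the expected value of the accepted draw.

def uniform_thresholds(n):
    """thresholds[t]: accept the draw at position t (0-indexed) iff it
    exceeds the expected payoff of continuing optimally."""
    thresholds = [0.0] * n            # the final draw must always be accepted
    cont = 0.5                        # expected value of a single forced draw
    for t in range(n - 2, -1, -1):
        thresholds[t] = cont
        # E[max(X, cont)] for X ~ Uniform(0,1) equals (1 + cont**2) / 2
        cont = (1.0 + cont * cont) / 2.0
    return thresholds

print(uniform_thresholds(5))   # thresholds decrease as fewer draws remain
```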

    Single-object Imaging and Spectroscopy to Enhance Dark Energy Science from LSST

    Single-object imaging and spectroscopy on telescopes with apertures ranging from ~4 m to 40 m have the potential to greatly enhance the cosmological constraints that can be obtained from LSST. Two major cosmological probes will benefit greatly from LSST follow-up: accurate spectrophotometry for nearby and distant Type Ia supernovae will expand the cosmological distance lever arm by unlocking the constraining power of high-z supernovae; and cosmology with time delays of strongly-lensed supernovae and quasars will require additional high-cadence imaging to supplement LSST, adaptive optics imaging or spectroscopy for accurate lens and source positions, and IFU or slit spectroscopy to measure detailed properties of lens systems. We highlight the scientific impact of these two science drivers, and discuss how additional resources will benefit them. For both science cases, LSST will deliver a large sample of objects over both the wide and deep fields in the LSST survey, but additional data to characterize both individual systems and overall systematics will be key to ensuring robust cosmological inference to high redshifts. Community access to large amounts of natural-seeing imaging on ~2-4 m telescopes, adaptive optics imaging and spectroscopy on 8-40 m telescopes, and high-throughput single-target spectroscopy on 4-40 m telescopes will be necessary for LSST time domain cosmology to reach its full potential. In two companion white papers we present the additional gains for LSST cosmology that will come from deep and from wide-field multi-object spectroscopy. (Comment: submitted to the Astro2020 call for science white papers.)
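
    For context on the time-delay probe mentioned above, the standard strong-lensing relation (conventional notation, not taken from the white paper) ties the measured delay between images to a distance combination that scales inversely with the Hubble constant, which is why accurate image positions and lens spectroscopy matter:

```latex
% Standard time-delay relation for images i and j of a strongly lensed source
% (conventional notation; not reproduced from the white paper):
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
\qquad
D_{\Delta t} \equiv (1 + z_{\mathrm{l}})\,\frac{D_{\mathrm{l}} D_{\mathrm{s}}}{D_{\mathrm{ls}}} \propto \frac{1}{H_0},
% where \Delta\phi_{ij} is the Fermat potential difference set by the lens
% mass model and the image and source positions.
```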

    Lay understanding of probability distributions

    How accurate are laypeople’s intuitions about probability distributions of events? The economic and psychological literatures provide opposing answers. A classical economic view assumes that ordinary decision makers consult perfect expectations, while recent psychological research has emphasized biases in perceptions. In this work, we test laypeople’s intuitions about probability distributions. To establish a ground truth against which accuracy can be assessed, we control the information seen by each subject to establish unambiguous normative answers. We find that laypeople’s statistical intuitions can be highly accurate and depend strongly upon the elicitation method used. In particular, we find that eliciting an entire distribution from a respondent using a graphical interface, and then computing simple statistics (such as means, fractiles, and confidence intervals) on this distribution, leads to greater accuracy, on both the individual and aggregate level, than the standard method of asking about the same statistics directly.
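
    To make the "elicit a full distribution, then compute statistics" procedure concrete, here is a small sketch that derives a mean, median, and interval from a hypothetical balls-in-bins response; the bin values and counts are invented, and the paper's interface and stimuli may differ.

```python
# Sketch: compute simple statistics from a graphically elicited distribution
# represented as balls allocated to outcome bins. All values are hypothetical.
import numpy as np

bin_values = np.array([10, 20, 30, 40, 50])   # hypothetical outcome bins
ball_counts = np.array([1, 4, 8, 5, 2])       # hypothetical balls per bin

probs = ball_counts / ball_counts.sum()
mean = float(np.sum(bin_values * probs))

# Fractiles and an 80% interval from the implied empirical distribution:
samples = np.repeat(bin_values, ball_counts)
p10, p50, p90 = np.percentile(samples, [10, 50, 90])

print(f"mean={mean:.1f}, median={p50:.0f}, 80% interval=({p10:.0f}, {p90:.0f})")
```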

    Reasoning the fast and frugal way: Models of bounded rationality.

    Comparing Traditional and LLM-based Search for Consumer Choice: A Randomized Experiment

    Recent advances in the development of large language models are rapidly changing how online applications function. LLM-based search tools, for instance, offer a natural language interface that can accommodate complex queries and provide detailed, direct responses. At the same time, there have been concerns about the veracity of the information provided by LLM-based tools due to potential mistakes or fabrications that can arise in algorithmically generated text. In a set of online experiments we investigate how LLM-based search changes people's behavior relative to traditional search, and what can be done to mitigate overreliance on LLM-based output. Participants in our experiments were asked to solve a series of decision tasks that involved researching and comparing different products, and were randomly assigned to do so with either an LLM-based search tool or a traditional search engine. In our first experiment, we find that participants using the LLM-based tool were able to complete their tasks more quickly, using fewer but more complex queries than those who used traditional search. Moreover, these participants reported a more satisfying experience with the LLM-based search tool. When the information presented by the LLM was reliable, participants using the tool made decisions with a comparable level of accuracy to those using traditional search; however, we observed overreliance on incorrect information when the LLM erred. Our second experiment further investigated this issue by randomly assigning some users to see a simple color-coded highlighting scheme to alert them to potentially incorrect or misleading information in the LLM responses. Overall, we find that this confidence-based highlighting substantially increases the rate at which users spot incorrect information, improving the accuracy of their overall decisions while leaving most other measures unaffected.
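
    The abstract does not specify how the highlighting was implemented; as a rough sketch of the general idea, one could wrap statements whose confidence falls below a threshold in a visually distinct style, assuming per-statement confidence scores are available (all names and values below are hypothetical).

```python
# Rough sketch (hypothetical data and threshold): highlight low-confidence
# statements in an LLM response so users can spot potentially incorrect claims.

LOW_CONF_STYLE = "background-color:#ffd6d6"    # hypothetical "caution" styling

def highlight(statements, threshold=0.7):
    """statements: list of (text, confidence) pairs; returns an HTML string
    in which low-confidence statements are wrapped in a highlighted span."""
    parts = []
    for text, conf in statements:
        if conf < threshold:
            parts.append(f'<span style="{LOW_CONF_STYLE}">{text}</span>')
        else:
            parts.append(text)
    return " ".join(parts)

print(highlight([("Product A weighs 1.2 kg.", 0.95),
                 ("It has a 30-hour battery life.", 0.40)]))
```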

    Using Preferred Outcome Distributions to estimate Value and Probability Weighting Functions in Decisions under Risk

    In this paper we propose the use of preferred outcome distributions as a new method to elicit individuals' value and probability weighting functions in decisions under risk. Extant approaches for the elicitation of these two key ingredients of individuals' risk attitude typically rely on a long, chained sequence of lottery choices. In contrast, preferred outcome distributions can be elicited through an intuitive graphical interface, and, as we show, the information contained in two preferred outcome distributions is sufficient to identify non-parametrically both the value function and the probability weighting function in rank-dependent utility models. To illustrate our method and its advantages, we run an incentive-compatible lab study in which participants use a simple graphical interface - the Distribution Builder (Goldstein et al. 2008) - to construct their preferred outcome distributions, subject to a budget constraint. Results show that estimates of the value function are in line with previous research but that probability weighting biases are diminished, thus favoring our proposed approach based on preferred outcome distributions.
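
    For reference, the rank-dependent utility evaluation that the identification argument targets can be sketched as follows; the power value function, inverse-S weighting function, and parameter values are illustrative textbook forms, not the paper's estimates.

```python
# Illustrative sketch: rank-dependent utility of a gains-only lottery with a
# power value function and an inverse-S probability weighting function.
# Functional forms and parameters are illustrative, not the paper's estimates.

def v(x, alpha=0.88):
    return x ** alpha                       # power value function (gains only)

def w(p, gamma=0.61):
    # Inverse-S weighting function (Tversky & Kahneman, 1992 form)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def rdu(outcomes, probs):
    """Decision weights are differences of the weighted decumulative
    distribution, taken over outcomes ordered from best to worst."""
    ranked = sorted(zip(outcomes, probs), key=lambda pair: pair[0], reverse=True)
    total, cum = 0.0, 0.0
    for x, p in ranked:
        total += (w(cum + p) - w(cum)) * v(x)   # rank-dependent weight * value
        cum += p
    return total

print(rdu([100, 50, 0], [0.2, 0.5, 0.3]))
```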

    The Recognition Heuristic: A Review of Theory and Tests

    The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; and (d) that its use can produce the less-is-more effect – the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference).
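
    As a minimal sketch, the core inference rule reviewed above can be written as a simple pairwise decision procedure (the example objects and recognition set below are invented for illustration).

```python
# Minimal sketch of the recognition heuristic for pairwise inference:
# if exactly one of two objects is recognized, infer that it scores higher on
# the criterion; otherwise fall back (here: guess at random).

import random

def recognition_heuristic(a, b, recognized):
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return random.choice([a, b])    # both or neither recognized: guess

# Illustrative use: which city has the larger population?
print(recognition_heuristic("Munich", "Heidenheim", recognized={"Munich"}))
```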
