
    Finding a most biased coin with fewest flips

    We study the problem of learning a most biased coin among a set of coins by tossing the coins adaptively. The goal is to minimize the number of tosses until we identify a coin i* whose posterior probability of being most biased is at least 1-delta for a given delta. Under a particular probabilistic model, we give an optimal algorithm, i.e., an algorithm that minimizes the expected number of future tosses. The problem is closely related to finding the best arm in the multi-armed bandit problem using adaptive strategies. Our algorithm employs an optimal adaptive strategy -- a strategy that performs the best possible action at each step after observing the outcomes of all previous coin tosses. Consequently, our algorithm is also optimal for any starting history of outcomes. To our knowledge, this is the first algorithm that employs an optimal adaptive strategy under a Bayesian setting for this problem. Our proof of optimality employs tools from the field of Markov games.
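    The stopping rule in this abstract can be illustrated with a short Monte Carlo sketch (our own illustration, not the paper's algorithm): assuming independent uniform Beta(1,1) priors over each coin's bias, estimate each coin's posterior probability of being most biased and stop flipping once the largest estimate reaches 1-delta. The prior choice and the helper name `prob_most_biased` are assumptions for this sketch.

```python
import random

def prob_most_biased(counts, n_samples=20000, seed=0):
    """Monte Carlo estimate of the posterior probability that each coin
    is the most biased, under independent uniform Beta(1,1) priors.
    counts: list of (heads, tails) observed for each coin."""
    rng = random.Random(seed)
    wins = [0] * len(counts)
    for _ in range(n_samples):
        # Draw one bias per coin from its Beta(heads+1, tails+1) posterior.
        draws = [rng.betavariate(h + 1, t + 1) for h, t in counts]
        wins[draws.index(max(draws))] += 1
    return [w / n_samples for w in wins]

# Stop flipping once max(post) >= 1 - delta for the chosen delta.
post = prob_most_biased([(9, 1), (4, 6), (5, 5)])
```

    With 9 heads out of 10, coin 0 dominates the posterior, so a small delta would already be met or nearly met here.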

    The Sample Complexity of Search over Multiple Populations

    This paper studies the sample complexity of searching over multiple populations. We consider a large number of populations, each corresponding to either distribution P0 or P1. The goal of the search problem studied here is to find one population corresponding to distribution P1 with as few samples as possible. The main contribution is to quantify the number of samples needed to correctly find one such population. We consider two general approaches: non-adaptive sampling methods, which sample each population a predetermined number of times until a population following P1 is found, and adaptive sampling methods, which employ sequential sampling schemes for each population. We first derive a lower bound on the number of samples required by any sampling scheme. We then consider an adaptive procedure consisting of a series of sequential probability ratio tests, and show it comes within a constant factor of the lower bound. We give explicit expressions for this constant when samples of the populations follow Gaussian and Bernoulli distributions. An alternative adaptive scheme is discussed which does not require full knowledge of P1, and comes within a constant factor of the optimal scheme. For comparison, a lower bound on the sampling requirements of any non-adaptive scheme is presented. (Comment: To appear, IEEE Transactions on Information Theory.)
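    As a rough illustration of the sequential-test building block mentioned above, here is a minimal Wald sequential probability ratio test for Bernoulli samples. This is the standard textbook construction, not the paper's specific procedure; the error rates `alpha` and `beta` and the example parameters are illustrative.

```python
import math

def sprt_bernoulli(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT deciding between H0: p = p0 and H1: p = p1 from a
    stream of 0/1 samples. Returns (decision, samples_used), where
    decision is 'H0', 'H1', or None if the stream ran out first."""
    a = math.log(beta / (1 - alpha))   # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)   # upper threshold: accept H1
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Accumulate the log-likelihood ratio of H1 vs H0.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= b:
            return 'H1', n
        if llr <= a:
            return 'H0', n
    return None, len(samples)

hit = sprt_bernoulli([1] * 20, p0=0.5, p1=0.9)   # stream of successes
miss = sprt_bernoulli([0] * 20, p0=0.5, p1=0.9)  # stream of failures
```

    A search procedure in the spirit of the abstract would run such a test on each population in turn, discarding populations declared H0 and stopping at the first declared H1.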

    Prediction Markets: Alternative Mechanisms for Complex Environments with Few Traders

    Double auction prediction markets have proven successful in large-scale applications such as elections and sporting events. Consequently, several large corporations have adopted these markets for smaller-scale internal applications where information may be complex and the number of traders is small. Using laboratory experiments, we test the performance of the double auction in complex environments with few traders and compare it to three alternative mechanisms. When information is complex we find that an iterated poll (or Delphi method) outperforms the double auction mechanism. We present five behavioral observations that may explain why the poll performs better in these settings

    Are Investors’ Gains and Losses from Securities Fraud Equal Over Time? Theory and Evidence

    Most leading securities regulation scholars argue that compensating securities fraud victims is inefficient. They maintain that because diversified investors that trade frequently are as likely to gain from trading in fraud-tainted stocks as they are to suffer harm from doing so, these investors should have no expected net losses from fraud over the long term. This assertion, which analogizes trading in fraud-tainted stocks to participating in a coin toss game in which players win $1 on heads and lose $1 on tails, is problematic for a number of reasons. First, even if we accept this analogy, probability theory holds that as the number of trials (in this context, purchases and sales of fraud-tainted stock) increases, the probability of exactly breaking even decreases. Second, though it is true that with increased trials the likelihood of the proportion of gains and losses being roughly equal will increase, investors generally do not engage in enough trading activity to have a reasonable degree of certainty of reaching the expected proportion of equal gains and losses. Finally, given the variation in fraud-related gains and losses in each stock trade (i.e., the payoffs are not constant), the coin toss analogy is inappropriate. This study, using observational data and computer-simulated trading data on 14 investor prototypes, sets out to test the conventional wisdom and reveals not only that undiversified individual investors can suffer significant net losses from securities fraud over a 10-year period, but also that large numbers of diversified institutional investors can as well. These results refute the claims of fraud compensation opponents who assert that a diversified investor that engages in active trading should suffer little or no fraud-related net harm over the long term and that individual investors can protect themselves fully from fraud-related net harm by investing through mutual funds or other intermediaries.
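    The first objection can be checked directly for the idealized coin toss game: after 2n fair $1 bets, the probability of ending exactly even is C(2n, n)/4^n, which shrinks on the order of 1/sqrt(pi*n). A quick numeric check (our own illustration of the probability claim, not the study's simulation):

```python
from math import comb

def break_even_prob(n_pairs):
    """Probability of ending exactly even after 2*n_pairs fair $1 bets:
    C(2n, n) / 4^n, the central binomial term of the fair coin."""
    return comb(2 * n_pairs, n_pairs) / 4 ** n_pairs

# The break-even probability falls as the number of bets grows.
probs = [break_even_prob(n) for n in (1, 10, 100)]
```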

    ORGANIZATION OF SELF-KNOWLEDGE PREDICTS UNETHICAL BEHAVIOR

    People represent the self (self-structure) using cognitive strategies that either confront (integration) or avoid (compartmentalization) negative self-information (Showers, 1992). Previous research has found that compartmentalization predicts dishonesty on academic performance tasks under neutral conditions in the laboratory (Showers, Thomas, & Grundy, 2015; Thomas, 2015). The current experiments extend this work by using an online paradigm to assess cheating via a coin flip procedure (Bryan, Adams, & Monin, 2013). Here, two experiments seek to replicate the association between compartmentalization and dishonesty under various priming conditions. In Experiment 1, individuals with compartmentalized selves were more dishonest than were individuals with integrative selves, especially under conditions of a “cheater” prime. In Experiment 2, results showed that individuals with integrative selves remained relatively honest compared to individuals with compartmentalized selves even under conditions of greater temptation (money prime). These findings are consistent with the model that individuals with compartmentalized selves defensively avoid negative interpretations of their own behavior. Instead, they may rationalize their dishonesty as normative or even self-enhancing. Conversely, individuals with integrative selves vigilantly process dishonest behavior as having negative implications for the self, thereby motivating themselves to behave more honestly. This model of defensive self-structure lays the framework for a more comprehensive understanding of ethical behavior.

    Quantum Algorithm Implementations for Beginners

    As quantum computers become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have fewer than 100 qubits, quantum computing hardware is widely expected to grow in terms of qubit count, quality, and connectivity. This review aims to explain the principles of quantum programming, which are quite different from classical programming, with straightforward algebra that makes understanding of the underlying fascinating quantum mechanical principles optional. We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion. We show how these algorithms can be implemented on IBM's quantum computer, and in each case, we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists, physicists, and engineers to quantum algorithms and provides a blueprint for their implementations.
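    To give a flavor of the "straightforward algebra" the review promises, a single-qubit Hadamard gate can be simulated with plain 2x2 matrix arithmetic. This toy simulator is our own sketch, not code from the review: applying H once to |0> produces an equal superposition, and applying it twice interferes back to |0>.

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Hadamard gate: H = (1/sqrt(2)) * [[1, 1], [1, -1]]
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = apply_gate(H, [1.0, 0.0])    # H|0> = (|0> + |1>)/sqrt(2)
probs = [amp ** 2 for amp in state]  # measurement probabilities: ~[0.5, 0.5]
state2 = apply_gate(H, state)        # HH|0> = |0> by interference
```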

    The Effects of Housing Conditions and Methylphenidate on Two Volitional Inhibition Tasks

    The failure to withhold inappropriate behavior is a central component of most impulse control disorders, including Attention Deficit Hyperactivity Disorder (ADHD). The present study examined the effects of housing environment and methylphenidate (a drug often prescribed for ADHD) on the performance of rats in two response inhibition tasks: differential reinforcement of low rate (DRL) and fixed minimum interval (FMI). Both tasks required rats to wait a fixed amount of time (6 s) before emitting a reinforced response. The capacity to withhold the target response (volitional inhibition) and timing precision were estimated on the basis of performance in each of the tasks. Paradoxically, rats housed in a mildly enriched environment that included a conspecific displayed less volitional inhibition in both tasks compared to rats housed in an isolated environment. Enriched housing, however, increased timing precision. Acute administration of methylphenidate partially reversed the effects of enriched housing. Implications of these results in the assessment and treatment of ADHD-related impulsivity are discussed.
    Dissertation/Thesis, M.A. Psychology, 201
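    The DRL contingency described above, under which a response is reinforced only if at least 6 s have elapsed since the previous response, can be sketched as follows. The function name and the assumption that the session clock starts at zero are ours, and the FMI task, whose details differ, is not modeled.

```python
def drl_reinforced(response_times, wait=6.0):
    """Under a DRL schedule, a response earns reinforcement only if at
    least `wait` seconds have elapsed since the previous response;
    premature responses reset the clock. Times are seconds from session
    start (assumed to begin the first waiting interval)."""
    reinforced, last = [], 0.0
    for t in response_times:
        if t - last >= wait:
            reinforced.append(t)
        last = t  # every response, reinforced or not, restarts the wait
    return reinforced

rewarded = drl_reinforced([2.0, 9.0, 12.0, 20.0])  # only 9.0 and 20.0 qualify
```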

    Significance testing of word frequencies in corpora

    Finding out whether a word occurs significantly more often in one text or corpus than in another is an important question in analysing corpora. As noted by Kilgarriff (Language is never, ever, ever, random, Corpus Linguistics and Linguistic Theory, 2005; 1(2): 263–76), the use of the χ2 and log-likelihood ratio tests is problematic in this context, as they are based on the assumption that all samples are statistically independent of each other. However, words within a text are not independent. As pointed out in Kilgarriff (Comparing corpora, International Journal of Corpus Linguistics, 2001; 6(1): 1–37) and Paquot and Bestgen (Distinctive words in academic writing: a comparison of three statistical tests for keyword extraction. In Jucker, A., Schreier, D., and Hundt, M. (eds), Corpora: Pragmatics and Discourse. Amsterdam: Rodopi, 2009, pp. 247–69), it is possible to represent the data differently and employ other tests, such that we assume independence at the level of texts rather than individual words. This allows us to account for the distribution of words within a corpus. In this article we compare the significance estimates of various statistical tests in a controlled resampling experiment and in a practical setting, studying differences between texts produced by male and female fiction writers in the British National Corpus. We find that the choice of the test, and hence data representation, matters. We conclude that significance testing can be used to find consequential differences between corpora, but that assuming independence between all words may lead to overestimating the significance of the observed differences, especially for poorly dispersed words. We recommend the use of the t-test, Wilcoxon rank sum test, or bootstrap test for comparing word frequencies across corpora.
    Peer reviewed
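    One of the recommended procedures, a bootstrap test operating on per-text frequencies so that texts rather than word tokens are the independent units, might be sketched as follows. This is a generic pooled-resampling version assumed for illustration, not taken from the article; the function name and defaults are ours.

```python
import random

def bootstrap_freq_test(counts_a, counts_b, n_boot=5000, seed=1):
    """Two-sided bootstrap test for a difference in mean per-text word
    frequency between two corpora. counts_a / counts_b hold one
    frequency value per text, so texts are the resampling units."""
    rng = random.Random(seed)
    observed = sum(counts_a) / len(counts_a) - sum(counts_b) / len(counts_b)
    pooled = counts_a + counts_b  # pool texts to simulate the null
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(pooled) for _ in pooled]
        a, b = sample[:len(counts_a)], sample[len(counts_a):]
        diff = sum(a) / len(a) - sum(b) / len(b)
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_boot  # bootstrap p-value

p_diff = bootstrap_freq_test([10, 11, 12, 10], [1, 2, 1, 2])  # clear gap
p_same = bootstrap_freq_test([3, 4, 5], [3, 4, 5])            # identical
```

    Because resampling happens at the text level, a word that is heavily concentrated in a few texts (poorly dispersed) inflates the resampled variance and, appropriately, weakens the evidence against the null.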