Finding good enough coins under symmetric and asymmetric information
We study the problem of returning m coins with biases above 0.5. These good enough coins returned by the agent should be acceptable to the authority, meeting the authority's Family-Wise Error Rate constraint. We design adaptive algorithms that invoke the Sequential Probability Ratio Test (SPRT) to find these good enough coins. We consider scenarios that differ in the information available about the underlying Bayesian setting. The symmetry or asymmetry of the setup, i.e., the difference between what the agent and the authority know about the underlying prior and its support, presents different challenges. We also comment on the algorithms' sample complexity.
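The SPRT at the heart of such algorithms is easy to state concretely. Below is a minimal Python sketch of Wald's SPRT for a single coin; the hypothesized biases (p0 = 0.5 vs. p1 = 0.6) and error rates are illustrative choices, not the paper's parameters:

```python
import math
import random

def sprt_coin(flip, p0=0.5, p1=0.6, alpha=0.05, beta=0.05, max_flips=10_000):
    """Wald's SPRT for H0: bias = p0 vs H1: bias = p1 on one coin.

    `flip` is a callable returning 1 (heads) or 0 (tails). Returns
    ("H1", n) if the coin is declared good enough after n flips,
    ("H0", n) if it is rejected.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at/above this bound
    lower = math.log(beta / (1 - alpha))   # accept H0 at/below this bound
    llr = 0.0
    for n in range(1, max_flips + 1):
        x = flip()
        # Incremental log-likelihood ratio of the observed flip.
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", max_flips

# Example: a coin with true bias 0.62 is usually accepted as good enough.
decision, flips = sprt_coin(lambda: random.random() < 0.62)
print(decision, flips)
```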
The Sample Complexity of Search over Multiple Populations
This paper studies the sample complexity of searching over multiple populations. We consider a large number of populations, each corresponding to either distribution P0 or P1. The goal of the search problem studied here is to find one population corresponding to distribution P1 with as few samples as possible. The main contribution is to quantify the number of samples needed to correctly find one such population. We consider two general approaches: non-adaptive sampling methods, which sample each population a predetermined number of times until a population following P1 is found, and adaptive sampling methods, which employ sequential sampling schemes for each population. We first derive a lower bound on the number of samples required by any sampling scheme. We then consider an adaptive procedure consisting of a series of sequential probability ratio tests and show it comes within a constant factor of the lower bound. We give explicit expressions for this constant when samples of the populations follow Gaussian and Bernoulli distributions. An alternative adaptive scheme is discussed which does not require full knowledge of P1 and comes within a constant factor of the optimal scheme. For comparison, a lower bound on the sampling requirements of any non-adaptive scheme is presented. (To appear in IEEE Transactions on Information Theory.)
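The adaptive procedure analyzed here chains one SPRT per population: each test either declares its population P1 (stop) or P0 (abandon and move on). A minimal Bernoulli sketch of that idea follows; the parameter values and the wrap-around retry are illustrative, and it assumes at least one population truly follows P1:

```python
import math

def find_p1_population(sample, num_pops, p0=0.3, p1=0.7, alpha=1e-3, beta=1e-3):
    """Scan Bernoulli populations with one SPRT each; return the index of
    the first population declared to follow P1.

    `sample(i)` draws one observation (0 or 1) from population i.
    Populations declared P0 are abandoned; the scan wraps around so an
    unlucky rejection of the lone P1 population is not fatal (assumes at
    least one P1 population exists).
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    while True:
        for i in range(num_pops):
            llr = 0.0
            while lower < llr < upper:
                x = sample(i)
                llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:      # declared P1: stop and return it
                return i
            # llr <= lower: declared P0, abandon and move to the next one
```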
Prediction Markets: Alternative Mechanisms for Complex Environments with Few Traders
Double auction prediction markets have proven successful in large-scale applications such as elections and sporting events. Consequently, several large corporations have adopted these markets for smaller-scale internal applications where information may be complex and the number of traders is small. Using laboratory experiments, we test the performance of the double auction in complex environments with few traders and compare it to three alternative mechanisms. When information is complex, we find that an iterated poll (or Delphi method) outperforms the double auction mechanism. We present five behavioral observations that may explain why the poll performs better in these settings.
Finding a most biased coin with fewest flips
We study the problem of learning a most biased coin among a set of coins by tossing the coins adaptively. The goal is to minimize the number of tosses until we identify a coin i* whose posterior probability of being most biased is at least 1 - delta for a given delta. Under a particular probabilistic model, we give an optimal algorithm, i.e., an algorithm that minimizes the expected number of future tosses. The problem is closely related to finding the best arm in the multi-armed bandit problem using adaptive strategies. Our algorithm employs an optimal adaptive strategy -- a strategy that performs the best possible action at each step after observing the outcomes of all previous coin tosses. Consequently, our algorithm is also optimal for any starting history of outcomes. To our knowledge, this is the first algorithm that employs an optimal adaptive strategy under a Bayesian setting for this problem. Our proof of optimality employs tools from the field of Markov games.
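To make the stopping condition concrete, here is a simple Bayesian sketch: independent Beta(1, 1) priors on each coin's bias, a Monte Carlo estimate of each coin's posterior probability of being most biased, and a "toss the current leader" rule. The tossing rule is a plain heuristic stand-in, not the paper's provably optimal strategy:

```python
import numpy as np

def most_biased_coin(flip, k, delta=0.05, mc=20_000, seed=0):
    """Toss k coins adaptively until some coin is most biased with
    posterior probability at least 1 - delta.

    `flip(i)` tosses coin i and returns truthy for heads. Posterior
    win probabilities are estimated by sampling the k Beta posteriors.
    """
    rng = np.random.default_rng(seed)
    heads = np.zeros(k)
    tails = np.zeros(k)
    while True:
        # Draw each coin's bias from its Beta posterior and count how
        # often each coin comes out on top across mc joint draws.
        draws = rng.beta(heads + 1, tails + 1, size=(mc, k))
        wins = np.bincount(draws.argmax(axis=1), minlength=k) / mc
        leader = int(wins.argmax())
        if wins[leader] >= 1 - delta:
            return leader
        if flip(leader):            # heuristic: refine the current leader
            heads[leader] += 1
        else:
            tails[leader] += 1
```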
Bayesian Inference of Markov Transition Rates
To better understand the world around us, we utilize a series of mathematical models to describe processes in meteorology, ecology, finance, and biology. These models (and reality) are usually continuous in time, but it is impractical to assume we can measure quantities continuously and precisely, so researchers are limited to a series of discrete measurements over the course of a study from which they must infer features of interest in the model. To accomplish this, researchers turn to optimal experimental designs, which can achieve these goals more rapidly and at lower experimental cost than non-optimal designs. This thesis focuses on designing optimal sampling schemes to estimate transition rates for two-state continuous-time Markov chains, which are adopted to characterize political affiliation, spread of disease, and even decision-making models.
We employ Bayesian inference techniques to derive three adaptive methods whose sampling times change with additional evidence. The first two rely on log-likelihood ratio updates to select between two possible parameter values and are thus suited to cases where it is sufficient to determine whether the transition rate is high or low. The series of updates provides evidence for or against a particular hypothesis, culminating in the model's convergence to a parameter estimate with a given confidence. The final method minimizes the expected variance of the posterior distribution, allowing for inference over a continuum of possible transition rates. These methods are then compared to periodic (non-adaptive) sampling schemes, which fix sampling times across all trials, for both unidirectional and symmetric bidirectional two-state Markov chains, to determine which provide accurate estimates at minimal cost.
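For a two-state chain, the log-likelihood ratio updates in the first two methods reduce to closed-form transition probabilities between sampling times. A minimal sketch under that setup (the rate values and sampling gaps below are illustrative, not the thesis's designs):

```python
import math

def p_matrix(lam, mu, t):
    """Transition probabilities of a two-state CTMC (0 -> 1 at rate lam,
    1 -> 0 at rate mu) observed after a gap of length t."""
    s = lam + mu
    e = math.exp(-s * t)
    p01 = (lam / s) * (1 - e)
    p10 = (mu / s) * (1 - e)
    return [[1 - p01, p01],
            [p10, 1 - p10]]

def llr_update(obs_from, obs_to, t, rates_h1, rates_h0):
    """One log-likelihood-ratio increment for a jump observed between two
    sampling times, comparing hypothesized (lam, mu) pairs H1 vs H0."""
    p1 = p_matrix(*rates_h1, t)[obs_from][obs_to]
    p0 = p_matrix(*rates_h0, t)[obs_from][obs_to]
    return math.log(p1 / p0)

# Example: does the 0 -> 1 rate look high (1.0) or low (0.2)?
llr = 0.0
for prev, cur, gap in [(0, 1, 0.5), (1, 1, 0.5), (1, 0, 1.0), (0, 1, 0.3)]:
    llr += llr_update(prev, cur, gap, rates_h1=(1.0, 0.5), rates_h0=(0.2, 0.5))
print(llr)   # positive values favor the high-rate hypothesis
```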
Are Investors’ Gains and Losses from Securities Fraud Equal Over Time? Theory and Evidence
Most leading securities regulation scholars argue that compensating securities fraud victims is inefficient. They maintain that because diversified investors that trade frequently are as likely to gain from trading in fraud-tainted stocks as they are to suffer harm from doing so, these investors should have no expected net losses from fraud over the long term. This assertion, which analogizes trading in fraud-tainted stocks to participating in a coin toss game in which players win $1 on heads and lose $1 on tails, is problematic for a number of reasons. First, even if we accept this analogy, probability theory holds that as the number of trials (in this context, purchases and sales of fraud-tainted stock) increases, the probability of breaking even decreases. Second, though it is true that with increased trials the likelihood of the proportion of gains and losses being roughly equal will increase, investors generally do not engage in enough trading activity to have a reasonable degree of certainty of reaching the expected proportion of equal gains and losses. Finally, given the variation in fraud-related gains and losses in each stock trade (i.e., the payoffs are not constant), the coin toss analogy is inappropriate. This study, using observational data and computer-simulated trading data on 14 investor prototypes, sets out to test the conventional wisdom and reveals not only that undiversified individual investors can suffer significant net losses from securities fraud over a 10-year period, but also that large numbers of diversified institutional investors can as well. These results refute the claims of fraud compensation opponents who assert that a diversified investor that engages in active trading should suffer little or no fraud-related net harm over the long term and that individual investors can protect themselves fully from fraud-related net harm by investing through mutual funds or other intermediaries.
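The first probability-theory point is easy to verify: in 2n fair win/lose-$1 tosses, the chance of exactly breaking even is C(2n, n)/4^n, which behaves like 1/sqrt(pi*n) and shrinks as trading activity grows. A quick check of that simplified equal-payoff analogy (the very analogy the article criticizes):

```python
from math import comb, pi, sqrt

def p_break_even(n_pairs):
    """Probability of exactly breaking even after 2*n_pairs fair +/-$1 bets."""
    return comb(2 * n_pairs, n_pairs) / 4 ** n_pairs

for n in (5, 50, 500):
    # exact break-even probability vs. the 1/sqrt(pi*n) approximation
    print(2 * n, p_break_even(n), 1 / sqrt(pi * n))
```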
Organization of Self-Knowledge Predicts Unethical Behavior
People represent the self (self-structure) using cognitive strategies that either confront (integration) or avoid (compartmentalization) negative self-information (Showers, 1992). Previous research has found that compartmentalization predicts dishonesty on academic performance tasks under neutral conditions in the laboratory (Showers, Thomas, & Grundy, 2015; Thomas, 2015). The current experiments extend this work by using an online paradigm to assess cheating via a coin flip procedure (Bryan, Adams, & Monin, 2013). Here, two experiments seek to replicate the association between compartmentalization and dishonesty under various priming conditions. In Experiment 1, individuals with compartmentalized selves were more dishonest than were individuals with integrative selves, especially under conditions of a “cheater” prime. In Experiment 2, results showed that individuals with integrative selves remained relatively honest compared to individuals with compartmentalized selves even under conditions of greater temptation (money prime). These findings are consistent with the model that individuals with compartmentalized selves defensively avoid negative interpretations of their own behavior. Instead, they may rationalize their dishonesty as normative or even self-enhancing. Conversely, individuals with integrative selves vigilantly process dishonest behavior as having negative implications for the self, thereby motivating themselves to behave more honestly. This model of defensive self-structure lays the framework for a more comprehensive understanding of ethical behavior.
Quantum Algorithm Implementations for Beginners
As quantum computers become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have less than 100 qubits, quantum computing hardware is widely expected to grow in terms of qubit count, quality, and connectivity. This review aims to explain the principles of quantum programming, which are quite different from classical programming, with straightforward algebra that makes understanding of the underlying fascinating quantum mechanical principles optional. We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion. We show how these algorithms can be implemented on IBM's quantum computer, and in each case, we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists, physicists, and engineers to quantum algorithms and provides a blueprint for their implementations.
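In the spirit of the review's "straightforward algebra" framing, a quantum circuit is just matrix-vector arithmetic. The textbook Bell-state preparation below (a generic example, not one of the paper's 20 surveyed algorithms) shows the idea with nothing but NumPy:

```python
import numpy as np

# Two-qubit Bell-state preparation as plain linear algebra.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0,
                 [0, 1, 0, 0],                 # target  = qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0            # start in |00>
state = CNOT @ np.kron(H, I) @ state           # H on qubit 0, then CNOT
print(np.round(state, 3))                      # (|00> + |11>)/sqrt(2)
print(state ** 2)                              # measurement probabilities
```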
The Effects of Housing Conditions and Methylphenidate on Two Volitional Inhibition Tasks
The failure to withhold inappropriate behavior is a central component of most impulse control disorders, including Attention Deficit Hyperactivity Disorder (ADHD). The present study examined the effects of housing environment and methylphenidate (a drug often prescribed for ADHD) on the performance of rats in two response inhibition tasks: differential reinforcement of low rate (DRL) and fixed minimum interval (FMI). Both tasks required rats to wait a fixed amount of time (6 s) before emitting a reinforced response. The capacity to withhold the target response (volitional inhibition) and timing precision were estimated on the basis of performance in each of the tasks. Paradoxically, rats housed in a mildly enriched environment that included a conspecific displayed less volitional inhibition in both tasks compared to rats housed in an isolated environment. Enriched housing, however, increased timing precision. Acute administration of methylphenidate partially reversed the effects of enriched housing. Implications of these results in the assessment and treatment of ADHD-related impulsivity are discussed.