Sequential stopping for high-throughput experiments
In high-throughput experiments, the sample size is typically chosen informally. Most formal sample-size calculations depend critically on prior knowledge. We propose a sequential strategy that, by updating knowledge when new data are available, depends less critically on prior assumptions. Experiments are stopped or continued based on the potential benefits in obtaining additional data. The underlying decision-theoretic framework guarantees the design to proceed in a coherent fashion. We propose intuitively appealing, easy-to-implement utility functions. As in most sequential design problems, an exact solution is prohibitive. We propose a simulation-based approximation that uses decision boundaries. We apply the method to RNA-seq, microarray, and reverse-phase protein array studies and show its potential advantages. The approach has been added to the Bioconductor package gaga.
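The flavor of such a simulation-based stopping rule can be conveyed with a deliberately simplified sketch: a Beta-Binomial model in which sampling continues only while the expected reduction in posterior variance from one more observation exceeds a per-sample cost. The uniform prior, the variance-reduction utility, and the cost value are all assumptions chosen for illustration; the gaga implementation uses richer utilities and decision boundaries.

```python
import random

def stop_or_continue(successes, trials, cost_per_sample=0.001, n_sim=2000, rng=None):
    """Decide whether to stop sampling. Simulates one more observation from
    the posterior predictive of a Beta(1,1)-prior proportion and measures the
    expected reduction in posterior variance; stop when that expected gain no
    longer exceeds the per-sample cost. (Illustrative utility, not gaga's.)"""
    rng = rng or random.Random(0)
    a, b = 1 + successes, 1 + trials - successes
    var_now = a * b / ((a + b) ** 2 * (a + b + 1))
    gain = 0.0
    for _ in range(n_sim):
        p = rng.betavariate(a, b)            # plausible true success rate
        y = 1 if rng.random() < p else 0     # posterior-predictive observation
        a2, b2 = a + y, b + (1 - y)
        var_next = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
        gain += (var_now - var_next) / n_sim
    return "stop" if gain < cost_per_sample else "continue"
```

With few observations the expected gain is large and the rule continues; once the posterior is tight, one more sample barely helps and the rule stops.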
Handling Covariates in the Design of Clinical Trials
There has been a split in the statistics community about the need for taking
covariates into account in the design phase of a clinical trial. There are many
advocates of using stratification and covariate-adaptive randomization to
promote balance on certain known covariates. However, balance does not always
promote efficiency or ensure more patients are assigned to the better
treatment. We describe these procedures, including model-based procedures, for
incorporating covariates into the design of clinical trials, and give examples
where balance, efficiency and ethical considerations may be in conflict. We
advocate a new class of procedures, covariate-adjusted response-adaptive (CARA)
randomization procedures that attempt to optimize both efficiency and ethical
considerations, while maintaining randomization. We review all these
procedures, present a few new simulation studies, and conclude with our
philosophy. Comment: Published at http://dx.doi.org/10.1214/08-STS269 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
On optimal designs for clinical trials: An updated review
Optimization of clinical trial designs can help investigators achieve higher quality results for the given resource constraints. The present paper gives an overview of optimal designs for various important problems that arise in different stages of clinical drug development, including phase I dose–toxicity studies; phase I/II studies that consider early efficacy and toxicity outcomes simultaneously; phase II dose–response studies driven by multiple comparisons (MCP), modeling techniques (Mod), or their combination (MCP–Mod); phase III randomized controlled multi-arm multi-objective clinical trials to test differences among several treatment groups; and population pharmacokinetics–pharmacodynamics experiments. We find that modern literature is very rich with optimal design methodologies that can be utilized by clinical researchers to improve the efficiency of drug development.
Online Ascending Auctions for Gradually Expiring Items
In this paper we consider online auction mechanisms for the allocation of M items that are identical to each other except for the fact that they have different expiration times, and each item must be allocated before it expires. Players arrive at different times, and wish to buy one item before their deadline. The main difficulty is that players act "selfishly" and may misreport their values, deadlines, or arrival times. We begin by showing that the usual notion of truthfulness (where players follow a single dominant strategy) cannot be used in this case, since any (deterministic) truthful auction cannot obtain better than an M-approximation of the social welfare. Therefore, instead of designing auctions in which players should follow a single strategy, we design two auctions that perform well under a wide class of selfish, "semi-myopic", strategies. For every combination of such strategies, the auction is associated with a different algorithm, and so we have a family of "semi-myopic" algorithms. We show that any algorithm in this family obtains a 3-approximation, and by this conclude that our auctions will perform well under any choice of such semi-myopic behaviors. We next turn to provide a game-theoretic justification for acting in such a semi-myopic way. We suggest a new notion of "Set-Nash" equilibrium, where we cannot pin-point a single best-response strategy, but rather only a set of possible best-response strategies. We show that our auctions have a Set-Nash equilibrium which is all semi-myopic, hence guarantees a 3-approximation. We believe that this notion is of independent interest.
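The setting can be made concrete with a toy greedy allocation: at each time step the soonest-expiring remaining item goes to the highest-value eligible player. This greedy is only one plausible member of the "semi-myopic" family the abstract describes, written here as an assumption for illustration; the paper's actual auctions and their 3-approximation analysis are more involved.

```python
def greedy_expiring(item_expiries, players):
    """Allocate gradually expiring items greedily (illustrative sketch).

    item_expiries: list of item expiration times.
    players: list of (value, arrival, deadline) tuples.
    At each time step, the soonest-expiring surviving item is given to the
    highest-value player who has arrived, is unserved, and whose deadline
    has not yet passed. Returns, per player, the expiry of the item received
    (or None)."""
    items = sorted(item_expiries)
    horizon = items[-1]
    served = [None] * len(players)
    for t in range(1, horizon + 1):
        items = [e for e in items if e >= t]   # items that expired are gone
        if not items:
            break
        candidates = [i for i, (v, a, d) in enumerate(players)
                      if served[i] is None and a <= t <= d]
        if candidates:
            best = max(candidates, key=lambda i: players[i][0])
            served[best] = items.pop(0)
    return served
```

Note how the item expiring at time 1 goes to the high-value short-deadline player, while a later-expiring item serves a lower-value player — the tension between myopic value and deadlines that the paper's mechanisms must manage.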
Empirical Evaluation of the Parallel Distribution Sweeping Framework on Multicore Architectures
In this paper, we perform an empirical evaluation of the Parallel External
Memory (PEM) model in the context of geometric problems. In particular, we
implement the parallel distribution sweeping framework of Ajwani, Sitchinava
and Zeh to solve batched 1-dimensional stabbing max problem. While modern
processors consist of sophisticated memory systems (multiple levels of caches,
set associativity, TLB, prefetching), we empirically show that algorithms
designed in simple models, that focus on minimizing the I/O transfers between
shared memory and single level cache, can lead to efficient software on current
multicore architectures. Our implementation exhibits significantly fewer
accesses to slow DRAM and, therefore, outperforms traditional approaches based
on plane sweep and two-way divide and conquer. Comment: Longer version of an ESA'13 paper.
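The batched 1-dimensional stabbing-max problem itself is easy to state: given weighted intervals and a batch of query points, report for each point the maximum weight among intervals containing it. A plain sequential plane sweep with a lazily pruned max-heap — the baseline that the parallel distribution sweeping framework parallelizes — might look like:

```python
import heapq

def stabbing_max(intervals, queries):
    """For each query point, report the maximum weight among closed
    intervals [lo, hi] that contain it. Sequential plane sweep: process
    interval openings and queries in order of x-coordinate, keeping open
    intervals in a max-heap and lazily discarding those that have ended."""
    events = sorted([(lo, 0, hi, w) for lo, hi, w in intervals] +
                    [(q, 1, i, None) for i, q in enumerate(queries)])
    heap = []                      # entries (-weight, hi) for open intervals
    out = [None] * len(queries)
    for x, kind, a, w in events:
        if kind == 0:                       # interval [x, a] with weight w opens
            heapq.heappush(heap, (-w, a))
        else:                               # query at x; a is its original index
            while heap and heap[0][1] < x:  # drop intervals ended before x
                heapq.heappop(heap)
            out[a] = -heap[0][0] if heap else None
    return out
```

This is a sketch of the problem only, not of the PEM implementation evaluated in the paper, which distributes the sweep across processors and cache levels.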
Auctions with Severely Bounded Communication
We study auctions with severe bounds on the communication allowed: each
bidder may only transmit t bits of information to the auctioneer. We consider
both welfare- and profit-maximizing auctions under this communication
restriction. For both measures, we determine the optimal auction and show that
the loss incurred relative to unconstrained auctions is mild. We prove
non-surprising properties of these kinds of auctions, e.g., that in optimal
mechanisms bidders simply report the interval in which their valuation lies,
as well as some surprising properties, e.g., that asymmetric auctions are
better than symmetric ones and that multi-round auctions reduce the
communication complexity only by a linear factor.
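The "report an interval" structure can be sketched as a t-bit quantizer. The uniform grid and the simple highest-interval tie-breaking rule below are assumptions for illustration only; the paper's optimal mechanisms generally use non-uniform thresholds, and its asymmetry result means the thresholds may even differ across bidders.

```python
def report(valuation, t, v_max=1.0):
    """A bidder with t bits transmits only the index of the interval, among
    2**t equal-width cells of [0, v_max), in which its valuation lies.
    (Uniform cells are an illustrative assumption.)"""
    k = 2 ** t
    return min(int(valuation / v_max * k), k - 1)

def award(reports):
    """Allocate to the bidder with the highest reported interval; ties broken
    by lower bidder index (an arbitrary illustrative rule)."""
    return max(range(len(reports)), key=lambda i: (reports[i], -i))
```

With t = 2 each bidder sends one of four indices, so the auctioneer sees only a coarse picture of the valuations, which is exactly the regime in which the paper measures the welfare and profit loss.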
Beyond A/B Testing: Sequential Randomization for Developing Interventions in Scaled Digital Learning Environments
Randomized experiments ensure robust causal inferences that are critical to
effective learning analytics research and practice. However, traditional
randomized experiments, like A/B tests, are limited in large-scale digital
learning environments. While traditional experiments can accurately compare two
treatment options, they are less able to inform how to adapt interventions to
continually meet learners' diverse needs. In this work, we introduce a trial
design for developing adaptive interventions in scaled digital learning
environments -- the sequential randomized trial (SRT). With the goal of
improving learner experience and developing interventions that benefit all
learners at all times, SRTs inform how to sequence, time, and personalize
interventions. In this paper, we provide an overview of SRTs, and we illustrate
the advantages they hold compared to traditional experiments. We describe a
novel SRT run in a large-scale data science MOOC. The trial results
contextualize how learner engagement can be addressed through inclusive
culturally targeted reminder emails. We also provide practical advice for
researchers who aim to run their own SRTs to develop adaptive interventions in
scaled digital learning environments.
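The core difference from an A/B test is that randomization happens more than once: learners who do not respond to a first-stage intervention are re-randomized at a later stage. A minimal two-stage sketch follows; the variant names ("plain_email", "cultural_email", and the stage-2 tactics) are hypothetical placeholders, not the conditions of the MOOC trial described above.

```python
import random

def srt_assign(learners, rng=None):
    """Two-stage sequential randomized trial sketch. Each learner is first
    randomized between two reminder-email variants; the caller then reports
    whether the learner engaged after stage 1, and learners who remain
    disengaged are re-randomized between two follow-up tactics.

    learners: list of (learner_id, engaged_after_stage1) pairs.
    Returns {learner_id: (stage1_variant, stage2_variant)}."""
    rng = rng or random.Random(42)
    plan = {}
    for learner, engaged_after_stage1 in learners:
        stage1 = rng.choice(["plain_email", "cultural_email"])
        if engaged_after_stage1:
            plan[learner] = (stage1, "no_further_contact")
        else:
            stage2 = rng.choice(["second_reminder", "personalized_nudge"])
            plan[learner] = (stage1, stage2)
    return plan
```

Because every learner contributes data to one stage-1 comparison and non-responders additionally contribute to a stage-2 comparison, a single trial informs how to sequence and personalize the intervention rather than merely which single variant wins.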
Coordinated Multicasting with Opportunistic User Selection in Multicell Wireless Systems
Physical layer multicasting with opportunistic user selection (OUS) is
examined for multicell multi-antenna wireless systems. By adopting a two-layer
encoding scheme, a rate-adaptive channel code is applied in each fading block
to enable successful decoding by a chosen subset of users (which varies over
different blocks) and an application layer erasure code is employed across
multiple blocks to ensure that every user is able to recover the message after
decoding successfully in a sufficient number of blocks. The transmit signal and
code-rate in each block determine opportunistically the subset of users that
are able to successfully decode and can be chosen to maximize the long-term
multicast efficiency. The employment of OUS not only helps avoid
rate-limitations caused by the user with the worst channel, but also helps
coordinate interference among different cells and multicast groups. In this
work, efficient algorithms are proposed for the design of the transmit
covariance matrices, the physical layer code-rates, and the target user subsets
in each block. In the single group scenario, the system parameters are
determined by maximizing the group-rate, defined as the physical layer
code-rate times the fraction of users that can successfully decode in each
block. In the multi-group scenario, the system parameters are determined by
considering a group-rate balancing optimization problem, which is solved by a
successive convex approximation (SCA) approach. To further reduce the feedback
overhead, we also consider the case where only part of the users feed back
their channel vectors in each block and propose a design based on the balancing
of the expected group-rates. In addition to SCA, a sample average approximation
technique is also introduced to handle the probabilistic terms arising in this
problem. The effectiveness of the proposed schemes is demonstrated by computer
simulations. Comment: Accepted by IEEE Transactions on Signal Processing.
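The single-group objective — the physical-layer code rate times the fraction of users that can decode at that rate — is simple to evaluate. A brute-force sweep over candidate code rates, shown below as a stand-in for the paper's SCA-based optimization over transmit covariances and rates, illustrates why opportunistic user selection beats serving the worst user.

```python
def group_rate(code_rate, user_rates):
    """Group rate for one fading block: the chosen physical-layer code rate
    times the fraction of users whose channel supports at least that rate."""
    decoded = sum(1 for r in user_rates if r >= code_rate)
    return code_rate * decoded / len(user_rates)

def best_code_rate(user_rates):
    """Opportunistic selection sketch: try each user's supportable rate as
    the code rate and keep the one maximizing the group rate. (A stand-in
    for the joint covariance/rate optimization in the paper.)"""
    return max(user_rates, key=lambda c: group_rate(c, user_rates))
```

For supportable rates [1.0, 5.0, 6.0], serving everyone forces code rate 1.0 (group rate 1.0), while coding at 5.0 reaches two of three users for a group rate of 10/3 — the erasure code across blocks lets the temporarily excluded user catch up later.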