Anytime Point-Based Approximations for Large POMDPs
The Partially Observable Markov Decision Process has long been recognized as
a rich framework for real-world planning and control problems, especially in
robotics. However, exact solutions in this framework are typically
computationally intractable for all but the smallest problems. A well-known
technique for speeding up POMDP solving involves performing value backups at
specific belief points, rather than over the entire belief simplex. The
efficiency of this approach, however, depends greatly on the selection of
points. This paper presents a set of novel techniques for selecting informative
belief points which work well in practice. The point selection procedure is
combined with point-based value backups to form an effective anytime POMDP
algorithm called Point-Based Value Iteration (PBVI). The first aim of this
paper is to introduce this algorithm and present a theoretical analysis
justifying the choice of belief selection technique. The second aim of this
paper is to provide a thorough empirical comparison between PBVI and other
state-of-the-art POMDP methods, in particular the Perseus algorithm, in an
effort to highlight their similarities and differences. Evaluation is performed
using both standard POMDP domains and realistic robotic tasks.
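The point-based value backup at the heart of PBVI can be sketched as follows. This is a minimal illustrative implementation, assuming a generic tabular POMDP; the toy model below (arrays `T`, `O`, `R`, the discount `gamma`, and all variable names) is hypothetical and not taken from the paper.

```python
import numpy as np

def point_based_backup(b, Gamma, T, O, R, gamma):
    """Back up a single belief point b against the alpha-vector set Gamma.

    T[a, s, s']: transition probabilities
    O[a, s', o]: observation probabilities
    R[a, s]:     immediate rewards
    Returns the best alpha vector at b (length-|S| array).
    """
    nA = T.shape[0]
    nO = O.shape[2]
    best_alpha, best_val = None, -np.inf
    for a in range(nA):
        alpha_a = R[a].astype(float).copy()
        for o in range(nO):
            # alpha_{a,o}(s) = gamma * sum_{s'} T(s,a,s') O(a,s',o) alpha(s');
            # keep the candidate that maximizes value at this belief point.
            cands = [gamma * T[a] @ (O[a][:, o] * alp) for alp in Gamma]
            vals = [b @ c for c in cands]
            alpha_a += cands[int(np.argmax(vals))]
        val = b @ alpha_a
        if val > best_val:
            best_alpha, best_val = alpha_a, val
    return best_alpha

# Toy 2-state, 2-action, 2-observation problem (hypothetical numbers).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])
Gamma = [np.zeros(2)]          # initial value function V = 0
b = np.array([0.6, 0.4])       # one sampled belief point
alpha = point_based_backup(b, Gamma, T, O, R, 0.95)
```

PBVI applies this backup only at a finite set of sampled belief points rather than over the whole simplex, which is what makes each iteration polynomial in the number of points.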
Ensemble Analysis of Adaptive Compressed Genome Sequencing Strategies
Acquiring genomes at single-cell resolution has many applications such as in
the study of microbiota. However, deep sequencing and assembly of all of the
millions of cells in a sample is prohibitively costly. A property that can come
to the rescue is that deep sequencing of every cell should not be necessary to
capture all distinct genomes, as the majority of cells are biological
replicates. Biologically important samples are often sparse in that sense. In
this paper, we propose an adaptive compressed method, also known as distilled
sensing, to capture all distinct genomes in a sparse microbial community with
reduced sequencing effort. As opposed to group testing in which the number of
distinct events is often constant and sparsity is equivalent to rarity of an
event, sparsity in our case means scarcity of distinct events in comparison to
the data size. Previously, we introduced the problem and proposed a distilled
sensing solution based on the breadth-first search strategy. We simulated the
whole process, which, owing to its computational intensity, constrained our
ability to study the behavior of the algorithm over the entire ensemble. In this
paper, we modify our previous breadth-first search strategy and introduce a
depth-first search strategy. Instead of simulating the entire process, which is
intractable for a large number of experiments, we provide a dynamic programming
algorithm to analyze the behavior of the method for the entire ensemble. The
ensemble analysis algorithm recursively calculates the probability of capturing
every distinct genome and also the expected total sequenced nucleotides for a
given population profile. Our results suggest that the expected total sequenced
nucleotides grows proportionally to … of the number of cells and linearly with
the number of distinct genomes.
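The kind of capture-probability quantity the ensemble analysis computes can be illustrated with a much simpler closed-form model. The sketch below assumes cells are drawn i.i.d. from a population in which genome `g` occupies fraction `f[g]`; this simplified model, the function name, and the example profile are assumptions for illustration, not the paper's dynamic programming algorithm.

```python
import math

def capture_probability(f, n):
    """P(every distinct genome is seen at least once) when n cells are
    drawn i.i.d. from a population with genome fractions f (sum to 1).

    Under independence across genomes, the probability factorizes as a
    product of per-genome capture probabilities 1 - (1 - f_g)^n.
    """
    return math.prod(1.0 - (1.0 - fg) ** n for fg in f)

profile = [0.6, 0.3, 0.1]          # hypothetical population profile
p = capture_probability(profile, 20)
```

In this toy model the rarest genome dominates: the sample size needed to capture everything is driven by the smallest fraction in the profile, which is the sparsity intuition the abstract appeals to.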
- …