Pre-processing for approximate Bayesian computation in image analysis
Most of the existing algorithms for approximate Bayesian computation (ABC)
assume that it is feasible to simulate pseudo-data from the model at each
iteration. However, the computational cost of these simulations can be
prohibitive for high dimensional data. An important example is the Potts model,
which is commonly used in image analysis. Images encountered in real world
applications can have millions of pixels, therefore scalability is a major
concern. We apply ABC with a synthetic likelihood to the hidden Potts model
with additive Gaussian noise. Using a pre-processing step, we fit a binding
function to model the relationship between the model parameters and the
synthetic likelihood parameters. Our numerical experiments demonstrate that the
precomputed binding function dramatically improves the scalability of ABC,
reducing the average runtime required for model fitting from 71 hours to only 7
minutes. We also illustrate the method by estimating the smoothing parameter
for remotely sensed satellite imagery. Without precomputation, Bayesian
inference is impractical for datasets of that scale.
Comment: 5th IMS-ISBA joint meeting (MCMSki IV)
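The pre-processing idea can be sketched with a toy surrogate in place of the hidden Potts model: simulate summary statistics on a grid of parameter values once, fit a regression (the binding function) to them, and evaluate the synthetic likelihood from the fit alone at inference time. The names `simulate_stat` and the polynomial regression below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stat(beta, n_sim=200):
    """Hypothetical stand-in for simulating pseudo-data from the model:
    return the mean and std of a summary statistic at parameter beta."""
    samples = rng.normal(loc=np.tanh(beta), scale=0.1, size=n_sim)
    return samples.mean(), samples.std()

# Pre-processing step: evaluate the synthetic-likelihood parameters on a
# grid of model parameters, then fit a polynomial binding function to each.
grid = np.linspace(0.0, 2.0, 21)
mus, sigmas = zip(*(simulate_stat(b) for b in grid))
mu_fit = np.polynomial.Polynomial.fit(grid, mus, deg=3)
sigma_fit = np.polynomial.Polynomial.fit(grid, sigmas, deg=3)

def synthetic_loglik(beta, observed_stat):
    """Gaussian synthetic log-likelihood evaluated from the precomputed
    binding function -- no fresh simulation per iteration."""
    mu, sigma = mu_fit(beta), max(sigma_fit(beta), 1e-8)
    return -0.5 * ((observed_stat - mu) / sigma) ** 2 - np.log(sigma)
```

Because every MCMC or ABC iteration now costs only a polynomial evaluation instead of a full pseudo-data simulation, the per-iteration cost no longer scales with the image size.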
Butterfly Factorization
The paper introduces the butterfly factorization as a data-sparse
approximation for the matrices that satisfy a complementary low-rank property.
The factorization can be constructed efficiently if either fast algorithms for
applying the matrix and its adjoint are available or the entries of the matrix
can be sampled individually. For an N x N matrix, the resulting
factorization is a product of O(log N) sparse matrices, each with O(N)
non-zero entries. Hence, it can be applied rapidly in O(N log N) operations.
Numerical results are provided to demonstrate the effectiveness of the
butterfly factorization and its construction algorithms.
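The discrete Fourier transform is the textbook example of a matrix with this kind of butterfly structure: the radix-2 FFT applies the N x N DFT as a sequence of log2(N) sparse stages, each touching every entry once. The sketch below illustrates that O(N log N) apply cost; it is the classical FFT, not the paper's general construction for arbitrary complementary low-rank matrices.

```python
import numpy as np

def bit_reverse_permutation(n):
    """Index permutation that reorders the input for the in-place stages."""
    bits = n.bit_length() - 1
    return np.array([int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)])

def apply_butterfly(x):
    """Apply the size-n DFT (n a power of two) as log2(n) sparse butterfly
    stages: total cost O(n log n) versus O(n^2) for a dense matvec."""
    n = len(x)
    y = np.asarray(x, dtype=complex)[bit_reverse_permutation(n)]
    size = 2
    while size <= n:
        half = size // 2
        twiddle = np.exp(-2j * np.pi * np.arange(half) / size)
        for start in range(0, n, size):
            top = y[start:start + half].copy()
            bot = y[start + half:start + size] * twiddle
            y[start:start + half] = top + bot       # butterfly: sum branch
            y[start + half:start + size] = top - bot  # difference branch
        size *= 2
    return y
```

Each `while` iteration corresponds to one sparse factor with 2n non-zero entries, matching the O(N) non-zeros per factor quoted in the abstract.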
A Discrete Logarithm-based Approach to Compute Low-Weight Multiples of Binary Polynomials
Being able to efficiently compute a low-weight multiple of a given binary
polynomial is often a key ingredient of correlation attacks on LFSR-based
stream ciphers. The best known general-purpose algorithm is based on the
generalized birthday problem. We describe an alternative approach that is
based on discrete logarithms and has much lower memory requirements with
comparable time complexity.
Comment: 12 pages
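The discrete-logarithm idea can be seen in miniature for weight-3 multiples: if alpha is a root of a primitive polynomial P of degree d, then x^a + x^b + 1 is a multiple of P exactly when alpha^a = alpha^b + 1, so each candidate b costs one log-table lookup. The toy sketch below works only for tiny fields where the full log table fits in memory; the paper's algorithm targets larger weights and cryptographic degrees.

```python
def trinomial_multiple(poly, deg):
    """Return (a, b) with x^a + x^b + 1 a multiple of poly, via discrete
    logs in GF(2^deg). poly is a coefficient bitmask (bit deg set) and is
    assumed primitive, so its root alpha generates the nonzero elements."""
    order = (1 << deg) - 1
    power = [0] * order          # power[e] = alpha^e as a coefficient bitmask
    log = {}                     # inverse table: field element -> exponent
    elem = 1
    for e in range(order):
        power[e] = elem
        log[elem] = e
        elem <<= 1               # multiply by alpha (i.e. by x)
        if elem >> deg:
            elem ^= poly         # reduce modulo poly
    # x^a + x^b + 1 divisible by poly  <=>  alpha^a = alpha^b + 1,
    # i.e. a = log(alpha^b + 1): one lookup per candidate b.
    for b in range(1, order):
        t = power[b] ^ 1         # alpha^b + 1 in GF(2^deg)
        if t and t in log and log[t] > b:
            return log[t], b
    return None
```

For `poly = 0b10011` (x^4 + x + 1) the search recovers the trinomial relation alpha^4 = alpha + 1 immediately, returning (4, 1).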
CDDT: Fast Approximate 2D Ray Casting for Accelerated Localization
Localization is an essential component for autonomous robots. A
well-established localization approach combines ray casting with a particle
filter, leading to a computationally expensive algorithm that is difficult to
run on resource-constrained mobile robots. We present a novel data structure
called the Compressed Directional Distance Transform for accelerating ray
casting in two-dimensional occupancy grid maps. Our approach allows online map
updates and near constant-time ray-casting performance for a fixed map size,
in contrast to other methods, which exhibit poor worst-case performance. Our
experimental results show that the proposed algorithm approximates the
performance characteristics of reading from a three dimensional lookup table of
ray cast solutions while requiring two orders of magnitude less memory and
precomputation. This results in a particle filter algorithm which can maintain
2500 particles with 61 ray casts per particle at 40 Hz, using a single CPU
thread onboard a mobile robot.
Comment: 8 pages, 14 figures, ICRA version
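For contrast with the data structure above, the baseline that such methods accelerate is plain ray marching on the occupancy grid: step along the ray until an occupied cell (or the map boundary) is hit, at O(range/step) cost per query. A minimal sketch of that baseline, not of the CDDT itself:

```python
import math

def ray_cast(grid, x, y, theta, max_range=100.0, step=0.5):
    """Naive ray marching on a 2D occupancy grid (row-major, 1 = occupied).
    Walk along the ray in fixed increments until an obstacle or the map
    edge is reached; returns the distance travelled."""
    dx, dy = math.cos(theta), math.sin(theta)
    r = 0.0
    while r < max_range:
        cx, cy = int(x + r * dx), int(y + r * dy)
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            return r          # left the map
        if grid[cy][cx]:
            return r          # hit an occupied cell
        r += step
    return max_range
```

A particle filter issues dozens of such queries per particle per update, which is why replacing this linear scan with a near constant-time lookup dominates the runtime budget.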
Efficient motion planning for problems lacking optimal substructure
We consider the motion-planning problem of planning a collision-free path of
a robot in the presence of risk zones. The robot is allowed to travel in these
zones but is penalized super-linearly for the consecutive accumulated
time spent there. We suggest a natural cost function that balances path length
and risk-exposure time. Specifically, we consider the discrete setting where we
are given a graph, or a roadmap, and we wish to compute the minimal-cost path
under this cost function. Interestingly, paths defined using our cost function
do not have an optimal substructure. Namely, subpaths of an optimal path are
not necessarily optimal. Thus, the Bellman condition is not satisfied and
standard graph-search algorithms such as Dijkstra's algorithm cannot be used. We present a
path-finding algorithm, which can be seen as a natural generalization of
Dijkstra's algorithm. Our algorithm's running time is polynomial in n, m, and
k, where n and m are the number of vertices and edges of the graph,
respectively, and k is the number of intersections between edges and the
boundary of the risk zone. We present simulations on
robotic platforms demonstrating both the natural paths produced by our cost
function and the computational efficiency of our algorithm.
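One way to see why the generalization works: because subpaths of an optimal path need not be optimal, a vertex cannot keep a single best value; instead it keeps a Pareto frontier of labels (committed cost, open consecutive risk time). The sketch below is an illustrative exhaustive label search with dominance pruning, not the paper's algorithm, and the quadratic penalty is an assumption standing in for any super-linear function.

```python
import heapq

def risk_penalty(t):
    """Super-linear penalty for t consecutive time units inside the risk
    zone (quadratic chosen for illustration)."""
    return t * t

def min_cost_path(graph, start, goal):
    """graph: {u: [(v, edge_length, edge_in_risk_zone), ...]}.
    Labels are (committed cost, open risk time); a label is discarded only
    if another label at the same vertex is at least as good in BOTH
    components (Pareto dominance), since neither alone suffices."""
    labels = {}                      # vertex -> non-dominated (cost, risk)
    pq = [(0.0, 0.0, start)]
    best = float('inf')
    while pq:
        cost, risk, u = heapq.heappop(pq)
        if u == goal:
            # arriving closes the final risk segment, if one is open
            best = min(best, cost + risk_penalty(risk))
            continue
        if any(c <= cost and r <= risk for c, r in labels.get(u, [])):
            continue                 # Pareto-dominated label
        labels.setdefault(u, []).append((cost, risk))
        for v, length, in_risk in graph.get(u, []):
            if in_risk:
                heapq.heappush(pq, (cost + length, risk + length, v))
            else:
                # leaving the risk zone commits the open segment's penalty
                heapq.heappush(pq, (cost + length + risk_penalty(risk), 0.0, v))
    return best
```

On a toy graph with a long safe route (length 20) and a short risky route (length 6, fully in the zone), the penalty 6^2 = 36 makes the safe route cheaper, which a plain shortest-path search on edge lengths alone would miss.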