Completeness of Randomized Kinodynamic Planners with State-based Steering
Probabilistic completeness is an important property in motion planning.
Although it has been established with clear assumptions for geometric planners,
the panorama of completeness results for kinodynamic planners is still
incomplete, as most existing proofs rely on strong assumptions that are
difficult, if not impossible, to verify on practical systems. In this paper, we
focus on an important class of kinodynamic planners, namely those that
interpolate trajectories in the state space. We provide a proof of
probabilistic completeness for these planners under assumptions that can be
readily verified from the system's equations of motion and the user-defined
interpolation function. Our proof relies crucially on a property of
interpolated trajectories, termed second-order continuity (SOC), which we show
is tightly related to the ability of a planner to benefit from denser sampling.
We analyze the impact of this property in simulations on a low-torque pendulum.
Our results show that a simple RRT using a second-order continuous
interpolation swiftly finds a solution, while the same planner using standard
Bezier curves (which are not SOC) fails to find any solution.
Comment: 21 pages, 5 figures
The Army of One (Sample): the Characteristics of Sampling-based Probabilistic Neural Representations
There is growing evidence that humans and animals represent the uncertainty associated with sensory stimuli and utilize this uncertainty during planning and decision making in a statistically optimal way. Recently, a nonparametric framework for representing probabilistic information has been proposed whereby neural activity encodes samples from the distribution over external variables. Although such sample-based probabilistic representations have strong empirical and theoretical support, two major issues need to be clarified before they can be considered as viable candidate theories of cortical computation. First, in a fluctuating natural environment, can neural dynamics provide sufficient samples to accurately estimate a stimulus? Second, can such a code support accurate learning over biologically plausible time-scales? Although it is well known that sampling is statistically optimal if the number of samples is unlimited, biological constraints mean that estimation and learning in the cortex must be supported by a relatively small number of possibly dependent samples. We explored these issues in a cue combination task by comparing a neural circuit that employed a sampling-based representation to an optimal estimator. For static stimuli, we found that a single sample is sufficient to obtain an estimator with less than twice the optimal variance, and that performance improves with the inverse square root of the number of samples. For dynamic stimuli, with linear-Gaussian evolution, we found that the efficiency of the estimation improves significantly as temporal information stabilizes the estimate, and because sampling does not require a burn-in phase. Finally, we found that using a single sample, the dynamic model can accurately learn the parameters of the input neural populations up to a general scaling factor, which disappears for modest sample size. 
These results suggest that sample-based representations can support estimation and learning using a relatively small number of samples and are therefore highly feasible alternatives for performing probabilistic cortical computations.
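The scaling claims in this abstract are easy to reproduce in a toy cue-combination model (a sketch, not the paper's neural circuit): two Gaussian cues are fused into the exact posterior, and the estimate is the mean of n independent posterior samples, whose mean squared error is (1 + 1/n) times the optimal posterior variance, so a single sample lands at roughly twice the optimum.

```python
import random
import statistics

random.seed(1)

def trial(x, v1, v2, n_samples):
    """One cue-combination trial: two noisy Gaussian cues of x are fused
    into the exact posterior; the estimate averages n posterior samples."""
    c1 = random.gauss(x, v1 ** 0.5)
    c2 = random.gauss(x, v2 ** 0.5)
    post_var = 1.0 / (1.0 / v1 + 1.0 / v2)
    post_mean = post_var * (c1 / v1 + c2 / v2)
    return statistics.fmean(
        random.gauss(post_mean, post_var ** 0.5) for _ in range(n_samples)
    )

def mse(n_samples, trials=10000):
    """Mean squared error of the sample-based estimator (true x = 0)."""
    return statistics.fmean(
        trial(0.0, 1.0, 1.0, n_samples) ** 2 for _ in range(trials)
    )

# With v1 = v2 = 1 the optimal MSE (posterior variance) is 0.5;
# a single posterior sample should land near twice that.
ratio = mse(1) / 0.5
```

Note this toy model assumes independent posterior samples; the abstract's point is that even the dependent samples produced by realistic neural dynamics stay close to this regime.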

Sensor Synthesis for POMDPs with Reachability Objectives
Partially observable Markov decision processes (POMDPs) are widely used in
probabilistic planning problems in which an agent interacts with an environment
using noisy and imprecise sensors. We study a setting in which the sensors are
only partially defined and the goal is to synthesize "weakest" additional
sensors, such that in the resulting POMDP, there is a small-memory policy for
the agent that almost-surely (with probability~1) satisfies a reachability
objective. We show that the problem is NP-complete, and present a symbolic
algorithm by encoding the problem into SAT instances. We illustrate trade-offs
between the amount of memory of the policy and the number of additional sensors
on a simple example. We have implemented our approach and consider three
classical POMDP examples from the literature, and show that in all the examples
the number of sensors can be significantly decreased (as compared to the
existing solutions in the literature) without increasing the complexity of the
policies.
Comment: arXiv admin note: text overlap with arXiv:1511.0845
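The synthesis problem can be illustrated on a toy, fully deterministic example (not the paper's symbolic SAT encoding; the states, actions, and candidate sensors below are invented for illustration): enumerate sensor subsets from weakest to strongest and, for each, search the memoryless observation-based policies for one that forces the goal.

```python
from itertools import combinations, product

STATES = [0, 1, 2]          # state 2 is the goal
ACTIONS = ["a", "b"]
GOAL = 2
# Deterministic toy transitions (a real POMDP would be stochastic).
STEP = {(0, "a"): 1, (0, "b"): 0,
        (1, "a"): 1, (1, "b"): 2,
        (2, "a"): 2, (2, "b"): 2}
# Candidate sensors: each is a predicate on the state ("am I in state s?").
SENSORS = {"in0": lambda s: s == 0, "in1": lambda s: s == 1}

def observe(state, sensor_names):
    return tuple(SENSORS[n](state) for n in sensor_names)

def wins(policy, sensor_names, start=0):
    """With deterministic dynamics, almost-sure reachability reduces to
    hitting the goal within |STATES| steps, before the run enters a loop."""
    s = start
    for _ in range(len(STATES)):
        if s == GOAL:
            return True
        s = STEP[(s, policy[observe(s, sensor_names)])]
    return s == GOAL

def min_sensors():
    """Smallest sensor set admitting a winning memoryless policy."""
    for k in range(len(SENSORS) + 1):          # weakest = fewest sensors
        for names in combinations(SENSORS, k):
            obs_space = sorted(set(observe(s, names) for s in STATES))
            for choice in product(ACTIONS, repeat=len(obs_space)):
                policy = dict(zip(obs_space, choice))
                if wins(policy, names):
                    return names
    return None
```

With no sensors every state looks identical, so the single available action loops forever short of the goal; one sensor distinguishing state 0 already suffices. The paper's contribution is doing this search symbolically, via SAT, rather than by the brute-force enumeration sketched here.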
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Motion planning under differential constraints is a classic problem in
robotics. To date, the state of the art is represented by sampling-based
techniques, with the Rapidly-exploring Random Tree algorithm as a leading
example. Yet, the problem is still open in many aspects, including guarantees
on the quality of the obtained solution. In this paper we provide a thorough
theoretical framework to assess optimality guarantees of sampling-based
algorithms for planning under differential constraints. We exploit this
framework to design and analyze two novel sampling-based algorithms that are
guaranteed to converge, as the number of samples increases, to an optimal
solution (namely, the Differential Probabilistic RoadMap algorithm and the
Differential Fast Marching Tree algorithm). Our focus is on driftless
control-affine dynamical models, which accurately model a large class of
robotic systems. In this paper we use the notion of convergence in probability
(as opposed to convergence almost surely): the extra mathematical flexibility
of this approach yields convergence rate bounds - a first in the field of
optimal sampling-based motion planning under differential constraints.
Numerical experiments corroborating our theoretical results are presented and
discussed.
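The PRM side of this framework can be sketched in its simplest geometric form (a toy, not the paper's Differential PRM: straight-line edges in an obstacle-free unit square stand in for trajectories of the driftless system, and the radius values are illustrative): sample configurations, connect pairs within a radius, and extract the cheapest path with Dijkstra; denser sampling with a shrinking radius drives the cost toward the optimum.

```python
import heapq
import math
import random

def prm_cost(n_samples, radius, start=(0.0, 0.0), goal=(1.0, 1.0), seed=2):
    """Minimal geometric PRM in an obstacle-free unit square: sample nodes,
    connect all pairs within `radius`, run Dijkstra from start to goal."""
    rng = random.Random(seed)
    nodes = [start, goal] + [(rng.random(), rng.random())
                             for _ in range(n_samples)]
    dist = [math.inf] * len(nodes)
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue  # stale heap entry
        for j, q in enumerate(nodes):
            w = math.dist(nodes[i], q)
            if 0 < w <= radius and d + w < dist[j]:
                dist[j] = d + w
                heapq.heappush(heap, (d + w, j))
    return dist[1]  # cost of reaching the goal node

# Denser sampling (with a suitably shrinking radius) tightens the cost
# toward the straight-line optimum sqrt(2).
coarse = prm_cost(50, 0.4)
fine = prm_cost(500, 0.2)
```

In the differentially constrained setting the paper studies, the straight-line edge weight is replaced by the cost of a feasible local trajectory of the control-affine system, which is where the convergence-rate analysis does its work.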