Stochastic motion planning and applications to traffic
This paper presents a stochastic motion planning algorithm and its application to traffic navigation. The algorithm copes with the uncertainty of road traffic conditions by stochastic modeling of travel delay on road networks. The algorithm determines paths between two points that optimize a cost function of the delay probability distribution. It can be used to find paths that maximize the probability of reaching a destination within a particular travel deadline. For such problems, standard shortest-path algorithms do not work because the optimal substructure property does not hold. We evaluate our algorithm using both simulations and real-world drives, using delay data gathered from a set of taxis equipped with GPS sensors and a wireless network. Our algorithm can be integrated into on-board navigation systems as well as route-finding Web sites, providing drivers with good paths that meet their desired goals.
National Science Foundation (U.S.) (grant EFRI-0710252); National Science Foundation (U.S.) (grant IIS-0426838)
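To see why optimal substructure fails here, note that the best route can depend on how much time budget remains. A minimal sketch (not the paper's algorithm, and using an entirely hypothetical toy network): a dynamic program over (node, remaining budget) that maximizes the probability of on-time arrival under an adaptive routing policy, with discretized per-edge delay distributions. The example shows the best first hop out of node "A" changing with the deadline.

```python
def max_arrival_prob(graph, dest, deadline):
    """graph[v][u] is a pmf {delay: probability} for edge v -> u (delays >= 1).
    Returns P[v][t] = best achievable probability of reaching dest from v
    within t time units, routing adaptively."""
    P = {v: [1.0 if v == dest else 0.0] * (deadline + 1) for v in graph}
    for t in range(1, deadline + 1):
        for v in graph:
            if v == dest:
                continue
            # Best next edge given the remaining budget t.
            P[v][t] = max(
                (sum(q * P[u][t - d] for d, q in pmf.items() if d <= t)
                 for u, pmf in graph[v].items()),
                default=0.0,
            )
    return P

# Hypothetical toy network with stochastic integer delays.
graph = {
    "A": {"B": {1: 0.5, 3: 0.5}, "C": {2: 1.0}},
    "B": {"D": {1: 0.5, 4: 0.5}},
    "C": {"D": {2: 0.9, 5: 0.1}},
    "D": {},
}
P = max_arrival_prob(graph, "D", 4)
# Deadline 4: going via C succeeds w.p. 0.9, via B only w.p. 0.5.
# Deadline 2: C is useless (its minimum total delay is 4), so B is best (0.25).
```

The fixed-path version the paper studies is harder still, since a single path must be committed to before the delays are realized.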
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
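The simplest motion model in such taxonomies is a physics-based constant-velocity extrapolation, which is also a common evaluation baseline. A minimal sketch (illustrative only; function name and sampling interval are assumptions):

```python
def constant_velocity_predict(track, horizon, dt=1.0):
    """Extrapolate the last observed velocity for `horizon` future steps.
    `track` is a list of (x, y) positions sampled every `dt` seconds."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# An agent moving 1 m/s along x is predicted to continue in a straight line.
pred = constant_velocity_predict([(0.0, 0.0), (1.0, 0.0)], horizon=3)
# → [(2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```

Despite its simplicity, such a baseline anchors the performance-metric comparisons the survey discusses.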
Asymptotic optimality of maximum pressure policies in stochastic processing networks
We consider a class of stochastic processing networks. Assume that the
networks satisfy a complete resource pooling condition. We prove that each
maximum pressure policy asymptotically minimizes the workload process in a
stochastic processing network in heavy traffic. We also show that, under each
quadratic holding cost structure, there is a maximum pressure policy that
asymptotically minimizes the holding cost. A key to the optimality proofs is to
prove a state space collapse result and a heavy traffic limit theorem for the
network processes under a maximum pressure policy. We extend a framework of
Bramson [Queueing Systems Theory Appl. 30 (1998) 89--148] and Williams
[Queueing Systems Theory Appl. 30 (1998b) 5--25] from the multiclass queueing
network setting to the stochastic processing network setting to prove the state
space collapse result and the heavy traffic limit theorem. The extension can be
adapted to other studies of stochastic processing networks.
Comment: Published at http://dx.doi.org/10.1214/08-AAP522 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
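A maximum pressure (backpressure) policy, in its simplest discrete form, serves the activity whose service rate times the queue-length differential across it is largest. A minimal sketch of that decision rule under assumed notation (the network, activity names, and rates below are hypothetical, not from the paper):

```python
def max_pressure_action(queues, activities, rates):
    """Pick the activity j maximizing mu_j * (Q[src] - Q[dst]).
    dst=None means the job leaves the network (downstream queue length 0)."""
    def pressure(j):
        src, dst = activities[j]
        return rates[j] * (queues[src] - (queues[dst] if dst else 0))
    return max(activities, key=pressure)

# Two-station tandem line: "s1" moves jobs from queue a to queue b,
# "s2" drains queue b out of the network.
queues = {"a": 5, "b": 2}
activities = {"s1": ("a", "b"), "s2": ("b", None)}
rates = {"s1": 1.0, "s2": 1.0}
print(max_pressure_action(queues, activities, rates))  # → s1
```

With queue a backlogged the pressure on "s1" is 1.0 * (5 - 2) = 3, beating "s2" at 2; if b were the longer queue, the policy would switch to draining it.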
Stochastic Motion Planning For Mobile Robots
Stochastic motion planning is of crucial importance in robotic applications, not only because of imperfect models of robot dynamics and sensing but also because the environment may be unknown. For efficiency, practical methods often introduce additional assumptions or heuristics into the solution, such as the use of the separation theorem. However, these practical frameworks have intrinsic limitations that prevent further improvements in the reliability and robustness of the system and that cannot be addressed with minor tweaks. It is therefore necessary to develop theoretically justified solutions to stochastic motion planning problems. Despite the challenges in developing such solutions, the reward is substantial given their wide impact on a majority of, if not all, robotic applications. The overall goal of this dissertation is to develop solutions for stochastic motion planning problems with theoretical justifications and to demonstrate their superior performance in real-world applications.
In the first part of this dissertation, we model the stochastic motion planning problem as a Partially Observable Markov Decision Process (POMDP) and propose two solutions featuring different optimization regimes that trade off model generality and efficiency. The first is a gradient-based solution built on iterative Linear Quadratic Gaussian (iLQG) control, assuming explicit model formulations and Gaussian noise. The special structure of the problem allows a time-varying affine policy to be solved offline, leading to efficient online use. The proposed algorithm addresses limitations of previous work on iLQG in handling nondifferentiable system models and sparse informative measurements. The second is a sampling-based general POMDP solver assuming only mild conditions on the control space and measurement models. The generality of the problem formulation promises wide applicability of the algorithm. The proposed solution addresses the degeneracy of Monte Carlo tree search when applied to continuous POMDPs, especially systems with continuous measurement spaces. Through theoretical analysis, we show that the proposed algorithm is a valid Monte Carlo control algorithm that alternates unbiased policy evaluation with policy improvement.
In the second part of this dissertation, we apply the proposed solutions to robotic applications in which the dominant uncertainty comes either from the robot itself or from the external environment. We first consider mobile robot navigation in a known environment, where the major sources of uncertainty are the robot's dynamics and sensing noise. Although the problem is widely studied, little work has applied POMDP solutions to this application. By demonstrating the superior performance of the proposed solutions on such a familiar application, the importance of stochastic motion planning may be better appreciated by the robotics community. We also apply the proposed solutions to autonomous driving, where the dominant uncertainty comes from the external environment, i.e., the unknown behavior of human drivers. Here we propose a data-driven model for the stochastic traffic dynamics in which we explicitly model the intentions of human drivers. To the best of our knowledge, this is the first work that applies POMDP solutions to data-driven traffic models. Through simulations, we show that the proposed solutions develop high-level intelligent behaviors and outperform similar methods that also consider uncertainty in the autonomous driving application.
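A core primitive behind sampling-based POMDP solvers of this kind is the particle (bootstrap-filter) belief update: propagate sampled states through the dynamics, weight by the observation likelihood, and resample. A generic sketch under assumed notation, not the dissertation's actual solver; the 1-D robot model below is entirely hypothetical:

```python
import math
import random

def particle_belief_update(particles, action, obs, transition, likelihood, rng):
    """One bootstrap-filter belief update for a POMDP with continuous
    observations: propagate, weight, resample."""
    propagated = [transition(x, action, rng) for x in particles]
    weights = [likelihood(obs, x) for x in propagated]
    if sum(weights) == 0.0:        # degenerate weights: fall back to the prior
        return propagated
    return rng.choices(propagated, weights=weights, k=len(particles))

# Hypothetical 1-D robot: x' = x + a + process noise; observation = x + noise.
rng = random.Random(0)
transition = lambda x, a, r: x + a + r.gauss(0.0, 0.1)
likelihood = lambda z, x: math.exp(-(z - x) ** 2 / (2 * 0.1 ** 2))
belief = [rng.gauss(0.0, 0.2) for _ in range(500)]
belief = particle_belief_update(belief, action=1.0, obs=1.0,
                                transition=transition, likelihood=likelihood,
                                rng=rng)
mean = sum(belief) / len(belief)  # concentrates near the observed position 1.0
```

The degeneracy issue mentioned in the abstract arises when, with continuous observations, almost every sampled observation branch receives near-zero weight; handling that case is exactly what a continuous-POMDP tree search must address.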
Correction. Brownian models of open processing networks: canonical representation of workload
Due to a printing error the above mentioned article [Annals of Applied
Probability 10 (2000) 75--103, doi:10.1214/aoap/1019737665] had numerous
equations appearing incorrectly in the print version of this paper. The entire
article follows as it should have appeared. IMS apologizes to the author and
the readers for this error. A recent paper by Harrison and Van Mieghem
explained in general mathematical terms how one forms an ``equivalent workload
formulation'' of a Brownian network model. Denoting by Z(t) the state vector
of the original Brownian network, one has a lower dimensional state descriptor
W(t) = MZ(t) in the equivalent workload formulation, where M can be chosen as
any basis matrix for a particular linear space. This paper considers Brownian
models for a very general class of open processing networks, and in that
context develops a more extensive interpretation of the equivalent workload
formulation, thus extending earlier work by Laws on alternate routing problems.
A linear program called the static planning problem is introduced to articulate
the notion of ``heavy traffic'' for a general open network, and the dual of
that linear program is used to define a canonical choice of the basis matrix
M. To be specific, rows of the canonical M are alternative basic optimal
solutions of the dual linear program. If the network data satisfy a natural
monotonicity condition, the canonical matrix M is shown to be nonnegative,
and another natural condition is identified which ensures that M admits a
factorization related to the notion of resource pooling.
Comment: Published at http://dx.doi.org/10.1214/105051606000000583 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
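For orientation, the static planning problem described above can be written as a linear program of roughly the following form (the symbols here are assumptions chosen to match the abstract's description, not necessarily the paper's exact notation):

```latex
% alpha = vector of exogenous arrival rates, R = input-output matrix,
% A = capacity-consumption matrix, x = vector of long-run activity rates.
\begin{align*}
\text{minimize}\quad   & \rho \\
\text{subject to}\quad & Rx = \alpha, \\
                       & Ax \le \rho\,\mathbf{1}, \qquad x \ge 0 .
\end{align*}
% Heavy traffic corresponds to an optimal value \rho^{*} = 1. Basic optimal
% solutions of the dual LP supply the rows of the canonical workload matrix M,
% yielding the one-dimensional-per-row workload W(t) = M Z(t).
```

The dual variables price out scarce processing capacity, which is why alternative dual basic optima correspond to distinct workload dimensions.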
Heavy traffic analysis of open processing networks with complete resource pooling: asymptotic optimality of discrete review policies
We consider a class of open stochastic processing networks, with feedback
routing and overlapping server capabilities, in heavy traffic. The networks we
consider satisfy the so-called complete resource pooling condition and
therefore have one-dimensional approximating Brownian control problems.
We propose a simple discrete review policy for controlling such networks.
Assuming 2+\epsilon moments on the interarrival times and processing times,
we provide a conceptually simple proof of asymptotic optimality of the proposed
policy.
Comment: Published at http://dx.doi.org/10.1214/105051604000000495 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Belief State Planning for Autonomously Navigating Urban Intersections
Urban intersections represent a complex environment for autonomous vehicles
with many sources of uncertainty. The vehicle must plan in a stochastic
environment with potentially rapid changes in driver behavior. Providing an
efficient strategy to navigate through urban intersections is a difficult task.
This paper frames the problem of navigating unsignalized intersections as a
partially observable Markov decision process (POMDP) and solves it using a
Monte Carlo sampling method. Empirical results in simulation show that the
resulting policy outperforms a threshold-based heuristic strategy on several
relevant metrics that measure both safety and efficiency.
Comment: 6 pages, 6 figures, accepted to IV201
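In the simplest Monte Carlo form, such a planner scores each candidate maneuver by averaging returns over rollouts from states sampled out of the current belief. A generic sketch of that idea (the intersection model, reward values, and names below are hypothetical, not the paper's):

```python
import random

def rollout_select(actions, belief_samples, simulate, n_rollouts, rng):
    """Monte Carlo action selection: average sampled rollout returns per
    action, then pick the best. A sketch of sampling-based POMDP planning,
    not the exact solver used in the paper."""
    def value(a):
        total = 0.0
        for _ in range(n_rollouts):
            s = rng.choice(belief_samples)   # sample a state from the belief
            total += simulate(s, a, rng)
        return total / n_rollouts
    return max(actions, key=value)

# Hypothetical intersection: state = oncoming car's distance (m). "go" earns
# +1 when the gap is large enough, -10 on a conflict; "wait" earns 0.
def simulate(distance, action, rng):
    if action == "wait":
        return 0.0
    return 1.0 if distance > 20.0 else -10.0

rng = random.Random(1)
far = [30.0 + rng.random() for _ in range(50)]   # belief: car far away
near = [5.0 + rng.random() for _ in range(50)]   # belief: car close
print(rollout_select(["go", "wait"], far, simulate, 30, rng))   # → go
print(rollout_select(["go", "wait"], near, simulate, 30, rng))  # → wait
```

A fixed-distance threshold rule, by contrast, ignores the spread of the belief, which is one reason belief-state planners can beat threshold heuristics on combined safety/efficiency metrics.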