Modeling Cooperative Navigation in Dense Human Crowds
For robots to be a part of our daily life, they need to be able to navigate
among crowds not only safely but also in a socially compliant fashion. This is
a challenging problem because humans tend to navigate by implicitly cooperating
with one another to avoid collisions, while heading toward their respective
destinations. Previous approaches have used hand-crafted functions based on
proximity to model human-human and human-robot interactions. However, these
approaches can only model simple interactions and fail to generalize for
complex crowded settings. In this paper, we develop an approach that models the
joint distribution over future trajectories of all interacting agents in the
crowd, through a local interaction model that we train using real human
trajectory data. The interaction model infers the velocity of each agent based
on the spatial orientation of other agents in its vicinity. During prediction,
our approach infers the goal of the agent from its past trajectory and uses the
learned model to predict its future trajectory. We demonstrate the performance
of our method against a state-of-the-art approach on a public dataset and show
that our model outperforms it when predicting future trajectories for longer
horizons.
Comment: Accepted at ICRA 201
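The goal-inference step described above can be illustrated with a minimal sketch (the candidate goal set, the cosine-alignment scoring rule, and the function name are illustrative assumptions, not the paper's model): given a set of candidate goals, pick the one best aligned with the agent's recent heading.

```python
import numpy as np

def infer_goal(past, goals):
    """Score each candidate goal by how well it aligns with the agent's
    recent heading; return the index of the best-aligned goal.
    past:  (T, 2) observed positions; goals: list of (2,) candidate goals."""
    past = np.asarray(past, dtype=float)
    heading = past[-1] - past[-2]
    heading /= np.linalg.norm(heading)
    scores = []
    for g in np.asarray(goals, dtype=float):
        to_goal = g - past[-1]
        to_goal /= np.linalg.norm(to_goal)
        scores.append(float(np.dot(to_goal, heading)))  # cosine alignment
    return int(np.argmax(scores))
```

An agent walking along +x is assigned the goal ahead of it rather than one off to the side.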
Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions
To plan safe trajectories in urban environments, autonomous vehicles must be
able to quickly assess the future intentions of dynamic agents. Pedestrians are
particularly challenging to model, as their motion patterns are often uncertain
and/or unknown a priori. This paper presents a novel changepoint detection and
clustering algorithm that, when coupled with offline unsupervised learning of a
Gaussian process mixture model (DPGP), enables quick detection of changes in
intent and online learning of motion patterns not seen in prior training data.
The resulting long-term movement predictions demonstrate improved accuracy
relative to offline learning alone, in terms of both intent and trajectory
prediction. By embedding these predictions within a chance-constrained motion
planner, trajectories which are probabilistically safe to pedestrian motions
can be identified in real-time. Hardware experiments demonstrate that this
approach can accurately predict pedestrian motion patterns from onboard
sensor/perception data and facilitate robust navigation within a dynamic
environment.
Comment: Submitted to the 2014 International Workshop on the Algorithmic
Foundations of Robotics
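The Gaussian-process backbone that such motion-pattern models build on can be sketched with a minimal numpy implementation (the squared-exponential kernel, its hyperparameters, and the 1-D position-over-time setup are illustrative assumptions, not the paper's exact model): fit a GP to an observed trajectory and predict future positions together with an uncertainty estimate.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    # squared-exponential covariance between two 1-D input arrays
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_obs, x_obs, t_new, noise=0.01):
    """Posterior mean and variance of a GP fit to (t_obs, x_obs)."""
    K = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf_kernel(t_new, t_obs)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ x_obs              # posterior mean at t_new
    cov = rbf_kernel(t_new, t_new) - Ks @ Kinv @ Ks.T
    return mean, np.diag(cov)             # pointwise predictive variance
```

The predictive variance grows with distance from the observed data, which is what lets a chance-constrained planner reason about how trustworthy each prediction is.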
Social Attention: Modeling Attention in Human Crowds
Robots that navigate through human crowds need to be able to plan safe,
efficient, and human predictable trajectories. This is a particularly
challenging problem as it requires the robot to predict future human
trajectories within a crowd where everyone implicitly cooperates with each
other to avoid collisions. Previous approaches to human trajectory prediction
have modeled the interactions between humans as a function of proximity.
However, proximity alone is not a reliable cue: people in our immediate
vicinity moving in the same direction may matter less than people farther
away who may collide with us in the future. In this work, we
propose Social Attention, a novel trajectory prediction model that captures the
relative importance of each person when navigating in the crowd, irrespective
of their proximity. We demonstrate the performance of our method against a
state-of-the-art approach on two publicly available crowd datasets and analyze
the trained attention model to gain a better understanding of which surrounding
agents humans attend to when navigating in a crowd.
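The core idea above, weighting neighbors by learned relevance rather than by distance, can be sketched as a soft attention pool (the feature layout and the projection matrices are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(agent_state, neighbor_states, Wq, Wk):
    """Scaled dot-product attention over neighbors: each neighbor receives a
    learned relevance weight, independent of its distance to the agent."""
    q = Wq @ agent_state                      # query from the agent's state
    keys = neighbor_states @ Wk.T             # one key per neighbor
    scores = keys @ q / np.sqrt(q.shape[0])   # (N,) relevance scores
    weights = softmax(scores)                 # sum to 1 over neighbors
    context = weights @ neighbor_states       # weighted neighbor summary
    return weights, context
```

A distant neighbor on a collision course can receive a large weight while a nearby co-moving neighbor receives a small one, exactly the behavior proximity-based interaction functions cannot express.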
Predictive Modeling of Pedestrian Motion Patterns with Bayesian Nonparametrics
For safe navigation in dynamic environments, an autonomous vehicle must be able to identify and predict the future behaviors of other mobile agents. A promising data-driven approach is to learn motion patterns from previous observations using Gaussian process (GP) regression, which are then used for online prediction. GP mixture models have subsequently been proposed for finding the number of motion patterns using GP likelihood as a similarity metric. However, this paper shows that using GP likelihood as a similarity metric can lead to non-intuitive clustering configurations - such as grouping trajectories with a small planar shift with respect to each other into different clusters - and thus produce poor prediction results.

In this paper we develop a novel modeling framework, Dirichlet process active region (DPAR), that addresses the deficiencies of the previous GP-based approaches. In particular, with a discretized representation of the environment, we can explicitly account for planar shifts via a max pooling step and reduce the computational complexity of the statistical inference procedure compared with the GP-based approaches. The proposed algorithm was applied to two real pedestrian trajectory datasets collected using a 3D Velodyne Lidar, and showed a 15% improvement in prediction accuracy and a 4.2 times reduction in computational time compared with a GP-based algorithm.
Ford Motor Company
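The shift-tolerant comparison enabled by discretization plus max pooling can be sketched as follows (the grid size, cell size, and Jaccard similarity are illustrative assumptions, not the paper's exact construction): rasterize each trajectory into an occupancy grid, max-pool the grid, and compare pooled grids, so that trajectories offset by less than a pooled cell still match.

```python
import numpy as np

def rasterize(traj, grid=8, cell=1.0):
    """Mark every grid cell visited by a 2-D trajectory."""
    occ = np.zeros((grid, grid))
    for x, y in traj:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid and 0 <= j < grid:
            occ[i, j] = 1.0
    return occ

def max_pool(occ, k=2):
    """k-by-k max pooling, which absorbs sub-cell planar shifts."""
    g = occ.shape[0] // k
    return occ[:g * k, :g * k].reshape(g, k, g, k).max(axis=(1, 3))

def similarity(t1, t2):
    """Jaccard overlap of the pooled occupancy grids."""
    a, b = max_pool(rasterize(t1)), max_pool(rasterize(t2))
    inter = np.minimum(a, b).sum()
    union = np.maximum(a, b).sum()
    return inter / union if union else 0.0
```

Two parallel trajectories one cell apart overlap perfectly after pooling even though their raw occupancy grids are disjoint, which is the failure mode of GP-likelihood clustering that DPAR targets.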
Learning to Segment and Represent Motion Primitives from Driving Data for Motion Planning Applications
Developing an intelligent vehicle which can perform human-like actions
requires the ability to learn basic driving skills from a large amount of
naturalistic driving data. These algorithms become more efficient if we can
decompose complex driving tasks into motion primitives which represent the
elementary compositions of driving skills. Therefore, the purpose of this paper
is to segment unlabeled trajectory data into a library of motion primitives. By
applying probabilistic inference based on an iterative
Expectation-Maximization algorithm, our method segments the collected
trajectories while learning a set of motion primitives represented by the
dynamic movement primitives. The proposed method exploits the mutual
dependency between the segmentation and the representation of motion
primitives, together with a driving-specific initial segmentation. By utilizing
this mutual dependency and the initial condition, this paper presents how we can enhance
the performance of both the segmentation and the motion primitive library
establishment. We also evaluate the applicability of the primitive
representation method to imitation learning and motion planning algorithms. The
model is trained and validated by using the driving data collected from the
Beijing Institute of Technology intelligent vehicle platform. The results show
that the proposed approach can find the proper segmentation and establish the
motion primitive library simultaneously.
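The dynamic movement primitives used to represent the library entries can be sketched with a minimal one-dimensional rollout (the gains, the canonical-system decay rate, and the zero forcing term are illustrative assumptions; a learned primitive would supply the forcing function f(s)):

```python
import numpy as np

def dmp_rollout(x0, goal, f=None, T=1.0, dt=0.01, alpha=25.0):
    """Euler-integrate a discrete DMP: v' = alpha*(beta*(goal-x) - v) + f(s).
    With f = None this is a critically damped pull toward the goal; a learned
    forcing term f(s) shapes the path while preserving goal convergence."""
    beta = alpha / 4.0               # critical damping
    x, v = float(x0), 0.0
    s = 1.0                          # canonical phase, decays toward 0
    xs = []
    for _ in range(int(T / dt)):
        forcing = f(s) if f else 0.0
        a = alpha * (beta * (goal - x) - v) + forcing
        v += a * dt
        x += v * dt
        s += -2.0 * s * dt           # canonical system s' = -2s
        xs.append(x)
    return np.array(xs)
```

Because the forcing term is scaled by the decaying phase in a full DMP, every primitive in the library converges to its goal regardless of the learned shape, which is what makes primitives composable for motion planning.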
Shared Control Policies and Task Learning for Hydraulic Earth-Moving Machinery
This thesis develops a shared control design framework for improving operator efficiency and performance on hydraulic excavation tasks. The framework is based on blended shared control (BSC), a technique whereby the operator’s command input is continually augmented by an assistive controller. Designing a BSC control scheme is subdivided here into four key components. Task learning utilizes nonparametric inverse reinforcement learning to identify the underlying goal structure of a task as a sequence of subgoals directly from the demonstration data of an experienced operator.
These subgoals may be distinct points in the actuator space or distributions over the space, from which the operator draws a subgoal location during the task. The remaining three steps are executed online during each update of the BSC controller. In real time, the subgoal prediction step uses the subgoal decomposition from the learning process to predict the operator's current subgoal.
Novel deterministic and probabilistic prediction methods are developed and evaluated for their ease of implementation and performance against manually labeled trial data. The control generation component involves computing polynomial trajectories to the predicted subgoal location or mean of the subgoal distribution, and computing a control input which tracks those trajectories. Finally, the blending law synthesizes both inputs through a weighted averaging of the human and control input, using a blending parameter which can be static or dynamic. In the latter case, mapping probabilistic quantities such as the maximum a posteriori probability or statistical entropy to the value of the dynamic blending parameter may yield a more intelligent control assistance, scaling the intervention according to the confidence of the prediction.
A reduced-scale (1/12) fully hydraulic excavator model was instrumented for BSC experimentation, equipped with absolute position feedback of each hydraulic actuator. Experiments were conducted using a standard operator control interface and a common earthmoving task: loading a truck from a pile. Under BSC, operators experienced an 18% improvement in mean digging efficiency, defined as mass of material moved per cycle time. Effects of BSC vary with regard to pure cycle time, although most operators experienced a reduced mean cycle time.
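The blending law described above can be sketched as follows (the entropy-to-parameter mapping and the 0.7 ceiling are illustrative assumptions; the thesis evaluates both static and dynamic blending parameters): the assist weight grows as the normalized entropy of the subgoal prediction falls, so the controller intervenes more when its prediction is confident.

```python
import numpy as np

def blend(u_human, u_assist, subgoal_probs, alpha_max=0.7):
    """Weighted average of human and assistive commands. The dynamic
    blending parameter alpha is scaled by prediction confidence:
    uniform subgoal probabilities -> alpha = 0 (human in full control),
    one-hot probabilities -> alpha = alpha_max (maximum assistance)."""
    p = np.asarray(subgoal_probs, dtype=float)
    p = p / p.sum()
    H = -sum(pi * np.log(pi) for pi in p if pi > 0)      # entropy
    H_norm = H / np.log(len(p)) if len(p) > 1 else 0.0   # in [0, 1]
    alpha = alpha_max * (1.0 - H_norm)
    u = (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_assist)
    return u, alpha
```

Capping alpha below 1 keeps the operator's input in the loop at all times, which is the defining property of blended shared control as opposed to full autonomy handover.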
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages
Discovery and recognition of motion primitives in human activities
We present a novel framework for the automatic discovery and recognition of
motion primitives in videos of human activities. Given the 3D pose of a human
in a video, human motion primitives are discovered by optimizing the `motion
flux', a quantity which captures the motion variation of a group of skeletal
joints. A normalization of the primitives is proposed in order to make them
invariant with respect to a subject's anatomical variations and the data
sampling rate. The discovered primitives are unknown and unlabeled, and are
grouped into classes without supervision via a hierarchical non-parametric
Bayesian mixture model. Once classes are determined and labeled they are further
analyzed for establishing models for recognizing discovered primitives. Each
primitive model is defined by a set of learned parameters.
Given new video data and the estimated pose of the subject appearing in the
video, the motion is segmented into primitives, which are recognized with a
probability given by the parameters of the learned models.
Using our framework we build a publicly available dataset of human motion
primitives, using sequences taken from well-known motion capture datasets. We
expect that our framework, by providing an objective way for discovering and
categorizing human motion, will be a useful tool in numerous research fields
including video analysis, human inspired motion generation, learning by
demonstration, intuitive human-robot interaction, and human behavior analysis.
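A simple proxy for the motion-flux idea can be sketched as follows (this time-integrated mean joint speed is an assumption for illustration, not the paper's exact formula, and the window-based segmentation is likewise hypothetical):

```python
import numpy as np

def motion_flux(joints, dt=1.0 / 30.0):
    """Proxy flux for a joint group: time-integrated mean joint speed.
    joints: (T, J, 3) 3-D positions of J joints over T frames."""
    vel = np.diff(joints, axis=0) / dt        # (T-1, J, 3) finite-difference velocity
    speed = np.linalg.norm(vel, axis=-1)      # (T-1, J) per-joint speed
    return speed.mean(axis=1).sum() * dt      # integrate mean speed over time

def segment_by_flux(joints, win=10, thresh=0.05, dt=1.0 / 30.0):
    """Flag fixed-length windows whose flux exceeds a threshold as 'moving'."""
    T = joints.shape[0]
    return [motion_flux(joints[s:s + win], dt) > thresh
            for s in range(0, T - win + 1, win)]
```

Stationary joint groups yield zero flux while moving ones yield positive flux, giving a crude boundary detector between rest and motion; the paper's optimization additionally normalizes for anatomy and sampling rate.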