Towards Long-term Autonomy: A Perspective from Robot Learning
In the future, service robots are expected to operate autonomously for long
periods of time without human intervention. Much work striving toward this
goal has emerged with the development of robotics, in both hardware and
software. Today we believe that an important underpinning of long-term robot
autonomy is the ability of robots to learn on site and on the fly, especially
when they are deployed in changing environments or need to traverse different
environments. In this paper, we examine the problem of long-term autonomy from
the perspective of robot learning, especially in an online way, and discuss in
tandem its premise, "data", and the subsequent "deployment".
Comment: Accepted by the AAAI-23 Bridge Program on AI & Robotics
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
MX-LSTM: mixing tracklets and vislets to jointly forecast trajectories and head poses
Recent approaches to trajectory forecasting use tracklets to predict the
future positions of pedestrians, exploiting Long Short-Term Memory (LSTM)
architectures. This paper shows that adding vislets, that is, short sequences
of head pose estimates, significantly increases trajectory forecasting
performance. We then propose to use vislets in a novel framework called
MX-LSTM, which captures the interplay between tracklets and vislets through a
joint unconstrained optimization of full covariance matrices during the LSTM
backpropagation. At the same time, MX-LSTM predicts future head poses,
extending the standard capabilities of long-term trajectory forecasting
approaches. With standard head pose estimators and attention-based social
pooling, MX-LSTM sets a new trajectory forecasting state of the art on all
the considered datasets (Zara01, Zara02, UCY, and TownCentre), with a dramatic
margin when pedestrians slow down, a case in which most forecasting
approaches struggle to provide an accurate solution.
Comment: 10 pages, 3 figures, to appear in CVPR 2018
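The "joint unconstrained optimization of full covariance matrices" in the abstract above can be made concrete with a small sketch. The helper below (hypothetical names, plain NumPy rather than the paper's LSTM training loop) parametrises a 2-D covariance through its Cholesky factor, so that any unconstrained real parameters yield a valid positive-definite matrix and the negative log-likelihood can be minimised without constraints:

```python
import numpy as np

def gaussian_nll_full_cov(target, mu, chol_params):
    """Negative log-likelihood of a 2-D target under a Gaussian whose
    full covariance is parametrised via its Cholesky factor.

    chol_params = (l11, l21, l22) are unconstrained reals; exponentiating
    the diagonal entries guarantees a positive-definite covariance, so the
    optimisation itself can stay unconstrained.
    """
    l11, l21, l22 = chol_params
    L = np.array([[np.exp(l11), 0.0],
                  [l21,         np.exp(l22)]])
    diff = np.asarray(target, float) - np.asarray(mu, float)
    # Solve L y = diff instead of inverting the covariance explicitly.
    y = np.linalg.solve(L, diff)
    log_det = 2.0 * (l11 + l22)  # log|cov| read off the Cholesky diagonal
    return 0.5 * (y @ y + log_det + 2.0 * np.log(2.0 * np.pi))
```

In a real model the three Cholesky parameters would be outputs of the network, but the key point survives in the sketch: the mapping from parameters to covariance is always valid, so gradient descent never leaves the feasible set.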
Exploration and Mapping of Spatio-Temporal Pedestrian Flow Patterns for Mobile Robots
Socially compliant robot navigation is one of the key aspects of long-term acceptance of mobile robots in human-populated environments. One of the current barriers to this acceptance is that many navigation methods are based only on reactive behaviours, which can lead to frequent re-planning, causing erratic or aggressive robot behaviour. Instead, the ability to model and predict in advance how people are likely to behave, from a long-term perspective, is an important enabler for safe and efficient navigation. For example, a robot may use its knowledge of the expected human motion to travel with the main direction of flow, minimising the possibility of collisions or trajectory re-planning.
In order to provide robots with knowledge of the expected activity patterns of people at different places and times, the first main contribution of this thesis is the introduction of a Spatio-Temporal Flow map (STeF-map). This is a time-dependent probabilistic map able to model and predict the flow patterns of people in the environment. The proposed representation models the likelihood of motion directions on a grid-based map by a set of harmonic functions, which efficiently capture long-term variations of crowd movements over time. The experimental evaluation shows that the proposed model enables better human motion prediction than spatial-only approaches and an increased capacity for socially compliant robot navigation.
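As a rough illustration of how harmonic functions can capture such long-term temporal variations, the sketch below (hypothetical helper names, not the thesis implementation) fits a truncated Fourier series to the observed activity of one motion-direction bin in one grid cell, assuming a daily period:

```python
import numpy as np

PERIOD = 24 * 3600   # assumed daily period, in seconds
N_HARMONICS = 2      # harmonic components per direction bin

def harmonic_features(t):
    """Fourier basis for time t (seconds): [1, cos wt, sin wt, cos 2wt, ...]."""
    t = np.atleast_1d(np.asarray(t, float))
    cols = [np.ones_like(t)]
    for k in range(1, N_HARMONICS + 1):
        w = 2.0 * np.pi * k / PERIOD
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=1)

def fit_direction_model(times, counts):
    """Least-squares fit of one direction bin's activity over time."""
    X = harmonic_features(times)
    coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return coef

def predict(coef, t):
    """Predicted (non-negative) activity at time t."""
    return float(np.clip(harmonic_features(t) @ coef, 0.0, None))
```

A full STeF-map would maintain one such model per motion-direction bin per cell and normalise across bins to obtain a direction distribution; the sketch shows only the temporal building block.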
Obtaining this knowledge from a mobile robot platform is, however, not a trivial task, as a robot can usually observe only a fraction of the environment at a time, while the activity patterns of people may also change over time. Therefore, the second main contribution is the investigation of a new methodology for mobile robot exploration that maximises the knowledge of human activity patterns by deciding where and when to collect observations, based on an exploration policy driven by the entropy levels in a spatio-temporal map. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets, and the results show that for certain scenarios, the proposed exploration system can learn STeF-maps more quickly and better predict the flow patterns than uninformed strategies.
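A minimal sketch of such an entropy-driven policy (illustrative only; the helper names and the simple greedy rule are assumptions, not the thesis method): each cell holds a probability distribution over motion directions, and the robot chooses to observe wherever that distribution is currently most uncertain, since an observation there is most informative:

```python
import math

def direction_entropy(probs):
    """Shannon entropy (bits) of a cell's motion-direction distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def next_observation_target(cell_distributions):
    """Greedy policy: visit the cell whose current flow model is most
    uncertain (highest entropy). cell_distributions maps a cell id to
    its predicted direction distribution at the planned observation time."""
    return max(cell_distributions,
               key=lambda c: direction_entropy(cell_distributions[c]))
```

Because the underlying map is spatio-temporal, the same entropy criterion can also rank *when* to observe a location, not only where, by evaluating the predicted distribution at different times.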
Time-varying Pedestrian Flow Models for Service Robots
We present a human-centric spatio-temporal model for service robots operating in densely populated environments over long periods of time. The method integrates observations of pedestrians made by a mobile robot at different locations and times into a memory-efficient model that represents the spatial layout of natural pedestrian flows and how they change over time. To represent temporal variations of the observed flows, our method does not model time in a linear fashion, but as several dimensions wrapped into themselves. This representation of time can capture long-term (i.e. days-to-weeks) periodic patterns of people's routines and habits. Knowledge of these patterns allows making long-term predictions of future human presence and walking directions, which can support mobile robot navigation in human-populated environments. Using datasets gathered over several weeks, we compare the model to state-of-the-art methods for pedestrian flow modelling.
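The idea of wrapping time into periodic dimensions can be sketched as follows (hypothetical helper; the model's actual periods and representation may differ). Each assumed period maps linear time onto a circle via a (cos, sin) pair, so that times one full cycle apart become identical points and distances reflect periodic similarity rather than elapsed time:

```python
import math

PERIODS = (24 * 3600, 7 * 24 * 3600)   # assumed daily and weekly cycles (s)

def wrap_time(t):
    """Project linear time t (seconds) onto one circle per period.

    Returns [cos_d, sin_d, cos_w, sin_w]: 8 a.m. today and 8 a.m.
    tomorrow coincide on the daily circle, while Monday and the
    following Monday coincide on the weekly circle as well."""
    feats = []
    for p in PERIODS:
        phase = 2.0 * math.pi * (t % p) / p
        feats += [math.cos(phase), math.sin(phase)]
    return feats
```

A model built on these features (e.g. the temporal part of a pedestrian flow map) inherits the periodic structure for free: it cannot distinguish two times that fall at the same phase of every modelled cycle, which is exactly the inductive bias wanted for routines and habits.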