Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting the future
positions of dynamic agents, and planning in light of such predictions, are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.

Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
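As an illustration of the simplest motion model covered by such taxonomies, a constant-velocity baseline extrapolates an agent's last observed velocity forward in time. This is a minimal sketch, not any specific method from the survey; the function name, sampling interval, and example values are illustrative assumptions.

```python
import numpy as np

def predict_constant_velocity(track, horizon, dt=0.4):
    """Extrapolate future positions assuming constant velocity.

    track: (T, 2) array of observed (x, y) positions sampled every dt seconds.
    horizon: number of future steps to predict.
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt      # last observed velocity
    steps = np.arange(1, horizon + 1)[:, None]   # (horizon, 1)
    return track[-1] + steps * velocity * dt     # (horizon, 2)

# A pedestrian walking in a straight line at 1 m/s along x:
observed = np.array([[0.0, 0.0], [0.4, 0.0], [0.8, 0.0]])
future = predict_constant_velocity(observed, horizon=3)
# ≈ [[1.2, 0.0], [1.6, 0.0], [2.0, 0.0]]
```

Despite its simplicity, this kind of physics-based baseline is a common point of comparison for the learned models surveyed above.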
Quality-Aware Broadcasting Strategies for Position Estimation in VANETs
The dissemination of vehicle position data all over the network is a
fundamental task in Vehicular Ad Hoc Network (VANET) operations, as
applications often need to know the position of other vehicles over a large
area. In such cases, inter-vehicular communications should be exploited to
satisfy application requirements, although congestion control mechanisms are
required to minimize the packet collision probability. In this work, we address
the problem of achieving accurate vehicle position estimation and prediction in a
VANET scenario. State-of-the-art solutions broadcast the positioning
information periodically, so that vehicles can ensure that the
information their neighbors have about them is never older than the
inter-transmission period. However, the rate of decay of the information is not
deterministic in complex urban scenarios: the movements and maneuvers of
vehicles can often be erratic and unpredictable, making old positioning
information inaccurate or downright misleading. To address this problem, we
propose to use the Quality of Information (QoI) as the decision factor for
broadcasting. We implement a threshold-based strategy to distribute position
information whenever the positioning error passes a reference value, thereby
shifting the objective of the network to limiting the actual positioning error
and guaranteeing quality across the VANET. The threshold-based strategy can
reduce the network load by avoiding the transmission of redundant messages, and
improve the overall positioning accuracy by more than 20% in
realistic urban scenarios.

Comment: 8 pages, 7 figures, 2 tables, accepted for presentation at European
Wireless 201
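The threshold rule described above can be sketched as follows. This is a minimal illustration under assumed semantics: the class name and parameters are invented here, and neighbors are assumed to dead-reckon with the last broadcast position and velocity, which is not necessarily the paper's exact error model.

```python
import math

class ThresholdBroadcaster:
    """Broadcast position only when the neighbors' dead-reckoned estimate
    (last broadcast position + last broadcast velocity * elapsed time)
    drifts beyond a tolerated error. Names and parameters are illustrative."""

    def __init__(self, error_threshold_m=1.0):
        self.error_threshold_m = error_threshold_m
        self.last_tx = None  # (time, x, y, vx, vy) of the last broadcast

    def step(self, t, x, y, vx, vy):
        """Return True if a position message should be sent at time t."""
        if self.last_tx is None:
            self.last_tx = (t, x, y, vx, vy)
            return True
        t0, x0, y0, vx0, vy0 = self.last_tx
        # What neighbors currently believe via dead reckoning:
        est_x = x0 + vx0 * (t - t0)
        est_y = y0 + vy0 * (t - t0)
        error = math.hypot(x - est_x, y - est_y)
        if error > self.error_threshold_m:
            self.last_tx = (t, x, y, vx, vy)
            return True   # positioning error exceeded the reference value
        return False      # redundant message suppressed
```

A vehicle cruising at constant velocity would never rebroadcast under this rule, while an unpredictable maneuver immediately triggers a new message, which is how the strategy bounds the actual positioning error rather than the message age.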
Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data
In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV’s absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
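The accuracy comparison described in this abstract comes down to a translation-error metric computed against ground truth. A minimal sketch of such an evaluation is below; the function names are illustrative, and the RMSE-over-positions metric is an assumed choice, not necessarily the one used in the paper.

```python
import numpy as np

def translation_rmse(pred, gt):
    """Root-mean-square translation error between predicted and
    ground-truth positions, both (N, 3) arrays in metres."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

def improvement_pct(rmse_baseline, rmse_model):
    """Relative accuracy gain of the model over the baseline, in percent."""
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

# Hypothetical numbers: a baseline at 0.20 m RMSE and a model at 0.15 m
# correspond to a 25% improvement of the kind reported above.
gain = improvement_pct(0.20, 0.15)  # ≈ 25.0
```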
Vision and Learning for Deliberative Monocular Cluttered Flight
Cameras provide a rich source of information while being passive, cheap and
lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work
we present the first implementation of receding horizon control, which is
widely used in ground vehicles, with monocular vision as the only sensing mode
for autonomous UAV flight in dense clutter. We make this feasible on UAVs through a
number of contributions: a novel coupling of perception and control via relevant
and diverse multiple interpretations of the scene around the robot; anytime
budgeted cost-sensitive feature selection, leveraging recent advances in machine
learning; and fast non-linear regression for monocular depth
prediction. We empirically demonstrate the efficacy of our pipeline via
real-world experiments covering more than 2 km through dense trees with a quadrotor
built from off-the-shelf parts. Moreover, our pipeline is designed to also combine
information from other modalities, such as stereo and lidar, when available.
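The core receding-horizon loop implied by this abstract scores a library of candidate trajectories against the current monocular depth estimate, executes the best one briefly, then replans. The sketch below illustrates only the scoring step; the function, cost rule, and pixel-waypoint representation are simplified assumptions, not the paper's actual pipeline.

```python
import numpy as np

def select_trajectory(depth_map, trajectories, collision_radius=1.0):
    """One receding-horizon step: score candidate trajectories against an
    (H, W) monocular depth map (metres) and return the index of the safest.

    trajectories: list of sequences of (row, col) pixel waypoints along
    each candidate motion, ordered from nearest to farthest in time.
    """
    best, best_cost = None, float("inf")
    for idx, traj in enumerate(trajectories):
        cost = 0.0
        for step, (r, c) in enumerate(traj):
            d = depth_map[int(r), int(c)]
            # Penalise waypoints whose scene depth lies inside the
            # collision radius; near-term violations cost more.
            if d < collision_radius:
                cost += (collision_radius - d) * (len(traj) - step)
        if cost < best_cost:
            best, best_cost = idx, cost
    return best  # chosen trajectory index; replan on the next depth frame
```

With an obstacle filling the left half of the depth image, a left-going candidate accumulates collision cost while a right-going one stays free, so the planner commits to the right trajectory for the current horizon.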