
    Human Motion Trajectory Prediction: A Survey

    Full text link
    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents and planning that accounts for such predictions are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
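
    As a concrete illustration of the prediction task surveyed here, a constant-velocity model is a common baseline for predicting future positions of dynamic agents. The sketch below is a hypothetical example, not code from the survey; the sampling interval and array shapes are assumptions.

```python
import numpy as np

def constant_velocity_predict(track, horizon, dt=0.4):
    """Extrapolate a 2-D trajectory with a constant-velocity model.

    track:   (T, 2) array of observed (x, y) positions
    horizon: number of future steps to predict
    dt:      assumed time step between samples, in seconds
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt           # velocity from the last two observations
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return track[-1] + steps * dt * velocity          # position_k = last + k * dt * v

# Example: a pedestrian walking roughly along the x-axis.
observed = np.array([[0.0, 0.00], [0.4, 0.05], [0.8, 0.10], [1.2, 0.12]])
print(constant_velocity_predict(observed, horizon=3))
```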

    Neuromorphic Vision Based Multivehicle Detection and Tracking for Intelligent Transportation System

    Get PDF
    The neuromorphic vision sensor is a new passive sensing modality and a frameless sensor with a number of advantages over traditional cameras. Instead of wastefully sending entire images at a fixed frame rate, a neuromorphic vision sensor transmits only the local pixel-level changes caused by movement in a scene, at the time they occur. This results in advantageous characteristics in terms of low energy consumption, high dynamic range, sparse event streams, and low response latency, which can be very useful in intelligent perception systems for modern intelligent transportation systems (ITS), which require efficient wireless data communication and low-power embedded computing resources. In this paper, we propose the first neuromorphic vision based multivehicle detection and tracking system in ITS. The performance of the system is evaluated with a dataset recorded by a neuromorphic vision sensor mounted on a highway bridge. We performed a preliminary multivehicle tracking-by-clustering study using three classical clustering approaches and four tracking approaches. Our experimental results indicate that, by making full use of the low latency and the sparse event stream, we can run an online tracking-by-clustering system at a high frame rate that far exceeds the real-time capabilities of traditional frame-based cameras. If accuracy is prioritized, the tracking task can also be performed robustly at a relatively high rate with different combinations of algorithms. We also provide our dataset and evaluation approaches, which serve as the first neuromorphic benchmark in ITS and which we hope will motivate further research on neuromorphic vision sensors for ITS solutions. Document type: Article
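
    To make the tracking-by-clustering idea concrete, the sketch below clusters the (x, y) coordinates of events from one time window with DBSCAN (one classical clustering approach of the kind evaluated) and greedily associates the resulting centroids with existing tracks. The thresholds, window handling, and function names are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_events(events, eps=5.0, min_samples=10):
    """Group (x, y) event coordinates from one time window into vehicle candidates."""
    xy = np.asarray(events, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    # One centroid per cluster; label -1 marks noise events and is ignored.
    return [xy[labels == k].mean(axis=0) for k in set(labels) if k != -1]

def associate(prev_tracks, centroids, max_dist=20.0):
    """Greedy nearest-neighbour association of new centroids with existing tracks."""
    tracks = dict(prev_tracks)            # track_id -> last known centroid
    next_id = max(tracks, default=-1) + 1
    for c in centroids:
        if tracks:
            tid, last = min(tracks.items(), key=lambda kv: np.linalg.norm(kv[1] - c))
            if np.linalg.norm(last - c) < max_dist:
                tracks[tid] = c           # update the closest existing track
                continue
        tracks[next_id] = c               # otherwise start a new track
        next_id += 1
    return tracks

# Example: one time window containing two simulated event blobs (pixel coordinates).
window = np.vstack([np.random.normal((50, 40), 2, (60, 2)),
                    np.random.normal((120, 80), 2, (60, 2))])
print(associate({}, cluster_events(window)))
```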

    SenSys: A Smartphone-Based Framework for ITS applications

    Get PDF
    Intelligent transportation systems (ITS) use different methods to collect and process traffic data. Conventional techniques suffer from several challenges, such as high installation and maintenance costs, connectivity and communication problems, and a limited set of data. The recent massive spread of smartphones among drivers has encouraged the ITS community to use them to solve ITS challenges. Using smartphones in ITS is gaining increasing interest among researchers and developers. Typically, the set of sensors that comes with smartphones is used to develop tools and services that enhance safety and the driving experience. GPS, cameras, Bluetooth, inertial sensors and other embedded sensors are used to detect and analyze drivers' behavior and vehicles' motion. The use of smartphones makes data collection easier because of their availability among drivers, their diverse sensors, their computational ability, and their low installation and maintenance cost. On the other hand, different smartphone sensors have diverse characteristics and accuracy, and each needs special fusion, processing, and filtering methods to generate more stable and accurate data. Using smartphones in ITS also faces challenges such as inaccurate readings, weak GPS reception, noisy sensors and unaligned readings. These challenges waste researchers' and developers' time and effort and prevent them from building accurate ITS applications. This work proposes SenSys, a smartphone-based framework that collects and processes traffic data and then analyzes and extracts vehicle dynamics and vehicle activities, which developers and researchers can use to create their navigation, communication, and safety ITS applications. The SenSys framework fuses and filters smartphone sensor readings, enhancing the accuracy of tracking and analyzing various vehicle dynamics such as stops, lane changes, turn detection, and accurate vehicle speed calculation, which, in turn, will enable the development of new ITS applications and services.
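
    As an illustration of the kind of fusion and filtering such a framework performs, the sketch below smooths noisy accelerometer samples with an exponential moving average and derives vehicle speed from consecutive GPS fixes. The function names and parameter values are assumptions for illustration, not SenSys APIs.

```python
import math

def low_pass(samples, alpha=0.2):
    """Exponential moving average to suppress high-frequency sensor noise."""
    smoothed, prev = [], None
    for s in samples:
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

def gps_speed(lat1, lon1, lat2, lon2, dt):
    """Approximate speed (m/s) from two GPS fixes taken dt seconds apart."""
    r = 6371000.0                                    # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) / dt      # haversine distance over elapsed time

print(low_pass([0.1, 0.3, 2.5, 0.2, 0.1]))           # spike at index 2 is damped
print(gps_speed(45.0000, 9.0000, 45.0001, 9.0001, 1.0))
```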

    Efficient Min-cost Flow Tracking with Bounded Memory and Computation

    Get PDF
    This thesis is a contribution to solving multi-target tracking in an optimal fashion for real-time-demanding computer vision applications. We introduce a challenging benchmark, recorded with our autonomous driving platform AnnieWAY. Three main challenges of tracking are addressed: solving the data association (min-cost flow) problem faster than standard solvers, extending this approach to an online setting, and making it real-time capable through a tight approximation of the optimal solution.
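
    A toy example of posing data association as min-cost flow: detections in consecutive frames become nodes, edge weights encode matching cost, and a generic solver (here networkx, far slower than the dedicated solver developed in the thesis) recovers the optimal assignment. The node names and costs are invented for illustration.

```python
import networkx as nx

# Two detections in frame t (a1, a2) and two in frame t+1 (b1, b2).
# The source supplies two units of flow (one per track); the sink absorbs them.
G = nx.DiGraph()
G.add_node("src", demand=-2)
G.add_node("snk", demand=2)
for det in ("a1", "a2"):
    G.add_edge("src", det, capacity=1, weight=0)
for det in ("b1", "b2"):
    G.add_edge(det, "snk", capacity=1, weight=0)

# Association costs, e.g. scaled negative log-likelihoods of each match.
costs = {("a1", "b1"): 1, ("a1", "b2"): 5, ("a2", "b1"): 6, ("a2", "b2"): 2}
for (u, v), w in costs.items():
    G.add_edge(u, v, capacity=1, weight=w)

flow = nx.min_cost_flow(G)
matches = [(u, v) for (u, v) in costs if flow[u][v] == 1]
print(matches)   # expected: [('a1', 'b1'), ('a2', 'b2')]
```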

    Infrastructure-Based Vehicle Localization through Camera Calibration for I2V Communication Warning

    Get PDF
    In recent years, research on object detection and tracking has become increasingly important for the development of advanced driving assistance systems (ADASs) and connected autonomous vehicles (CAVs), which aim to improve safety for all road users involved. Intersections, especially in urban scenarios, are the portion of the road where the most relevant accidents take place; therefore, this work proposes an I2V warning system able to detect and track vehicles occupying the intersection and representing an obstacle for other incoming vehicles. This work presents a localization algorithm based on image detection and tracking by a single camera installed on a roadside unit (RSU). The vehicle position in the global reference frame is obtained through a sequence of linear transformations that use the intrinsic camera parameters, camera height, and pitch angle to compute the vehicle's distance from the camera and, thus, its global latitude and longitude. The study provides an experimental analysis of both the localization accuracy, with an average error of 0.62 m, and the detection reliability in terms of false positive (1.9%) and missed detection (3.6%) rates.
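
    A simplified version of the geometric chain described above: with known camera height, pitch, and intrinsics, the image row of a vehicle's contact point with the road yields its ground-plane distance, which can then be applied as an offset to the camera's geographic position. The flat-road assumption and all numeric values below are illustrative, not the paper's calibration.

```python
import math

def ground_distance(v, cy, fy, cam_height, pitch_deg):
    """Distance (m) along a flat road to the image row v of a vehicle's contact point.

    cy, fy: principal-point row and focal length in pixels;
    cam_height: camera height above the road (m); pitch_deg: downward camera pitch.
    """
    ray_angle = math.radians(pitch_deg) + math.atan((v - cy) / fy)  # angle below horizontal
    return cam_height / math.tan(ray_angle)

def offset_latlon(lat, lon, distance, heading_deg):
    """Shift a WGS-84 position by `distance` metres along `heading_deg` (small-offset approximation)."""
    dlat = distance * math.cos(math.radians(heading_deg)) / 111320.0
    dlon = distance * math.sin(math.radians(heading_deg)) / (111320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Illustrative values: a 6 m high RSU camera pitched 15 deg down, detection at image row 600.
d = ground_distance(v=600, cy=540, fy=1400, cam_height=6.0, pitch_deg=15.0)
print(d, offset_latlon(45.478, 9.227, d, heading_deg=90.0))
```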

    Hearing What You Cannot See: Acoustic Vehicle Detection Around Corners

    Full text link
    This work proposes to use passive acoustic perception as an additional sensing modality for intelligent vehicles. We demonstrate that approaching vehicles behind blind corners can be detected by sound before such vehicles enter the line of sight. We have equipped a research vehicle with a roof-mounted microphone array, and show on data collected with this sensor setup that wall reflections provide information on the presence and direction of occluded approaching vehicles. A novel method is presented to classify if and from what direction a vehicle is approaching before it is visible, using as input Direction-of-Arrival features that can be efficiently computed from the streaming microphone array data. Since the local geometry around the ego-vehicle affects the perceived patterns, we systematically study several environment types and investigate generalization across these environments. With a static ego-vehicle, an accuracy of 0.92 is achieved on the hidden-vehicle classification task. Compared to a state-of-the-art visual detector, Faster R-CNN, our pipeline achieves the same accuracy more than one second earlier, providing crucial reaction time for the situations we study. While the ego-vehicle is driving, we demonstrate positive results on acoustic detection, still achieving an accuracy of 0.84 within one environment type. We further study failure cases across environments to identify future research directions. Comment: Accepted to IEEE Robotics & Automation Letters (2021), DOI: 10.1109/LRA.2021.3062254. Code, Data & Video: https://github.com/tudelft-iv/occluded_vehicle_acoustic_detectio
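
    Direction-of-Arrival features of the kind used as classifier input can be derived from inter-microphone time delays. The sketch below estimates the delay between two channels with GCC-PHAT and converts it to an arrival angle; the two-microphone simplification, spacing, and sample rate are assumptions (the paper uses a full roof-mounted array with a learned classifier on top of such features).

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (s) between two microphone signals with GCC-PHAT."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    # Phase transform: keep only phase information to sharpen the correlation peak.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def doa_angle(tau, mic_distance, c=343.0):
    """Convert a time delay to an arrival angle (degrees) for a two-microphone pair."""
    return np.degrees(np.arcsin(np.clip(c * tau / mic_distance, -1.0, 1.0)))

# Synthetic check: the second channel lags the first by 3 samples, so |tau| should be ~3/fs.
fs = 16000
noise = np.random.randn(4096)
tau = gcc_phat(noise, np.roll(noise, 3), fs, max_tau=0.001)
print(tau, doa_angle(tau, mic_distance=0.2))
```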