
    Detecting Intentions of Vulnerable Road Users Based on Collective Intelligence

    Vulnerable road users (VRUs), i.e. cyclists and pedestrians, will play an important role in future traffic. To avoid accidents and achieve a highly efficient traffic flow, it is important to detect VRUs and to predict their intentions. In this article a holistic approach for detecting intentions of VRUs by cooperative methods is presented. The intention detection consists of basic movement primitive prediction, e.g. standing, moving, turning, and a forecast of the future trajectory. Vehicles equipped with sensors, data processing systems and communication abilities, referred to as intelligent vehicles, acquire and maintain a local model of their surrounding traffic environment, e.g. crossing cyclists. Heterogeneous, open sets of agents (cooperating and interacting vehicles; infrastructure such as cameras and laser scanners; and VRUs equipped with smart devices and body-worn sensors) exchange information, forming a multi-modal sensor system whose goal is to detect VRUs and their intentions reliably and robustly while accounting for real-time requirements and uncertainties. The resulting model extends the perceptual horizon of each agent beyond its own sensory capabilities, enabling a longer forecast horizon. Occlusions, implausibilities and inconsistencies are resolved by the collective intelligence of the cooperating agents. Novel techniques of signal processing and modelling, combined with analytical and learning-based approaches to pattern and activity recognition, are used for the detection and intention prediction of VRUs. Cooperation, by means of probabilistic sensor and knowledge fusion, takes place at the level of perception and intention recognition. Based on the communication requirements of the cooperative approach, a new ad hoc network strategy is proposed.
    Comment: 20 pages, published at Automatisiertes und vernetztes Fahren (AAET), Braunschweig, Germany, 201
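
    The abstract does not spell out a particular fusion algorithm, so the sketch below only illustrates the general idea of probabilistic sensor fusion across cooperating agents: two independent position estimates of the same cyclist (from a vehicle and an infrastructure camera, both hypothetical) are combined by inverse-covariance weighting, and the fused estimate is extrapolated with a constant-velocity model as a crude stand-in for trajectory forecasting.

        import numpy as np

        def fuse_positions(estimates):
            """Fuse independent Gaussian position estimates (mean, covariance)
            from several agents by inverse-covariance (information) weighting."""
            info = np.zeros((2, 2))
            info_mean = np.zeros(2)
            for mean, cov in estimates:
                inv = np.linalg.inv(cov)
                info += inv
                info_mean += inv @ mean
            fused_cov = np.linalg.inv(info)
            return fused_cov @ info_mean, fused_cov

        def forecast(position, velocity, horizon_s, dt=0.1):
            """Constant-velocity forecast as a placeholder for trajectory prediction."""
            steps = int(horizon_s / dt)
            return np.array([position + velocity * dt * (k + 1) for k in range(steps)])

        # Hypothetical observations of the same crossing cyclist by two agents.
        vehicle_obs = (np.array([12.0, 3.1]), np.diag([0.6, 0.6]))   # noisier onboard sensor
        camera_obs = (np.array([12.4, 2.9]), np.diag([0.2, 0.2]))    # infrastructure camera
        pos, cov = fuse_positions([vehicle_obs, camera_obs])
        path = forecast(pos, velocity=np.array([1.5, 0.0]), horizon_s=2.0)
        print(pos, cov.diagonal(), path[-1])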

    A Methodological Review of Visual Road Recognition Procedures for Autonomous Driving Applications

    The current research interest in autonomous driving is growing at a rapid pace, attracting great investments from both the academic and corporate sectors. In order for vehicles to be fully autonomous, it is imperative that the driver assistance system is adept at road and lane keeping. In this paper, we present a methodological review of techniques with a focus on visual road detection and recognition. We adopt a pragmatic outlook in presenting this review, whereby the procedures of road recognition are emphasised with respect to their practical implementations. The contribution of this review hence covers the topic in two parts -- the first part describes the methodological approach to conventional road detection, covering the algorithms and approaches used to classify and separate roads from non-road regions; the second part focuses on recent state-of-the-art machine learning techniques applied to visual road recognition, with an emphasis on methods that incorporate convolutional neural networks and semantic segmentation. A subsequent overview of recent implementations in the commercial sector is also presented, along with some recent research works pertaining to road detection.
    Comment: 14 pages, 6 Figures, 2 Tables. Permission to reprint granted from original figure author
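
    As a toy illustration of the "conventional" road detection branch surveyed here (not any specific method from the review), the sketch below models the gray-level statistics of a patch assumed to lie on the road directly in front of the vehicle and thresholds the rest of the frame against them; the file names, patch location and threshold factor are placeholders.

        import cv2
        import numpy as np

        def segment_road(bgr_image, k=2.0):
            """Toy conventional road segmentation: sample a patch assumed to be road,
            threshold the frame against its gray-level statistics, then clean the
            mask with morphological opening and closing."""
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            h, w = gray.shape
            seed = gray[int(0.75 * h):, int(0.3 * w):int(0.7 * w)]  # patch in front of the car
            mean, std = float(seed.mean()), float(seed.std()) + 1e-6
            mask = (np.abs(gray.astype(float) - mean) < k * std).astype(np.uint8) * 255
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
            return mask  # 255 marks candidate road pixels

        frame = cv2.imread("dashcam_frame.png")   # hypothetical input image
        if frame is not None:
            cv2.imwrite("road_mask.png", segment_road(frame))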

    iTV: Inferring Traffic Violation-Prone Locations with Vehicle Trajectories and Road Environment Data

    Traffic violations such as illegal parking, illegal turning, and speeding have become one of the greatest challenges in urban transportation systems, bringing potential risks of traffic congestion, vehicle accidents, and parking difficulties. To maximize the utility and effectiveness of traffic enforcement strategies aimed at reducing violations, it is essential for urban authorities to infer the traffic violation-prone locations in the city. Therefore, we propose a low-cost, comprehensive, and dynamic framework to infer traffic violation-prone locations in cities based on large-scale vehicle trajectory data and road environment data. First, we normalize the trajectory data with map matching algorithms and extract key driving behaviors, i.e., turning behaviors, parking behaviors, and vehicle speeds. Second, we restore the spatiotemporal contexts of these driving behaviors to obtain the corresponding traffic restrictions, such as no parking, no turning, and speed limits. After matching the traffic restrictions with the driving behaviors, we obtain the distribution of traffic violations. Finally, we extract the spatiotemporal patterns of traffic violations and build a visualization system to showcase the inferred traffic violation-prone locations. To evaluate the effectiveness of the proposed method, we conduct extensive studies on large-scale, real-world vehicle GPS trajectories collected from two Chinese cities. Evaluation results confirm that the proposed framework infers traffic violation-prone locations effectively and efficiently, providing comprehensive decision support for traffic enforcement strategies.
    Comment: 12 pages, 19 figures, accepted by IEEE Systems Journal
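
    The abstract describes matching extracted driving behaviors against restored traffic restrictions; the snippet below is a minimal sketch of that matching step only, with an invented restriction table and behavior record (road IDs, rule fields and hours are hypothetical, and the real framework additionally performs map matching and spatiotemporal pattern mining).

        from dataclasses import dataclass

        @dataclass
        class Behavior:
            road_id: str   # road segment the behavior was map-matched to
            kind: str      # "turn_left", "park", or "speed"
            value: float   # speed in km/h for "speed", unused otherwise
            hour: int      # hour of day taken from the trajectory timestamp

        # Invented restriction table standing in for the road environment data.
        RESTRICTIONS = {
            ("road_42", "turn_left"): {"hours": range(7, 20)},               # no left turns 07-19
            ("road_42", "park"): {"hours": range(0, 24)},                    # no parking any time
            ("road_42", "speed"): {"hours": range(0, 24), "limit": 60.0},    # 60 km/h limit
        }

        def is_violation(b: Behavior) -> bool:
            rule = RESTRICTIONS.get((b.road_id, b.kind))
            if rule is None or b.hour not in rule["hours"]:
                return False
            if b.kind == "speed":
                return b.value > rule["limit"]
            return True  # restricted discrete behavior observed during a restricted period

        print(is_violation(Behavior("road_42", "speed", 72.0, hour=9)))      # True
        print(is_violation(Behavior("road_42", "turn_left", 0.0, hour=22)))  # False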

    Joint Attention in Driver-Pedestrian Interaction: from Theory to Practice

    Today, one of the major challenges that autonomous vehicles face is the ability to drive in urban environments. Such a task requires communication between autonomous vehicles and other road users in order to resolve various traffic ambiguities. The interaction between road users is a form of negotiation in which the parties involved have to share their attention regarding a common objective or goal (e.g. crossing an intersection) and coordinate their actions in order to accomplish it. In this literature review we address the interaction problem between pedestrians and drivers (or vehicles) from the point of view of joint attention. More specifically, we discuss the theoretical background behind joint attention, its application to traffic interaction, and practical approaches to implementing joint attention for autonomous vehicles.

    Intelligent Intersection: Two-Stream Convolutional Networks for Real-time Near Accident Detection in Traffic Video

    In intelligent transportation systems, real-time systems that monitor and analyze road users become increasingly critical as we march toward the smart city era. Vision-based frameworks for object detection, multiple object tracking, and traffic near-accident detection are important applications of intelligent transportation systems, particularly in video surveillance. Although deep neural networks have recently achieved great success in many computer vision tasks, a unified framework for all three tasks remains challenging, with difficulties compounded by the demand for real-time performance, complex urban settings, highly dynamic traffic events, and many traffic movements. In this paper, we propose a two-stream convolutional network architecture that performs real-time detection, tracking, and near-accident detection of road users in traffic video data. The two-stream model consists of a spatial stream network for object detection and a temporal stream network that leverages motion features for multiple object tracking. We detect near accidents by incorporating appearance features and motion features from the two streams. Using aerial videos, we propose a Traffic Near Accident Dataset (TNAD) covering various types of traffic interactions that is suitable for vision-based traffic analysis tasks. Our experiments demonstrate the advantage of our framework, with overall competitive qualitative and quantitative performance at high frame rates on the TNAD dataset.
    Comment: Submitted to ACM Transactions on Spatial Algorithms and Systems (TSAS); Special issue on Urban Mobility: Algorithms and Systems. arXiv admin note: text overlap with arXiv:1703.07402 by other author
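
    The abstract only states that appearance and motion features from the two streams are combined for near-accident detection; the fragment below sketches one plausible fusion head in PyTorch. The feature dimensions, layer sizes and sigmoid risk score are assumptions for illustration, not the authors' architecture.

        import torch
        import torch.nn as nn

        class TwoStreamFusion(nn.Module):
            """Toy fusion head: concatenates appearance features from a spatial stream
            and motion features from a temporal stream and scores near-accident risk."""
            def __init__(self, appearance_dim=256, motion_dim=256):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(appearance_dim + motion_dim, 128),
                    nn.ReLU(),
                    nn.Linear(128, 1),
                )

            def forward(self, appearance_feat, motion_feat):
                fused = torch.cat([appearance_feat, motion_feat], dim=-1)
                return torch.sigmoid(self.head(fused))  # probability of a near accident

        # One batch of pooled per-frame features from the two (hypothetical) backbones.
        score = TwoStreamFusion()(torch.randn(4, 256), torch.randn(4, 256))
        print(score.shape)  # torch.Size([4, 1])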

    Identifying Driver Behaviors using Trajectory Features for Vehicle Navigation

    We present a novel approach to automatically identify driver behaviors from vehicle trajectories and use them for the safe navigation of autonomous vehicles. We propose a novel set of features that can be easily extracted from car trajectories and derive a data-driven mapping between these features and six driver behaviors using an elaborate web-based user study. We also compute a summarized score indicating the level of awareness needed while driving next to other vehicles. Finally, we incorporate our algorithm into a vehicle navigation simulation system and demonstrate its benefits in terms of safer real-time navigation while driving next to aggressive or dangerous drivers.
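
    The exact feature set and learned mapping are not given in the abstract, so the following sketch only illustrates the flavor of the approach: simple kinematic features computed from a sampled trajectory and a made-up linear weighting that would, in the paper, be replaced by the mapping learned from the user study.

        import numpy as np

        def trajectory_features(xy, dt=0.1):
            """Simple per-trajectory features from positions sampled every dt seconds:
            speed statistics, acceleration extremes and mean absolute jerk."""
            v = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
            a = np.diff(v) / dt
            j = np.diff(a) / dt
            return np.array([v.mean(), v.std(), v.max(), a.max(), a.min(), np.abs(j).mean()])

        # Hypothetical weights for a single "aggressiveness" score; the numbers are
        # invented and stand in for the data-driven mapping from the user study.
        W_AGGRESSIVE = np.array([0.02, 0.3, 0.05, 0.4, -0.3, 0.2])

        xy = np.cumsum(np.random.randn(100, 2) * 0.5 + np.array([1.0, 0.0]), axis=0)
        feats = trajectory_features(xy)
        print(float(feats @ W_AGGRESSIVE))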

    Perceptual Attention-based Predictive Control

    In this paper, we present a novel information processing architecture for safe deep learning-based visual navigation of autonomous systems. The proposed architecture supports a perceptual attention-based predictive control algorithm that leverages model predictive control (MPC), convolutional neural networks (CNNs), and uncertainty quantification methods. The novelty of our approach lies in using MPC to learn how to place attention on relevant areas of the visual input, which ultimately allows the system to more rapidly detect unsafe conditions. We accomplish this by using MPC to learn to select regions of interest in the input image, which are used to output control actions as well as estimates of epistemic and aleatoric uncertainty in the attention-aware visual input. We use these uncertainty estimates to quantify the safety of our network controller under the current navigation conditions. The proposed architecture and algorithm are tested on a 1:5 scale terrestrial vehicle. Experimental results show that the proposed algorithm outperforms previous approaches in the early detection of unsafe conditions, such as when novel obstacles are present in the navigation environment. The proposed architecture is a first step towards using deep learning-based perceptual control policies in safety-critical domains.
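
    As a rough, assumption-laden sketch of the uncertainty-quantification piece (not the authors' MPC-driven attention mechanism), the code below crops a region of interest from the input image, runs a tiny dropout-equipped CNN controller several times with dropout left active (MC dropout), and uses the spread of the sampled control outputs as a proxy for epistemic uncertainty; the network, crop box and sample count are all invented.

        import torch
        import torch.nn as nn

        class SmallPolicy(nn.Module):
            """Tiny CNN controller; dropout is kept active at test time (MC dropout)
            to obtain a rough epistemic-uncertainty estimate over the attended region."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(), nn.Dropout2d(0.2),
                    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(16, 2),   # [steering, throttle]
                )

            def forward(self, x):
                return self.net(x)

        def attend(image, box):
            """Crop the region of interest that a (hypothetical) attention layer selected."""
            top, left, h, w = box
            return image[..., top:top + h, left:left + w]

        policy = SmallPolicy().train()            # .train() keeps dropout on for MC sampling
        roi = attend(torch.rand(1, 3, 128, 128), box=(32, 32, 64, 64))
        samples = torch.stack([policy(roi) for _ in range(20)])
        control, epistemic_std = samples.mean(0), samples.std(0)
        print(control, epistemic_std)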

    Mini-Unmanned Aerial Vehicle-Based Remote Sensing: Techniques, Applications, and Prospects

    The past few decades have witnessed great progress in unmanned aerial vehicles (UAVs) in civilian fields, especially in photogrammetry and remote sensing. In contrast with manned aircraft and satellite platforms, the UAV platform holds many promising characteristics: flexibility, efficiency, high spatial/temporal resolution, low cost, easy operation, etc., which make it an effective complement to other remote-sensing platforms and a cost-effective means for remote sensing. Considering the popularity and expansion of UAV-based remote sensing in recent years, this paper provides a systematic survey of recent advances and future prospects of UAVs in the remote-sensing community. Specifically, the main challenges and key technologies of UAV-based remote-sensing data processing are first discussed and summarized. Then, we provide an overview of the widespread applications of UAVs in remote sensing. Finally, some prospects for future work are discussed. We hope this paper will provide remote-sensing researchers with an overall picture of recent UAV-based remote-sensing developments and help guide further research on this topic.

    Fast image-based obstacle detection from unmanned surface vehicles

    Obstacle detection plays an important role for unmanned surface vehicles (USVs). USVs operate in highly diverse environments in which an obstacle may be a floating piece of wood, a scuba diver, a pier, or a part of a shoreline, which presents a significant challenge to continuous detection from images taken onboard. This paper addresses the problem of online detection by constrained unsupervised segmentation. To this end, a new graphical model is proposed that affords fast and continuous obstacle image-map estimation from a single video stream captured onboard a USV. The model accounts for the semantic structure of the marine environment as observed from the USV by imposing weak structural constraints. A Markov random field framework is adopted, and a highly efficient algorithm for simultaneous optimization of model parameters and segmentation mask estimation is derived. Our approach does not require computationally intensive extraction of texture features and comfortably runs in real time. The algorithm is tested on a new, challenging dataset for segmentation and obstacle detection in marine environments, which is the largest annotated dataset of its kind. Results on this dataset show that our model outperforms related approaches while requiring a fraction of the computational effort.
    Comment: This is an extended version of the ACCV2014 paper [Kristan et al., 2014] submitted to a journal. [Kristan et al., 2014] M. Kristan, J. Pers, V. Sulic, S. Kovacic, A graphical model for rapid obstacle image-map estimation from unmanned surface vehicles, in Proc. Asian Conf. Computer Vision, 201
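
    A faithful implementation of the proposed graphical model is beyond a short snippet, so the sketch below only conveys the weak structural prior it exploits (water occupies the lower part of a USV image): it fits a single Gaussian to the color of the lowest rows and flags color outliers within that band as obstacle candidates; the water fraction, threshold and placeholder frame are arbitrary assumptions, not the paper's method.

        import numpy as np

        def obstacle_map(rgb, water_fraction=0.3, thresh=3.0):
            """Crude stand-in for the MRF model: fit a Gaussian to the colour of the
            lowest rows (assumed water) and mark pixels in that band whose Mahalanobis
            distance exceeds `thresh` as obstacle candidates."""
            h, w, _ = rgb.shape
            water = rgb[int((1 - water_fraction) * h):].reshape(-1, 3).astype(float)
            mu = water.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(water, rowvar=False) + 1e-6 * np.eye(3))
            diff = rgb.reshape(-1, 3).astype(float) - mu
            d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff)).reshape(h, w)
            mask = d > thresh
            mask[: int((1 - water_fraction) * h)] = False   # only flag within the water band
            return mask

        frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # placeholder frame
        print(obstacle_map(frame).sum(), "candidate obstacle pixels")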

    Computer Vision for Autonomous Vehicles

    In this work, we apply image processing techniques to autonomous vehicles, both indoor and outdoor. The challenges for the two settings are different, and the ways to tackle them vary too. We also show that deep learning makes these tasks easier and more precise. We built baseline models for all the problems tackled while building an autonomous car for the Indian Institute of Space Science and Technology.