
    AMTV headway sensor and safety design

    A headway sensing system for an automated mixed traffic vehicle (AMTV) employing an array of optical proximity sensor elements is described, and its performance is presented in terms of object detection profiles. The problem of sensing in turns is explored experimentally, and requirements for future turn sensors are discussed. A recommended headway sensor configuration, employing multiple source elements in the focal plane of one lens operating together with a similar detector unit, is described. Alternative concepts, including laser radar, ultrasonic sensing, imaging techniques, and radar, are compared to the present proximity sensor approach. Design concepts for an AMTV body that will minimize the probability of injury to pedestrians or passengers in the event of a collision are presented.
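
    The detection profile described above lends itself to a simple range-gated braking rule. Below is a minimal sketch of that idea, assuming each sensor element reports a detection flag plus a calibrated range; the function names, deceleration value, and safety margin are illustrative, not taken from the paper.

```python
# Minimal sketch of a headway-keeping brake decision driven by a proximity
# sensor array. Each element is modelled as (detected, range_m); thresholds
# and the deceleration level are illustrative assumptions.

def nearest_obstacle_range(detections):
    """detections: list of (detected: bool, range_m: float) per sensor element."""
    ranges = [r for hit, r in detections if hit]
    return min(ranges) if ranges else None

def brake_command(speed_mps, detections, decel_mps2=2.0, margin_m=1.0):
    """Return True if the stopping distance at the assumed deceleration
    would exceed the measured headway minus a safety margin."""
    headway = nearest_obstacle_range(detections)
    if headway is None:
        return False
    stopping_dist = speed_mps ** 2 / (2.0 * decel_mps2)
    return stopping_dist + margin_m >= headway

# Example: a 3 m/s AMTV with the nearest return at 3 m -> brake (True).
print(brake_command(3.0, [(False, 0.0), (True, 3.0), (True, 6.0)]))
```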

    Motorcycles that see: Multifocal stereo vision sensor for advanced safety systems in tilting vehicles

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists: ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for preventive safety applications in tilting vehicles. We identified two road-conflict situations in which automotive remote sensors installed in a tilting vehicle are likely to fail to identify critical obstacles. Accordingly, we set up two experiments in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor to advanced motorcycle safety systems.
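
    A stereo vision sensor like the one studied here recovers obstacle range from the pixel disparity between two rectified views. As a minimal sketch, the standard pinhole relation Z = f * B / d is shown below; the focal length, baseline, and disparity values are illustrative assumptions, not the authors' calibration.

```python
# Minimal sketch of range-from-disparity for a rectified stereo pair,
# using the standard pinhole relation Z = f * B / d. All numbers are
# illustrative, not from the paper.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return metric depth for a pixel disparity; None when disparity is zero."""
    if disparity_px <= 0:
        return None  # point at infinity or a stereo matching failure
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.30 m baseline, 21 px disparity -> 10.0 m.
print(depth_from_disparity(21.0, 700.0, 0.30))
```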

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on information provided by on-board sensors, which describe the state of the vehicle, its environment, and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel, and upcoming sensor technologies applied to common perception tasks for ADAS and automated driving. They are put in context through a historical review of the most relevant demonstrations of automated driving, focused on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for automated vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R).

    Perception Intelligence Integrated Vehicle-to-Vehicle Optical Camera Communication.

    Ubiquitous use of cameras and LEDs in modern road and aerial vehicles opens up endless opportunities for novel applications in intelligent machine navigation, communication, and networking. To this end, this thesis hypothesizes the benefit of dual-mode use of vehicles' built-in cameras, combining novel machine perception capabilities with optical camera communication (OCC). The current conception of understanding line-of-sight (LOS) scenery centers on the detection of objects, events, and road situations. The idea of blending non-line-of-sight (NLOS) information with LOS information to achieve a virtual see-through vision, however, is new; it improves assistive driving performance by enabling a machine to see beyond occlusion. Another aspect of OCC in the vehicular setting is understanding the nature of mobility and its impact on the quality of the optical communication channel. The research questions gathered from both car-to-car mobility modelling and the evaluation of a working OCC channel setup also carry over to aerial vehicular situations such as drone-to-drone OCC. The aim of this thesis is to answer the research questions along these new application domains, particularly: (i) how to enable a virtual see-through perception in a car assistance system that alerts the human driver to visible and invisible critical driving events to help drive more safely; (ii) how transmitter and receiver cars behave while in motion, and the overall channel performance of OCC under mobility; (iii) how to help rescue lost unmanned aerial vehicles (UAVs) through coordinated localization fusing OCC and WiFi; and (iv) how to model and simulate an in-field drone-swarm operation to design and validate coordinated localization for a group of positioning-distressed drones. To this end, this thesis presents the end-to-end system design, proposes novel algorithms to solve the challenges in applying such a system, and reports evaluation results from experimentation and/or simulation.
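
    To make the OCC side of the dual-mode idea concrete, here is a minimal loopback sketch of on-off-keyed camera communication, where an LED toggles once per symbol and the receiving camera thresholds one brightness sample per frame. The framing, threshold, and message are illustrative assumptions; the thesis's actual modulation and channel handling may differ.

```python
# Minimal sketch of on-off-keyed optical camera communication: the LED
# transmits one bit per camera frame, and the receiver thresholds per-frame
# brightness. Framing and threshold values are illustrative assumptions.

def encode(message: bytes):
    """Map each byte to 8 LED states (MSB first): 1 = LED on, 0 = off."""
    return [(byte >> bit) & 1 for byte in message for bit in range(7, -1, -1)]

def decode(brightness_samples, threshold=128):
    """Threshold per-frame brightness samples back into bytes."""
    bits = [1 if s > threshold else 0 for s in brightness_samples]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Loopback example over a perfect channel: LED on -> brightness 255, off -> 0.
tx = encode(b"brake")
rx = decode([255 if bit else 0 for bit in tx])
print(rx)  # b'brake'
```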

    Issues and challenges for pedestrian active safety systems based on real world accidents

    The purpose of this study was to analyze real crashes involving pedestrians in order to evaluate the potential effectiveness of autonomous emergency braking (AEB) systems for pedestrian protection. A sample of 100 real accident cases was reconstructed, providing a comprehensive set of data describing the interaction between the vehicle, the environment, and the pedestrian throughout each accident scenario. A generic AEB system based on a camera sensor for pedestrian detection was modelled in order to identify the functionality of its different attributes in the timeline of each crash scenario. These attributes were assessed to determine their impact on pedestrian safety. The influence of the detection and activation of the AEB system was explored by varying the field of view (FOV) of the sensor and the level of deceleration. A FOV of 35° was estimated to be required to detect and react to the majority of crash scenarios. For the reaction of the system (from hazard detection to triggering the brakes), between 0.5 and 1 s appears necessary.
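
    The two attributes varied in the study, sensor FOV and braking level, together with the reported 0.5 to 1 s reaction time, can be captured in a back-of-envelope check like the sketch below. The geometry, default deceleration, and example numbers are illustrative assumptions, not the study's reconstruction model.

```python
import math

# Minimal sketch of an FOV check plus a reaction-then-brake stopping check.
# The 35-degree FOV and 1 s reaction time echo the study's figures; the
# deceleration and scenario numbers are illustrative assumptions.

def in_fov(ped_x_m, ped_y_m, fov_deg=35.0):
    """True if a pedestrian at (x forward, y lateral) lies inside the FOV."""
    bearing = math.degrees(math.atan2(abs(ped_y_m), ped_x_m))
    return bearing <= fov_deg / 2.0

def stops_in_time(speed_mps, dist_m, reaction_s=1.0, decel_mps2=9.0):
    """True if the car halts before the pedestrian: constant speed during
    the reaction phase, then constant deceleration to standstill."""
    travel = speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)
    return travel < dist_m

# Example: 50 km/h, pedestrian 25 m ahead and 2 m to the side.
v = 50 / 3.6
print(in_fov(25.0, 2.0), stops_in_time(v, 25.0))  # True True
```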

    Object Detection Using LiDAR and Camera Fusion in Off-road Conditions

    Since the boom in the autonomous vehicle industry, the need for precise environment perception and robust object detection methods has grown. While the state of the art in 2D object detection is advancing with approaches such as convolutional neural networks, the challenge remains to efficiently achieve the same level of performance in 3D. The reasons for this include the limitations of fusing multi-modal data and the cost of labelling different modalities for training such networks. Whether a stereo camera is used to perceive the scene's ranging information or a time-of-flight ranging sensor such as LiDAR, the existing pipelines for object detection in point clouds have bottlenecks and latency issues which tend to affect detection accuracy at real-time speed. Moreover, these existing methods are primarily implemented and tested on urban cityscapes. This thesis presents a fusion-based approach for detecting objects in 3D by projecting proposed 2D regions of interest (objects' bounding boxes) or masks (semantically segmented images) onto point clouds, and applies outlier-filtering techniques to isolate the target object's points within the projected regions of interest. Additionally, we compare it with human detection using thermal image thresholding and filtering. Lastly, we performed rigorous benchmarks in off-road environments to identify potential bottlenecks and to find a combination of pipeline parameters that maximizes the accuracy and performance of real-time object detection in 3D point clouds.
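
    The projection-and-filtering step described above can be sketched as follows: LiDAR points are pushed through a pinhole camera model, points falling inside a detector's 2D box are kept, and the densest cluster is retained as the object (here via scikit-learn's DBSCAN as a stand-in for the thesis's clustering step). The intrinsics, box, and clustering parameters are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumed available as a dependency

# Minimal sketch of camera-LiDAR fusion: project points in the camera frame
# through intrinsics K, keep those inside a 2D detection box, then keep the
# largest cluster to drop background returns. All parameters illustrative.

def points_in_box(points_cam, K, box):
    """points_cam: Nx3 points in the camera frame (z forward).
    box: (u_min, v_min, u_max, v_max) from the 2D detector."""
    pts = points_cam[points_cam[:, 2] > 0.0]   # keep points in front of camera
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
    u_min, v_min, u_max, v_max = box
    mask = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
            (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return pts[mask]

def largest_cluster(pts, eps=0.5, min_samples=5):
    """Filter outliers by keeping the biggest DBSCAN cluster."""
    if len(pts) < min_samples:
        return pts
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(pts).labels_
    keep = max(set(labels) - {-1}, default=-1,
               key=lambda l: np.sum(labels == l))
    return pts[labels == keep]

# Example with a synthetic cloud and an assumed 2D detection box.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
cloud = np.random.uniform([-5, -2, 1], [5, 2, 30], size=(2000, 3))
obj = largest_cluster(points_in_box(cloud, K, (600, 300, 700, 420)))
```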