700 research outputs found

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) aimed to design and develop a platform tool to cope with the continuously increasing complexity, and the simultaneous need to reduce costs, of future embedded Advanced Driver Assistance Systems (ADAS). To this end, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications that combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI). This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of different approaches. The reader is invited to supplement the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.

    Facing ADAS validation complexity with usage oriented testing

    Validating Advanced Driver Assistance Systems (ADAS) is a strategic issue, since such systems are becoming increasingly widespread in the automotive field. ADAS bring extra comfort to drivers, and this has become a selling point. But these functions, while useful, must not affect the general safety of the vehicle, which is the manufacturer's responsibility. A significant number of current ADAS are based on vision systems, and applications such as obstacle detection and pedestrian detection have become essential components of functions such as automatic emergency braking. These systems that preserve and protect road users take on even more importance with the arrival of the new Euro NCAP protocols. The robustness and reliability of ADAS functions therefore cannot be neglected, and car manufacturers need tools to ensure that the ADAS functions running on their vehicles operate with the utmost safety. Furthermore, the complexity of these systems, in conjunction with the nearly infinite number of parameter combinations related to the usage profile of functions based on image sensors, pushes us to think about test optimization methods and tool standards to support the design and validation phases of ADAS. The resources required for validation using current methods make them less and less suited to new active safety features, which carry very strong dependability requirements. Today, to test camera-based ADAS, test vehicles are equipped with these systems and perform long hours of driving that can last for years. These tests are used to validate the use of the function and to verify its response to the requirements described in the specifications, without considering the functional safety standard ISO 26262.
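    The abstract argues that the usage-profile parameter space of camera-based ADAS is far too large to test exhaustively, which is the classic motivation for combinatorial test selection. Below is a minimal, hedged sketch of pairwise (2-wise) scenario reduction; the parameter names and values are hypothetical illustrations, not the scenario catalogue used in the paper, and the greedy cover is only one possible selection strategy.

```python
# Minimal sketch of usage-oriented test reduction via pairwise (2-wise)
# combinatorial scenario selection. The parameters and values below are
# hypothetical illustrations, not the paper's scenario catalogue.
from itertools import combinations, product

parameters = {
    "weather":    ["clear", "rain", "fog", "night"],
    "road_type":  ["highway", "urban", "rural"],
    "pedestrian": ["none", "crossing", "on_sidewalk"],
    "ego_speed":  ["30kph", "70kph", "130kph"],
}

def pairwise_scenarios(params):
    """Greedily pick scenarios until every pair of parameter values is covered."""
    names = list(params)
    # Every pair of values that must co-occur in at least one selected scenario.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a] for vb in params[b]
    }
    selected = []
    for candidate in product(*params.values()):
        scenario = dict(zip(names, candidate))
        covered = {
            ((a, scenario[a]), (b, scenario[b]))
            for a, b in combinations(names, 2)
        }
        if covered & uncovered:          # keep only scenarios that add new coverage
            selected.append(scenario)
            uncovered -= covered
        if not uncovered:
            break
    return selected

tests = pairwise_scenarios(parameters)
print(f"{len(tests)} scenarios cover all value pairs "
      f"(exhaustive enumeration would need {4 * 3 * 3 * 3})")
```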

    A systematic review of perception system and simulators for autonomous vehicles research

    This paper presents a systematic review of perception systems and simulators for autonomous vehicles (AV). The work is divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, operating principles, and electromagnetic spectrum of the most common sensors used in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and the quantification of their characteristics using spider charts allows the proper selection of sensors according to 11 features. In the second part, the main elements to be taken into account when simulating an AV perception system are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AV. Finally, the current state of regulations being applied in different countries around the world concerning the implementation of autonomous vehicles is presented. This work was partially supported by the DGT (ref. SPIP2017-02286) and GenoVision (ref. BFU2017-88300-C2-2-R) Spanish Government projects, and the "Research Programme for Groups of Scientific Excellence in the Region of Murcia" of the Seneca Foundation (Agency for Science and Technology in the Region of Murcia, 19895/GERM/15).
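    The review's spider charts amount to scoring each sensor on a fixed set of features and choosing sensors whose profile best fits the application. The sketch below illustrates that idea as a weighted ranking; the feature names, 1-5 scores, and weights are placeholder assumptions, not values taken from the paper, and only 5 of the review's 11 features are shown.

```python
# Minimal sketch of weighted sensor selection from per-feature scores, in the
# spirit of spider-chart comparison. Scores and weights are illustrative only.
FEATURES = ["range", "resolution", "weather_robustness", "cost", "frame_rate"]

SENSORS = {
    "camera": {"range": 3, "resolution": 5, "weather_robustness": 2, "cost": 5, "frame_rate": 4},
    "radar":  {"range": 5, "resolution": 2, "weather_robustness": 5, "cost": 4, "frame_rate": 4},
    "lidar":  {"range": 4, "resolution": 4, "weather_robustness": 3, "cost": 1, "frame_rate": 3},
}

def rank_sensors(sensors, weights):
    """Return sensor names sorted by weighted score for a given application profile."""
    def score(features):
        return sum(weights.get(f, 0.0) * features[f] for f in FEATURES)
    return sorted(sensors, key=lambda name: score(sensors[name]), reverse=True)

# Example profile: an emergency-braking function that favours range and all-weather operation.
aeb_weights = {"range": 0.35, "weather_robustness": 0.35, "resolution": 0.15,
               "cost": 0.05, "frame_rate": 0.10}
print(rank_sensors(SENSORS, aeb_weights))   # radar ranks first under these weights
```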

    Visibility And Confidence Estimation Of An Onboard-Camera Image For An Intelligent Vehicle

    More and more drivers nowadays enjoy the convenience brought by advanced driver assistance systems (ADAS), including collision detection, lane keeping, and ACC. However, many assistance functions are still constrained by weather and terrain. On the way towards automated driving, an automatic condition detector becomes inevitable, since many solutions only work under certain conditions. For the camera, which is the most commonly used sensor in lane detection and obstacle detection, visibility estimation is one of the important parameters we need to analyze. Although many papers have proposed ways to estimate the visibility range, there is little research on how to estimate the confidence of an image. In this thesis, we introduce a new way to estimate the visibility distance based on a monocular camera, and from it we calculate the overall image confidence. Much progress has been achieved in the past ten years, from the restoration of foggy images and real-time fog detection to weather classification; however, each method has its own drawbacks, ranging from complexity and cost to inaccuracy. Given these considerations, the new way we propose to estimate the visibility range is based on a single vision system. In addition, this method maintains a relatively robust estimation and produces a more accurate result.
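    The abstract does not detail how the visibility range is turned into an "image confidence", so the following is only a generic sketch of that final step: mapping an estimated visibility distance to a [0, 1] confidence usable by downstream camera-based functions. The 10 m floor and 200 m "full confidence" ceiling are illustrative assumptions, not the thesis' values.

```python
# Generic sketch (not the thesis' actual algorithm): convert an estimated
# visibility distance in metres into a [0, 1] confidence for this camera frame.
def image_confidence(visibility_m: float,
                     min_usable_m: float = 10.0,    # assumed: below this the frame is unusable
                     full_conf_m: float = 200.0) -> float:  # assumed: clear-weather visibility
    """Linearly rate how much a camera-based function should trust this frame."""
    if visibility_m <= min_usable_m:
        return 0.0
    if visibility_m >= full_conf_m:
        return 1.0
    return (visibility_m - min_usable_m) / (full_conf_m - min_usable_m)

print(round(image_confidence(50.0), 2))   # 0.21: dense fog, low confidence
```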

    Switching Trackers for Effective Sensor Fusion in Advanced Driver Assistance Systems

    Modern cars utilise Advanced Driver Assistance Systems (ADAS) in several ways. In ADAS, the use of multiple sensors to gauge the environment surrounding the ego-vehicle offers numerous advantages, as fusing information from more than one sensor helps to provide highly reliable and error-free data. The fused data is typically then fed to a tracker algorithm, which helps to reduce noise, compensate for situations when received sensor data is temporarily absent or spurious, and counter occasional false positives and negatives. The performance of these constituent algorithms varies widely across scenarios. In this paper, we focus on the variation in the performance of tracker algorithms in sensor fusion due to changes in external conditions across scenarios, and on methods for countering that variation. We introduce a sensor fusion architecture in which the tracking algorithm is switched on the fly to achieve the best performance under all scenarios. By employing a Real-time Traffic Density Estimation (RTDE) technique, we can determine whether the ego-vehicle is currently in dense or sparse traffic: highly dense (congested) traffic means that external circumstances are non-linear, while sparse traffic means that linear external conditions are more probable. We also employ a Traffic Sign Recognition (TSR) algorithm, which monitors for construction zones, junctions, schools, and pedestrian crossings, thereby identifying areas with a high probability of spontaneous on-road occurrences. Based on the results from the RTDE and TSR algorithms, we construct logic that switches the tracker of the fusion architecture between an Extended Kalman Filter (for linear external scenarios) and an Unscented Kalman Filter (for non-linear scenarios). This ensures that the fusion model always uses the tracker best suited to its current needs, yielding consistent accuracy across multiple external scenarios compared to fusion models that employ a single fixed tracker.
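    A minimal sketch of the described switching idea follows: pick the UKF when traffic density or detected signs indicate non-linear conditions, otherwise the EKF. The class names, the density threshold, and the sign list are assumptions for illustration; the filters are placeholders, not the paper's implementations.

```python
# Sketch of EKF/UKF tracker switching driven by traffic density (RTDE) and
# traffic-sign (TSR) cues. Threshold, sign list, and classes are hypothetical.
HIGH_RISK_SIGNS = {"construction_zone", "junction", "school", "pedestrian_crossing"}

class ExtendedKalmanFilter:      # placeholder for the linear-scenario tracker
    name = "EKF"

class UnscentedKalmanFilter:     # placeholder for the non-linear-scenario tracker
    name = "UKF"

def select_tracker(traffic_density: float, detected_signs: set,
                   density_threshold: float = 0.6):
    """Pick the tracker expected to perform best under the current conditions."""
    non_linear = (traffic_density >= density_threshold
                  or bool(detected_signs & HIGH_RISK_SIGNS))
    return UnscentedKalmanFilter() if non_linear else ExtendedKalmanFilter()

# Sparse highway traffic, no critical signs -> EKF; congested area near a crossing -> UKF.
print(select_tracker(0.2, set()).name)                     # EKF
print(select_tracker(0.8, {"pedestrian_crossing"}).name)   # UKF
```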

    Automotive sensor fusion systems for traffic aware adaptive cruise control

    The autonomous driving (AD) industry is advancing at a rapid pace. New sensing technologies for tracking vehicles, controlling vehicle behavior, and communicating with infrastructure are being added to commercial vehicles. These new automotive technologies reduce on-road fatalities, improve ride quality, and improve vehicle fuel economy. This research explores two types of automotive sensor fusion systems: a novel radar/camera sensor fusion system using a long short-term memory (LSTM) neural network (NN) to perform data fusion and improve tracking capabilities in a simulated environment, and a traditional radar/camera sensor fusion system deployed in Mississippi State's entry in the EcoCAR Mobility Challenge (a 2019 Chevrolet Blazer) for an adaptive cruise control (ACC) system that functions in on-road applications. Along with vehicles, pedestrians, and cyclists, the sensor fusion system deployed in the 2019 Chevrolet Blazer uses vehicle-to-everything (V2X) communication to communicate with infrastructure such as traffic lights to optimize and autonomously control vehicle acceleration through a connected corridor.
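    To make the LSTM-based fusion concrete, here is a hedged sketch of one plausible model: each time step concatenates a radar measurement and a camera measurement, and the network regresses a fused target state. The feature layout, layer sizes, and output definition are assumptions for illustration, not the architecture trained in the thesis.

```python
# Sketch (not the thesis' network) of LSTM radar/camera track fusion in PyTorch:
# per time step, radar (range, azimuth, range rate) and camera (x, y, w, h)
# features are concatenated; the head regresses a fused state (x, y, vx, vy).
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    def __init__(self, radar_dim=3, camera_dim=4, hidden=64, state_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(radar_dim + camera_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, radar_seq, camera_seq):
        # radar_seq: (batch, T, 3), camera_seq: (batch, T, 4)
        x = torch.cat([radar_seq, camera_seq], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # fused target state at the latest time step

model = FusionLSTM()
fused = model(torch.randn(8, 20, 3), torch.randn(8, 20, 4))
print(fused.shape)   # torch.Size([8, 4])
```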

    Vehicle Distance Detection Using Monocular Vision and Machine Learning

    With the development of new cutting-edge technology, autonomous vehicles (AVs) have become a main topic across much of the automotive industry. For an AV to be used safely on public roads, it needs to be able to perceive its surrounding environment and make decisions in real time. A fully capable AV does not yet exist for general public use, but advanced driver assistance systems (ADAS) have already been integrated into everyday vehicles. It is predicted that these systems will evolve and work together to become the fully autonomous vehicles of the future. This thesis' main focus is the combination of ADAS with artificial intelligence (AI) models. Since neural networks (NNs) can be unpredictable on many occasions, the main aspect of this thesis is researching which neural network architecture is most accurate at perceiving the distance between vehicles. Hence, the integration of ADAS with AI, and whether AI can safely be used as a central processor for an AV, needs to be studied. The ADAS created in this thesis mainly relies on monocular vision and machine learning. A dataset of 200,000 images was used to train a neural network (NN) model, which detects whether an image is a license plate or not with 96.75% accuracy. A sliding window checks whether each sub-section of an image is a license plate; if it is, the algorithm stores that sub-section image. The sub-images are run through a heatmap threshold to help minimize false detections. Upon detecting the license plate, the final algorithm determines the distance to the vehicle whose license plate was detected and outputs the result to the user. This process achieves results with up to 1-meter distance accuracy. This ADAS is intended to be usable by the public and easily integrated into future AV systems.
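    The abstract does not spell out how distance is derived from the detected plate, so the sketch below shows one common way to do it with a monocular camera: the pinhole relation distance = focal_length_px * real_plate_width / plate_width_px. The focal length and the physical plate width (a US plate is roughly 0.305 m wide) are assumptions, and the thesis' exact method may differ.

```python
# Sketch of monocular distance estimation from a detected license plate using
# the pinhole-camera relation. Focal length and plate width are assumed values.
def distance_from_plate(plate_width_px: float,
                        focal_length_px: float = 1000.0,   # assumed camera focal length in pixels
                        real_plate_width_m: float = 0.305  # assumed physical plate width (US plate)
                        ) -> float:
    """Distance (m) to a plate whose detected bounding box is plate_width_px wide."""
    if plate_width_px <= 0:
        raise ValueError("plate width in pixels must be positive")
    return focal_length_px * real_plate_width_m / plate_width_px

# A 61-pixel-wide plate under these assumptions is about 5 m away.
print(round(distance_from_plate(61), 2))   # 5.0
```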