
    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. Autonomous driving is a key driver of future mobility, and obstacle detection systems play a crucial role in implementing and deploying it on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy, hence further effort is needed.
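
    As the review argues, combining different sensing technologies gives a more accurate picture of the driving environment. Below is a minimal, illustrative sketch of one simple late-fusion strategy: merging LIDAR and camera obstacle detections by confidence-weighted averaging. The detection format, matching threshold, and weights are assumptions for demonstration, not taken from any of the reviewed systems.

```python
# Minimal sketch: late fusion of obstacle detections from two sensors.
# Detection tuples (x, y, confidence), the gap threshold, and the weighting
# are illustrative assumptions, not values from the reviewed systems.

def fuse_detections(lidar_dets, camera_dets, max_gap_m=1.0):
    """Merge per-sensor detections that refer to the same obstacle,
    averaging positions weighted by each sensor's confidence."""
    fused, used = [], set()
    for lx, ly, lc in lidar_dets:
        match = None
        for i, (cx, cy, cc) in enumerate(camera_dets):
            if i not in used and abs(lx - cx) + abs(ly - cy) < max_gap_m:
                match = (i, cx, cy, cc)
                break
        if match:
            i, cx, cy, cc = match
            used.add(i)
            w = lc + cc
            fused.append(((lx * lc + cx * cc) / w, (ly * lc + cy * cc) / w, w / 2))
        else:
            fused.append((lx, ly, lc))
    # Keep camera-only detections (e.g. obstacles the LIDAR missed in heavy rain).
    fused += [d for i, d in enumerate(camera_dets) if i not in used]
    return fused

print(fuse_detections([(10.0, 2.0, 0.9)], [(10.3, 2.1, 0.7), (25.0, -1.0, 0.6)]))
```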

    Multi Traffic Scene Perception Using Support Vector Machine and Digital Image Processing

    Traffic accidents are especially frequent in low-visibility conditions such as rainy days, night-time, snow and ice, and roads without street lighting. Current driver assistance systems are largely designed to operate in fair weather, so classifying the weather captured in an image is an effective way to make vision-based assistance more robust. To improve computer vision in adverse weather environments, a multi-class weather classification system is built from multiple weather features and supervised learning. First, basic visual features are extracted from multiple traffic images, producing an eight-dimensional feature vector per image. Second, five supervised learning methods are used to train classifiers. Analysis of the extracted features indicates that they describe the images accurately, and the support vector machine classifier achieves the highest recognition accuracy and the best adaptability. The proposed method provides a basis for improving forward vehicle detection under changing night-time illumination and for widening the usable view of the driving field on icy days. Image feature extraction is the most important step in pattern recognition and the most efficient way to simplify high-dimensional image data, since little useful information can be obtained directly from the M × N × 3 image matrix. Therefore, to perceive multiple traffic scenes, the key information must be extracted from the image.
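
    A minimal sketch of the kind of pipeline the abstract describes: an eight-dimensional feature vector per traffic image, classified into weather classes with a support vector machine. The feature values, class labels, and scikit-learn usage below are illustrative assumptions; the paper's actual features and training data are not reproduced.

```python
# Illustrative sketch: 8-D image features classified into weather classes with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in for 8 hand-crafted features per image (e.g. brightness, contrast, saturation...).
X = rng.random((400, 8))
y = rng.integers(0, 4, size=400)  # stand-in labels, e.g. clear / rain / fog / night

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```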

    Air pollution and fog detection through vehicular sensors

    We describe a method for the automatic recognition of air pollution and fog from a vehicle. Our system consists of sensors that acquire data from cameras as well as from Light Detection and Ranging (LIDAR) instruments. We discuss how this data can be collected, analyzed, and merged to determine the degree of air pollution or fog. Such data is essential for the control systems of moving vehicles in making autonomous avoidance decisions, and backend systems need it for forecasting and strategic traffic planning and control. Laboratory-based experimental results are presented for weather conditions such as air pollution and fog, showing that the recognition scenario works with better than adequate results. This paper demonstrates that LIDAR technology, already on board for the purpose of autonomous driving, can be used to improve weather condition recognition when compared with a camera-only system. We conclude that the combination of a front camera and a LIDAR laser scanner is well suited as a sensor instrument set for air pollution and fog recognition that can contribute accurate data to driving assistance and weather alerting systems.
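
    As a rough illustration of the camera/LIDAR fusion idea, the sketch below combines an image-contrast cue with LIDAR range and return-intensity statistics into a single fog score. The specific metrics, thresholds, and weighting are assumptions for demonstration, not the paper's method.

```python
# Hedged sketch of camera/LIDAR fusion for fog detection; all cues and
# thresholds below are illustrative assumptions.
import numpy as np

def fog_score(image_gray, lidar_ranges, lidar_intensities):
    """Return a 0..1 fog score: low image contrast plus short, weak LIDAR
    returns both push the score up."""
    contrast = image_gray.std() / 255.0          # fog flattens image contrast
    mean_range = np.mean(lidar_ranges)           # fog shortens usable returns
    mean_intensity = np.mean(lidar_intensities)  # fog weakens return intensity
    s_cam = 1.0 - min(contrast / 0.25, 1.0)
    s_lidar = 1.0 - min(mean_range / 80.0, 1.0) * min(mean_intensity / 0.5, 1.0)
    return 0.5 * s_cam + 0.5 * s_lidar

img = np.full((480, 640), 128) + np.random.randint(-5, 5, (480, 640))
print(fog_score(img, np.random.uniform(5, 30, 1000), np.random.uniform(0.05, 0.2, 1000)))
```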

    Wide area detection system: Conceptual design study

    An integrated sensor for traffic surveillance on mainline sections of urban freeways is described. Applicable imaging and processor technology is surveyed and the functional requirements for the sensors and the conceptual design of the breadboard sensors are given. Parameters measured by the sensors include lane density, speed, and volume. The freeway image is also used for incident diagnosis
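
    The traffic parameters named above (volume, density, and speed) follow from simple aggregation of per-interval vehicle counts; the sketch below shows that arithmetic under assumed inputs (the counts, lane count, section length, and interval are illustrative, not from the study).

```python
# Minimal sketch of deriving the named traffic parameters from image-based counts.

def traffic_parameters(vehicle_count, lane_count, section_length_km,
                       interval_s, mean_speed_kmh):
    volume = vehicle_count * 3600 / interval_s                    # vehicles per hour
    density = vehicle_count / (lane_count * section_length_km)    # vehicles per lane-km
    return {"volume_vph": volume,
            "density_veh_per_lane_km": density,
            "speed_kmh": mean_speed_kmh}

print(traffic_parameters(vehicle_count=42, lane_count=3, section_length_km=0.5,
                         interval_s=60, mean_speed_kmh=88.0))
```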

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which allow them to describe the state of the vehicle, its environment, and other actors. The selection and arrangement of sensors represent a key factor in the design of the system. This survey reviews existing, novel, and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setup. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that will shape future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R).

    Adaptive Deep Learning Detection Model for Multi-Foggy Images

    Fog has different features and effects in every environment. Detecting whether there is fog in an image is considered a challenge, and identifying the type of fog has a substantial effect on image defogging. Foggy scenes can be categorized by fog density level and by fog type. Machine learning techniques have made a significant contribution to the detection of foggy scenes. However, most of the existing detection models are based on traditional machine learning models, and only a few studies have adopted deep learning models. Furthermore, most of the existing machine learning detection models are based on fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multi-fog types of images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, the data collection phase draws on eight resources to obtain the multi-fog scene dataset. Second, a classification experiment is conducted based on the ResNet-50 deep learning model to obtain detection results. Third, in the evaluation phase, the performance of the ResNet-50 detection model is compared against three different models. Experimental results show that the proposed model delivers stable classification performance for different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has specific theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
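
    A hedged sketch, assuming PyTorch/torchvision, of how a ResNet-50 backbone can be adapted to the four fog-scene classes mentioned (inhomogeneous, homogeneous, dark, sky). The training step below uses dummy tensors and generic hyperparameters; it is not the authors' actual setup.

```python
# Illustrative sketch: adapting ResNet-50 to four fog-scene classes.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # inhomogeneous, homogeneous, dark, sky foggy scenes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step to show the loop shape (real data would come from a DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```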

    Improving Traffic Safety And Drivers’ Behavior In Reduced Visibility Conditions

    This study is concerned with the safety risk of reduced visibility on roadways. Inclement weather events such as fog/smoke (FS), heavy rain (HR), and high winds affect every road by impacting pavement conditions, vehicle performance, visibility distance, and drivers’ behavior. They also affect travel demand, traffic safety, and traffic flow characteristics. Visibility in particular is critical to the task of driving, and reduction in visibility due to FS or other weather events such as HR is a major factor that affects safety and proper traffic operation. Real-time measurement of visibility and an understanding of drivers’ responses when visibility falls below a certain acceptable level may help reduce the chances of visibility-related crashes. In this regard, one way to improve safety under reduced visibility conditions (i.e., reduce the risk of visibility-related crashes) is to improve drivers’ behavior under such adverse weather conditions. Therefore, one of the objectives of this research was to investigate the factors affecting drivers’ stated behavior in adverse visibility conditions, and to examine whether drivers rely on and follow advisory or warning messages displayed on portable changeable message signs (CMS) and/or variable speed limit (VSL) signs in different visibility and traffic conditions and on two types of roadways: freeways and two-lane roads. The data used for the analyses were obtained from a self-reported questionnaire survey carried out among 566 drivers in Central Florida, USA. Several categorical data analysis techniques, such as conditional distributions, odds ratios, and Chi-Square tests, were applied. In addition, two modeling approaches, bivariate and multivariate probit models, were estimated. The results revealed that gender, age, road type, visibility condition, and familiarity with VSL signs were the significant factors affecting the likelihood of reducing speed following CMS/VSL instructions in reduced visibility conditions. Other objectives of this survey study were to determine the content of messages that would achieve the best perceived safety and drivers’ compliance, and to examine the best way to improve safety during these adverse visibility conditions. The results indicated that “Caution-fog ahead-reduce speed” was the best message and that using CMS and VSL signs together was the best way to improve safety during such inclement weather situations. In addition, this research aimed to thoroughly examine drivers’ responses under low visibility conditions and quantify the impacts and values of various factors found to be related to drivers’ compliance and drivers’ satisfaction with VSL and CMS instructions in different visibility and traffic conditions. To achieve these goals, Exploratory Factor Analysis (EFA) and Structural Equation Modeling (SEM) approaches were adopted. The results revealed that drivers’ satisfaction with VSL/CMS was the most significant factor that positively affected drivers’ compliance with advice or warning messages displayed on VSL/CMS signs under different fog conditions, followed by driver factors. Moreover, it was found that roadway type affected drivers’ compliance with VSL instructions under medium and heavy fog conditions. Furthermore, drivers’ familiarity with VSL signs and driver factors were the significant factors affecting drivers’ satisfaction with VSL/CMS advice under reduced visibility conditions.
Based on the findings of the survey-based study, several recommendations are suggested as guidelines to improve drivers’ behavior in such reduced visibility conditions by enhancing drivers’ compliance with VSL/CMS instructions. Underground loop detectors (LDs) are the most common freeway traffic surveillance technologies used for various intelligent transportation system (ITS) applications such as travel time estimation and crash detection. Recently, the emphasis in freeway management has been shifting towards using LD data to develop real-time crash-risk assessment models. Numerous studies have established statistical links between freeway crash risk and traffic flow characteristics. However, there is a lack of good understanding of the relationship between traffic flow variables (i.e., speed, volume, and occupancy) and crashes that occur under reduced visibility (VR crashes). Thus, another objective of this research was to explore the occurrence of reduced-visibility-related (VR) crashes on freeways using real-time traffic surveillance data collected from loop detectors (LDs) and radar sensors. In addition, it examines the difference between VR crashes and those occurring in clear visibility conditions (CV crashes). To achieve these objectives, Random Forests (RF) and a matched case-control logistic regression model were estimated. The results indicated that the traffic flow variables leading to VR crashes are slightly different from those leading to CV crashes. It was found that higher occupancy observed over the half mile between the nearest upstream and downstream stations increases the risk of both VR and CV crashes. Moreover, an increase in the average speed observed on the same half mile increases the probability of a VR crash. On the other hand, high speed variation coupled with lower average speed observed on the same half mile increases the likelihood of CV crashes. Moreover, two issues that have not explicitly been addressed in prior studies are: (1) the possibility of predicting VR crashes using traffic data collected from Automatic Vehicle Identification (AVI) sensors installed on expressways, and (2) which traffic data are more advantageous for predicting VR crashes, LDs or AVIs. Thus, this research attempts to examine the relationships between VR crash risk and real-time traffic data collected from LDs installed on two freeways in Central Florida (I-4 and I-95) and from AVI sensors installed on two expressways (SR 408 and SR 417). It also investigates which data are better for predicting VR crashes. The approach adopted here involves developing Bayesian matched case-control logistic regression models using historical VR crashes, LD data, and AVI data. For the models estimated from LD data, the average speed observed at the nearest downstream station, along with the coefficient of variation in speed observed at the nearest upstream station, both 5-10 minutes prior to the crash time, were found to have a significant effect on VR crash risk. However, for the model developed from AVI data, the coefficient of variation in speed observed at the crash segment, 5-10 minutes prior to the crash time, affected the likelihood of VR crash occurrence. Arguments concerning which traffic data (LDs or AVI) are better for predicting VR crashes are also provided and discussed.
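
As a rough illustration of the crash-risk modelling idea, the sketch below fits a Random Forest relating pre-crash traffic flow variables (speed, speed variation, occupancy, volume) to crash occurrence and reports feature importances. The synthetic data, variable names, and model settings are assumptions; the study's matched case-control design and real loop-detector/AVI data are not reproduced.

```python
# Hedged sketch: Random Forest linking traffic flow variables to crash occurrence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(95, 12, n),      # average speed at downstream station (km/h)
    rng.normal(0.12, 0.05, n),  # coefficient of variation of speed, upstream station
    rng.normal(18, 6, n),       # occupancy (%)
    rng.normal(1500, 400, n),   # volume (veh/h)
])
y = rng.integers(0, 2, n)       # 1 = crash case, 0 = matched non-crash control

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["speed_dn", "cv_speed_up", "occupancy", "volume"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```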

    Development of rear-end collision avoidance in automobiles

    The goal of this work is to develop a Rear-End Collision Avoidance System for automobiles. In developing the system, the most important difference from previous practice is that the new design approach attempts to avoid the collision entirely instead of minimizing the damage by over-designing cars. Rear-end collisions are the third highest cause of multiple-vehicle fatalities in the U.S., and they are largely a result of poor driver awareness and communication. For example, car brake lights illuminate exactly the same whether the car is slowing, stopping, or the driver is simply resting a foot on the pedal. The development of the Rear-End Collision Avoidance System (RECAS) includes a thorough review of hardware, software, driver/human factors, and current rear-end collision avoidance systems. Key sensor technologies are identified and reviewed in an attempt to ease the design effort, and the characteristics and capabilities of alternative and emerging sensor technologies are described and their performance compared. The first component of the RECAS monitors the distance and speed of the car ahead; if an unsafe condition is detected, a warning is issued and the vehicle is decelerated (if necessary). The second component illuminates independent segments of the brake lights corresponding to the stopping condition of the car, communicating the stopping intensity to the following driver. The RECAS is designed using LabVIEW software. The simulation is designed to meet several criteria: system warnings should place a minimum load on driver attention, and the system should perform well in a variety of driving conditions. To illustrate and test the proposed RECAS methods, a Java program has been developed. This simulation animates a multi-car, multi-lane highway environment where car speeds are assigned randomly, and the proposed RECAS approaches successfully demonstrate rear-end collision avoidance. The Java simulation is an applet, which is easily accessible through the World Wide Web and can also be tested with different sensor angles.
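
    A minimal sketch of the two RECAS components described above: a time-to-collision check on the range and closing speed to the lead car, and a mapping from braking intensity to illuminated brake-light segments. The thresholds and segment count are illustrative assumptions, not the thesis's calibrated values.

```python
# Minimal sketch of the two RECAS components; thresholds and segment count are assumed.

def ttc_warning(range_m, closing_speed_mps, warn_ttc_s=3.0, brake_ttc_s=1.5):
    """Return 'ok', 'warn' or 'brake' based on time-to-collision with the lead car."""
    if closing_speed_mps <= 0:        # not closing on the lead vehicle
        return "ok"
    ttc = range_m / closing_speed_mps
    if ttc < brake_ttc_s:
        return "brake"
    return "warn" if ttc < warn_ttc_s else "ok"

def brake_light_segments(decel_mps2, max_decel_mps2=8.0, segments=3):
    """Illuminate more brake-light segments the harder the car is braking."""
    level = min(max(decel_mps2, 0.0), max_decel_mps2) / max_decel_mps2
    return round(level * segments)

print(ttc_warning(range_m=25.0, closing_speed_mps=10.0))   # -> 'warn'
print(brake_light_segments(decel_mps2=6.0))                 # -> 2
```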