
    Computer Vision based Intelligent Lane Detection and Warning System: A Design Approach

    In Intelligent Transport Systems (ITS), accident prevention is a prominent research area in which various approaches have been implemented and proposed to assist drivers and warn them before accidents occur. As part of such warning systems, lane departure detection is widely considered: it monitors the vehicle's movement and warns the driver before the lane is left, helping to prevent head-on collisions. This motivates the development of a system that can detect lane marks on the road and warn the driver under any conditions. Given the wide variety of available tools and techniques, several methods proposed by different authors are discussed in this paper, together with their pros and cons, to help the reader choose the most suitable one for their specific conditions or needs.
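    As a concrete illustration of the warning step these systems share, the sketch below (in Python) checks how close the vehicle centre has drifted to the detected lane marks and issues a warning before a mark is crossed. The function name, pixel positions, and the 15% safety margin are illustrative assumptions, not taken from any of the surveyed methods.

    def lane_departure_warning(left_x, right_x, vehicle_x, threshold_ratio=0.15):
        """Warn if the vehicle centre drifts too close to either detected lane mark.

        left_x, right_x : lateral image positions (pixels) of the lane marks
        vehicle_x       : lateral position of the vehicle centre (e.g. image centre)
        threshold_ratio : fraction of the lane width treated as the safety margin
        """
        lane_width = right_x - left_x
        margin = threshold_ratio * lane_width
        if vehicle_x - left_x < margin:
            return "WARN: drifting towards the left lane mark"
        if right_x - vehicle_x < margin:
            return "WARN: drifting towards the right lane mark"
        return "OK: within lane"

    # Example: a 640-pixel-wide frame with lane marks detected at x=200 and x=440.
    print(lane_departure_warning(left_x=200, right_x=440, vehicle_x=230))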

    Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems

    Multisensor data fusion and integration is a rapidly evolving research area that requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability and statistics, etc. Multisensor data fusion refers to the synergistic combination of sensory data from multiple sensors and related information to provide more reliable and accurate information than could be achieved using a single, independent sensor (Luo et al., 2007). It is a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources. The results of the data fusion process help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defence, or battlefield intelligence. More recently, multisensor data fusion has also included the nonmilitary fields of remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems (Macci et al., 2008). The potential advantages of multisensor fusion and integration are redundancy, complementarity, timeliness, and cost of the information. The integration or fusion of redundant information can reduce overall uncertainty and thus serve to increase the accuracy with which features are perceived by the system. Multiple sensors providing redundant information can also serve to increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features in the environment to be perceived that are impossible to perceive using just the information from each individual sensor operating separately (Luo et al., 2007). Besides, driving, as one of our daily activities, is a complex task involving a great amount of interaction between driver and vehicle. Drivers regularly share their attention among operating the vehicle, monitoring traffic and nearby obstacles, and performing secondary tasks such as conversing and adjusting comfort settings (e.g., temperature, radio). The complexity of the task and the uncertainty of the driving environment make driving very dangerous: according to a study of the European member states, there are more than 1,200,000 traffic accidents a year with over 40,000 fatalities. This fact underlines the growing demand for automotive safety systems, which aim to make a significant contribution to overall road safety (Tatschke et al., 2006). Therefore, an increasing number of research activities have recently focused on Driver Assistance System (DAS) development.
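    As a minimal illustration of the point that redundant information reduces uncertainty, the Python sketch below fuses two redundant range measurements of the same obstacle by inverse-variance weighting; the fused variance is smaller than either input variance. The sensor values and variances are made up for the example and do not come from the cited works.

    def fuse_redundant(z1, var1, z2, var2):
        """Inverse-variance weighted fusion of two redundant measurements.

        Returns the fused estimate and its variance; the fused variance is
        never larger than either input variance, which is the accuracy gain
        obtained from redundant information.
        """
        w1 = 1.0 / var1
        w2 = 1.0 / var2
        fused = (w1 * z1 + w2 * z2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)
        return fused, fused_var

    # Example: radar reports 25.3 m (variance 0.4), camera reports 24.8 m (variance 0.9).
    estimate, variance = fuse_redundant(25.3, 0.4, 24.8, 0.9)
    print(f"fused distance = {estimate:.2f} m, variance = {variance:.2f}")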

    Intelligent automatic overtaking system using vision for vehicle detection

    There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to automating one of the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle's actuators, i.e., the steering wheel and the throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, a car, and a truck – with encouraging results.
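    The paper's fuzzy controller is not reproduced here; the Python sketch below only illustrates the general fuzzy-logic pattern it relies on, mapping a lateral position error from the vision/DGPS system to a steering command via triangular membership functions and a weighted rule average. The membership breakpoints and output gains are illustrative assumptions.

    def tri(x, a, b, c):
        """Triangular membership function with peak at b and feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_steering(lateral_error_m):
        """Map lateral error (metres, positive = right of target path) to steering in [-1, 1]."""
        # Fuzzify the error into three linguistic terms (breakpoints are assumptions).
        too_far_left = tri(lateral_error_m, -3.0, -1.5, 0.0)
        centred = tri(lateral_error_m, -1.0, 0.0, 1.0)
        too_far_right = tri(lateral_error_m, 0.0, 1.5, 3.0)
        # Rule consequents: steer right (+0.5), hold (0.0), steer left (-0.5).
        weights = [too_far_left, centred, too_far_right]
        outputs = [0.5, 0.0, -0.5]
        total = sum(weights)
        return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

    # Vehicle sits 1.2 m left of the target path, so the controller steers right.
    print(fuzzy_steering(-1.2))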

    Towards Social Autonomous Vehicles: Efficient Collision Avoidance Scheme Using Richardson's Arms Race Model

    Background: Road collisions and casualties pose a serious threat to commuters around the globe. Autonomous Vehicles (AVs) aim to use technology to reduce road accidents. However, most research work in the context of collision avoidance has addressed rear-end, front-end, and lateral collisions separately, in less congested traffic and with high inter-vehicular distances. Purpose: The goal of this paper is to introduce the concept of a social agent that interacts with other AVs in a social manner, as humans do, with the capability of predicting intentions (mentalizing) and copying each other's actions (mirroring). The proposed social agent is based on human-brain-inspired mentalizing and mirroring capabilities and has been modelled for collision detection and avoidance under congested urban road traffic. Method: We designed our social agent with mentalizing and mirroring capabilities using the Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework proposed by Niazi and Hussain. Results: Our simulation and practical experiments reveal that by embedding Richardson's arms race model within AVs, collisions can be avoided while travelling on congested urban roads in flock-like topologies. The performance of the proposed social agent has been compared at two different levels.
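    Richardson's arms race model itself is a pair of coupled linear differential equations, dx/dt = a*y - m*x + g and dy/dt = b*x - n*y + h, describing two parties reacting to each other. The Python sketch below integrates them with Euler steps so the mutual-reaction dynamic can be seen; the coefficients are arbitrary illustrative values, and the mapping onto inter-vehicle behaviour is not reproduced from the paper.

    def richardson(x0, y0, a, b, m, n, g, h, dt=0.01, steps=1000):
        """Euler integration of Richardson's arms race model:
            dx/dt = a*y - m*x + g
            dy/dt = b*x - n*y + h
        where x and y are the two parties' reaction levels."""
        x, y = x0, y0
        for _ in range(steps):
            dx = a * y - m * x + g
            dy = b * x - n * y + h
            x, y = x + dx * dt, y + dy * dt
        return x, y

    # Illustrative coefficients only: damping (m, n) outweighs mutual reaction (a, b),
    # so both levels settle towards a stable equilibrium instead of escalating.
    print(richardson(x0=1.0, y0=0.0, a=0.5, b=0.5, m=1.0, n=1.0, g=0.2, h=0.2))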

    An efficient encode-decode deep learning network for lane markings instant segmentation

    Nowadays, advanced driver assistance systems (ADAS) incorporate a range of advanced and essential features. One of the most fundamental and significant features of ADAS is lane marking detection, which allows the vehicle to keep itself in a particular road lane. Traditionally, it has been addressed with highly specialized, handcrafted features and distinct post-processing approaches, leading to frameworks that are less accurate, less efficient, and computationally expensive under different environmental conditions. Hence, this research proposes a simple encode-decode deep learning approach for detecting lane markings more accurately and efficiently under varying environmental effects such as different times of day, multiple lanes, different traffic conditions, and good to medium weather. The proposed model is based on the simple encode-decode SegNet framework incorporated with the VGG16 architecture and has been trained using the inequity and cross-entropy losses to obtain a more accurate instant segmentation result for lane markings. The framework has been trained and tested on a large public dataset named TuSimple, which includes around 3.6K training and 2.7K testing image frames covering different environmental conditions. The model achieved an accuracy of 96.61%, an F1 score of 96.34%, a precision of 98.91%, and a recall of 93.89%. It also obtained false-positive and false-negative values as low as 3.125% and 1.259%, respectively, surpassing some previous research. This work is expected to assist significantly in the field of lane marking detection using deep neural networks.
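    As a minimal sketch of the encoder-decoder pattern described above (assuming PyTorch, and not the paper's SegNet/VGG16 network), convolutional blocks downsample the frame, mirrored transposed-convolution blocks restore the resolution, and a per-pixel cross-entropy-style loss is applied to the predicted lane mask. Layer sizes and the random stand-in data are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TinyEncoderDecoder(nn.Module):
        """Toy encoder-decoder for per-pixel lane-marking segmentation (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(              # 3 x H x W -> 32 x H/4 x W/4
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.decoder = nn.Sequential(              # back to 1 x H x W logits
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinyEncoderDecoder()
    frames = torch.randn(2, 3, 128, 256)                     # stand-in road images
    masks = torch.randint(0, 2, (2, 1, 128, 256)).float()    # stand-in lane masks
    loss = nn.BCEWithLogitsLoss()(model(frames), masks)      # pixel-wise cross-entropy
    loss.backward()
    print(loss.item())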

    Lane marking detection using simple encode decode deep learning technique: SegNet

    In recent times, many innocent people have died suddenly in unwanted road accidents, which also consume a great deal of financial resources. Researchers have deployed advanced driver assistance systems (ADAS), in which a large number of automated features have been incorporated into modern vehicles to reduce human mortality as well as financial loss, and lane marking detection is one of them. Many computer vision techniques and intricate image processing approaches have been used for detecting lane markings, relying on handcrafted, highly specialized features. However, these systems remain challenging due to computational complexity, overfitting, lower accuracy, and an inability to cope with intricate environmental conditions. Therefore, this paper proposes a simple encode-decode deep learning model to detect lane markings under distinct environmental conditions with lower computational complexity. The model is based on the SegNet architecture, with the aim of improving on existing research, and is trained on a lane marking dataset containing different complex environmental conditions such as rain, cloud, low light, and curved roads. The model successfully achieved 96.38% accuracy, a 0.0311 false-positive rate, a 0.0201 false-negative rate, and a 0.960 F1 score, with a loss of only 1.45%, less overfitting, and 428 ms per step, outstripping some existing research. It is expected that this research will make a significant contribution to the field of lane marking detection.
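    For context on figures like these, the Python sketch below shows how pixel-level accuracy, precision, recall, F1, and false-positive/false-negative rates are typically computed from a predicted lane mask and its ground truth. The threshold, array shapes, and random stand-in data are illustrative assumptions, not the paper's evaluation code.

    import numpy as np

    def segmentation_metrics(pred_probs, gt_mask, threshold=0.5):
        """Pixel-level metrics for a binary lane mask (standard definitions)."""
        pred = pred_probs >= threshold
        gt = gt_mask.astype(bool)
        tp = np.sum(pred & gt)
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        tn = np.sum(~pred & ~gt)
        accuracy = (tp + tn) / pred.size
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        fp_rate = fp / (fp + tn) if fp + tn else 0.0
        fn_rate = fn / (fn + tp) if fn + tp else 0.0
        return dict(accuracy=accuracy, precision=precision, recall=recall,
                    f1=f1, fp_rate=fp_rate, fn_rate=fn_rate)

    # Example with random stand-in predictions and a sparse stand-in ground truth.
    rng = np.random.default_rng(0)
    print(segmentation_metrics(rng.random((128, 256)), rng.random((128, 256)) > 0.9))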

    Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions

    An Otsu-threshold- and Canny-edge-detection-based fast Hough transform (FHT) approach to lane detection was proposed to improve the accuracy of lane detection for autonomous vehicle driving. During the last two decades, autonomous vehicles have become very popular, and avoiding traffic accidents caused by human error has become a central goal; the new generation of vehicles needs automatic intelligence. One of the essential functions of a cutting-edge automobile system is lane detection. This study recommends lane detection through improved (extended) Canny edge detection combined with a fast Hough transform. A Gaussian blur filter was used to smooth the image and reduce noise, which helps to improve edge detection accuracy. The Sobel operator, an edge detection operator, calculated the gradient of the image intensity with a convolutional kernel to identify edges in the image. These techniques were applied in the initial lane detection module to enhance the characteristics of the road lanes, making them easier to detect in the image. The Hough transform was then used to identify the lanes based on the mathematical relationship between the lanes and the vehicle: the image was converted into a polar parameter space and lines were sought within a specific range of contrasting points, allowing the algorithm to distinguish the lanes from other features in the image. This also made it possible to separate the detection of the left and right lane markings; as with traditional approaches, a region of interest (ROI) must be extracted for the method to work effectively and easily. The proposed methodology was tested on several image sequences, and least-squares fitting within this region was used to track the lane. In experiments, the proposed system demonstrated high lane detection performance, showing that the identification method performs well in terms of both identification accuracy and real-time processing speed and can satisfy the requirements of lane recognition for lightweight automatic driving systems.
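    A minimal sketch (assuming OpenCV's Python bindings) of the classical pipeline described above: Gaussian blur, Canny edge detection, a region-of-interest mask, and a probabilistic Hough transform to extract candidate lane-line segments. The thresholds and ROI polygon are illustrative assumptions rather than the paper's tuned values, and the Otsu and Sobel steps are omitted for brevity.

    import cv2
    import numpy as np

    def detect_lane_lines(bgr_frame):
        """Classical lane-line pipeline: blur -> Canny edges -> ROI mask -> Hough lines."""
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
        edges = cv2.Canny(blurred, 50, 150)              # thresholds are illustrative

        # Keep only a trapezoidal region of interest in front of the vehicle.
        h, w = edges.shape
        roi = np.zeros_like(edges)
        polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                             (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
        cv2.fillPoly(roi, polygon, 255)
        masked = cv2.bitwise_and(edges, roi)

        # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
        return cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=40, maxLineGap=20)

    # Stand-in black frame; with a real road image, the returned segments would be
    # split into left and right lane candidates by the sign of their slope.
    lines = detect_lane_lines(np.zeros((480, 640, 3), dtype=np.uint8))
    print(0 if lines is None else len(lines))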