
    Robust road marking extraction in urban environments using stereo images

    Most road marking detection systems use image processing to extract potential marking elements in their first stage; the performance of this extraction stage therefore directly impacts the result of the whole process. In this paper, we address the problem of extracting road markings from high-resolution images taken by inspection vehicles in an urban context. This situation is challenging because large special markings, such as crosswalks, zebra crossings, or pictographs, must be detected as well as lane markings. Moreover, urban images feature many white elements that can mislead the extraction process. In prior work, an efficient extraction process, the Median Local Threshold algorithm, was proposed that can handle all kinds of road markings. Here, this extraction algorithm is improved and compared to other extraction algorithms. An experimental study performed on a database of images with ground truth shows that the stereovision strategy reduces the number of false alarms without significant loss of true detections.
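    The core idea behind median-based local thresholding can be illustrated with a minimal, hypothetical 1-D sketch (not the authors' actual Median Local Threshold implementation): a pixel is kept as a marking candidate when it is brighter than the median of its horizontal neighbourhood by some margin, which keeps the decision robust to the bright outliers that would skew a local mean.

```python
from statistics import median

def extract_markings_row(row, half_width=4, offset=30):
    """Flag pixels that are brighter than the local median by `offset`.

    Road markings are brighter than the surrounding asphalt, so a pixel
    is kept when it exceeds the median of its horizontal neighbourhood;
    the median is robust to the bright outliers that skew a local mean.
    """
    flags = []
    for i, value in enumerate(row):
        lo = max(0, i - half_width)
        hi = min(len(row), i + half_width + 1)
        flags.append(value >= median(row[lo:hi]) + offset)
    return flags

# Dark asphalt (~50) with a bright marking (~200) in the middle:
# only the three marking pixels (indices 3-5) are flagged.
row = [50, 52, 48, 200, 205, 198, 51, 49, 50]
marks = extract_markings_row(row)
```

    The window half-width and offset are illustrative values; in practice they would be tuned to the expected marking width and contrast.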

    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis methods, and perspectives on future research directions in this area.

    Real-time object detection using monocular vision for low-cost automotive sensing systems

    This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, depth estimation using monocular vision and, finally, object detection by fusing visual saliency and depth information. First, a novel feature detection approach is proposed for extracting stable and dense features even in images with a very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take noise into account as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature, with its strength proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real time while performing image stabilisation at minimal computational cost. This means that, despite camera vibration, the algorithm can accurately predict the real-world coordinates of each image pixel in real time by comparing each motion vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with subpixel accuracy. It is shown that the local frequency at which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth-map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned with the requirements of the automotive domain.
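    The centroid-based gradient described above can be sketched as follows. This is an illustrative reimplementation, not the thesis code: the choice of weighting the "positive" centroid by intensity and the "negative" centroid by inverted intensity is an assumption made for this sketch, with the connecting vector serving as the gradient.

```python
def centroid_gradient(window):
    """Gradient of a grayscale window as the vector from its negative
    (dark) to its positive (bright) intensity centroid, illustrating
    the centroid-gradient idea behind DeGraF."""
    peak = max(max(r) for r in window)
    px = py = pw = nx = ny = nw = 0.0
    for y, row in enumerate(window):
        for x, v in enumerate(row):
            inv = peak - v  # inverted intensity weights the dark centroid
            px += x * v
            py += y * v
            pw += v
            nx += x * inv
            ny += y * inv
            nw += inv
    # Vector from the dark centroid towards the bright centroid.
    return (px / pw - nx / nw, py / pw - ny / nw)

# Vertical step edge, bright on the right: the gradient points in +x
# with no vertical component.
window = [[0, 0, 255, 255]] * 4
dx, dy = centroid_gradient(window)
```

    A flat window has no dark (or no bright) mass and would divide by zero; a real detector would guard that case and normalise the vector magnitude into a feature strength.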

    ๋ฌด์ธ ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์„ ์œ„ํ•œ ๋‹จ์•ˆ ์นด๋ฉ”๋ผ ๊ธฐ๋ฐ˜ ์‹ค์‹œ๊ฐ„ ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์‹ ๊ธฐ๋ฒ•์— ๊ด€ํ•œ ์—ฐ๊ตฌ

    Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical Engineering, February 2014. Advisor: Seung-Woo Seo. Homo faber refers to humans as controlling their environment through tools. From the beginning, humans have created tools in pursuit of a convenient life. The desire for rapid movement led humans to ride on horseback, build wagons, and finally build vehicles. The vehicle made it possible for humans to travel long distances quickly as well as conveniently. However, since human beings are imperfect, many people have died in car accidents, and people are dying at this very moment. Research on autonomous vehicles has been conducted as the best alternative for satisfying the human desire for safety, and the dream of the autonomous vehicle will come true in the near future. Implementing an autonomous vehicle requires many kinds of techniques, among which recognition of the environment around the vehicle is one of the most fundamental and important problems. Many kinds of sensors can be used to recognize surrounding objects; however, the monocular camera collects the most information among them, can be used for a variety of purposes, and can be adopted for various vehicle types thanks to its price competitiveness. I expect that research using the monocular camera for autonomous vehicles is very practical and useful. In this dissertation, I cover four important recognition problems for autonomous driving using a monocular camera in the vehicular environment. First, to drive autonomously, the vehicle has to recognize lanes and keep to its lane. However, detecting lane markings under varying illumination is very difficult in image processing; nevertheless, it must be solved for autonomous driving. The first research topic is robust lane marking extraction under illumination variations for multilane detection. I propose a new lane marking extraction filter that can detect imperfect lane markings, as well as a new false-positive cancelling algorithm that eliminates noise markings. This approach can extract lane markings successfully even under bad illumination conditions. Second, if there is no lane marking on the road, how can the autonomous vehicle recognize the road to drive on? In addition, what is the current lane position on the road? The latter is an important question, since the decision to change or keep lanes depends on the current lane position. The second research topic handles these two problems, and I propose an approach that fuses road detection and lane position estimation. Third, to drive more safely, keeping a safe distance is very important, and much driving safety equipment requires distance information. Measuring accurate inter-vehicle distance using a monocular camera and a line laser is the third research topic. To measure the inter-vehicle distance, I project the line laser onto the front side of the vehicle and measure the length of the laser line and the lane width in the image. Based on the imaging geometry, the distance calculation problem can be solved accurately. Many important problems remain to be solved, and I propose several approaches using the monocular camera to handle the important ones.
    I expect that active research will continue and that, based on that research, the era of the autonomous vehicle will come in the near future.

    1 Introduction
      1.1 Background and Motivations
      1.2 Contributions and Outline of the Dissertation
        1.2.1 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
        1.2.2 Fusing Road Detection and Lane Position Estimation for Robust Road Boundary Estimation
        1.2.3 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
    2 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
      2.1 Introduction
      2.2 Lane Marking Candidate Extraction Filter
        2.2.1 Requirements of the Filter
        2.2.2 A Comparison of Filter Characteristics
        2.2.3 Cone Hat Filter
      2.3 Overview of the Proposed Algorithm
        2.3.1 Filter Width Estimation
        2.3.2 Top Hat (Cone Hat) Filtering
        2.3.3 Reiterated Extraction
        2.3.4 False Positive Cancelling
          2.3.4.1 Lane Marking Center Point Extraction
          2.3.4.2 Fast Center Point Segmentation
          2.3.4.3 Vanishing Point Detection
          2.3.4.4 Segment Extraction
          2.3.4.5 False Positive Filtering
      2.4 Experiments and Evaluation
        2.4.1 Experimental Set-up
        2.4.2 Conventional Algorithms for Evaluation
          2.4.2.1 Global Threshold
          2.4.2.2 Positive Negative Gradient
          2.4.2.3 Local Threshold
          2.4.2.4 Symmetry Local Threshold
          2.4.2.5 Double Extraction using Symmetry Local Threshold
          2.4.2.6 Gaussian Filter
        2.4.3 Experimental Results
        2.4.4 Summary
    3 Fusing Road Detection and Lane Position Estimation for Robust Road Boundary Estimation
      3.1 Introduction
      3.2 Chromaticity-based Flood-fill Method
        3.2.1 Illuminant-Invariant Space
        3.2.2 Road Pixel Selection
        3.2.3 Flood-fill Algorithm
      3.3 Lane Position Estimation
        3.3.1 Lane Marking Extraction
        3.3.2 Proposed Lane Position Detection Algorithm
        3.3.3 Bird's-eye View Transformation using the Proposed Dynamic Homography Matrix Generation
        3.3.4 Next Lane Position Estimation based on the Cross-ratio
        3.3.5 Forward-looking View Transformation
      3.4 Information Fusion Between Road Detection and Lane Position Estimation
        3.4.1 The Case of Detection Failures
        3.4.2 The Benefit of Information Fusion
      3.5 Experiments and Evaluation
      3.6 Summary
    4 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
      4.1 Introduction
      4.2 Proposed Distance Measurement Algorithm
      4.3 Experiments and Evaluation
        4.3.1 Experimental System Set-up
        4.3.2 Experimental Results
      4.4 Summary
    5 Conclusion
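    The distance measurement in Chapter 4 rests on standard pinhole imaging geometry: an object of known physical length L imaged with l pixels by a camera of focal length f (in pixels) lies at range Z = f * L / l. A minimal sketch with illustrative numbers follows; the dissertation's actual formulation, which also uses the measured lane width, is more involved.

```python
def range_from_known_length(real_len_m, pixel_len, focal_px):
    """Pinhole range estimate Z = f * L / l for an object of known
    physical length L (metres) spanning l pixels at focal length f
    (pixels). All parameter values used below are illustrative."""
    return focal_px * real_len_m / pixel_len

# A 1.5 m laser line spanning 150 px with an 800 px focal length
# places the lead vehicle 8 m ahead.
z = range_from_known_length(1.5, 150, 800)
```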

    Video based vehicle detection for advance warning Intelligent Transportation System

    Video-based vehicle detection and surveillance technologies are an integral part of Intelligent Transportation Systems (ITS) due to their non-intrusiveness and their capability of capturing global and specific vehicle behavior data. The initial goal of this thesis is to develop an efficient advance warning ITS system, based on video detection, for detecting congestion at work zones and special events. The goals accomplished by this thesis are: (1) the advance warning ITS system was successfully developed using off-the-shelf components, and (2) an improved vehicle detection and tracking algorithm was developed and evaluated. The advance warning ITS system includes off-the-shelf equipment such as Autoscope (a video-based vehicle detector), digital video recorders, RF transceivers, high-gain Yagi antennas, variable message signs, and interface processors. The video-based detection system used requires calibration and fine-tuning of configuration parameters for accurate results. Therefore, an in-house video-based vehicle detection system was developed using the Harris corner algorithm to eliminate the need for complex calibration and contrast modifications. The algorithm was implemented using the OpenCV library on an Arcom Olympus development kit running the Windows XP Embedded (WinXPE) operating system. The algorithm's performance is evaluated for accuracy in vehicle speed and count. The performance of the proposed algorithm is equivalent or superior to that of the Autoscope system, without any calibration or illumination adjustments.
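    The Harris corner measure at the heart of such a detector can be sketched as follows. This is a minimal from-scratch illustration of the response R = det(M) - k * trace(M)^2 over a patch's structure tensor M, not the thesis's OpenCV-based implementation (which would also smooth the tensor and apply non-maximum suppression).

```python
def harris_response(patch, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)**2, where M is
    the structure tensor accumulated from central-difference gradients
    over the interior of a grayscale patch."""
    sxx = sxy = syy = 0.0
    for y in range(1, len(patch) - 1):
        for x in range(1, len(patch[0]) - 1):
            ix = (patch[y][x + 1] - patch[y][x - 1]) / 2.0
            iy = (patch[y + 1][x] - patch[y - 1][x]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A corner (bright quadrant) scores positive; a straight edge scores
# negative, so thresholding R separates the two cases.
corner = [[255 if (x < 3 and y < 3) else 0 for x in range(6)] for y in range(6)]
edge = [[255 if x < 3 else 0 for x in range(6)] for y in range(6)]
```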

    Automatic vehicle detection and tracking in aerial video

    This thesis is concerned with the challenging tasks of automatic, real-time vehicle detection and tracking from aerial video. The aim of this thesis is to build an automatic system that can accurately localise any vehicles that appear in aerial video frames and track the target vehicles with trackers. Vehicle detection and tracking have many applications and have been an active area of research in recent years; however, certain realistic environments remain a challenge. This thesis develops vehicle detection and tracking algorithms which enhance robustness beyond the existing approaches. The vehicle detection system proposed in this thesis builds on different object categorisation approaches, with colour and texture features in both point and area template forms. The thesis also proposes a novel Self-Learning Tracking and Detection approach, which is an extension of the existing Tracking Learning Detection (TLD) algorithm. There are a number of challenges in vehicle detection and tracking. The most difficult challenge in detection is distinguishing and clustering the target vehicle from background objects and noise. Under certain conditions, the images captured from Unmanned Aerial Vehicles (UAVs) are also blurred; for example, turbulence may make the vehicle shake during flight. This thesis tackles these challenges by applying integrated multiple feature descriptors for real-time processing. In this thesis, three vehicle detection approaches are proposed: the HSV-GLCM feature approach, the ISM-SIFT feature approach and the FAST-HoG approach. The general vehicle detection approaches used have highly flexible implicit shape representations. They are based on training samples in both positive and negative sets and use updated classifiers to distinguish the targets.
    It has been found that the detection results attained by using HSV-GLCM texture features can be affected by blurring; the proposed detection algorithms can further segment the edges of the vehicles from the background. Using a point descriptor feature can solve the blurring problem; however, the large amount of information contained in point descriptors can lead to processing times that are too long for real-time applications. The FAST-HoG approach, combining the point feature and the shape feature, is therefore proposed; this new approach speeds up processing and attains real-time performance. The HoG descriptor is widely used in object recognition, as it has a strong ability to represent the shape vector of an object. However, the original HoG feature is sensitive to the orientation of the target; this method improves the algorithm by inserting the direction vectors of the targets. For the tracking process, a novel tracking approach is proposed as an extension of the TLD algorithm, in order to track multiple targets. The extended approach upgrades the original system, which can only track a single target that must be selected before the detection and tracking process. The greatest challenge in vehicle tracking is long-term tracking: the target object can change its appearance during the process, and illumination and scale changes can also occur. The original TLD framework assumed that the tracker can make errors during the tracking process and that the accumulation of these errors could cause tracking failure, so it introduced a learning step between tracking and detection, adding a pair of inspectors (positive and negative) to constantly estimate errors. This thesis extends the TLD approach with a new detection method in order to achieve multiple-target tracking.
    A Forward and Backward Tracking approach has been proposed to eliminate tracking errors and other problems such as occlusion. The main purpose of the proposed tracking system is to learn the features of the targets during tracking and to re-train the detection classifier for further processing. This thesis puts particular emphasis on vehicle detection and tracking in extreme scenarios such as crowded highways, blurred images and changes in the appearance of the targets. Compared with existing detection and tracking approaches, the proposed approaches demonstrate a robust increase in accuracy in each scenario.
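    The forward-backward consistency idea can be sketched as follows; this is an illustrative version with hypothetical flow callables standing in for a real tracker, not the thesis code. A point is tracked forward and then backward, and the distance between the start point and the back-tracked point flags unreliable tracks.

```python
def forward_backward_error(point, flow, inverse_flow):
    """Track `point` forward with `flow`, back with `inverse_flow`,
    and return the distance to the starting point; large values flag
    unreliable tracks (drift, occlusion). `flow` and `inverse_flow`
    are hypothetical callables standing in for a real tracker."""
    forward = flow(point)
    back = inverse_flow(forward)
    return ((back[0] - point[0]) ** 2 + (back[1] - point[1]) ** 2) ** 0.5

def shift(p):     # toy forward flow: pure translation by (5, 0)
    return (p[0] + 5, p[1])

def unshift(p):   # its exact inverse: a perfectly consistent track
    return (p[0] - 5, p[1])

def drifting(p):  # inconsistent backward flow: 3 px of drift
    return (p[0] - 3, p[1])

good = forward_backward_error((10.0, 20.0), shift, unshift)   # 0.0
bad = forward_backward_error((10.0, 20.0), shift, drifting)   # 2.0
```

    Thresholding this error is one simple way to decide when a track should be discarded rather than used to re-train the detector.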

    Lane and Road Marking Detection with a High Resolution Automotive Radar for Automated Driving

    The automotive industry is currently undergoing an unprecedented transformation, in which driver assistance and automated driving play a decisive role. An automated driving system mainly comprises three steps: perception and modeling of the environment, trajectory planning, and vehicle control. With good perception and modeling of the environment, a vehicle can successfully perform functions such as adaptive cruise control, emergency brake assist, and lane change assist. Driving functions that must recognize lanes currently rely exclusively on camera sensors. However, video cameras can be severely impaired by changing light conditions, insufficient illumination, or obstructed visibility, e.g. due to fog. To compensate for these disadvantages, this doctoral thesis develops a "radar-compatible" road marking detection approach with which the vehicle can recognize lanes under all lighting conditions, using radars already installed in the vehicle. Today's road markings can be captured very well with camera sensors, but because of their insufficient backscattering properties for radar waves, existing road markings are not detected by radar. To address this, the backscattering properties of different reflector types are investigated in this work, both through simulations and with practical measurements, and a reflector type is proposed that is suitable for incorporation into today's road markings or even for direct installation in the roadway. A further focus of this doctoral thesis is the use of artificial intelligence (AI) to detect and classify lanes with radar. The recorded radar data are analyzed by means of semantic segmentation, detecting lane courses as well as free space. At the same time, the potential of AI-based environment understanding with imaging radar data is demonstrated.

    Lane detection in autonomous vehicles: A systematic review

    One of the essential systems in autonomous vehicles for ensuring a secure environment for drivers and passengers is the Advanced Driver Assistance System (ADAS). Adaptive Cruise Control, Automatic Braking/Steer Away, Lane-Keeping System, Blind Spot Assist, Lane Departure Warning System, and Lane Detection are examples of ADAS. Lane detection supplies the geometrical features of lane line structures to the vehicle's intelligent system to indicate the position of lane markings. This article reviews the methods employed for lane detection in autonomous vehicles. A systematic literature review (SLR) has been carried out to analyze the most suitable approaches to detecting the road lane for the benefit of the automation industry. One hundred and two publications from well-known databases were chosen for this review. The trend was identified after thoroughly examining the selected articles on methods for detecting the road lane from 2018 until 2021. The selected literature used various methods, with the input dataset being one of two types: self-collected or acquired from an online public dataset. The methodologies include geometric modeling and traditional methods, while AI-based approaches include deep learning and machine learning. The use of deep learning has been increasingly researched throughout the last four years. Some studies used stand-alone deep learning implementations for lane detection problems, while other research focused on merging deep learning with other machine learning techniques and classical methodologies. Recent advancements imply that attention mechanisms have become a popular strategy in combination with deep learning methods, and the use of deep algorithms in conjunction with other techniques has shown promising outcomes. This research aims to provide a complete overview of the literature on lane detection methods, highlighting which approaches are currently being researched and the performance of existing state-of-the-art techniques. The paper also covers the equipment used to collect the datasets and the datasets used for network training, validation, and testing. This review yields a valuable foundation on lane detection techniques, challenges, and opportunities, and supports new research work in this automation field. For further study, it is suggested that more effort be put into improving accuracy, increasing speed, and tackling more challenging extreme conditions in detecting the road lane.