
    Efficient vanishing point detection method in unstructured road environments based on dark channel prior

    Vanishing point detection is a key technique in fields such as road detection, camera calibration, and visual navigation. This study presents a new vanishing point detection method that achieves efficiency by combining a dark channel prior-based segmentation method with an adaptive straight-line search mechanism in the road region. First, the dark channel prior information is used to segment the image into a series of regions. Straight lines are then extracted from the region contours, and the straight lines in the road region are estimated using a vertical envelope and a perspective quadrilateral constraint. The vertical envelope roughly divides the whole image into a sky region, a vertical region, and a road region. The perspective quadrilateral constraint, as defined by the authors, eliminates the interference of vertical lines inside the road region so that the approximate straight lines in the road region can be extracted. Finally, the vanishing point is estimated by mean-shift clustering of candidate points computed with the proposed grouping strategies and intersection principles. Experiments were conducted on a large number of road images under different environmental conditions, and the results demonstrate that the proposed algorithm can estimate the vanishing point accurately and efficiently in unstructured road scenes.
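As a rough illustration of the first step only, a dark channel can be computed as a per-pixel minimum over the color channels followed by a local minimum filter. This is a generic numpy sketch of the dark channel prior (the patch size and the subsequent segmentation step are assumptions, not the authors' exact procedure):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over the color channels,
    then a minimum filter over a local patch around each pixel."""
    mins = img.min(axis=2)                 # min over R, G, B
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            # local minimum over the patch centered at (y, x)
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

In the paper's pipeline, regions of the resulting dark channel map would then be segmented and their contours used for straight-line extraction.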

    A Smart System for Detection of Road Lane and Divider

    Road lane and divider detection is a core part of environmental perception for driver assistance systems, which rely heavily on it. The paper discusses a machine learning and computer vision approach to identifying road lanes and dividers and proposes a system for their detection. Voting classification was implemented using 7 different classifiers. The combination of Scale Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) provided feature extraction, and Principal Component Analysis (PCA) provided dimension reduction. The performance was examined in terms of accuracy (95.50%), precision (78.81%), recall (65.71%), and F1 score (71.67%). The proposed solution helps address road-safety issues and reduce road accidents.
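The pipeline above (features → PCA → voting over classifiers) can be sketched generically. Since SIFT and ORB extraction require OpenCV, this minimal numpy version shows only the PCA projection and hard-voting stages; the function names are my own and not from the paper:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data = principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def hard_vote(predictions):
    """Majority vote across classifier outputs.
    predictions has shape (n_classifiers, n_samples); ties resolve
    to the smallest label (np.unique returns sorted values)."""
    preds = np.asarray(predictions)
    out = []
    for col in preds.T:
        vals, counts = np.unique(col, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)
```

In the paper's setting, `X` would hold SIFT+ORB descriptors and `predictions` the outputs of the 7 trained classifiers.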

    Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions

    An Otsu-threshold- and Canny-edge-detection-based fast Hough transform (FHT) approach to lane detection was proposed to improve the accuracy of lane detection for autonomous vehicle driving. During the last two decades, autonomous vehicles have become very popular, and they help avoid traffic accidents caused by human error. The new generation needs automatic vehicle intelligence, and one of the essential functions of a cutting-edge automobile system is lane detection. This study recommended the idea of lane detection through improved (extended) Canny edge detection using a fast Hough transform. A Gaussian blur filter was used to smooth the image and reduce noise, which can improve edge detection accuracy. The Sobel operator, an edge-detection operator, calculated the gradient of the image intensity with a convolutional kernel to identify edges. These techniques were applied in the initial lane detection module to enhance the characteristics of the road lanes, making them easier to detect in the image. The Hough transform was then used to identify the lanes based on the mathematical relationship between the lanes and the vehicle: it converts the image into a polar coordinate system and looks for lines within a specific range of contrasting points, allowing the algorithm to distinguish the lanes from other features in the image. The Hough transform also makes it possible to separate left and right lane-marking extraction; for traditional approaches to work effectively, the region of interest (ROI) must first be extracted. The proposed methodology was tested on several image sequences, and least-squares fitting in this region was used to track the lane.
The proposed system demonstrated high lane-detection accuracy in experiments, showing that the method performed well in terms of reasoning speed and identification accuracy, balancing accuracy with real-time processing, and can satisfy the requirements of lane recognition for lightweight automatic driving systems.
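The Otsu thresholding step named above can be sketched in plain numpy. This is a textbook implementation of Otsu's method (maximize between-class variance over the grayscale histogram), not the paper's code:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold t that maximizes the
    between-class variance of the 8-bit grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    # between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0         # empty classes score 0
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold would then be passed to Canny edge detection and the fast Hough transform in the described pipeline.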

    A parallel windowing approach to the Hough transform for line segment detection

    Across the wide range of image processing and computer vision problems, line segment detection has always been among the most critical topics. Detection of primitives such as linear features and straight edges has diverse applications in many image understanding and perception tasks. The research presented in this dissertation contributes to the detection of straight-line segments by identifying the locations of their endpoints within a two-dimensional digital image. The proposed method is based on a domain-crossing approach that takes both image-domain and parameter-domain information into consideration. First, the straight-line parameters, i.e. location and orientation, are identified using an advanced Fourier-based Hough transform. As well as producing more accurate and robust detection of straight lines, this method has been shown to be more computationally efficient than the standard Hough transform. Second, for each straight line a window of interest is constructed in the image domain, and the disturbance caused by neighbouring segments is removed to capture the Hough transform butterfly of the target segment; in this way a separate butterfly is constructed for each straight line. The boundaries of the butterfly wings are further smoothed and approximated by a curve-fitting approach. Finally, segment endpoints are identified using the butterfly boundary points and the Hough transform peak. Experimental results on synthetic and real images show that the proposed method enjoys superior performance compared with existing representative works.
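For context, the standard Hough transform that the dissertation improves on accumulates votes in (rho, theta) parameter space; a line appears as a peak, and its "butterfly" is the characteristic vote pattern around that peak. A minimal numpy sketch of the accumulator (resolutions and the point format are arbitrary choices of mine):

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180):
    """Standard Hough transform: each edge point (y, x) votes for all
    (rho, theta) pairs satisfying rho = x*cos(theta) + y*sin(theta).
    Peaks in the accumulator correspond to straight lines."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))        # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_points:
        # rho can be negative, so shift indices by diag
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag
```

Extracting endpoints from the peak is exactly what the standard transform cannot do directly, which motivates the butterfly-based windowing approach described above.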

    ๋ฌด์ธ ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์„ ์œ„ํ•œ ๋‹จ์•ˆ ์นด๋ฉ”๋ผ ๊ธฐ๋ฐ˜ ์‹ค์‹œ๊ฐ„ ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์‹ ๊ธฐ๋ฒ•์— ๊ด€ํ•œ ์—ฐ๊ตฌ

    Doctoral dissertation, Department of Electrical Engineering, Seoul National University Graduate School, February 2014. Advisor: Seung-Woo Seo. "Homo faber" refers to humans as beings who control their environment through tools. From the beginning, humans have created tools in pursuit of a more convenient life. The desire for rapid movement led humans to ride on horseback, build the wagon, and finally build the vehicle. The vehicle made it possible for humans to travel long distances quickly and conveniently. However, since human beings are imperfect, many people have died in car accidents, and people are still dying at this moment. Research on autonomous vehicles has been conducted as the best alternative for satisfying the human desire for safety, and the dream of the autonomous vehicle will come true in the near future. Implementing an autonomous vehicle requires many kinds of techniques, among which recognition of the environment around the vehicle is one of the most fundamental and important problems. Many kinds of sensors can be used to recognize surrounding objects, but the monocular camera collects the largest amount of information among them, can be used for a variety of purposes, and can be adopted for various vehicle types thanks to its price competitiveness. I therefore expect research using the monocular camera for autonomous vehicles to be very practical and useful. In this dissertation, I cover four important recognition problems for autonomous driving using a monocular camera in the vehicular environment. First, to drive autonomously, the vehicle has to recognize lanes and keep to its lane. However, detecting lane markings under varying illumination is very difficult in image processing, and yet it must be solved for autonomous driving. The first research topic is therefore robust lane marking extraction under illumination variations for multilane detection.
I propose a new lane marking extraction filter that can detect imperfect lane markings, together with a new false-positive cancelling algorithm that eliminates noise markings. This approach can extract lane markings successfully even under bad illumination conditions. Second, if there is no lane marking on the road, how can the autonomous vehicle recognize the road to drive on? In addition, what is the current lane position on the road? The latter is an important question, since the decision to change or keep a lane depends on the current lane position. The second research topic handles these two problems, and I propose an approach that fuses road detection and lane position estimation. Third, to drive more safely, keeping a safe distance is very important, and much driving-safety equipment requires distance information. Measuring accurate inter-vehicle distance using a monocular camera and a line laser is the third research topic. To measure the inter-vehicle distance, I project the line laser onto the front of the vehicle and measure the length of the laser line and the lane width in the image. Based on the imaging geometry, the distance calculation problem can then be solved accurately. Many important problems remain to be solved, and I propose monocular-camera approaches to several of them.
I expect that very active research will continue and that, based on it, the era of the autonomous vehicle will arrive in the near future.

Table of contents:
1 Introduction
  1.1 Background and Motivations
  1.2 Contributions and Outline of the Dissertation
    1.2.1 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
    1.2.2 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation
    1.2.3 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
2 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
  2.1 Introduction
  2.2 Lane Marking Candidate Extraction Filter
    2.2.1 Requirements of the Filter
    2.2.2 A Comparison of Filter Characteristics
    2.2.3 Cone Hat Filter
  2.3 Overview of the Proposed Algorithm
    2.3.1 Filter Width Estimation
    2.3.2 Top Hat (Cone Hat) Filtering
    2.3.3 Reiterated Extraction
    2.3.4 False Positive Cancelling
      2.3.4.1 Lane Marking Center Point Extraction
      2.3.4.2 Fast Center Point Segmentation
      2.3.4.3 Vanishing Point Detection
      2.3.4.4 Segment Extraction
      2.3.4.5 False Positive Filtering
  2.4 Experiments and Evaluation
    2.4.1 Experimental Set-up
    2.4.2 Conventional Algorithms for Evaluation
      2.4.2.1 Global Threshold
      2.4.2.2 Positive Negative Gradient
      2.4.2.3 Local Threshold
      2.4.2.4 Symmetry Local Threshold
      2.4.2.5 Double Extraction using Symmetry Local Threshold
      2.4.2.6 Gaussian Filter
    2.4.3 Experimental Results
    2.4.4 Summary
3 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation
  3.1 Introduction
  3.2 Chromaticity-based Flood-fill Method
    3.2.1 Illuminant-Invariant Space
    3.2.2 Road Pixel Selection
    3.2.3 Flood-fill Algorithm
  3.3 Lane Position Estimation
    3.3.1 Lane Marking Extraction
    3.3.2 Proposed Lane Position Detection Algorithm
    3.3.3 Bird's-eye View Transformation by using the Proposed Dynamic Homography Matrix Generation
    3.3.4 Next Lane Position Estimation based on the Cross-ratio
    3.3.5 Forward-looking View Transformation
  3.4 Information Fusion Between Road Detection and Lane Position Estimation
    3.4.1 The Case of Detection Failures
    3.4.2 The Benefit of Information Fusion
  3.5 Experiments and Evaluation
  3.6 Summary
4 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
  4.1 Introduction
  4.2 Proposed Distance Measurement Algorithm
  4.3 Experiments and Evaluation
    4.3.1 Experimental System Set-up
    4.3.2 Experimental Results
  4.4 Summary
5 Conclusion
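The core geometric idea behind the third contribution, measuring depth from the imaged length of a structure of known physical size, can be reduced to the pinhole-camera relation l = f·L/Z. This is a simplified, hypothetical reduction of my own; the dissertation's actual algorithm additionally uses the measured lane width:

```python
def distance_from_laser(f_px, laser_len_m, laser_len_px):
    """Pinhole-camera sketch: a line of known physical length
    laser_len_m (meters) imaged at laser_len_px pixels, with focal
    length f_px (pixels), lies at depth Z = f * L / l.
    Simplified illustration only, not the dissertation's method."""
    return f_px * laser_len_m / laser_len_px
```

For example, a 2 m laser line imaged at 100 px with a 1000 px focal length implies a 20 m inter-vehicle distance.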

    Scalable Hierarchical Gaussian Process Models for Regression and Pattern Classification

    Gaussian processes, which are distributions over functions, are powerful nonparametric tools for the two major machine learning tasks: regression and classification. Both tasks are concerned with learning input-output mappings from example input-output pairs. In Gaussian process (GP) regression and classification, such mappings are modeled by Gaussian processes. In GP regression, the likelihood is Gaussian for continuous outputs, so closed-form solutions for prediction and model selection can be obtained. In GP classification, the likelihood is non-Gaussian for discrete/categorical outputs, so closed-form solutions are not available and approximate inference methods must be used instead.
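The closed-form GP regression prediction mentioned above follows directly from the Gaussian likelihood. A minimal numpy sketch with an assumed squared-exponential kernel (the kernel choice and hyperparameters are illustrative, not from the dissertation):

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential covariance between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def gp_predict(X, y, Xs, noise=1e-2):
    """Closed-form GP regression posterior at test inputs Xs:
    mean = K*^T (K + noise*I)^-1 y, computed via Cholesky."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.diag(cov)
```

It is exactly this closed form that breaks down for classification, where the non-Gaussian likelihood forces the approximate-inference schemes the dissertation develops.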