
    Fast traffic sign recognition using color segmentation and deep convolutional networks

    The use of Computer Vision techniques for the automatic recognition of road signs is fundamental for the development of intelligent vehicles and advanced driver assistance systems. In this paper, we describe a procedure based on color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional Neural Networks (CNN) for detecting and classifying road signs. Detection is speeded up by a pre-processing step to reduce the search space, while classification is carried out by using a Deep Learning technique. A quantitative evaluation of the proposed approach has been conducted on the well-known German Traffic Sign data set and on the novel Data set of Italian Traffic Signs (DITS), which is publicly available and contains challenging sequences captured in adverse weather conditions and in an urban scenario at night-time. Experimental results demonstrate the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
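    The colour-segmentation pre-processing step that this abstract credits with speeding up detection can be sketched in a few lines: keep only pixels whose colour is plausibly that of a sign, so the expensive HOG/CNN stages run on a much smaller search space. The sketch below (all function names and thresholds are illustrative, not taken from the paper) masks red-dominant pixels in a pure-Python RGB image.

    ```python
    def red_mask(image, r_min=120, dominance=1.5):
        """Binary mask of red-dominant pixels (candidate sign regions).

        image: list of rows, each a list of (r, g, b) tuples.
        A pixel is kept when its red channel is strong and clearly
        dominates the green and blue channels.
        """
        mask = []
        for row in image:
            mask.append([
                1 if r >= r_min and r >= dominance * max(g, b, 1) else 0
                for (r, g, b) in row
            ])
        return mask

    # Tiny synthetic 2x2 image: two red-ish pixels, one grey, one blue.
    img = [[(200, 40, 30), (90, 90, 90)],
           [(30, 40, 220), (180, 60, 50)]]
    print(red_mask(img))  # [[1, 0], [0, 1]]
    ```

    A real pipeline would then extract connected components from the mask and pass only those windows to the classifier.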

    Traffic sign recognition based on human visual perception.

    This thesis presents a new approach, based on human visual perception, for detecting and recognising traffic signs under different viewing conditions. Traffic sign recognition is an important issue within any driver support system, as it is fundamental to traffic safety and increases drivers' awareness of situations and possible decisions that lie ahead. All traffic signs possess similar visual characteristics: they are often the same size, shape and colour. However, shapes may be distorted when viewed from different viewing angles, and colours are affected by overall luminosity and the presence of shadows. Human vision can identify traffic signs correctly by ignoring this variance of colours and shapes. Consequently, traffic sign recognition based on human visual perception has been researched during this project. In this approach two human vision models are adopted to solve the problems above: the Colour Appearance Model (CIECAM97s) and the Behavioural Model of Vision (BMV). CIECAM97s is used to segment potential traffic signs from the image background under different weather conditions; BMV is used to recognize the potential traffic signs. Results show that segmentation based on CIECAM97s performs better than, or comparably to, other perceptual colour spaces in terms of accuracy. In addition, results illustrate that recognition based on BMV can be used in this project effectively to detect a certain range of shape transformations. Furthermore, a fast method of distinguishing and recognizing the different weather conditions within images has been developed. The results show that an 84% recognition rate can be achieved under three weather and different viewing conditions.
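    The key claim above is that segmenting in a perceptual colour space is more robust to luminance and shadow changes than thresholding raw RGB. The full CIECAM97s appearance model is far more involved, but the idea can be approximated with a stdlib HSV conversion: select pixels by hue and saturation, which vary much less with illumination than the raw channel values. The band limits below are illustrative assumptions, not values from the thesis; note that red wraps around hue 0.

    ```python
    import colorsys

    def hue_segment(image, hue_lo=0.95, hue_hi=0.05, sat_min=0.4):
        """Segment red-hued pixels using HSV, a crude stand-in for the
        luminance-robust chromatic segmentation the thesis attributes
        to CIECAM97s. image: rows of (r, g, b) tuples in 0..255.
        Red wraps around hue 0, so the band is [hue_lo, 1] U [0, hue_hi]."""
        out = []
        for row in image:
            mask_row = []
            for (r, g, b) in row:
                h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
                in_band = (h >= hue_lo or h <= hue_hi) and s >= sat_min
                mask_row.append(1 if in_band else 0)
            out.append(mask_row)
        return out

    # A saturated red pixel is kept; a grey pixel (any brightness) is not.
    print(hue_segment([[(220, 30, 30), (128, 128, 128)]]))  # [[1, 0]]
    ```

    Because hue is a ratio of channel differences, darkening the whole scene (e.g. a shadow scaling all channels) leaves the mask largely unchanged, which is the robustness the thesis is after.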

    IVVI: Intelligent vehicle based on visual information

    Human errors are the cause of most traffic accidents, with drivers' inattention and wrong driving decisions being the two main sources. These errors can be reduced, but not completely eliminated. That is why Advanced Driver Assistance Systems (ADAS) can reduce the number, danger and severity of traffic accidents. Several ADAS, which nowadays are being researched for Intelligent Vehicles, are based on Artificial Intelligence and Robotics technologies. In this article a research platform for the implementation of systems based on computer vision is presented, and different visual perception modules useful for some ADAS, such as Lane Keeping System, Adaptive Cruise Control, Pedestrian Protector, or Speed Supervisor, are described. This work was supported in part by the Spanish government under CICYT grant TRA2004-07441-C03-01.

    Local object gist: meaningful shapes and spatial layout at a very early stage of visual processing

    In his introduction, Pinna (2010) quoted one of Wertheimer’s observations: “I stand at the window and see a house, trees, sky. Theoretically I might say there were 327 brightnesses and nuances of color. Do I have ‘327’? No. I have sky, house, and trees.” This seems quite remarkable, for Max Wertheimer, together with Kurt Koffka and Wolfgang Koehler, was a pioneer of Gestalt Theory: perceptual organisation was tackled considering grouping rules of line and edge elements in relation to figure-ground segregation, i.e., a meaningful object (the figure) as perceived against a complex background (the ground). At the lowest level – line and edge elements – Wertheimer (1923) himself formulated grouping principles on the basis of proximity, good continuation, convexity, symmetry and, often forgotten, past experience of the observer. Rubin (1921) formulated rules for figure-ground segregation using surroundedness, size and orientation, but also convexity and symmetry. Almost a century of research into Gestalt later, Pinna and Reeves (2006) introduced the notion of figurality, meant to represent the integrated set of properties of visual objects, from the principles of grouping and figure-ground to the colour and volume of objects with shading. Pinna, in 2010, went one important step further and studied perceptual meaning, i.e., the interpretation of complex figures on the basis of past experience of the observer. Re-establishing a link to Wertheimer’s rule about past experience, he formulated five propositions, three definitions and seven properties on the basis of observations made on graphically manipulated patterns. For example, he introduced the illusion of meaning by comics-like elements suggesting wind, therefore inducing a learned interpretation. His last figure shows a regular array of squares but with irregular positions on the right side. 
This pile of (ir)regular squares can be interpreted as the result of an earthquake which destroyed part of an apartment block. This is much more intuitive, direct and economical than describing the complexity of the array of squares.

    Fast and robust road sign detection in driver assistance systems

    © 2018, Springer Science+Business Media, LLC, part of Springer Nature. Road sign detection plays a critical role in automatic driver assistance systems. Road signs possess a number of unique visual qualities in images due to their specific colors and symmetric shapes. In this paper, road signs are detected by a two-level hierarchical framework that considers both the color and the shape of the signs. To address the problem of low image contrast, we propose a new color visual saliency segmentation algorithm, which uses the ratios of enhanced and normalized color values to capture color information. To improve computational efficiency and reduce the false alarm rate, we modify the fast radial symmetry transform (RST) algorithm, and propose an edge pairwise voting scheme to group feature points based on their underlying symmetry in the candidate regions. Experimental results on several benchmark datasets demonstrate the superiority of our method over state-of-the-art approaches in both efficiency and robustness.
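    The edge pairwise voting idea mentioned in this abstract can be illustrated compactly: for a symmetric shape such as a circular sign, edge points on opposite sides have gradients that point roughly toward each other, so every such pair votes for its midpoint as a candidate centre. The sketch below is a simplified take on that scheme (names, the angle tolerance, and the edge representation are illustrative, not the paper's).

    ```python
    import math
    from collections import Counter

    def symmetry_votes(edges, angle_tol=0.2):
        """Pairwise symmetry voting over edge points.

        edges: list of (x, y, theta) with theta the gradient angle in
        radians. Two points whose gradient angles differ by ~pi (i.e.
        point toward each other across the shape) vote for their
        midpoint as a candidate centre of a symmetric shape."""
        votes = Counter()
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                x1, y1, t1 = edges[i]
                x2, y2, t2 = edges[j]
                if abs(abs(t1 - t2) - math.pi) <= angle_tol:
                    votes[(x1 + x2) // 2, (y1 + y2) // 2] += 1
        return votes

    # Four edge points of a circle centred at (5, 5): both opposing
    # pairs vote for the centre; mismatched pairs vote for nothing.
    pts = [(0, 5, 0.0), (10, 5, math.pi), (5, 0, math.pi / 2), (5, 10, -math.pi / 2)]
    print(symmetry_votes(pts))  # Counter({(5, 5): 2})
    ```

    Peaks in the vote map then mark candidate sign centres, which is much cheaper than scanning every window for symmetry.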

    Robust Traffic Sign Detection by means of Vision and V2I Communications

    14th International IEEE Annual Conference on Intelligent Transportation Systems (ITSC), 05/10/2011-07/10/2011, Washington DC, United States. This paper presents a complete traffic sign recognition system, including the steps of detection, recognition and tracking. The Hough transform is used as the detection method on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM) and is able to recognize up to one hundred of the main road signs. Besides, a novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed; for that purpose, vehicle-to-infrastructure (V2I) communication and stereo information are used. This paper presents plenty of tests in real driving conditions, both day and night, in which a high success rate and a low number of false positives and false negatives were obtained, with an average runtime of 35 ms, allowing real-time performance.
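    The host-road filtering step this abstract describes can be sketched simply. The real system combines road geometry received over V2I with sign positions triangulated by stereo; the simplification below assumes a straight host road and discards detections whose stereo lateral offset exceeds the road's half-width (the data layout and threshold are illustrative assumptions, not the paper's).

    ```python
    def filter_host_road_signs(detections, half_width=5.0):
        """Keep only detected signs that plausibly belong to the host road.

        detections: list of (sign_id, lateral_m, depth_m) tuples, where
        lateral_m is the signed lateral distance from the vehicle's
        centreline estimated from stereo. Signs further sideways than
        half_width are assumed to belong to another road and discarded.
        """
        return [d for d in detections if abs(d[1]) <= half_width]

    dets = [("stop", 3.2, 25.0), ("yield", -9.5, 30.0), ("limit50", 4.8, 40.0)]
    print(filter_host_road_signs(dets))
    # [('stop', 3.2, 25.0), ('limit50', 4.8, 40.0)]
    ```

    With V2I-supplied curvature, the same test would be applied against the lateral offset from the predicted road centreline at the sign's depth rather than from a straight line.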