4 research outputs found

    Automated Mapping Of Accessibility Signs With Deep Learning From Ground-level Imagery and Open Data

    In some areas or regions, accessible parking spots are not geolocalized and are therefore both difficult to find online and excluded from open data sources. In this paper, we aim to detect accessible parking signs in street view panoramas and geolocalize them. Object detection is an open challenge in computer vision, and numerous methods exist, whether based on handcrafted features or on deep learning. Our method processes Google Street View images of French cities in order to geolocalize accessible parking signs on posts and on the ground where the parking spot is not available in GIS systems. To accomplish this, we rely on the deep learning object detection method Faster R-CNN with Region Proposal Networks, which has shown excellent performance on object detection benchmarks. This helps map accurate locations of existing parking areas, which can be used to build services or update online mapping services such as OpenStreetMap. We provide preliminary results which show the feasibility and relevance of our approach.
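    As a rough illustration of the detection step only, the sketch below runs an off-the-shelf torchvision Faster R-CNN (with its Region Proposal Network) on a single street-view crop. The input filename, score threshold, and the idea of fine-tuning on parking-sign annotations are assumptions for illustration, not the authors' released code.

    ```python
    # Minimal sketch (not the authors' code): applying a Faster R-CNN detector to a
    # street-view crop with torchvision. In practice the detection head would be
    # fine-tuned on accessible-parking-sign annotations.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("street_view_crop.jpg").convert("RGB")  # hypothetical input crop
    with torch.no_grad():
        predictions = model([to_tensor(image)])[0]

    # Keep confident detections; each box could then be geolocalized by combining
    # the panorama's GPS position and heading with the box's image coordinates.
    keep = predictions["scores"] > 0.8  # illustrative threshold
    print(predictions["boxes"][keep])
    ```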

    Assessment of Driver's Attention to Traffic Signs through Analysis of Gaze and Driving Sequences

    A driver's behavior is one of the most significant factors in Advanced Driver Assistance Systems. One area that has received little study is just how observant drivers are in seeing and recognizing traffic signs. In this contribution, we present a system that considers the location where a driver is looking (the point of gaze) as a factor in determining whether the driver has seen a sign. Our system detects and classifies traffic signs inside the driver's attentional visual field to identify whether the driver has seen them or not. Based on the quantitative results obtained from this stage, our system is able to determine how observant of traffic signs drivers are. For detection, we combine the Maximally Stable Extremal Regions algorithm with color information, using Histogram of Oriented Gradients features and a binary linear Support Vector Machine classifier. In the classification stage, we use a multi-class Support Vector Machine, again with Histogram of Oriented Gradients features. In addition to detecting and recognizing traffic signs, our system determines whether a sign lies inside the driver's attentional visual field: if it does, the driver has kept their gaze on the sign and has seen it; if it does not, the driver did not look at the sign and has missed it.
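    The sketch below shows, under stated assumptions, how the described components could fit together: MSER candidate regions scored with HOG features and a linear SVM, plus a simple circular test for whether a detection falls inside the attentional visual field around the point of gaze. The window sizes, field radius, and the pre-trained `LinearSVC` are illustrative placeholders, not the paper's actual parameters.

    ```python
    # Minimal sketch (assumptions, not the paper's code): MSER candidate regions,
    # HOG features scored by a pre-trained linear SVM, and a circular
    # "attentional visual field" test around the recorded point of gaze.
    import cv2
    import numpy as np
    from sklearn.svm import LinearSVC  # assumed already trained on sign/non-sign HOG samples

    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)  # illustrative sizes
    mser = cv2.MSER_create()

    def detect_signs(frame_bgr, svm: LinearSVC):
        """Return bounding boxes of MSER regions that the SVM scores as sign-like."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, boxes = mser.detectRegions(gray)
        detections = []
        for (x, y, w, h) in boxes:
            patch = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 64))
            feat = hog.compute(cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)).reshape(1, -1)
            if svm.decision_function(feat)[0] > 0:  # positive margin -> sign candidate
                detections.append((x, y, w, h))
        return detections

    def seen_by_driver(box, gaze_xy, field_radius_px=150):
        """True if the sign's centre falls inside the attentional visual field (assumed radius)."""
        x, y, w, h = box
        centre = np.array([x + w / 2, y + h / 2])
        return np.linalg.norm(centre - np.array(gaze_xy)) <= field_radius_px
    ```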

    A traffic sign detection pipeline based on interest region extraction

    In this paper we present a pipeline for automatic detection of traffic signs in images. The proposed system can deal with high appearance variations, which typically occur in traffic sign recognition applications, especially with strong illumination changes and dramatic scale changes. Unlike most existing systems, our pipeline is based on interest region extraction rather than a sliding window detection scheme. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of Mandatory, Prohibitory and Danger traffic signs. Our proposal has been evaluated experimentally within the German Traffic Sign Detection Benchmark competition.
    Samuele Salti; Alioscia Petrelli; Federico Tombari; Nicola Fioraio; Luigi Di Stefano
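    As a hedged sketch of what interest-region extraction (as opposed to sliding-window scanning) can look like, the snippet below proposes candidate regions by HSV colour thresholding per sign category. The colour ranges and the minimum area are illustrative assumptions, not the published pipeline.

    ```python
    # Minimal sketch, not the published pipeline: extracting interest regions by
    # HSV colour thresholding (red rims for Prohibitory/Danger, blue for Mandatory)
    # instead of scanning the image with a sliding window.
    import cv2
    import numpy as np

    COLOUR_RANGES = {  # illustrative thresholds, not tuned values
        "prohibitory_danger": [((0, 80, 60), (10, 255, 255)),     # lower red hues
                               ((170, 80, 60), (180, 255, 255))],  # upper red hues
        "mandatory": [((100, 80, 60), (130, 255, 255))],           # blue hues
    }

    def interest_regions(image_bgr, category, min_area=100):
        """Return bounding boxes of colour blobs to pass to a category-specific classifier."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in COLOUR_RANGES[category]:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    ```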