
    A Hybrid Connecting Character Based Text Recognition and Extraction Algorithm

    Traffic sign recognition is a technology by which a vehicle is able to recognize the traffic signs placed on the road, e.g. "speed limit", "children" or "turn ahead". In this paper a novel connecting-character based text recognition and extraction algorithm is designed which uses Maximally Stable Extremal Regions (MSER) for text candidate recognition and extraction from traffic signs. Despite their favourable properties, MSERs have been reported to be sensitive to image blur. To allow small letters to be detected in images of limited resolution or in blurred images, the complementary properties of the Lucy-Richardson algorithm and the Canny edge detection algorithm are used.
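
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below deblurs a grayscale image with Richardson-Lucy deconvolution, extracts MSER regions as text candidates and keeps only regions with sufficient Canny edge support. The Gaussian PSF, thresholds and edge-density filter are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: MSER text-candidate detection on a deblurred image,
# filtered by Canny edge support. PSF and thresholds are assumptions.
import cv2
import numpy as np
from skimage.restoration import richardson_lucy

def detect_text_candidates(gray_u8, psf_sigma=1.5, num_iter=20):
    # Build a small Gaussian PSF and apply Richardson-Lucy deconvolution
    # to reduce blur before region detection.
    x = np.arange(-3, 4)
    g = np.exp(-(x ** 2) / (2 * psf_sigma ** 2))
    psf = np.outer(g, g)
    psf /= psf.sum()
    deblurred = richardson_lucy(gray_u8.astype(float) / 255.0, psf, num_iter)
    deblurred_u8 = (np.clip(deblurred, 0, 1) * 255).astype(np.uint8)

    # Extract MSER regions as text candidates.
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(deblurred_u8)

    # Keep regions whose bounding box contains enough Canny edge pixels,
    # since characters produce dense, high-contrast edges.
    edges = cv2.Canny(deblurred_u8, 50, 150)
    candidates = []
    for (x0, y0, w, h) in bboxes:
        edge_density = edges[y0:y0 + h, x0:x0 + w].mean() / 255.0
        if edge_density > 0.05:          # illustrative threshold
            candidates.append((x0, y0, w, h))
    return candidates
```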

    Using customised image processing for noise reduction to extract data from early 20th century African newspapers

    A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering, 2017. The images from the African articles dataset presented challenges to the Optical Character Recognition (OCR) tool. Despite successful binarisation in the image-processing step of the pipeline, noise remained in the foreground of the images. This noise caused the OCR tool to misinterpret the text in the images and therefore had to be removed from the foreground. The technique involved the application of the Maximally Stable Extremal Region (MSER) algorithm, borrowed from scene-text detection, together with supervised machine-learning classifiers. The algorithm creates regions from the foreground elements. Regions can be classified as noise or characters based on the characteristics of their shapes. Classifiers were trained to recognise noise and characters. The technique is useful for a researcher wanting to process and analyse the large dataset: the foreground noise-removal process can be semi-automated, allowing better-quality OCR output for use in the text-analysis step of the pipeline. Better OCR quality means fewer compromises are required at the text-analysis step; such concessions can lead to false results when searching noisy text, so fewer compromises mean simpler, less error-prone analysis and more trustworthy results. The technique was tested against specifically selected images from the dataset which exhibited noise. It involved a number of steps. Training regions were selected and manually classified. After training and running many classifiers, the highest-performing classifier was selected. The classifier categorised regions from all images, and new images were created by removing noise regions from the original images. To discover whether an improvement in the OCR output was achieved, a text comparison was conducted: OCR text was generated from both the original and processed images, and the two outputs of each image were compared for similarity against the test text, a manually created version of the expected OCR output per image. The similarity test for both original and processed images produced a score; a change in the similarity score indicated whether the technique had successfully removed noise. The test results showed that blotches in the foreground could be removed and OCR output improved, while bleed-through and page-fold noise was not removable. For images affected by noise blotches, this technique can be applied, and hence fewer concessions will be needed when processing the text generated from those images.
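
    A minimal sketch of the region-classification idea described above, assuming scikit-learn's RandomForestClassifier as the supervised classifier and a handful of simple shape features (area, aspect ratio, extent, solidity); the actual feature set and classifier used in the report may differ.

```python
# Hedged sketch: describe each MSER region by simple shape features and
# train a classifier to separate characters from noise blotches.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_features(points):
    """Shape descriptors for one MSER region (an Nx2 array of x,y points)."""
    x, y, w, h = cv2.boundingRect(points)
    area = float(len(points))
    aspect = w / float(h)
    extent = area / float(w * h)
    hull = cv2.convexHull(points)
    solidity = area / max(cv2.contourArea(hull), 1.0)
    return [area, aspect, extent, solidity]

def train_noise_classifier(gray_images, labels_per_image):
    """labels_per_image[i][j] is 1 if region j of image i is a character, else 0."""
    mser = cv2.MSER_create()
    X, y = [], []
    for img, labels in zip(gray_images, labels_per_image):
        regions, _ = mser.detectRegions(img)
        for pts, lab in zip(regions, labels):
            X.append(region_features(pts))
            y.append(lab)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.array(X), np.array(y))
    return clf

def remove_noise(gray, clf):
    """Paint regions classified as noise with the background colour (white)."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    cleaned = gray.copy()
    for pts in regions:
        if clf.predict([region_features(pts)])[0] == 0:   # 0 = noise
            cleaned[pts[:, 1], pts[:, 0]] = 255
    return cleaned
```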

    Detection and Recognition of Traffic Sign using FCM with SVM

    This paper focuses on traffic sign and board detection systems placed on roads and highways. The system aims at real-time traffic sign and traffic board recognition, i.e. localizing what type of traffic sign and traffic board appears in which area of an input image at a fast processing time. Our detection module is based on the proposed extraction and classification of traffic signs, built upon a colour probability model using Haar feature extraction and a colour Histogram of Oriented Gradients (HOG). The HOG technique is used to convert the original image to grayscale and then applies RGB for the foreground. The Support Vector Machine (SVM) then fetches the object from the above result and compares it with the database. At the same time, the Fuzzy C-means clustering (FCM) technique takes the same output from the above result and compares it with the database images. By using this method, the accuracy of identifying the signs can be improved, and new signals can be updated dynamically. The goal of this work is to provide an optimized prediction for the given sign.
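
    The sketch below shows only the HOG-plus-SVM recognition path in a simplified form; the colour probability model, Haar features and FCM stage are not reproduced. The patch size, HOG parameters and SVM kernel are illustrative assumptions.

```python
# Hedged sketch of HOG feature extraction followed by SVM classification.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_features(image_gray):
    patch = resize(image_gray, (64, 64))          # normalise sign patch size
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_sign_classifier(train_images, train_labels):
    X = np.array([hog_features(img) for img in train_images])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    clf.fit(X, train_labels)
    return clf

def classify_sign(clf, image_gray):
    return clf.predict([hog_features(image_gray)])[0]
```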

    Real-Time Video Road Sign Detection And Tracking Using Image Processing And Autonomous Car

    Detection and monitoring of real-time road signs is becoming an important topic of study in the autonomous car industry. The number of car users in Malaysia has risen every year, as has the rate of car crashes. The different types, shapes and colours of road signs lead drivers to neglect them, and this attitude contributes to a high rate of accidents. The purpose of this paper is to implement image processing using real-time video Road Sign Detection and Tracking (RSDT) with an autonomous car. The detection of road signs is carried out using video and image processing techniques controlled in Python, applying a deep learning process to detect an object in a video's motion. The features extracted from the video frame then undergo template matching in the recognition process, which is based on the database. The experiment at a fixed distance shows an accuracy of 99.9943%, while the experiment at various distances showed an inversely proportional relation between distance and accuracy. The system was also able to detect and recognize five types of road signs using a convolutional neural network. Lastly, the experimental results proved the system's capability to detect and recognize road signs accurately.
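
    A hedged sketch of a small convolutional network of the kind such a recognition stage might use, written with tf.keras; the layer sizes, 48x48 input resolution and five-class output are assumptions for illustration, not the architecture reported in the paper.

```python
# Hedged sketch: a compact CNN classifier for road sign patches.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_cnn(num_classes=5, input_shape=(48, 48, 3)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Usage (assuming x_train is N x 48 x 48 x 3 and y_train holds class ids):
# model = build_sign_cnn()
# model.fit(x_train, y_train, epochs=10, batch_size=32)
```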

    Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges

    The automatic traffic sign detection and recognition (TSDR) system is a very important research topic in the development of advanced driver assistance systems (ADAS). Investigations of vision-based TSDR have received substantial interest in the research community, mainly motivated by three factors: detection, tracking and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey of traffic sign detection, tracking and classification. The details of the algorithms, methods and their specifications for detection, tracking and classification are investigated and summarized in tables along with the corresponding key references. A comparative study in each section evaluates the TSDR data, performance metrics and their availability. Current issues and challenges of the existing technologies are illustrated with brief suggestions and a discussion of the progress of driver assistance system research in the future. This review will hopefully lead to increasing efforts towards the development of future vision-based TSDR systems.

    Curve Sign Inventorying Method Using Smartphones and Deep Learning Technologies

    The objective of the proposed research is to develop and assess a system using smartphones and deep learning technologies to automatically establish an intelligent and sustainable curve sign inventory from videos. The Manual on Uniform Traffic Control Devices (MUTCD) is the national standard that defines the requirements for transportation asset installation and maintenance. The proposed system is one component of a larger methodology whose purpose is to accomplish frequent and cost-effective MUTCD curve sign compliance checking and other curve safety checking in order to reduce the number of deadly crashes on curves. To automatically build an effective sign inventory from videos, four modules are needed: sign detection, classification, tracking and localization. For this purpose, a pipeline was developed in the past by former students of the Transportation Laboratory of Georgia Tech. However, this pipeline is not accurate enough, and its different modules have never been critically tested and assessed. Therefore, the objective of this study is to improve the different modules, particularly the detection module, which is the most important module of the pipeline, and to critically assess these improved modules to determine the pipeline's ability to build an effective sign inventory. The proposed system has been tested and assessed in real conditions on a mountain road with many curves and curve signs; the detection module was able to detect every single curve sign with a very low number of detected non-curve signs (false positives), resulting in a precision of 0.97 and a recall of 1. The other modules also showed very promising results. Overall, this study demonstrates that the proposed system is suitable for building an accurate curve sign inventory that can be used by transportation agencies to get a precise idea of the condition of the curve sign networks on a particular road.
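
    For reference, the detection metrics quoted above can be reproduced from raw counts as follows; the counts in the usage comment are illustrative, not the study's actual numbers.

```python
# Small worked example of detection precision and recall.
def precision_recall(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# e.g. 97 correctly detected curve signs, 3 false detections, 0 misses:
# precision_recall(97, 3, 0) -> (0.97, 1.0)
```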

    CIRCULAR TRAFFIC SIGN CLASSIFICATION USING HOG-BASED RING PARTITIONED MATCHING

    This paper presents a technique to classify circular traffic signs based on HOG (histogram of oriented gradients) and ring-partitioned matching. The method divides an image into several ring areas and calculates the HOG feature on each ring area. In the matching process, a weight is assigned to each ring for calculating the distance of the HOG feature between the tested image and the reference image. The experimental results show that the proposed algorithm achieves a high classification rate of 97.8% without the need for many prepared sample images. The results also show that the best values of the number of orientation bins and the cell size of the HOG parameters are 5 and 10 x 10 pixels, respectively. Index terms: HOG, traffic sign classification, ring partitioned, template matching.
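
    A simplified sketch of the ring-partitioned idea: the image is split into concentric rings, a gradient-orientation histogram (a stand-in for the per-ring HOG feature) is computed for each ring, and a weighted sum of per-ring distances compares a test image with a reference. Ring count, bin count and weights are assumptions for illustration.

```python
# Hedged sketch: per-ring orientation histograms and weighted matching.
import cv2
import numpy as np

def ring_histograms(gray, num_rings=4, num_bins=5):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    max_r = radius.max()
    hists = []
    for r in range(num_rings):
        mask = (radius >= r * max_r / num_rings) & (radius < (r + 1) * max_r / num_rings)
        hist, _ = np.histogram(ang[mask], bins=num_bins, range=(0, np.pi),
                               weights=mag[mask])
        hists.append(hist / (hist.sum() + 1e-8))      # normalise each ring
    return hists

def ring_distance(hists_a, hists_b, weights=None):
    weights = weights or [1.0] * len(hists_a)
    return sum(w * np.linalg.norm(a - b)
               for w, a, b in zip(weights, hists_a, hists_b))
```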

    Extracting Semantic Information from Visual Data: A Survey

    The traditional environment maps built by mobile robots include both metric and topological maps. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users, who normally rely on the conceptual knowledge or semantic content of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of vision-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.