
    Generalized Completed Local Binary Patterns for Time-Efficient Steel Surface Defect Classification

    Efficient defect classification is one of the most important preconditions for online quality inspection of hot-rolled strip steel. It is extremely challenging owing to varied defect appearances, large intraclass variation, ambiguous interclass distances, and unstable gray values. In this paper, a generalized completed local binary patterns (GCLBP) framework is proposed, and two variants, improved completed local binary patterns (ICLBP) and improved completed noise-invariant local-structure patterns (ICNLP), are developed under this framework for steel surface defect classification. Unlike conventional local binary pattern variants, descriptive information hidden in nonuniform patterns is exploited for better defect representation. This paper focuses on the following aspects. First, a lightweight search algorithm is established for exploiting the dominant nonuniform patterns (DNUPs). Second, a hybrid pattern-code mapping mechanism is proposed to encode all the uniform patterns and DNUPs. Third, feature extraction is carried out under the GCLBP framework. Finally, histogram matching is efficiently accomplished by a simple nearest-neighbor classifier. Classification accuracy and time efficiency are verified on a widely recognized texture database (Outex) and a real-world steel surface defect database [Northeastern University (NEU)]. The experimental results indicate that the proposed method can be widely applied in online automatic optical inspection instruments for hot-rolled strip steel.
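
    The following is a minimal sketch of the general recipe behind this family of methods, assuming grayscale defect images and labels are already available: LBP codes are histogrammed per image and a nearest-neighbor classifier matches histograms with a chi-square distance. It uses plain uniform LBP from scikit-image rather than the GCLBP/ICLBP/ICNLP descriptors themselves, whose dominant-nonuniform-pattern mining and code mapping are not reproduced here.

    import numpy as np
    from skimage.feature import local_binary_pattern

    P, R = 8, 1  # neighborhood size and radius (assumed values)

    def lbp_histogram(image):
        """Normalized histogram of uniform LBP codes for one grayscale image."""
        codes = local_binary_pattern(image, P, R, method="uniform")
        n_bins = P + 2  # the uniform mapping yields P + 2 distinct codes
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        return hist / hist.sum()

    def chi_square(h1, h2, eps=1e-10):
        """Chi-square distance between two normalized histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def classify(query_image, train_images, train_labels):
        """Assign the label of the nearest training histogram."""
        q = lbp_histogram(query_image)
        dists = [chi_square(q, lbp_histogram(t)) for t in train_images]
        return train_labels[int(np.argmin(dists))]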

    Traffic sign detection using a cascade method with fast feature extraction and saliency test

    Automatic traffic sign detection is challenging due to the complexity of scene images, and fast detection is required in real applications such as driver assistance systems. In this paper, we propose a fast traffic sign detection method based on a cascade with a saliency test and neighboring-scale awareness. In the cascade, feature maps of several channels are extracted efficiently using approximation techniques, and sliding windows are pruned hierarchically using coarse-to-fine classifiers and the correlation between neighboring scales. The cascade system has only one free parameter, while the multiple thresholds are selected by a data-driven approach. To further increase speed, we also use a novel saliency test based on mid-level features to pre-prune background windows. Experiments on two public traffic sign datasets show that the proposed method achieves competitive performance and runs 27 times as fast as most of the state-of-the-art methods.
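
    As a rough illustration of the pruning idea, the sketch below scores candidate windows with a cheap saliency cue before passing the survivors to a more expensive classifier. The channel features, neighboring-scale correlation, and data-driven thresholds of the paper are not reproduced; the contrast-based saliency score, window size, and thresholds are placeholder assumptions.

    import numpy as np

    def sliding_windows(height, width, win=64, stride=16):
        """Yield (y, x) top-left corners of square candidate windows."""
        for y in range(0, height - win + 1, stride):
            for x in range(0, width - win + 1, stride):
                yield y, x

    def saliency_score(patch):
        """Cheap mid-level cue; local contrast used here as a stand-in."""
        return patch.std()

    def detect(image, classifier, win=64, stride=16, saliency_thr=10.0, score_thr=0.5):
        """Two-stage pruning: saliency test first, full classifier second."""
        detections = []
        for y, x in sliding_windows(*image.shape[:2], win=win, stride=stride):
            patch = image[y:y + win, x:x + win]
            if saliency_score(patch) < saliency_thr:  # stage 1: prune background
                continue
            if classifier(patch) >= score_thr:        # stage 2: expensive classifier
                detections.append((y, x, win, win))
        return detections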

    Advanced Driver-Assistance System with Traffic Sign Recognition for Safe and Efficient Driving

    Advanced Driver-Assistance Systems (ADAS) coupled with traffic sign recognition can lead to safer driving environments. This study presents a robust and accurate traffic sign detection system for ADAS using computer vision and machine learning. The unavailability of large local traffic sign datasets and the imbalance of traffic sign distributions are the key bottlenecks of this research. Hence, we work with support vector machines (SVM) on a custom-built, imbalanced dataset to build a lightweight model with excellent classification accuracy. The SVM model delivered its best performance with the radial basis kernel, C=10, and gamma=0.0001. In the proposed method, equal priority was given to processing (testing) time and accuracy, as traffic sign identification is time critical. The final accuracy was 87% (confidence interval 84%-90%) with a processing time of 0.64 s (confidence interval 0.57 s-0.67 s) for correct detection at testing, which demonstrates the effectiveness of the proposed method.
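
    A minimal sketch of the reported classifier settings with scikit-learn is shown below: an SVM with an RBF kernel, C=10, and gamma=0.0001. The feature extraction step and the custom dataset are not shown; a synthetic feature matrix stands in for them.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for the hand-crafted traffic-sign features used in the study.
    X, y = make_classification(n_samples=500, n_features=64, n_informative=16,
                               n_classes=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    # Reported settings: radial basis function kernel, C=10, gamma=0.0001.
    clf = SVC(kernel="rbf", C=10, gamma=0.0001)
    clf.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))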

    Motorcycles detection using Haar-like features and Support Vector Machine on CCTV camera image

    A traffic monitoring system allows operators to monitor and analyze each traffic point via CCTV cameras. However, it is difficult to monitor every traffic point all the time. This problem has led to the development of intelligent traffic monitoring systems based on computer vision, one feature of which is vehicle detection. Vehicle detection still poses a challenge, especially when dealing with motorcycles, which occupy the majority of the road in Indonesia. In this research, a motorcycle detection method using Haar-like features and a Support Vector Machine (SVM) on CCTV camera images is proposed. A set of preprocessing procedures is performed on the input image before Haar-like feature extraction. The features are then classified by a trained SVM model via a sliding-window technique to detect motorcycles. The test results show a 0.0 log-average miss rate and 0.9 average precision. Given the low miss rate and high precision, the proposed method offers a promising solution for detecting motorcycles in CCTV camera images.
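
    A minimal sketch of the Haar-like-feature and SVM training step is given below, assuming fixed-size grayscale patches already cropped from CCTV frames, with labels 1 (motorcycle) and 0 (background). The preprocessing chain and the sliding-window detector from the paper are not reproduced, and the linear SVM and feature types are assumptions.

    import numpy as np
    from skimage.transform import integral_image
    from skimage.feature import haar_like_feature
    from sklearn.svm import LinearSVC

    def haar_features(patch):
        """Extract edge-type (type-2) Haar-like features from one grayscale patch."""
        ii = integral_image(patch)
        return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                                 feature_type=["type-2-x", "type-2-y"])

    def train_detector(patches, labels):
        """Fit an SVM on Haar-like features of labeled patches (e.g. 24x24 windows)."""
        X = np.array([haar_features(p) for p in patches])
        clf = LinearSVC(C=1.0)
        clf.fit(X, labels)
        return clf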

    Detection of U.S. Traffic Signs


    Facial expression recognition system for stress detection

    Stress is the body's natural reaction to external and internal stimuli. Although natural, prolonged exposure to stressors can contribute to serious health problems. These reactions are reflected not only physiologically but also psychologically, translating into emotions and facial expressions. Given this relationship between the experience of stressful situations and the emotions expressed in response, a system was developed to classify facial expressions and thereby act as a stress detector. The proposed solution consists of two main blocks: a convolutional neural network that classifies facial expressions, and an application that uses this model to classify real-time images of the user's face and determine whether the user shows signs of stress. The application captures real-time images from the webcam, extracts the user's face, classifies the facial expression, and uses these classifications to assess whether signs of stress are present within a given time interval. As soon as the application detects signs of stress, it notifies the user. The classification model was built with transfer learning and fine-tuning, leveraging the pre-trained networks VGG16, VGG19, and Inception-ResNet V2; two classifier architectures were also evaluated for the transfer-learning process. After several experiments, VGG16 combined with a classifier consisting of a convolutional layer performed best at classifying stress-related emotions, achieving an MCC of 0.8969 on the test images of the KDEF dataset, 0.5551 on the Net Images dataset, and 0.4250 on CK+.
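
    A minimal sketch of the transfer-learning setup described above, using Keras: a frozen VGG16 base topped with a small classification head containing a convolutional layer, followed by a fine-tuning pass with the upper backbone layers unfrozen. The input size, head layout, seven-class output, and learning rates are assumptions; the exact architecture and training schedule of the dissertation are not reproduced.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7  # assumed number of facial-expression classes

    # Transfer learning: freeze the ImageNet-pretrained VGG16 backbone.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Conv2D(128, 3, activation="relu", padding="same"),  # convolutional head
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets

    # Fine-tuning: unfreeze the top of the backbone and retrain with a low learning rate.
    base.trainable = True
    for layer in base.layers[:-4]:
        layer.trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)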