
    Feature extraction of the wear label of carpets by using a novel 3D scanner

    In the textile industry, the quality of carpets is still determined through visual assessment by human experts. Human assessment is somewhat subjective, so there is a need for a more objective assessment that lends itself to automated systems. However, existing computer models are not yet capable of matching human expertise. Most attempts at automated assessment have focused on image analysis of two-dimensional images of worn carpet. These do not adequately capture the three-dimensional structure of the carpet, which is also evaluated by the experts, and the image processing is highly dependent on the lighting conditions. One previous attempt, however, used a laser scanner to obtain three-dimensional images of the carpet and processed them for carpet assessment. This paper describes the development of a new scanner, based on a structured light pattern, to acquire wear label characteristics in three dimensions. An accompanying feature-extraction technique based on local binary patterns (LBP) and the Kullback-Leibler divergence has been developed. We show that the new laser scanning system is less dependent on the lighting conditions and color of the carpet and obtains data points on a structured grid instead of sparse points. The new system is also more than five times cheaper, scans more than seven times faster, and is specifically designed for scanning carpets rather than general 3D objects. Previous attempts to classify carpet wear were based on several extracted features, only one of which - the height difference between worn and unworn parts - showed a good correlation of 0.70 with the carpet wear label. However, experiments demonstrate that our approach using the LBP technique gives promising results, with correlation factors from 0.89 to 0.99 between the Kullback-Leibler divergence and quality labels. This new laser scanner system is a significant step forward in the automated assessment of carpet wear using 3D images.
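
The abstract names the LBP-plus-Kullback-Leibler pipeline but not its exact formulation. A minimal sketch of that idea, assuming a basic 8-neighbour LBP computed on depth patches (function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def lbp_image(depth, radius=1):
    """Basic 8-neighbour local binary pattern codes for a 2-D depth map."""
    h, w = depth.shape
    codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.uint8)
    center = depth[radius:h - radius, radius:w - radius]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = depth[radius + dy:h - radius + dy,
                          radius + dx:w - radius + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(depth):
    """Normalised 256-bin histogram of LBP codes over a patch."""
    hist = np.bincount(lbp_image(depth).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence D(p || q) with smoothing against empty bins."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))
```

Comparing the LBP histogram of a worn patch against an unworn reference patch with the KL divergence would yield the scalar wear feature that is correlated with the quality label.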

    Robot Detection Using Gradient and Color Signatures

    Tasks that are simple for a human can be some of the most challenging for a robot. Finding and classifying objects in an image is a complex computer vision problem that computer scientists are constantly working to solve. In the context of the RoboCup Standard Platform League (SPL) competition, in which humanoid robots are programmed to play soccer autonomously, identifying other robots on the field is an example of this difficult computer vision problem. Without obstacle detection in RoboCup, the robotic soccer players are unable to move smoothly around the field and can be penalized for walking into another robot. This project uses gradient and color signatures to identify robots in an image as a novel approach to visual robot detection. The method, Fastgrad, is presented and analyzed in the context of the Bowdoin College Northern Bites codebase and then compared to other common methods of robot detection in RoboCup SPL.

    A Neural Network Method for Classification of Sunlit and Shaded Components of Wheat Canopies in the Field Using High-Resolution Hyperspectral Imagery

    (1) Background: Information-rich hyperspectral sensing, together with robust image analysis, is providing new research pathways in plant phenotyping. This combination facilitates the acquisition of spectral signatures of individual plant organs as well as providing detailed information about the physiological status of plants. Despite the advances in hyperspectral technology in field-based plant phenotyping, little is known about the characteristic spectral signatures of shaded and sunlit components in wheat canopies. Non-imaging hyperspectral sensors cannot provide spatial information; thus, they are not able to distinguish the spectral reflectance differences between canopy components. On the other hand, the rapid development of high-resolution imaging spectroscopy sensors opens new opportunities to investigate the reflectance spectra of individual plant organs, which leads to an understanding of canopy biophysical and chemical characteristics. (2) Method: This study reports the development of a computer vision pipeline to analyze ground-acquired imaging spectrometry with high spatial and spectral resolutions for plant phenotyping. The work focuses on the critical steps in the image analysis pipeline, from pre-processing to the classification of hyperspectral images. In this paper, two convolutional neural networks (CNN) are employed to automatically map wheat canopy components in shaded and sunlit regions and to determine their specific spectral signatures. The first method uses pixel vectors of the full spectral features as inputs to the CNN model, and the second method integrates the dimension reduction technique known as linear discriminant analysis (LDA) along with the CNN to increase feature discrimination and improve computational efficiency. (3) Results: The proposed technique alleviates the limitations and lack of separability inherent in existing pre-defined hyperspectral classification methods. It optimizes the use of hyperspectral imaging and ensures that the data provide information about the spectral characteristics of the targeted plant organs, rather than the background. We demonstrated that high-resolution hyperspectral imagery along with the proposed CNN model can be powerful tools for characterizing sunlit and shaded components of wheat canopies in the field. The presented method will provide significant advances in the determination and relevance of spectral properties of shaded and sunlit canopy components under natural light conditions.
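
The second method reduces each pixel's spectral vector with LDA before the CNN. A minimal numpy sketch of the LDA projection step only (the CNN is omitted; the function name and shapes are illustrative assumptions), taking pixel vectors `X` of shape `(n_samples, n_bands)` with class labels `y`:

```python
import numpy as np

def lda_reduce(X, y, n_components):
    """Linear discriminant analysis: project spectral pixel vectors X
    (n_samples, n_bands) onto the top discriminant axes."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_bands = X.shape[1]
    Sw = np.zeros((n_bands, n_bands))  # within-class scatter
    Sb = np.zeros((n_bands, n_bands))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalised eigenproblem Sw^-1 Sb; keep the strongest axes.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:n_components]
    W = eigvecs[:, order].real
    return X @ W  # reduced features that would feed the CNN classifier
```

At most `n_classes - 1` discriminant axes carry information, which is what makes LDA attractive for compressing hundreds of hyperspectral bands before classification.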

    Ballistics Image Processing and Analysis for Firearm Identification

    Firearm identification is an intensive and time-consuming process that requires physical interpretation of forensic ballistics evidence. As the level of violent crime involving firearms escalates, the number of firearms to be identified accumulates dramatically, creating demand for an automatic firearm identification system. This chapter proposes a new analytic system for automatic firearm identification based on cartridge and projectile specimens. We present an approach for capturing and storing surface images of spent projectiles at high resolution using a line-scan imaging technique for the projectile database, as well as a novel and effective FFT-based analysis technique for analyzing and identifying the projectiles.
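
The abstract does not specify the FFT-based analysis in detail. A plausible minimal sketch, assuming striation profiles from the line-scan image are compared by FFT-based circular cross-correlation (the function name and normalisation are assumptions, not the chapter's method):

```python
import numpy as np

def fft_similarity(profile_a, profile_b):
    """Compare two circumferential surface profiles (1-D height signals)
    via FFT-based circular cross-correlation. Returns the peak normalised
    correlation and the circular shift at which it occurs."""
    a = (profile_a - profile_a.mean()) / (profile_a.std() + 1e-12)
    b = (profile_b - profile_b.mean()) / (profile_b.std() + 1e-12)
    # Correlation theorem: cross-correlation = IFFT(FFT(a) * conj(FFT(b))).
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    corr /= len(a)  # unit-variance signals give a peak of 1 for a perfect match
    shift = int(np.argmax(corr))
    return corr[shift], shift
```

Because the FFT makes the comparison invariant to the rotation at which a projectile was scanned, matching reduces to thresholding the correlation peak across the database.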

    Eyes-Free Vision-Based Scanning of Aligned Barcodes and Information Extraction from Aligned Nutrition Tables

    Visually impaired (VI) individuals struggle with grocery shopping and have to rely on friends, family, or grocery store associates to shop. ShopMobile 2 is a proof-of-concept system that allows VI shoppers to shop independently in a grocery store using only their smartphone. Unlike other assistive shopping systems that use dedicated hardware, this system is a software-only solution that relies on fast computer vision algorithms. It consists of three modules - an eyes-free barcode scanner, an optical character recognition (OCR) module, and a tele-assistance module. The eyes-free barcode scanner allows VI shoppers to locate and retrieve products by scanning barcodes on shelves and on products. The OCR module allows shoppers to read nutrition facts on products, and the tele-assistance module allows them to obtain help from sighted individuals at remote locations. This dissertation discusses, provides implementations of, and presents laboratory and real-world experiments related to all three modules.

    Identity verification using computer vision for automatic garage door opening

    We present a novel system for automatic identification of vehicles as part of an intelligent access control system for a garage entrance. Using a camera in the door, cars are detected and matched against a database of authenticated cars. Once a car is detected, License Plate Recognition (LPR) is applied using character detection and recognition. The detected license plate number is matched against the database of authenticated plates, and if the car is allowed access, the door opens automatically. The recognition of both cars and characters (LPR) is performed using state-of-the-art shape descriptors and a linear classifier. Experiments have revealed that 90% of all cars are correctly authenticated from a single image only. Analysis of the computational complexity shows that an embedded implementation allows user authentication within approximately 300 ms, which is well within the application constraints.
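
The abstract does not name the exact shape descriptors. A toy sketch of the descriptor-plus-linear-classifier idea, using a simplified gradient-orientation histogram as a stand-in for the paper's descriptors (all names and parameters here are illustrative):

```python
import numpy as np

def gradient_descriptor(patch, bins=8):
    """Coarse shape descriptor for a character patch: a magnitude-weighted
    histogram of unsigned gradient orientations."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold direction into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def classify(descriptor, weights, biases):
    """Linear classifier: index of the highest-scoring class."""
    scores = weights @ descriptor + biases
    return int(np.argmax(scores))
```

A descriptor like this is cheap enough to evaluate for every candidate character, which is consistent with the sub-second embedded budget reported in the abstract.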