
    Computer vision reading on stickers and direct part marking on horticultural products: challenges and possible solutions

    Traceability of products from production to the consumer has driven technological advances in product identification. Identification has developed from traditional one-dimensional barcodes (EAN-13, Code 128, etc.) to two-dimensional (2D) barcodes such as QR (Quick Response) and Data Matrix codes. Over the last two decades there has also been increased use of Radio Frequency Identification (RFID) and Direct Part Marking (DPM) using lasers for product identification in agriculture. However, in agriculture there are still considerable challenges to adopting barcode, RFID and DPM technologies, unlike in industry where these technologies have been very successful. This study had three main objectives. Firstly, the effect of speed, dirt, moisture and bar width on barcode detection was determined, both in the laboratory and at a flower-producing company, Brandkamp GmbH. This part of the study developed algorithms for the automated detection of Code 128 barcodes under rough production conditions. Secondly, the effects of low laser marking energy, barcode size, print growth, colour and contrast on the decoding of 2D Data Matrix codes marked directly on apples were investigated. Three apple varieties (Golden Delicious, Kanzi and Red Jonaprince) were marked with various levels of energy and different barcode sizes, and the markings were evaluated using image processing in Halcon 11.0.1 (MVTec). Finally, the third objective was to evaluate both algorithms, for 1D and 2D barcodes. According to the results, increasing the speed and the angle of inclination of the barcode decreased barcode recognition, and increasing the dirt on the surface of the barcode reduced successful detection. However, the proposed algorithm achieved 100% detection of the Code 128 barcode at the company's production speed (0.15 m/s). Overall, the results from the company showed that the image-based system is a promising option for automation in horticultural production systems and overcomes the limitations of laser barcode readers. The results for apples showed that laser energy, barcode size, print growth, type of product, contrast between the markings and the colour of the product, the inertia of the laser system and the days of storage all, individually or in combination, influence the readability of laser-marked Data Matrix codes on apples. Detection of the Data Matrix code on Kanzi and Red Jonaprince was poor due to the low contrast between the markings and their skins. The proposed algorithm currently works successfully on Golden Delicious, with 100% detection over 10 days of storage using a marking energy of 0.108 J mm⁻² and a barcode size of 10 × 10 mm². This shows the potential of marking barcodes not only on apples but also on other agricultural products for real-time production.
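
As an illustration of the kind of decoding step such a system performs, the following is a minimal sketch using open-source readers rather than the Halcon 11.0.1 pipeline used in the study; the pyzbar and pylibdmtx packages (with their native libraries) are assumed to be installed, and the image file names are hypothetical.

```python
# Minimal sketch (not the Halcon pipeline from the study): decode a Code 128
# sticker and a laser-marked Data Matrix code with open-source readers.
# File names are hypothetical placeholders.
from PIL import Image
from pyzbar.pyzbar import decode as decode_1d          # Code 128, EAN-13, ...
from pylibdmtx.pylibdmtx import decode as decode_dm    # Data Matrix

label = Image.open("code128_label.png").convert("L")    # grayscale sticker image
for result in decode_1d(label):
    print(result.type, result.data.decode("ascii"))     # e.g. CODE128 <payload>

apple = Image.open("marked_apple.png").convert("L")     # laser-marked fruit surface
for result in decode_dm(apple, timeout=500):            # timeout in milliseconds
    print("DATAMATRIX", result.data.decode("ascii"))
```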

    A Pattern Classification Based approach for Blur Classification

    Blur type identification is one of the most crucial steps in image restoration. In blind restoration, it is generally assumed that the blur type is known prior to restoration; however, this is not practical in real applications. Blur type identification is therefore highly desirable before a blind restoration technique is applied to a blurred image. This paper presents an approach to categorize blur into three classes, namely motion, defocus, and combined blur. Curvelet-transform-based energy features are used to characterize the blur patterns, and a neural network is designed for classification. The simulation results demonstrate the precision of the proposed approach.
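
The general idea (energy features from a multiscale transform feeding a small neural network for three-way blur classification) can be sketched as follows. Since a curvelet transform is not part of standard Python packages, wavelet sub-band energies computed with PyWavelets stand in for the paper's curvelet-domain features, and the arrays `train_images`, `labels`, and `test_image` are assumed inputs.

```python
# Illustrative sketch: multiscale energy features + small neural network.
# Wavelet sub-band energies stand in for the paper's curvelet-domain energies.
# `train_images`, `labels` (0=motion, 1=defocus, 2=combined) and `test_image`
# are assumed, pre-prepared inputs.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def energy_features(image, wavelet="db4", levels=3):
    """Normalized energy of each detail sub-band of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:                  # skip the approximation band
        for band in (cH, cV, cD):
            feats.append(np.sum(band ** 2) / band.size)
    return np.log1p(np.array(feats))               # compress the dynamic range

X = np.array([energy_features(img) for img in train_images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict([energy_features(test_image)]))  # -> motion / defocus / combined
```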

    The selection and evaluation of a sensory technology for interaction in a warehouse environment

    In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in completing daily tasks on computerised systems. The increasing variety of bio-sensing and wearable technologies on the market has propelled designers towards more efficient, effective and natural User Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric input to a system and improving sports performance. Workers in a warehouse environment experience various challenges. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise is also common in industrial environments and is a major cause of communication problems, which has reduced the popularity of verbal interfaces to computer applications such as Warehouse Management Systems. Another factor that affects worker performance is action slips caused by a lack of concentration during, for example, routine picking activities; these can have a negative impact on job performance and lead a worker to execute a task incorrectly. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural behaviours of humans which are applicable to interaction with a computerised environment were described and discussed. The interactive behaviours included vision, hearing, speech production and physiological movement, and other natural human behaviours such as paying attention, action slips and the action of counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a particular biological or physiological activity. Machine Learning (ML) is then applied to train a computer system to detect these features and convert them into a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand and finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface electromyography (sEMG) signals acquired by the MYO Armband around the forearm muscles. Using the ANN, the MCIp achieved a gesture-recognition accuracy of 34.87% in the hands-free situation.
The MCIp furthermore enabled users to provide numeric input to the system hands-full with an accuracy of 59.7% after a training session of only 10 seconds per gesture. The results were obtained with eight participants. Similar experimentation with the MYO Armband had not been found reported in the literature at the time this document was submitted. Based on this novel experimentation, the main contribution of this research study is the suggestion that the MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to applications in a warehouse where noise and hands-full activities pose challenges. Future work to improve its accuracy is proposed.
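
A minimal sketch of this general approach (not the MCIp itself) is shown below: simple per-channel features are extracted from windows of 8-channel MYO sEMG data and classified with a small artificial neural network. The arrays `emg_windows` and `gesture_labels` are assumed, pre-recorded inputs, and scikit-learn's MLPClassifier stands in for the thesis's ANN.

```python
# Sketch: per-channel sEMG features from the 8-channel MYO Armband fed to a
# small ANN. `emg_windows` (shape: n_windows x 8 channels x samples per window)
# and `gesture_labels` are assumed, pre-recorded inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def emg_features(window):
    """Root-mean-square and mean absolute value per channel (16 features total)."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    mav = np.mean(np.abs(window), axis=1)
    return np.concatenate([rms, mav])

X = np.array([emg_features(w) for w in emg_windows])
X_train, X_test, y_train, y_test = train_test_split(
    X, gesture_labels, test_size=0.25, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=0)
ann.fit(X_train, y_train)
print("gesture recognition accuracy:", ann.score(X_test, y_test))
```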

    Vision Based Extraction of Nutrition Information from Skewed Nutrition Labels

    An important component of a healthy diet is the comprehension and retention of nutritional information and an understanding of how different food items and nutritional constituents affect our bodies. In the U.S. and many other countries, nutritional information is primarily conveyed to consumers through nutrition labels (NLs), which can be found on all packaged food products. However, it can be challenging to utilize all the information available in these NLs, even for health-conscious consumers, as they might not be familiar with nutritional terms or may find it difficult to integrate nutritional data collection into their daily activities due to lack of time, motivation, or training. It is therefore desirable to automate this data collection and interpretation process with computer vision algorithms that extract nutritional information from NLs, because doing so improves the user's ability to engage in continuous nutritional data collection and analysis. To make nutritional data collection more manageable and enjoyable for users, we present a Proactive NUTrition Management System (PNUTS). PNUTS seeks to shift current research and clinical practices in nutrition management toward persuasion, automated nutritional information processing, and context-sensitive nutrition decision support. PNUTS consists of two modules. The first is a barcode scanning module which runs on smartphones and is capable of vision-based localization of One Dimensional (1D) Universal Product Code (UPC) and International Article Number (EAN) barcodes with relaxed pitch, roll, and yaw camera alignment constraints. The algorithm localizes barcodes in images by computing Dominant Orientations of Gradients (DOGs) of image segments and grouping smaller segments with similar DOGs into larger connected components. Connected components that pass given morphological criteria are marked as potential barcodes. The algorithm is implemented in a distributed, cloud-based system. The system's front end is a smartphone application that runs on Android smartphones with Android 4.2 or higher. The system's back end is deployed on a five-node Linux cluster where images are processed. The algorithm was evaluated on a corpus of 7,545 images extracted from 506 videos of bags, bottles, boxes, and cans in a supermarket. The DOG algorithm was coupled to our in-place scanner for 1D UPC and EAN barcodes. The scanner receives from the DOG algorithm the rectangular planar dimensions of a connected component and the component's dominant gradient orientation angle, referred to as the skew angle. The scanner draws several scan lines at that skew angle within the component to recognize the barcode in place without any rotations. The scanner coupled to the localizer was tested on the same corpus of 7,545 images. Laboratory experiments indicate that the system can localize and scan barcodes of any orientation in the yaw plane, of up to 73.28 degrees in the pitch plane, and of up to 55.5 degrees in the roll plane. The videos have been made public for all interested research communities to replicate our findings or to use them in their own research. The front-end Android application is available for free download on Google Play under the title NutriGlass. This module is also coupled to a comprehensive NL database from which nutritional information can be retrieved on demand. Currently our NL database contains more than 230,000 products.
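
A simplified sketch of the dominant-orientation-of-gradients idea follows: the image is tiled, each tile's dominant gradient orientation and gradient coherence are estimated, tiles with strong, highly aligned gradients (barcode-like texture) are kept, and adjacent tiles are grouped by connected-component labelling. The tile size and thresholds are illustrative choices rather than the dissertation's tuned parameters, and the full system would additionally pass each component's dominant orientation (the skew angle) on to the scanner.

```python
# Simplified DOG-style barcode localization: tile the image, keep tiles whose
# gradients are strong and highly aligned, and group them into candidate
# barcode regions. Tile size and thresholds are illustrative only.
import cv2
import numpy as np

def localize_barcode_regions(gray, tile=32, coherence_thresh=0.6, energy_thresh=1e4):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    h, w = gray.shape
    mask = np.zeros((h // tile, w // tile), np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            bx = gx[i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            by = gy[i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            mag = np.hypot(bx, by)
            if mag.sum() < energy_thresh:          # too little gradient energy
                continue
            # Dominant orientation via a magnitude-weighted angle histogram;
            # "coherence" = fraction of gradient magnitude in the dominant bin.
            ang = np.arctan2(by, bx) % np.pi
            hist, _ = np.histogram(ang, bins=18, range=(0, np.pi), weights=mag)
            if hist.max() / hist.sum() > coherence_thresh:
                mask[i, j] = 255
    # Group aligned tiles into candidate barcode components.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for k in range(1, num):
        x, y, bw, bh, area = stats[k]
        if area >= 2:                               # simple morphological criterion
            boxes.append((x*tile, y*tile, bw*tile, bh*tile))
    return boxes
```
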
The second module of PNUTS is an algorithm whose objective is to determine the text skew angle of an NL image without constraining the angle's magnitude. The horizontal, vertical, and diagonal matrices of the two-dimensional (2D) Haar Wavelet Transform are used to identify 2D points with significant intensity changes. The set of points is bounded with a minimum-area rectangle whose rotation angle is the text's skew. The algorithm's performance is compared with the performance of five text skew detection algorithms on 1001 U.S. nutrition label images and 2200 single- and multi-column document images in multiple languages. To ensure the reproducibility of the reported results, the source code of the algorithm and the image data have been made publicly available. If the skew angle is estimated correctly, optical character recognition (OCR) techniques can be used to extract nutrition information.
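
The skew-detection step lends itself to a compact sketch: the detail coefficients of a 2D Haar wavelet decomposition mark points of significant intensity change, and the rotation angle of the minimum-area rectangle bounding those points estimates the text skew. The percentile threshold is an illustrative choice, and the angle convention of cv2.minAreaRect varies between OpenCV versions, so the sign handling here is only indicative.

```python
# Sketch: Haar detail coefficients -> significant points -> min-area rectangle
# whose rotation is the text skew. Threshold and sign handling are illustrative.
import cv2
import numpy as np
import pywt

def estimate_skew_angle(gray, keep_percentile=99.0):
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)        # combined detail energy
    thresh = np.percentile(detail, keep_percentile)
    ys, xs = np.nonzero(detail >= thresh)                # significant 2-D points
    points = np.column_stack([xs, ys]).astype(np.float32)
    # Coordinates are in the half-resolution wavelet grid; the angle is unaffected.
    (_, _), (w, h), angle = cv2.minAreaRect(points)      # angle in degrees
    return angle if w >= h else angle - 90               # skew of the long side

# skew = estimate_skew_angle(cv2.imread("nutrition_label.png", cv2.IMREAD_GRAYSCALE))
```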

    A regularization approach to blind deblurring and denoising of QR barcodes

    QR bar codes are prototypical images for which part of the image is known a priori (the required patterns). Open-source bar code readers, such as ZBar, are readily available. We exploit both of these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
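
One plausible form of such a regularization functional (a sketch of the general approach, not necessarily the exact energy used in the paper) combines a blur-fit term, total-variation regularization of the image, and a term anchoring the reconstruction to the a priori known patterns:

```latex
% Sketch of a blind-deblurring energy for a QR code: f is the observed blurred,
% noisy code, u the reconstructed image, k the unknown blur kernel, u_0 the
% known required patterns on the region \Omega_0, and \lambda, \mu > 0 weights.
\min_{u,\,k}\; \frac{1}{2}\,\| k \ast u - f \|_{2}^{2}
\;+\; \lambda\,\mathrm{TV}(u)
\;+\; \frac{\mu}{2}\,\| u - u_{0} \|_{L^{2}(\Omega_{0})}^{2},
\qquad k \ge 0,\ \textstyle\int k = 1 .
```

Here the last term is what exploits the known finder and timing patterns, while the constraints on k keep the estimated kernel a valid blur.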

    Design of an Ultra-wideband Radio Frequency Identification System with Chipless Transponders

    State-of-the-art commercially available radio-frequency identification (RFID) transponders are usually composed of an antenna and an application-specific integrated circuit chip, which still makes them very costly compared to the well-established barcode technology. Therefore, a novel low-cost RFID system based on passive chipless RFID transponders manufactured from conductive strips on flexible substrates is proposed in this work. The chipless RFID transponders follow a specific structural design whose aim is to modify the shape of the impinging electromagnetic wave so as to embed an identification code in it and then backscatter the encoded signal to the reader. This dissertation comprises multidisciplinary research encompassing the design of low-cost chipless RFID transponders with a novel frequency coding technique; unlike approaches usually found in the literature, this technique considers the communication channel effects and assigns a unique frequency response to each transponder. Hence, the identification codes differ enough to reduce the detection error and improve their automatic recognition by the reader under normal operating conditions. The chipless RFID transponders are manufactured using different materials and state-of-the-art mass-production fabrication processes, such as printed electronics. Moreover, two different reader front ends working in the ultra-wideband (UWB) frequency range are used to interrogate the chipless RFID transponders. The first is built from high-performance off-the-shelf components following the stepped frequency modulation (SFM) radar principle, and the second is a commercially available impulse radio (IR) radar. Finally, the two readers are programmed with algorithms based on the conventional minimum distance and maximum likelihood detection techniques, considering the whole transponder radio frequency (RF) response instead of following the commonly used approach of focusing on specific parts of the spectrum to detect dips or peaks. The programmed readers automatically identify when a chipless RFID transponder is placed within their interrogation zones and proceed to the successful recognition of its embedded identification code, yielding two novel, fully automatic SFM- and IR-RFID readers for chipless transponders. The SFM-RFID system is capable of successfully decoding up to eight different chipless RFID transponders placed sequentially at a maximum reading range of 36 cm.
The IR-RFID system decodes up to four sequentially placed and two simultaneously placed different chipless RFID transponders within a 50 cm range.
Table of contents:
Front matter: Acknowledgments; Abstract; Kurzfassung; Table of Contents; Index of Figures; Index of Tables; Index of Abbreviations; Index of Symbols
1 Introduction: 1.1 Motivation; 1.2 Scope of Application; 1.3 Objectives and Structure
2 Fundamentals of the RFID Technology: 2.1 Automatic Identification Systems Background (2.1.1 Barcode Technology; 2.1.2 Optical Character Recognition; 2.1.3 Biometric Procedures; 2.1.4 Smart Cards; 2.1.5 RFID Systems); 2.2 RFID System Principle (2.2.1 RFID Features); 2.3 RFID with Chipless Transponders (2.3.1 Time Domain Encoding; 2.3.2 Frequency Domain Encoding); 2.4 Summary
3 Manufacturing Technologies: 3.1 Organic and Printed Electronics (3.1.1 Substrates; 3.1.2 Organic Inks; 3.1.3 Screen Printing; 3.1.4 Flexography); 3.2 The Printing Process; 3.3 A Fabrication Alternative with Aluminum or Copper Strips; 3.4 Fabrication Technologies for Chipless RFID Transponders; 3.5 Summary
4 UWB Chipless RFID Transponder Design: 4.1 Scattering Theory (4.1.1 Radar Cross-Section Definition; 4.1.2 Radar Absorbing Material's Principle; 4.1.3 Dielectric Multilayers Wave Matrix Analysis; 4.1.4 Frequency Selective Surfaces); 4.2 Double-Dipoles UWB Chipless RFID Transponder (4.2.1 An Infinite Double-Dipole Array; 4.2.2 Double-Dipoles UWB Chipless Transponder Design; 4.2.3 Prototype Fabrication); 4.3 UWB Chipless RFID Transponder with Concentric Circles (4.3.1 Concentric Circles UWB Chipless Transponder; 4.3.2 Concentric Rings UWB Chipless RFID Transponder); 4.4 Concentric Octagons UWB Chipless Transponders (4.4.1 Concentric Octagons UWB Chipless Transponder Design 1; 4.4.2 Concentric Octagons UWB Chipless Transponder Design 2); 4.5 Summary
5 RFID Readers for Chipless Transponders: 5.1 Background (5.1.1 The Radar Range Equation; 5.1.2 Range Resolution; 5.1.3 Frequency Band Selection); 5.2 Frequency Domain Reader Test System (5.2.1 Stepped Frequency Waveforms; 5.2.2 Reader Architecture; 5.2.3 Test System Results); 5.3 Time Domain Reader (5.3.1 Novelda Radar; 5.3.2 Test System Results); 5.4 Summary
6 Detection of UWB Chipless RFID Transponders: 6.1 Background; 6.2 The Communication Channel (6.2.1 AWGN Channel Modeling and Detection; 6.2.2 Free-Space Path Loss Modeling and Normalization); 6.3 Detection and Decoding of Chipless RFID Transponders (6.3.1 Minimum Distance Detector; 6.3.2 Maximum Likelihood Detector; 6.3.3 Correlator Detector; 6.3.4 Test Results); 6.4 Simultaneous Detection of Multiple UWB Chipless Transponders; 6.5 Summary
7 System Implementation: 7.1 SFM-UWB RFID System with CR-Chipless Transponders; 7.2 IR-UWB RFID System with COD1-Chipless Transponders; 7.3 Summary
Conclusion and Outlook; References; Publications; Appendix A: RCS Calculation Measurement Setups; Appendix B: Resistance and Skin Depth Calculation; Appendix C: List of Videos (Test Videos; Consortium Videos); Curriculum Vita
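
The minimum-distance detection rule mentioned in the abstract reduces, for an AWGN channel with equally likely transponders, to choosing the codebook entry closest in Euclidean distance to the whole measured frequency response; a minimal sketch follows, where `codebook` (mapping transponder IDs to reference responses sampled on the same frequency grid as the measurement) and `measurement` are assumed inputs.

```python
# Minimum-distance detection over the whole measured frequency response.
# Under AWGN with equal priors, maximum-likelihood detection reduces to this
# same rule. `codebook` (id -> reference response) and `measurement` are
# assumed inputs sampled on the same frequency grid.
import numpy as np

def detect_transponder(measurement, codebook):
    measurement = np.asarray(measurement, dtype=float)
    ids = list(codebook)
    refs = np.array([codebook[i] for i in ids], dtype=float)
    distances = np.linalg.norm(refs - measurement, axis=1)   # distance per candidate
    best = int(np.argmin(distances))
    return ids[best], distances[best]

# transponder_id, dist = detect_transponder(measured_response, reference_responses)
```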

    Blur Classification Using Segmentation Based Fractal Texture Analysis

    The objective of vision-based gesture recognition is to design a system that can understand human actions and convey the acquired information with the help of captured images. An image restoration step is required whenever an image becomes blurred during the acquisition process, since blurred images can severely degrade the performance of such systems. Image restoration recovers the true image from a degraded version; it is referred to as blind restoration if the blur information is unknown. Blur identification is essential before any blind restoration algorithm can be applied. This paper presents a blur identification approach that categorizes a hand gesture image into one of four classes: sharp, motion blur, defocus blur, or combined blur. The segmentation-based fractal texture analysis (SFTA) feature extraction algorithm is used to provide features to a neural-network-based classification system. The simulation results demonstrate the accuracy of the proposed method.
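
A simplified, SFTA-flavoured sketch of the approach is given below: the grey-level image is decomposed into binary images by a set of thresholds, and each binary image contributes its box-counting fractal dimension, mean grey level, and relative size as features for a neural-network classifier. Evenly spaced thresholds replace SFTA's two-threshold Otsu decomposition to keep the sketch short, and `train_images` and `blur_labels` are assumed inputs.

```python
# Simplified SFTA-style features + neural-network blur classifier.
# Evenly spaced thresholds stand in for SFTA's Otsu-based decomposition;
# `train_images` and `blur_labels` are assumed inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

def box_counting_dimension(binary):
    """Fractal (box-counting) dimension of the set of 'on' pixels."""
    sizes, counts = [], []
    n, k = min(binary.shape), 2
    while k < n // 2:
        boxes = 0
        for i in range(0, binary.shape[0], k):
            for j in range(0, binary.shape[1], k):
                if binary[i:i+k, j:j+k].any():
                    boxes += 1
        sizes.append(k)
        counts.append(max(boxes, 1))
        k *= 2
    if len(sizes) < 2:                      # image too small for a fit
        return 0.0
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

def sfta_like_features(gray, n_thresholds=4):
    feats = []
    for t in np.linspace(gray.min(), gray.max(), n_thresholds + 2)[1:-1]:
        binary = gray > t
        feats += [box_counting_dimension(binary),
                  gray[binary].mean() if binary.any() else 0.0,
                  binary.mean()]            # relative size of the binary region
    return np.array(feats)

X = np.array([sfta_like_features(img) for img in train_images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, blur_labels)                     # 0=sharp, 1=motion, 2=defocus, 3=combined
```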

    Eyes-Free Vision-Based Scanning of Aligned Barcodes and Information Extraction from Aligned Nutrition Tables

    Visually impaired (VI) individuals struggle with grocery shopping and have to rely on friends, family or grocery store associates for assistance. ShopMobile 2 is a proof-of-concept system that allows VI shoppers to shop independently in a grocery store using only their smartphone. Unlike other assistive shopping systems that use dedicated hardware, this system is a software-only solution that relies on fast computer vision algorithms. It consists of three modules: an eyes-free barcode scanner, an optical character recognition (OCR) module, and a tele-assistance module. The eyes-free barcode scanner allows VI shoppers to locate and retrieve products by scanning barcodes on shelves and on products. The OCR module allows shoppers to read nutrition facts on products, and the tele-assistance module allows them to obtain help from sighted individuals at remote locations. This dissertation discusses, provides implementations of, and presents laboratory and real-world experiments related to all three modules.
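
As a rough stand-in for the kind of OCR step the second module performs (not the dissertation's actual implementation), the following sketch reads text from a cropped, aligned nutrition-facts image with Tesseract and picks out lines mentioning common nutrients; the pytesseract package, the Tesseract engine, and the image file name are assumptions.

```python
# Stand-in OCR step: read an aligned nutrition-facts crop with Tesseract and
# keep lines mentioning common nutrients. File name is a hypothetical placeholder.
from PIL import Image
import pytesseract

nutrients = ("calories", "fat", "sodium", "carbohydrate", "protein", "sugar")

text = pytesseract.image_to_string(Image.open("nutrition_label_crop.png"))
for line in text.splitlines():
    if any(word in line.lower() for word in nutrients):
        print(line.strip())
```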