26 research outputs found

    Universal Barcode Detector via Semantic Segmentation

    Barcodes are used in many commercial applications, so fast and robust reading is important. There are many different types of barcode; some look similar, while others are completely different. In this paper we introduce a new fast and robust deep-learning detector based on a semantic segmentation approach. It can detect barcodes of any type simultaneously, both in document scans and in the wild, with a single model. The detector achieves state-of-the-art results on the ArTe-Lab 1D Medium Barcode Dataset with a detection rate of 0.995. Moreover, the detector can handle complicated object shapes such as very long but narrow or very small barcodes. The proposed approach can also identify the types of detected barcodes, and it runs in real time on a CPU, much faster than previous state-of-the-art approaches.
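    The abstract describes turning a per-pixel segmentation into detections. The paper's network itself is not shown here; the post-processing step it implies can be sketched as follows, assuming the model emits a per-pixel barcode probability map and that each connected component of the thresholded map is one detection (which naturally handles long, narrow, or tiny barcodes):

```python
from collections import deque
import numpy as np

def segmentation_to_boxes(prob_map, threshold=0.5):
    """Turn a per-pixel barcode probability map into bounding boxes.

    Each 4-connected component of the thresholded mask becomes one
    detection, so arbitrarily long/narrow or very small regions work.
    """
    mask = prob_map >= threshold
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # flood-fill (BFS) over one connected component
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny)
                            xs.append(nx)
                            q.append((ny, nx))
                # (x_min, y_min, x_max, y_max) of the component
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

    A production system would typically use a vectorized labeling routine instead of the Python BFS, but the logic is the same.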

    Comparison of manual and automatic barcode detection in rough horticultural production systems

    Attempts to automate production in the nurseries of flower-producing companies using barcode scanners have had little success. Stationary laser barcode scanners used for automation have failed because of the close proximity required between the barcode and the scanner, and because of factors such as speed, the angle of inclination of the barcode, damage to the barcode, and dirt on it. Moreover, laser barcode scanners are still operated manually in the nurseries, making the work laborious and time-consuming and thereby reducing productivity. An automated image-based barcode detection system was therefore proposed to address these problems. Experiments were conducted under different conditions with clean and artificially soiled Code 128 barcodes, both in the laboratory and under real production conditions at a flower-producing company. The images were analyzed with a dedicated algorithm developed in the Halcon software tool. Overall, the results from the company show that the image-based system is a promising candidate for automating work in the nursery.
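    The Halcon algorithm used in the study is not described in the abstract. As an illustration only, a classic first step of image-based 1-D barcode localization (finding regions with strong gradients in one direction and weak gradients in the perpendicular one, which is what a Code 128 bar pattern produces) can be sketched like this:

```python
import numpy as np

def barcode_likelihood(gray):
    """Score pixels for barcode-like texture: a 1-D barcode region has
    large horizontal intensity gradients (the bars) but almost no
    vertical gradient. Returns a non-negative per-pixel score map."""
    g = gray.astype(float)
    dx = np.abs(np.diff(g, axis=1))  # horizontal gradient magnitude
    dy = np.abs(np.diff(g, axis=0))  # vertical gradient magnitude
    # crop both to a common shape and subtract: bars score high,
    # flat areas and non-directional texture score near zero
    score = dx[:-1, :] - dy[:, :-1]
    return np.clip(score, 0, None)
```

    In a real pipeline this map would then be smoothed and thresholded, and the surviving regions passed to a decoder; soiling and damage mainly degrade the decoding step, not this localization step.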

    A local real-time bar detector based on the multiscale Radon transform

    We propose a local bar-shaped structure detector that works in real time on high-resolution images. It is based on the Radon transform, specifically on a multiscale variant that is especially fast because it works in integer arithmetic and does not use interpolation. The Radon transform conventionally operates on the whole image rather than locally; in this paper we show how stopping at the early stages of the multiscale Radon transform lets us locate structures locally. We also evaluate the performance of the algorithm running on the CPU, GPU, and DSP of mobile devices, processing images from the device's camera at acquisition time.
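    The integer-only multiscale idea can be sketched with the classic dyadic fast Hough/Radon recursion: strips of width 2^s are merged pairwise, and a line with total shift t across the doubled strip reuses the two half-strip sums with shift ⌊t/2⌋, one of them offset by ⌈t/2⌉. Stopping after a few merge stages yields line sums that are local to narrow strips, which is the "early stopping" behavior the abstract describes. This is a generic sketch, not the paper's exact formulation:

```python
import numpy as np

def fht_stage(img, stages):
    """Run the first `stages` merge steps of a dyadic fast Hough/Radon
    transform for mostly-horizontal lines (transpose the image for
    vertical ones). Integer-only, no interpolation.

    Returns acc[strip, t, y]: the sum over the dyadic line that enters
    strip `strip` at row y and drops t rows across the strip's width.
    Requires img width divisible by 2**stages.
    """
    h, w = img.shape
    # width-1 strips: one column each, only shift t = 0 is defined
    acc = img.T[:, None, :].astype(np.int64)
    for _ in range(stages):
        strips, shifts, _ = acc.shape
        new = np.empty((strips // 2, 2 * shifts, h), dtype=np.int64)
        for i in range(strips // 2):
            left, right = acc[2 * i], acc[2 * i + 1]
            for t in range(2 * shifts):
                # both halves use slope floor(t/2); the right half is
                # offset vertically by ceil(t/2) (cyclic, integer roll)
                new[i, t] = left[t // 2] + np.roll(right[t // 2],
                                                   -((t + 1) // 2))
        acc = new
    return acc
```

    With few stages the responses stay local (narrow strips, small shifts); running all log2(w) stages recovers a global transform.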

    Machine Learning-based Detection of Compensatory Balance Responses and Environmental Fall Risks Using Wearable Sensors

    Falls are the leading cause of fatal and non-fatal injuries among seniors worldwide, with serious and costly consequences. Compensatory balance responses (CBRs) are reactions to recover stability following a loss of balance, and can end in a fall if sufficient recovery mechanisms are not activated. While the performance of CBRs is a demonstrated risk factor for falls in seniors, the frequency, type, and underlying cause of these incidents in everyday life have not been well investigated. This study was motivated by the lack of fall-risk assessment methods suitable for continuous, long-term mobility monitoring of the geriatric population during activities of daily living and in their dwellings. Wearable sensor systems (WSS) offer a promising approach for continuous real-time detection of gait and balance behavior to assess the risk of falling during activities of daily living. To detect CBRs, we record movement signals (e.g., acceleration) and the activity patterns of four muscles involved in maintaining balance, using wearable inertial measurement units (IMUs) and surface electromyography (sEMG) sensors. To develop more robust detection methods, we investigate machine learning approaches (e.g., support vector machines, neural networks) and successfully detect lateral CBRs during normal gait, with accuracies of 92.4% and 98.1% using sEMG and IMU signals, respectively. Moreover, to detect environmental fall-related hazards that are associated with CBRs and affect the balance-control behavior of seniors, we employ an egocentric mobile vision system mounted on the participant's chest. Two algorithms are developed, one based on Gabor Barcodes and one on Convolutional Neural Networks. Our vision-based method detects 17 classes of environmental risk factors (e.g., stairs, ramps, curbs) with 88.5% accuracy. To the best of the authors' knowledge, this study is the first to develop and evaluate an automated vision-based method for fall-hazard detection.
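    The sensor side of such a pipeline is usually built on sliding-window time-domain features. The sketch below shows that windowing step with two features commonly used for sEMG/IMU signals (RMS and mean absolute value), paired with a simple nearest-centroid classifier as a stand-in for the paper's SVM and neural-network models; the feature set and classifier here are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def window_features(signal, win=100, step=50):
    """Slide a window over a 1-D sensor stream and emit per-window
    time-domain features: RMS and mean absolute value."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([np.sqrt(np.mean(w ** 2)), np.mean(np.abs(w))])
    return np.array(feats)

class NearestCentroid:
    """Minimal stand-in classifier: label each window by the closest
    class centroid in feature space."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :],
                           axis=2)
        return self.classes_[d.argmin(axis=1)]
```

    A CBR during gait shows up as a burst of muscle activity or acceleration, so even these two features separate quiet windows from response windows; the paper's SVM/NN models would replace the nearest-centroid step.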

    SmartGuia: Shopping Assistant for Blind People

    Master's dissertation (Mestrado Integrado) in Biomedical Engineering presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra. Given the limitations faced by people with disabilities, this work aims to improve their quality of life in one specific scenario. As part of an Internet of Things research group, we collaborated with three institutions, gathering requirements on the applicability of the Internet of Things to helping people with disabilities. We then chose and focused on the adopted work plan, one of the identified scenarios in which we could help. This work proposes a system to support blind people while shopping and navigating inside buildings. The proposed system helps the user move through the building and find desired or available services and products, with the goal of increasing the autonomy of blind people in day-to-day activities inside buildings, where they cannot access information provided through visual means. The system consists of a smartphone application that offers assisted navigation in public buildings, answering questions, guiding the person, and providing objective information about the available spaces, services, and products. It also includes an information system that identifies the intended destination and offers concise or detailed information about the available products or services. The system continuously determines the user's location, computes routes, guides the person inside the building, and identifies points of interest in the person's vicinity. It requires beacon-based localization technology, where the beacons may in turn use Bluetooth or Wi-Fi.
    Compared with the state of the art, our solution offers important advantages: it minimizes the user interaction that other systems require to choose destinations, products, or desired services; it handles the dynamic environment that results from the user walking, such as the varying number and position of beacons within range; and it uses devices the blind person already owns, plus beacons that may already exist or have a low deployment cost. In addition, we designed the system to be as easy and intuitive to use as possible, applying mechanisms accessible to blind people, and to cope with the dynamics of walking, in which the number and position of in-range beacons keep changing.
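    The dissertation abstract does not specify the localization algorithm, only that it uses Bluetooth or Wi-Fi beacons and must cope with beacons entering and leaving range. A common approach under those constraints, shown here purely as an assumed sketch, is a log-distance path-loss model plus an inverse-distance weighted centroid, which works with whatever subset of beacons is currently visible:

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model: estimate distance in meters from an
    RSSI reading in dBm. tx_power is the calibrated RSSI at 1 m and n is
    the environment's path-loss exponent (both assumed values here)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def weighted_centroid(beacons):
    """Estimate the user's (x, y) as a centroid of beacon positions
    weighted by inverse estimated distance, so nearer beacons dominate.

    `beacons` is a list of ((x, y), rssi) pairs for the beacons currently
    in range; the set can change freely as the user walks.
    """
    wsum = x = y = 0.0
    for (bx, by), rssi in beacons:
        w = 1.0 / max(rssi_to_distance(rssi), 1e-6)
        x += w * bx
        y += w * by
        wsum += w
    return (x / wsum, y / wsum)
```

    Because the estimate is recomputed from whichever beacons are audible at each instant, it degrades gracefully as the user walks out of range of some beacons and into range of others.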

    Active Vision-Based Guidance with a Mobile Device for People with Visual Impairments

    The aim of this research is to determine whether an active-vision system with a human in the loop can be implemented to guide a user with visual impairments in finding a target object. Active vision techniques have been applied successfully to various electro-mechanical object search and exploration systems to boost their effectiveness at a given task. However, despite the potential of intelligent visual sensor arrays to enhance a user's vision capabilities and alleviate some of the impacts that visual deficiencies have on day-to-day life, active vision with a human in the loop remains an open research topic. This thesis presents an active guidance system that uses visual input from an object detector, together with an initial understanding of a typical room layout, to generate navigation cues that assist a user with visual impairments in finding a target object. A complete guidance system prototype, with a new audio-based interface and a state-of-the-art object detector, is implemented on a mobile device and evaluated with a set of users in real environments. The results show that the active guidance approach performs well compared with unguided alternatives. This research highlights the potential benefits of the proposed active guidance controller and audio interface, which could enhance current vision-based guidance systems and travel aids for people with visual impairments.
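    The step of turning a detection into an audio navigation cue can be illustrated with a small mapping from a bounding box's horizontal position to a spoken direction, assuming an approximately linear pixel-to-angle mapping across the camera's horizontal field of view (the thesis's actual cue design and parameters are not given in the abstract):

```python
def direction_cue(bbox, frame_width, fov_deg=60.0, center_band_deg=10.0):
    """Map a detected object's bounding box to a spoken direction cue.

    bbox = (x_min, y_min, x_max, y_max) in pixels. The box center's
    horizontal offset from the image center is scaled to degrees across
    an assumed horizontal field of view; a small central band counts as
    "ahead" so the cue is stable while the user is roughly on target.
    """
    x_center = (bbox[0] + bbox[2]) / 2.0
    # offset in [-0.5, 0.5] of the frame, scaled to degrees
    angle = (x_center / frame_width - 0.5) * fov_deg
    if abs(angle) <= center_band_deg / 2:
        return "ahead"
    side = "right" if angle > 0 else "left"
    return "turn %s %d degrees" % (side, round(abs(angle)))
```

    A cue like this would be passed to a text-to-speech engine; the dead band around "ahead" is a design choice to avoid oscillating left/right instructions.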