
    A practical multirobot localization system

    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows the expected localization precision, area of coverage, and processing speed to be estimated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotics problems.
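
    As a rough illustration of such a model, the expected precision and coverage can be derived from a pinhole-camera approximation. The sketch below is a back-of-the-envelope estimate, not the paper's actual model; the focal length, sub-pixel error, and distance used in the example are assumed values.

```python
# Back-of-envelope estimate of localization precision and coverage for a
# pinhole camera; the 0.1 px sub-pixel figure and the example numbers are
# assumptions, not values taken from the paper.

def ground_resolution_mm(distance_mm, focal_px):
    """Footprint of one pixel on a plane at the given distance."""
    return distance_mm / focal_px

def expected_precision_mm(distance_mm, focal_px, subpixel_err_px=0.1):
    """Localization error if the detector is accurate to ~0.1 px."""
    return subpixel_err_px * ground_resolution_mm(distance_mm, focal_px)

def coverage_m2(distance_mm, focal_px, width_px, height_px):
    """Observable area of a plane parallel to the image plane."""
    res = ground_resolution_mm(distance_mm, focal_px)
    return (width_px * res / 1000.0) * (height_px * res / 1000.0)

# Example: a 640x480 camera with f ~ 700 px viewing patterns 3 m away.
print(expected_precision_mm(3000, 700))   # ~0.43 mm expected precision
print(coverage_m2(3000, 700, 640, 480))   # ~5.6 m^2 covered area
```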

    External localization system for mobile robotics

    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the proposed localization system is an efficient method for black-and-white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows its precision, area of coverage, and processing speed to be calculated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code, so it can be used as an enabling technology for various mobile robotics problems.
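
    The following OpenCV sketch illustrates the general idea of detecting black-and-white concentric circular markers with sub-pixel centers via ellipse fitting. It is an illustration only: the published implementation achieves its image-size-independent complexity with an incremental flood-fill tracker, which this per-frame re-detection does not reproduce, and the concentricity tolerance is an assumed value.

```python
# Illustrative detector for black rings with white central discs, in the
# spirit of the described method but not the authors' implementation.
import cv2
import numpy as np

def detect_markers(gray, min_area=50):
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Invert so the dark ring is foreground; RETR_CCOMP yields a two-level
    # hierarchy of outer contours and their holes (the white inner discs).
    contours, hierarchy = cv2.findContours(255 - bw, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_NONE)
    centers = []
    if hierarchy is None:
        return centers
    for i, cnt in enumerate(contours):
        child = hierarchy[0][i][2]            # index of the inner hole
        if child < 0 or len(cnt) < 5 or len(contours[child]) < 5:
            continue
        if cv2.contourArea(cnt) < min_area:
            continue
        outer = cv2.fitEllipse(cnt)           # ellipse fit -> sub-pixel center
        inner = cv2.fitEllipse(contours[child])
        # Concentricity check: ring and disc centers must nearly coincide.
        if np.hypot(outer[0][0] - inner[0][0],
                    outer[0][1] - inner[0][1]) < 2.0:
            centers.append(outer[0])
    return centers
```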

    Exploiting Sensor Symmetry for Generalized Tactile Perception in Biomimetic Touch

    Grading multiple choice exams with low-cost and portable computer-vision techniques

    Although technology for the automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main obstacles to its adoption are the cost and complexity of the setup procedures. In this paper, Eyegrade, a system for the automatic grading of multiple choice exams, is presented. While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost alternative requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition and optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. Compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data-collection procedure. The tool has been validated with a set of experiments that show its ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems. This work was partially funded by the EEE project, "Plan Nacional de I+D+I TIN2011-28308-C03-01", and the "eMadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid" project (S2009/TIC-1650).
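
    A webcam-based grader of this kind can be reduced to rectifying the answer table and scoring the ink in each cell. The sketch below illustrates that idea; it is not Eyegrade's actual pipeline, and the corner points, grid geometry, and ink threshold are all assumed.

```python
# Minimal sketch of webcam-based answer-sheet reading: rectify the answer
# table from four known corner points, then take the most-marked cell per
# row. Corner detection and ID-digit OCR are omitted; all parameters here
# are illustrative assumptions.
import cv2
import numpy as np

def read_answers(gray, corners, n_rows, n_cols, cell=40):
    """corners: table corners (tl, tr, br, bl) located in the webcam frame."""
    dst = np.float32([[0, 0], [n_cols * cell, 0],
                      [n_cols * cell, n_rows * cell], [0, n_rows * cell]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    table = cv2.warpPerspective(gray, H, (n_cols * cell, n_rows * cell))
    _, bw = cv2.threshold(table, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    answers = []
    for r in range(n_rows):
        # Mean ink per cell; the darkest cell is taken as the chosen answer.
        ratios = [bw[r*cell:(r+1)*cell, c*cell:(c+1)*cell].mean()
                  for c in range(n_cols)]
        best = int(np.argmax(ratios))
        answers.append(best if ratios[best] > 20 else None)  # 20: ad-hoc cutoff
    return answers
```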

    Eye and Voice-Controlled Human Machine Interface System for Wheelchairs Using Image Gradient Approach

    © 2020 The Author(s). This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Rehabilitative mobility aids are used extensively by physically impaired people. Efforts are being made to develop human-machine interfaces (HMIs) that manipulate biosignals to better control electromechanical mobility aids, especially wheelchairs. Creating precise control commands such as move forward, left, right, backward, and stop via biosignals in an appropriate HMI is the real challenge, as people with a high level of disability (quadriplegia, paralysis, etc.) are unable to drive conventional wheelchairs. Therefore, a novel system driven by optical signals, addressing the needs of this physically impaired population, is introduced in this paper. The system is divided into two parts: the first comprises the detection of eyeball movements together with the processing of the optical signal, and the second encompasses the mechanical assembly module, i.e., control of the wheelchair through the motor-driving circuitry. A web camera captures real-time images, and a Raspberry Pi running Linux serves as the processor. To make the system more congenial and reliable, a voice-controlled mode is incorporated into the wheelchair. To appraise the system's performance, a basic wheelchair skill test (WST) was carried out: movement on plain and rough surfaces in the forward and reverse directions, as well as turning capability, was analyzed for comparison with other existing wheelchair setups in terms of control mechanism, compatibility, design model, and usability in diverse conditions. The system operates successfully with an average response time of 3 s in eye-controlled mode and 3.4 s in voice-controlled mode.
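
    The eye-controlled mode rests on locating the pupil and mapping its horizontal position to a drive command. The sketch below uses a simplified image-gradient eye-center estimator in the spirit of the paper's title; the candidate-grid step, the command thresholds, and the eye crop itself are illustrative assumptions rather than the authors' parameters.

```python
# Simplified gradient-based pupil localization on a small eye crop: the
# pupil center is the point that most image gradients point away from
# (dark pupil -> brighter iris). Illustrative only; not the paper's code.
import cv2
import numpy as np

def pupil_center(eye_gray):
    gx = cv2.Sobel(eye_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye_gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9
    gx, gy = gx / mag, gy / mag              # unit gradient vectors
    h, w = eye_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_score = (w // 2, h // 2), -1.0
    for cy in range(0, h, 2):                # coarse grid of candidate centers
        for cx in range(0, w, 2):
            dx, dy = xs - cx, ys - cy
            dmag = np.hypot(dx, dy) + 1e-9
            # Agreement between displacement directions and gradients.
            dots = (dx / dmag) * gx + (dy / dmag) * gy
            score = np.mean(np.maximum(dots, 0) ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best

def command(cx, eye_width):
    """Map horizontal pupil position to a drive command (assumed thresholds)."""
    ratio = cx / eye_width
    return "left" if ratio < 0.35 else "right" if ratio > 0.65 else "forward"
```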

    Optical myography system for hand gesture and posture recognition

    Advisor: Éric Fujiwara. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. In this work, an optical myography system is demonstrated as a promising alternative for monitoring the user's hand postures and gestures. The technique is based on following the muscular activity responsible for hand motion with an external camera and relating the visual deformation observed on the forearm to the muscular contraction and relaxation required for a given posture. Three sensor designs were proposed, studied, and evaluated. The first monitored muscular activity by analyzing the spatial-frequency variation of a uniform stripe pattern stamped on the skin, whereas the second counted the visible skin pixels inside the region of interest. Both designs proved impracticable due to their low robustness and high demand for controlled experimental conditions. The third design retrieves the hand configuration by visually tracking the displacements of a series of color markers distributed over the forearm. With a 24 fps, 640 × 480 pixel webcam, this design was validated for eight different postures, exploring finger and thumb flexion/extension plus thumb adduction/abduction. The experimental data are acquired offline and submitted to an image-processing routine that extracts the color and spatial information of the markers in each frame; the extracted data are then used to track the same markers across all frames. To reduce the influence of the natural vibrations inherent to the human body, a local reference frame is adopted within the region of interest. Finally, the frame-by-frame data, along with the ground-truth posture, are fed into a sequential artificial neural network responsible for supervised calibration of the sensor and subsequent posture classification. The system's performance was evaluated on the eight-posture classification task via 10-fold cross-validation, with the camera monitoring either the underside or the back of the forearm. The sensor achieved ≈92.4% precision and ≈97.9% accuracy in the former case, and ≈75.1% precision and ≈92.5% accuracy in the latter, being comparable to other myographic techniques; this demonstrates the feasibility of the project and opens prospects for human-robot interaction applications.
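
    The third (color-marker) design can be pictured as follows: segment each marker by color, take centroids, re-express them in a local reference frame, and feed the resulting vector to a small neural network. The HSV ranges, the choice of the first marker as the local origin, and the scikit-learn classifier standing in for the dissertation's sequential network are all assumptions.

```python
# Illustrative marker-tracking and posture-classification pipeline; the
# dissertation's exact marker layout and network architecture may differ.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical HSV ranges, one per marker color.
MARKER_RANGES = [((0, 120, 80), (10, 255, 255)),     # red
                 ((50, 120, 80), (70, 255, 255)),    # green
                 ((100, 120, 80), (130, 255, 255))]  # blue

def marker_features(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    pts = []
    for lo, hi in MARKER_RANGES:
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                      # marker occluded in this frame
        pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    pts = np.array(pts)
    # Local reference frame: coordinates relative to the first marker,
    # which suppresses whole-arm translation between frames.
    return (pts - pts[0]).ravel()

# Supervised calibration: one feature vector per frame, posture id 0..7 as
# label, gathered from the recorded sessions.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
# clf.fit(X_train, y_train)
# posture = clf.predict(marker_features(frame)[None, :])
```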

    Toward Real-Time Video-Enhanced Augmented Reality for Medical Visualization and Simulation

    In this work we demonstrate two separate forms of augmented reality environments for use with minimally invasive surgical techniques. Chapter 2 demonstrates how a video feed from a webcam, which can mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. Chapter 3 details our implementation of a similar system extended with an external tracking system, specifically the Polaris Spectra optical tracker, and discusses the challenges and considerations this extension raises. Because the tracking origin is relocated to a point other than the camera center, an additional registration step is necessary to establish the positions of all components within the scene. This modification is expected to increase the accuracy and robustness of the system.
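
    A minimal version of the Chapter 2 pipeline can be sketched with OpenCV: estimate the webcam pose from a known planar pattern and project virtual geometry into the live frame. A chessboard stands in for whatever pattern the thesis actually tracks, and the projected landmark is a placeholder; the external Polaris-based registration of Chapter 3 is not reproduced.

```python
# Pose-from-pattern augmentation sketch: solvePnP recovers the camera pose
# relative to a planar chessboard, and a virtual 3D point is projected back
# into the video frame. Pattern size and square size are assumptions.
import cv2
import numpy as np

PATTERN = (9, 6)                     # inner corners of the assumed chessboard
SQUARE = 25.0                        # square size in mm (assumed)
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def augment(frame, K, dist):
    """K, dist: camera intrinsics and distortion from prior calibration."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return frame
    _, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    # Project a placeholder landmark 50 mm below the pattern plane, standing
    # in for internal anatomy not visible on the imaged surface.
    pt3d = np.float32([[100.0, 75.0, -50.0]])
    pt2d, _ = cv2.projectPoints(pt3d, rvec, tvec, K, dist)
    cv2.circle(frame, (int(pt2d[0, 0, 0]), int(pt2d[0, 0, 1])),
               8, (0, 0, 255), -1)
    return frame
```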