
    An EM transfer learning algorithm with applications in bionic hand prostheses

    Paaßen B, Schulz A, Hahne J, Hammer B. An EM transfer learning algorithm with applications in bionic hand prostheses. In: Verleysen M, ed. Proceedings of the 25th European Symposium on Artificial Neural Networks (ESANN 2017). Bruges: i6doc.com; 2017: 129-134.
    Modern bionic hand prostheses feature unprecedented functionality, permitting motion in multiple degrees of freedom (DoFs). However, conventional user interfaces allow for controlling only one DoF at a time. Intuitive, direct, and simultaneous control of multiple DoFs requires machine learning models. Unfortunately, such models are not yet sufficiently robust to real-world disturbances such as electrode shifts. We propose a novel expectation maximization approach for transfer learning to rapidly recalibrate a machine learning model when disturbances occur. Our experimental evaluation shows that even if only few data points are available which do not cover all classes, the proposed approach finds a viable transfer mapping which significantly improves classification accuracy and outperforms all tested baselines.
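The recalibration idea described above can be illustrated with a minimal EM-style sketch. This is not the paper's algorithm; the function `em_transfer`, the choice of a linear transfer mapping, and all parameters are assumptions made purely for illustration:

```python
import numpy as np

def em_transfer(X_target, source_means, n_iter=20):
    """Minimal EM-style recalibration sketch (illustrative, not the
    paper's exact method): alternately assign disturbed target samples
    to source-space class means (E-step) and re-fit a linear transfer
    mapping W by least squares (M-step)."""
    d = X_target.shape[1]
    W = np.eye(d)  # start from the identity mapping
    for _ in range(n_iter):
        # E-step: assign each mapped sample to its nearest class mean
        mapped = X_target @ W.T
        dists = ((mapped[:, None, :] - source_means[None, :, :]) ** 2).sum(-1)
        targets = source_means[dists.argmin(axis=1)]
        # M-step: least-squares fit of W so that W x ~ assigned mean
        M, *_ = np.linalg.lstsq(X_target, targets, rcond=None)
        W = M.T
    return W
```

Under a mild simulated disturbance (e.g. a small rotation of the feature space), the learned mapping restores nearest-mean classification on the disturbed data.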

    Cost-effective 3D scanning and printing technologies for outer ear reconstruction: Current status

    Current 3D scanning and printing technologies offer not only state-of-the-art developments in the field of medical imaging and bio-engineering, but also cost- and time-effective solutions for surgical reconstruction procedures. Besides tissue engineering, where living cells are used, bio-compatible polymers or synthetic resins can be applied. The combination of 3D handheld scanning devices or volumetric imaging, (open-source) image processing packages, and 3D printers forms a complete workflow chain capable of effective rapid prototyping of outer ear replicas. This paper reviews current possibilities and the latest use cases for 3D scanning, data processing, and printing of outer ear replicas, with a focus on low-cost solutions for rehabilitation engineering.

    Metric Learning for Structured Data

    Paaßen B. Metric Learning for Structured Data. Bielefeld: Universität Bielefeld; 2019.
    Distance measures form a backbone of machine learning and information retrieval in many application fields such as computer vision, natural language processing, and biology. However, general-purpose distances may fail to capture semantic particularities of a domain, leading to wrong inferences downstream. Motivated by such failures, the field of metric learning has emerged. Metric learning is concerned with learning a distance measure from data which pulls semantically similar data closer together and pushes semantically dissimilar data further apart. Over the past decades, metric learning approaches have yielded state-of-the-art results in many applications. Unfortunately, these successes are mostly limited to vectorial data, while metric learning for structured data remains a challenge. In this thesis, I present a metric learning scheme for a broad class of sequence edit distances which is compatible with any differentiable cost function, and a scalable, interpretable, and effective tree edit distance learning scheme, thus pushing the boundaries of metric learning for structured data. Furthermore, I make learned distances more useful by providing a novel algorithm to perform time series prediction solely based on distances, a novel algorithm to infer a structured datum from edit distances, and a novel algorithm to transfer a learned distance to a new domain using only a small amount of data and computation time. Finally, I apply these novel algorithms to two challenging application domains. First, I support students in intelligent tutoring systems: if a student gets stuck before completing a learning task, I predict how capable students would proceed in their situation and guide the student in that direction via edit hints. Second, I use transfer learning to counteract disturbances for bionic hand prostheses, making these prostheses more robust in patients' everyday lives.
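The core objective of metric learning stated in the abstract (pull same-class pairs together, push different-class pairs apart) can be sketched with a toy gradient-descent learner for a diagonally weighted squared Euclidean distance. This is a deliberately simplified stand-in, not the thesis's edit-distance learning schemes; the function name and all hyperparameters are assumptions for illustration:

```python
import numpy as np

def learn_diag_metric(X, y, lr=0.05, n_iter=200):
    """Toy metric-learning sketch (illustrative only): learn per-feature
    weights w >= 0 of the distance d(x, z) = sum_k w_k (x_k - z_k)^2 by
    descending a contrastive loss that sums same-class pair distances
    and subtracts different-class pair distances."""
    n, d = X.shape
    w = np.ones(d)
    n_pairs = n * (n - 1) / 2
    for _ in range(n_iter):
        grad = np.zeros(d)
        for i in range(n):
            for j in range(i + 1, n):
                sq = (X[i] - X[j]) ** 2
                # pull same-class pairs together, push others apart
                grad += sq if y[i] == y[j] else -sq
        w -= lr * grad / n_pairs
        w = np.clip(w, 0.0, None)  # keep weights non-negative
        w *= d / w.sum()           # fix the overall scale
    return w
```

On data where only one feature carries class information, the learner concentrates the weight on that feature, which is exactly the "semantic" adaptation the abstract motivates.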

    Semi-autonomous control of prosthetic hands based on multimodal sensing, human grasp demonstration and user intention

    Semi-autonomous control strategies for prosthetic hands provide a promising way to simplify and improve the grasping process for the user by adopting techniques usually applied in robotic grasping. Such strategies endow prosthetic hands with the ability to autonomously select and execute grasps while keeping the user in the loop to intervene at any time, triggering, accepting, or rejecting decisions taken by the controller in an intuitive and easy way. In this paper, we present a semi-autonomous control strategy that allows the user to perform fluent grasping of everyday objects based on a single EMG channel and a multi-modal sensor system embedded in the hand for object perception and autonomous grasp execution. We conduct a user study with 20 subjects to assess the effectiveness and intuitiveness of our semi-autonomous control strategy and compare it to a conventional electromyography-based control strategy. The results show that, compared to conventional electromyographic control, the workload is reduced by 25.9%, the physical demand is reduced by 60%, and the grasping process is accelerated by 19.4%.

    Optical myography system for hand gesture and posture recognition (Sistema de miografia óptica para reconhecimento de gestos e posturas de mão)

    Advisor: Éric Fujiwara. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
    In this work, an optical myography system is demonstrated as a promising alternative for monitoring the hand postures and gestures of the user. The technique is based on following the muscular activities responsible for hand motion with an external camera and relating the visual deformation observed on the forearm to the muscular contractions and relaxations of a given posture. Three sensor designs were proposed, studied, and evaluated. The first monitored muscular activity by analyzing the spatial frequency variation of a uniform stripe pattern stamped on the skin, whereas the second counted the visible skin pixels inside the region of interest. Both designs proved impracticable due to their low robustness and high demand for controlled experimental conditions. The third design retrieves the hand configuration by visually tracking the displacements of a series of color markers distributed over the forearm. With a 24 fps, 640 × 480 pixel webcam, this design was validated for eight different postures, exploring mainly finger and thumb flexion/extension, plus thumb adduction/abduction. The experimental data are acquired offline and submitted to an image processing routine that extracts the color and spatial information of the markers in each frame; the extracted data are then used to track the same markers across all frames. To reduce the influence of the natural, inherent vibrations of the human body, a local reference frame is adopted within the region of interest. Finally, the frame-by-frame data, along with the ground truth posture, are fed into a sequential artificial neural network responsible for the supervised calibration of the sensor and the subsequent posture classification. The system performance was evaluated on the eight-posture classification task via 10-fold cross-validation, with the camera monitoring either the underside or the back of the forearm. The sensor achieved ≈92.4% precision and ≈97.9% accuracy for the former, and ≈75.1% precision and ≈92.5% accuracy for the latter, being thus comparable to other myographic techniques; this demonstrates the feasibility of the project and opens prospects for human-robot interaction applications.
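The 10-fold cross-validation protocol used in the evaluation above can be sketched as follows. This is an illustrative stand-in only: a nearest-class-mean classifier replaces the dissertation's neural network, and the function name and data layout are assumptions:

```python
import numpy as np

def kfold_scores(X, y, k=10, seed=0):
    """Sketch of a k-fold cross-validation protocol (illustrative: a
    nearest-class-mean classifier stands in for the actual model).
    Returns overall accuracy and macro-averaged precision."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    classes = np.unique(y)
    y_true, y_pred = [], []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        # fit: one mean vector per class from the training folds
        means = np.stack([X[train][y[train] == c].mean(axis=0)
                          for c in classes])
        # predict: nearest class mean for each test sample
        pred = ((X[test][:, None, :] - means[None]) ** 2).sum(-1).argmin(1)
        y_true.extend(y[test])
        y_pred.extend(pred)
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    accuracy = (y_true == y_pred).mean()
    precisions = [((y_pred == c) & (y_true == c)).sum()
                  / max((y_pred == c).sum(), 1) for c in classes]
    return float(accuracy), float(np.mean(precisions))
```

Note that, as in the abstract, precision and accuracy are distinct figures: accuracy is the fraction of correct predictions overall, while precision is averaged per predicted class.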

    The future of upper extremity rehabilitation robotics: research and practice

    The loss of upper limb motor function can have a devastating effect on people's lives. To restore upper limb control and functionality, researchers and clinicians have developed interfaces to interact directly with the human body's motor system. In this invited review, we aim to provide details on the peripheral nerve interfaces and brain-machine interfaces that have been developed over the past 30 years for upper extremity control, and we highlight the challenges that still remain in transitioning the technology into the clinical market. The findings show that peripheral nerve interfaces and brain-machine interfaces have many similar characteristics that enable them to be developed concurrently. Decoding neural information from both interfaces may lead to novel physiological models that may one day fully restore upper limb motor function for a growing patient population.

    Multimodal human hand motion sensing and analysis - a review
