59 research outputs found

    A fully connected deep learning approach to upper limb gesture recognition in a secure FES rehabilitation environment

    Stroke is one of the leading causes of death and disability in the world. The rehabilitation of patients' limb functions has great medical value, for example through functional electrical stimulation (FES) therapy, but it lacks effective means of rehabilitation evaluation. In this paper, six upper limb rehabilitation gestures were monitored and collected using microelectromechanical systems (MEMS) sensors, and data stability was ensured through preprocessing, that is, deweighting, interpolation, and feature extraction. A fully connected neural network is proposed, investigating the effect of different numbers of hidden layers and determining suitable activation functions and optimizers. Experiments show that a three-hidden-layer model with a softmax function and an adaptive gradient descent optimizer reaches an average gesture recognition rate of 97.19%. A stop mechanism based on the recognition of dangerous gestures ensures the safety of the system, and lightweight hash-based cryptography ensures its security. Comparisons with classification models such as k-nearest neighbor, logistic regression, and other stochastic gradient descent algorithms verify the superior performance in recognizing upper limb gesture data. This study also provides an approach to creating health profiles from large-scale rehabilitation data and, consequently, to diagnosing the effects of FES rehabilitation.
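    As a rough illustration (not the authors' released code), the sketch below builds a fully connected classifier with three hidden layers, a softmax output, and an adaptive gradient descent (Adagrad) optimizer, as described in the abstract. The input dimensionality, hidden-layer sizes, learning rate, and the random training data are assumptions made only for the example.

```python
# Minimal sketch, assuming a preprocessed MEMS feature vector per gesture window.
# NUM_FEATURES and the hidden sizes are illustrative, not the paper's values.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 36   # assumed length of the preprocessed feature vector
NUM_GESTURES = 6    # six upper limb rehabilitation gestures

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 3
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder training data standing in for the collected gesture windows.
X = np.random.rand(512, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_GESTURES, size=512)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```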

    Wearable and Nearable Biosensors and Systems for Healthcare

    Biosensors and systems in the form of wearables and “nearables” (i.e., everyday sensorized objects with transmitting capabilities such as smartphones) are rapidly evolving for use in healthcare. Unlike conventional approaches, these technologies can enable seamless or on-demand physiological monitoring, anytime and anywhere. Such monitoring can help transform healthcare from the current reactive, one-size-fits-all, hospital-centered approach into a future proactive, personalized, decentralized structure. Wearable and nearable biosensors and systems have been made possible through integrated innovations in sensor design, electronics, data transmission, power management, and signal processing. Although much progress has been made in this field, many open challenges for the scientific community remain, especially for those applications requiring high accuracy. This book contains the 12 papers that constituted a recent Special Issue of Sensors sharing the same title. The aim of the initiative was to provide a collection of state-of-the-art investigations on wearables and nearables, in order to stimulate technological advances and the use of the technology to benefit healthcare. The topics covered by the book offer both depth and breadth pertaining to wearable and nearable technology. They include new biosensors and data transmission techniques; studies on accelerometers, signal processing, and cardiovascular monitoring; clinical applications; and validation of commercial devices.

    Artificial Intelligence: Current Challenges and Inria's Action - Inria White Paper

    Inria White Paper No. 01. International audience. Inria white papers look at major current challenges in informatics and mathematics and show actions conducted by our project-teams to address these challenges. This document is the first produced by the Strategic Technology Monitoring & Prospective Studies Unit. Thanks to a reactive observation system, this unit plays a lead role in supporting Inria to develop its strategic and scientific orientations. It also enables the institute to anticipate the impact of digital sciences on all social and economic domains. The white paper has been coordinated by Bertrand Braunschweig with contributions from 45 researchers from Inria and from our partners. Special thanks to Peter Sturm for his precise and complete review, and to the STIP service of the Saclay – Île-de-France centre for the final proofreading of the French version.

    On the Interplay Between Brain-Computer Interfaces and Machine Learning Algorithms: A Systems Perspective

    Today, computer algorithms use traditional human-computer interfaces (e.g., keyboard, mouse, gestures, etc.) to interact with and extend human capabilities across all knowledge domains, allowing them to make complex decisions underpinned by massive datasets and machine learning. Machine learning has seen remarkable success in the past decade in obtaining deep insights and recognizing unknown patterns in complex data sets, in part by emulating how the brain performs certain computations. As we increase our understanding of the human brain, brain-computer interfaces can benefit from the power of machine learning, both as an underlying model of how the brain performs computations and as a tool for processing high-dimensional brain recordings. The technology (machine learning) has come full circle and is being applied back to understanding the brain and the electrical traces of brain activity recorded over the scalp (EEG). At the same time, domains such as natural language processing, machine translation, and scene understanding remain beyond the reach of machine learning algorithms alone and require human participation to be solved. In this work, we investigate the interplay between brain-computer interfaces and machine learning through the lens of end-user usability. Specifically, we propose systems and algorithms to enable synergistic and user-friendly integration between computers (machine learning) and the human brain (brain-computer interfaces). In this context, we provide our research contributions in two interrelated aspects: (i) applying machine learning to solve challenges with EEG-based BCIs, and (ii) enabling human-assisted machine learning with EEG-based human input and implicit feedback. (Ph.D. dissertation)
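    To illustrate the first aspect, the sketch below is a minimal, generic example of applying machine learning to EEG data (band-power features fed to a linear classifier); it is not the dissertation's pipeline, and the channel count, sampling rate, frequency band, and synthetic data are all assumptions.

```python
# Illustrative EEG classification sketch on synthetic data.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250          # assumed sampling rate (Hz)
N_CHANNELS = 8    # assumed number of scalp electrodes

def band_power(epoch, fs, band=(8.0, 12.0)):
    """Mean power per channel in a frequency band (default: alpha, 8-12 Hz)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

# Synthetic epochs standing in for recorded EEG: (n_epochs, n_channels, n_samples).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, N_CHANNELS, 2 * FS))
labels = rng.integers(0, 2, size=200)   # e.g., two mental states

features = np.array([band_power(e, FS) for e in epochs])
scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5)
print("Cross-validated accuracy:", scores.mean())
```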

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the difference between the two sets might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each containing the concepts of one of the systems sampled. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we have found that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
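    In the spirit of the set comparisons described above, the sketch below contrasts the Jaccard index of two key-term sets with the cosine similarity of their averaged word embeddings. It is a minimal sketch, not the paper's tooling: it assumes spaCy with the en_core_web_md vectors installed (python -m spacy download en_core_web_md), and the two term sets are placeholders.

```python
# Compare a "source code" term set with a "documentation" term set.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # medium model shipping word vectors

code_terms = {"parser", "token", "grammar", "syntax", "tree"}
doc_terms = {"parser", "grammar", "documentation", "usage", "syntax"}

# Jaccard index: intersection size over union size of the two term sets.
jaccard = len(code_terms & doc_terms) / len(code_terms | doc_terms)

# Cosine similarity between the averaged embeddings of the two term sets.
vec_code = nlp(" ".join(sorted(code_terms))).vector
vec_doc = nlp(" ".join(sorted(doc_terms))).vector
cosine = float(np.dot(vec_code, vec_doc) /
               (np.linalg.norm(vec_code) * np.linalg.norm(vec_doc)))

print(f"Jaccard: {jaccard:.2f}  Cosine: {cosine:.2f}")
```

    Note that embedding-based cosine similarity can remain high even when the literal term overlap (Jaccard) is low, which is consistent with the gap between the two measures reported in the abstract.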