
    JNER at 15 years: analysis of the state of neuroengineering and rehabilitation.

    On JNER's 15th anniversary, this editorial analyzes the state of the field of neuroengineering and rehabilitation. I first discuss some ways that the nature of neurorehabilitation research has evolved over the past 15 years, based on my perspective as editor-in-chief of JNER and a researcher in the field. I highlight the increasing reliance on advanced technologies, the improved rigor and openness of research, and three related new paradigms - wearable devices, the Cybathlon competition, and human augmentation studies - as indicators that neurorehabilitation is squarely in the age of wearability. I then briefly speculate on how the field might make progress going forward, highlighting the need for new models of training and learning driven by big data, better personalization and targeting, and an increase in the quantity and quality of usability and uptake studies to improve translation.

    Toward Real-Time, Robust Wearable Sensor Fall Detection Using Deep Learning Methods: A Feasibility Study

    Real-time fall detection using a wearable sensor remains a challenging problem due to high gait variability. Furthermore, choosing the type of sensor and the optimal placement of the sensors are essential factors for real-time fall-detection systems. Early detection of falls, followed by pneumatic protection, is one of the most effective means of ensuring the safety of the elderly, and this work presents real-time fall-detection methods using deep learning models. First, we developed and compared different sliding-window data-segmentation techniques. Next, we implemented various techniques to balance the datasets, because fall datasets collected in real-world settings are inherently imbalanced. We then designed a deep learning model that combines a convolution-based feature extractor with deep neural network blocks: an LSTM block and a transformer encoder block followed by a position-wise feedforward layer. We found that combining the input sequence with convolutional features learned with different kernel sizes tends to increase the performance of the fall-detection model. Finally, we showed that the signals collected by both accelerometer and gyroscope sensors can be leveraged to develop an effective classifier that accurately detects falls and, in particular, differentiates falls from near-falls. We also compared data from sixteen different body locations to determine the best sensor position for fall detection, and found that the shank is the optimal placement, with an F1 score of 0.97; this result could help other researchers collect high-quality fall datasets.
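
    As an illustration only, the following PyTorch sketch shows the kind of hybrid architecture the abstract describes: a convolutional feature extractor whose outputs are fused with the raw input sequence, an LSTM block, a transformer encoder block, and a position-wise feedforward head. All layer sizes, kernel widths, and the six-channel accelerometer-plus-gyroscope input are assumptions, not the authors' published configuration.

import torch
import torch.nn as nn

class HybridFallNet(nn.Module):
    # Hypothetical setup: 6 input channels (3-axis accelerometer + 3-axis gyroscope).
    def __init__(self, n_channels=6, d_model=64, n_classes=2):
        super().__init__()
        # Convolutional feature extractors with different kernel sizes
        self.conv3 = nn.Conv1d(n_channels, 32, kernel_size=3, padding=1)
        self.conv7 = nn.Conv1d(n_channels, 32, kernel_size=7, padding=3)
        # Fuse the raw input sequence with the learned convolutional features
        self.fuse = nn.Linear(n_channels + 64, d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        # Position-wise feedforward head producing fall / no-fall logits
        self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, n_classes))

    def forward(self, x):                 # x: (batch, time, channels)
        c = x.transpose(1, 2)             # Conv1d expects (batch, channels, time)
        feats = torch.cat([self.conv3(c), self.conv7(c)], dim=1).transpose(1, 2)
        h = self.fuse(torch.cat([x, feats], dim=-1))
        h, _ = self.lstm(h)
        h = self.encoder(h)
        return self.head(h.mean(dim=1))   # average-pool over time, then classify

model = HybridFallNet()
windows = torch.randn(8, 128, 6)          # a batch of synthetic 128-sample sliding windows
logits = model(windows)                   # shape (8, 2)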

    Deep learning and wearable sensors for the diagnosis and monitoring of Parkinson’s disease: A systematic review

    Parkinson’s disease (PD) is a neurodegenerative disorder that produces both motor and non-motor complications, degrading the quality of life of PD patients. Over the past two decades, the use of wearable devices in combination with machine learning algorithms has provided promising methods for more objective and continuous monitoring of PD. Recent advances in artificial intelligence have provided new methods and algorithms for data analysis, such as deep learning (DL). The aim of this article is to provide a comprehensive review of current applications in which DL algorithms are employed for the assessment of motor and non-motor manifestations (NMM) using data collected via wearable sensors. This paper provides the reader with a summary of the current applications of DL and wearable devices for the diagnosis, prognosis, and monitoring of PD, in the hope of improving the adoption, applicability, and impact of both technologies as support tools. Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, sixty-nine studies were selected and analyzed. For each study, information on sample size, sensor configuration, DL approaches, validation methods, and results according to the specific symptom under study was extracted and summarized. Furthermore, quality assessment was conducted according to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) method. The majority of studies (74%) were published within the last three years, demonstrating the increasing focus on wearable technology and DL approaches for PD assessment. However, most papers focused on monitoring (59%) and computer-assisted diagnosis (37%), while few papers attempted to predict treatment response. Motor symptoms (86%) were addressed much more frequently than NMM (14%). Inertial sensors were the most commonly used technology, followed by force sensors and microphones. Finally, convolutional neural networks (52%) were preferred to other DL approaches, while extracted features (38%) and raw data (37%) were used almost equally often as input for DL models. The results of this review highlight several challenges related to the use of wearable technology and DL methods in the assessment of PD, despite the advantages this technology could bring to the development and implementation of automated systems for PD assessment.

    An Overview of Human Activity Recognition Using Wearable Sensors: Healthcare and Artificial Intelligence

    With the rapid development of internet of things (IoT) and artificial intelligence (AI) technologies, human activity recognition (HAR) has been applied in a variety of domains such as security and surveillance, human-robot interaction, and entertainment. Even though a number of surveys and review papers have been published, there is a lack of HAR overview papers focusing on healthcare applications that use wearable sensors. Therefore, we fill this gap by presenting this overview paper. In particular, we present our projects to illustrate the system design of HAR applications for healthcare. Our projects include early mobility identification of human activities for intensive care unit (ICU) patients and gait analysis of Duchenne muscular dystrophy (DMD) patients. We cover the essential components of designing HAR systems, including sensor factors (e.g., type, number, and placement location), AI model selection (e.g., classical machine learning models versus deep learning models), and feature engineering. In addition, we highlight the challenges of such healthcare-oriented HAR systems and propose several research opportunities for both the medical and computer science communities.
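
    As a concrete, non-authoritative illustration of the classical-ML branch of such a design, the sketch below engineers simple time-domain features over fixed-length accelerometer windows and trains a random forest; the window length, feature set, and four activity classes are placeholders rather than details from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(w):
    # w: (samples, axes) accelerometer window -> per-axis time-domain features
    return np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0),
                           np.abs(np.diff(w, axis=0)).mean(0)])  # mean absolute first difference

rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 128, 3))  # 200 synthetic windows: 128 samples, 3 axes
labels = rng.integers(0, 4, size=200)         # 4 hypothetical activity classes
X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))                     # predicted activity classes for 5 windows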

    Just find it: The Mymo approach to recommend running shoes

    Wearing inappropriate running shoes may lead to unnecessary injury through continued strain upon the lower extremities, potentially damaging a runner’s performance. Many technologies have been developed for accurate shoe recommendation, most of which centre on running gait analysis. However, these often require supervised use in the laboratory or shop, or cost too much for personal use. This work addresses the need for a deployable, inexpensive product that can accurately recommend a running shoe type. This was achieved through quantitative analysis of the running gait of 203 individuals using a tri-axial accelerometer and tri-axial gyroscope-based wearable (Mymo). In combination with a custom neural network running in the cloud to provide the shoe-type classifications, we achieve an accuracy of 94.6% in classifying the correct type of shoe across unseen test data.

    Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors

    Auscultation is one of the most widely used techniques for detecting cardiovascular disease, one of the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, and therefore harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy of primary care physicians and expert cardiologists when auscultating is not good enough to avoid most type-I errors (healthy patients sent for an echocardiogram) and type-II errors (pathological patients sent home without medication or treatment). In this paper, the authors present a novel convolutional neural network-based tool for distinguishing healthy people from pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks were trained with the heart murmur information contained in heart sound recordings obtained from nine different heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands, after which sonogram images of the same size are generated. These images were used to train and test different convolutional neural network architectures. The best results were obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%, PhysioNet/CinC Challenge 2016 score: 0.9416). This tool could aid cardiologists and primary care physicians in the auscultation process, improving decision making and reducing type-I and type-II errors.
    Ministerio de Economía y Competitividad TEC2016-77785-
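
    As a rough sketch of this pipeline, and not the authors' implementation, the snippet below substitutes an ordinary mel spectrogram for the neuromorphic sensor's frequency-band decomposition and feeds fixed-size images to torchvision's stock AlexNet with a two-class head; the sampling rate, band count, and image size are assumptions.

import torch
import torchaudio
from torchvision.models import alexnet

# Stand-in for the sensor's real-time band decomposition: a 64-band mel spectrogram
to_bands = torchaudio.transforms.MelSpectrogram(sample_rate=2000, n_mels=64)
model = alexnet(num_classes=2)              # plays the role of the "modified AlexNet"

wave = torch.randn(1, 4000)                 # 2 s of synthetic heart sound at 2 kHz
img = to_bands(wave).log1p().unsqueeze(0)   # (1, 1, 64, frames), log-compressed
img = torch.nn.functional.interpolate(img, size=(224, 224))  # same-size "sonogram" images
logits = model(img.repeat(1, 3, 1, 1))      # AlexNet expects 3-channel 224x224 input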

    Testing for Convolutional Neural Network-based Gait Authentication in Smartphones

    Most online fraud involves identity theft, especially in financial services such as banking, commercial services, or home security. Passwords have always been one of the most reliable and common ways to protect user identities. However, passwords can be guessed or breached. Biometric authentication has emerged as a complementary way to improve security. Nevertheless, biometric factors such as fingerprints or face recognition can also be spoofed. Additionally, those factors require either user interaction (touch to unlock) or additional hardware (a surveillance camera). Therefore, a next level of security with a lower risk of attack and less user friction is needed. Gait authentication is one viable solution, since gait is a signature of the way humans walk, and the analysis can be done passively without any user interaction. Several breakthroughs in model accuracy and efficiency have been reported across state-of-the-art papers; for example, DeepSense reported an accuracy of 0.942±0.032 in human activity recognition and 0.997±0.001 in user identification. Although there has been research focusing on gait analysis recently, there has not been a standardized way to define the proper testing workflow and techniques required to ensure the correctness and efficiency of a gait authentication system, especially at production scale. This thesis presents a general workflow for machine learning (ML) system testing in gait authentication using the V-model, and identifies the areas and components that require testing, including data testing and performance testing for each ML-related component. The thesis also suggests adversarial cases in which the model can fail. A traditional technique, differential testing, is introduced as a testing candidate for gait segmentation. In addition, several metrics and testing ideas are suggested and experimented with. Finally, some interesting findings are reported in the experimental results section, and areas for future work are identified.
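
    To make the differential-testing idea concrete, a reference segmenter and a candidate implementation can be run on the same signals and required to agree; the sketch below is illustrative only, and both segmenters, the threshold, and the synthetic signals are made up for the example.

import numpy as np

def segment_reference(sig, thresh=1.2):
    # Reference segmenter: indices where the signal crosses thresh upward
    above = sig > thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def segment_candidate(sig, thresh=1.2):
    # Candidate implementation under test (e.g., an optimized rewrite)
    return np.array([i for i in range(1, len(sig)) if sig[i] > thresh >= sig[i - 1]])

rng = np.random.default_rng(0)
for _ in range(100):  # 100 randomized differential trials
    sig = 1.0 + 0.5 * np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    assert np.array_equal(segment_reference(sig), segment_candidate(sig))
print("reference and candidate segmenters agree on all trials")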

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential to enable remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety.
    The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than other wearable sensors, which may hinder one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings).
    Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
    The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    A review on visual privacy preservation techniques for active and assisted living

    This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted; these visual privacy preservation techniques are particularly relevant in scenarios involving video-based AAL monitoring. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 861091. The authors would also like to acknowledge the contribution of COST Action CA19121 - GoodBrother, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (https://goodbrother.eu/), supported by COST (European Cooperation in Science and Technology) (https://www.cost.eu/).
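
    As one concrete example of the perceptual-obfuscation category (chosen here for illustration, not taken from the review), the snippet below blurs detected faces with OpenCV before frames are stored or transmitted; the Haar cascade detector and blur kernel size are arbitrary choices.

import cv2

def blur_faces(frame):
    # Detect faces with OpenCV's stock Haar cascade and Gaussian-blur each region
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Usage on a monitoring stream (camera index 0), obfuscating before storage:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# safe_frame = blur_faces(frame)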

    Classifying Unstable and Stable Walking Patterns Using Electroencephalography Signals and Machine Learning Algorithms

    Analyzing unstable gait patterns from electroencephalography (EEG) signals is vital for developing real-time brain-computer interface (BCI) systems that prevent falls and associated injuries. This study investigates the feasibility of classification algorithms for detecting walking instability from EEG signals. A 64-channel Brain Vision EEG system was used to acquire EEG signals from 13 healthy adults. Participants performed walking trials under four different stable and unstable conditions: (i) normal walking, (ii) normal walking with medial-lateral perturbation (MLP), (iii) normal walking with dual-tasking (Stroop), and (iv) normal walking with center-of-mass visual feedback. Digital biomarkers were extracted from the EEG signals using wavelet energies and entropies. Algorithms such as ChronoNet, SVM, random forest, gradient boosting, and recurrent neural networks (LSTM) classified the conditions with 67% to 82% accuracy. The classification results show that it is possible to accurately classify different gait patterns (from stable to unstable) using EEG-based digital biomarkers. This study develops various machine-learning-based classification models using EEG datasets, with potential applications in detecting the neural signals of unsteady gait and intervening to prevent falls and injuries.
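
    For illustration, wavelet energy and entropy biomarkers of the kind described here can be computed per channel with PyWavelets; the wavelet family, decomposition level, and segment length below are assumptions, not the study's settings.

import numpy as np
import pywt

def wavelet_biomarkers(eeg, wavelet="db4", level=4):
    # eeg: (channels, samples). For each channel, use the energy and the Shannon
    # entropy of every wavelet sub-band as digital biomarkers.
    feats = []
    for ch in eeg:
        for band in pywt.wavedec(ch, wavelet, level=level):
            energy = np.sum(band ** 2)
            p = band ** 2 / (energy + 1e-12)   # normalized sub-band power
            feats += [energy, -np.sum(p * np.log2(p + 1e-12))]
    return np.array(feats)

rng = np.random.default_rng(0)
segment = rng.standard_normal((64, 1000))      # synthetic 64-channel EEG segment
print(wavelet_biomarkers(segment).shape)       # 64 channels x 5 sub-bands x 2 = (640,)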
    • 

    corecore