    Low-Cost Sensors and Biological Signals

    Many sensors are now available for under USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors offer metrological performance adequate for everyday-life and clinical applications where gold-standard equipment is too expensive and time-consuming to deploy. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at prediction time intervals ranging from 100 to 500 ms. In addition, we introduce 'stable prediction time' as a distinct metric to quantify prediction efficiency: the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. This distinction between stable prediction time and prediction time is vital, as it underscores our focus on the precision and reliability of mode transition predictions. Experimental results showed Deep-STF's state-of-the-art prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other machine learning techniques, achieving an average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.00%. The average stable prediction times for detecting upcoming transitions ranged from 28.15 to 372.21 ms across the 100-500 ms time advances.
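    The 'stable prediction time' metric lends itself to a compact illustration. Below is a minimal sketch of how such a metric could be computed from a stream of per-window mode predictions; the function name, the consecutive-correctness criterion, and the 10 ms window stride are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: computing a "stable prediction time" from a stream of
# per-window locomotion-mode predictions. The consecutive-correctness rule
# and the 10 ms stride are illustrative assumptions.
from typing import Optional, Sequence

def stable_prediction_time_ms(
    predictions: Sequence[str],    # predicted mode per analysis window, in time order
    true_next_mode: str,           # ground-truth mode after the transition
    transition_idx: int,           # window index of the critical event (task transition)
    window_step_ms: float = 10.0,  # assumed stride between consecutive windows
) -> Optional[float]:
    """Time from the fifth consecutive correct prediction of the upcoming
    mode to the critical event; None if prediction never stabilises."""
    consecutive = 0
    for i in range(transition_idx):
        consecutive = consecutive + 1 if predictions[i] == true_next_mode else 0
        if consecutive == 5:  # fifth correct prediction marks stability onset
            return (transition_idx - i) * window_step_ms
    return None

# Example: the model starts predicting "stair_ascent" 40 windows before the
# transition event at window 100, so stability is reached at window 64.
preds = ["level_walk"] * 60 + ["stair_ascent"] * 40
print(stable_prediction_time_ms(preds, "stair_ascent", transition_idx=100))  # 360.0
```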

    Customizable Wearable Vibrotactile Display for Gait Biofeedback Research

    ME450 Capstone Design and Manufacturing Experience: Winter 2021. Approximately a third of American adults experience balance problems during their lifetime, which can lead to a fear of falling, activity avoidance, and an increasingly sedentary lifestyle. While gait and balance training regimens are the most common therapeutic solution for adults at increased risk of falling, interventions involving personalized biofeedback have been shown to improve standing balance in research studies; however, it remains unclear how best to provide meaningful biofeedback during gait-related activities. Current gait correction systems are limited to providing feedback on a single gait parameter, which cannot capture the full complexity of gait, and commonly use only one feedback scheme/modality. Additionally, many devices cannot provide the wearer with immediate feedback. There is therefore a need for a customizable/reconfigurable wearable device for research settings, to explore the effects of vibrotactile feedback on individuals with vestibular disorders. The device must gather information on multiple kinematic parameters related to gait and provide vibrotactile feedback that the wearer can interpret to correct balance irregularities within each testing trial. Ultimately, this research platform will inform the development of clinic-based and home-based biofeedback systems. Christopher DiCesare, Safa Jabri, Kathleen Sienko: Sienko Research Lab. http://deepblue.lib.umich.edu/bitstream/2027.42/167651/1/Team_7-Customizable_Wearable_Vibrotactile_Display_for_Gait_Biofeedback_Research.pdf
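    As an illustration of the feedback loop such a platform implements, the sketch below maps a single kinematic parameter (medio-lateral trunk tilt from an IMU) to a vibrotactile cue. The dead zone, saturation point, and cueing convention are hypothetical choices, not the team's actual design.

```python
# Hedged sketch: mapping trunk tilt to a vibrotactile cue. All thresholds and
# the "cue on the side to correct toward" convention are assumptions.
def tilt_to_vibration(tilt_deg: float, dead_zone_deg: float = 2.0,
                      saturation_deg: float = 10.0) -> tuple[str, float]:
    """Return (motor side, duty cycle in [0, 1]) for a given tilt angle.

    Positive tilt = leaning right; the cue is delivered on the side the
    wearer should correct toward, one common convention in biofeedback work.
    """
    if abs(tilt_deg) <= dead_zone_deg:
        return ("none", 0.0)  # within the acceptable envelope: no feedback
    side = "left" if tilt_deg > 0 else "right"
    # Linear intensity ramp between the dead zone and saturation.
    excess = min(abs(tilt_deg), saturation_deg) - dead_zone_deg
    return (side, round(excess / (saturation_deg - dead_zone_deg), 2))

print(tilt_to_vibration(6.5))  # ('left', 0.56)
```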

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs that take such biosignals as inputs to control various applications. This survey reviews two decades of literature on biosignal-based HMIs for assistance and rehabilitation, outlining the state of the art and identifying emerging technologies and potential future research trends. PubMed and other databases were searched using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and 144 journal papers and 37 conference papers were ultimately included. Four macrocategories were used to classify the biosignals employed for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified by target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. Studies focusing on robotic control, prosthetic control, and gesture recognition increased moderately over the last decade, whereas studies on the other targets grew only slightly. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could yield higher performance; however, they also increase HMI complexity, so their usefulness should be carefully evaluated for each specific application.

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements in two-way interactions with extended realities (XR) are contributing toward a qualitative leap into XR ecosystems that are efficient, user-friendly, and widely adopted. However, multiple barriers stand in the way of XR's omnipresence, among them the computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces enabling thought-only hands-free interaction, myoelectric input methods that track body gestures via electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems through natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview covering (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.

    Intent sensing for assistive technology

    This thesis develops systems for intent sensing: the measurement and prediction of what a user wants to happen. Sensing intent could be hugely beneficial for the control of assistive devices and could make a great impact on the wider medical device industry. A literature review first establishes the current state of the art in intent sensing and identifies that a holistic intent sensing system that properly captures all aspects of intent has not yet been developed; the thesis therefore develops such a novel system. To achieve this, algorithms are developed to combine multiple sensors into a modular Probabilistic Sensor Network. The performance of such a network is modelled mathematically, and these models are tested and verified on real data. The resulting intent sensing system is tested with sensing modalities including electromyography (EMG), motion data from inertial measurement units (IMUs), and audio. The benefits of this modular construction are demonstrated: improved accuracy for a fixed amount of training data, and robustness to sensor unavailability, a common problem in prosthetics, where sensor lift-off from the skin is a frequent issue. The algorithm is first developed to classify intent after activity completion and is then extended to run in real time. Different classification methods are proposed and tested, including k-nearest neighbours (KNN), before deep learning is selected as an effective classifier for this task. To apply deep learning without requiring a prohibitively large training data set, a time-segmentation method is developed to limit model complexity and make better use of the available data. Finally, the techniques developed in the thesis are combined into a single continuous, multi-modal intent sensing system that is modular in both sensor composition and time. At every stage, the algorithms are tested against real data, initially from non-disabled volunteer participants and, in the later chapters, from patients with Parkinson's disease (a group who may benefit greatly from intent sensing). The final system achieves an accuracy of 97.4% almost immediately after activity inception, rising to 99.9918% over the course of the activity. This high accuracy is seen in both the patient group and the control group, demonstrating that intent sensing is viable with currently available technology and should be developed into future control systems for assistive devices to improve quality of life for disabled and non-disabled users alike.
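    To make the modular-fusion idea concrete, here is a minimal naive-Bayes-style combiner in the spirit of the Probabilistic Sensor Network described above: each available sensor contributes an independent posterior over candidate intents, and an unavailable sensor (e.g. EMG lift-off) simply drops out of the product. Names and numbers are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch: fusing per-sensor intent posteriors under a conditional-
# independence assumption, tolerating missing sensors. Illustrative only.
from typing import Dict, Optional
import numpy as np

def fuse_intent_posteriors(
    sensor_posteriors: Dict[str, Optional[np.ndarray]],  # None = sensor unavailable
    prior: np.ndarray,                                   # P(intent) over candidate intents
) -> np.ndarray:
    """Combine the available sensors' P(intent | signal) estimates."""
    log_post = np.log(prior)
    for _, p in sensor_posteriors.items():
        if p is None:
            continue  # modular: a dropped sensor contributes nothing
        # Divide the prior back out so it is counted exactly once overall.
        log_post += np.log(p) - np.log(prior)
    post = np.exp(log_post - log_post.max())  # stabilise before normalising
    return post / post.sum()

# Three candidate intents; the audio channel is unavailable in this trial.
prior = np.array([0.5, 0.3, 0.2])
fused = fuse_intent_posteriors(
    {"emg": np.array([0.7, 0.2, 0.1]),
     "imu": np.array([0.6, 0.3, 0.1]),
     "audio": None},
    prior,
)
print(fused)  # fused P(intent | available sensors)
```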

    Exploring the Application of Wearable Movement Sensors in People with Knee Osteoarthritis

    People with knee osteoarthritis have difficulty with functional activities such as walking or getting into/out of a chair. This thesis explored the clinical relevance of biomechanics and how wearable sensor technology may be used to assess how people move when their clinician cannot directly observe them, such as at home or work. The findings suggest that artificial intelligence can be used to process sensor data to provide clinically important information about how people perform troublesome activities.
