
    Differentiation of Patients with Balance Insufficiency (Vestibular Hypofunction) versus Normal Subjects Using a Low-Cost Small Wireless Wearable Gait Sensor

    Balance disorders present a significant healthcare burden due to the potential for hospitalization or complications, especially among the elderly, when intangible losses such as quality of life, morbidity, and mortality are considered. This work continues our earlier studies, examining feature extraction methodology on Dynamic Gait Index (DGI) tests and machine learning classifiers to differentiate patients with balance problems from normal subjects in an expanded cohort of 60 patients. All data were obtained using our custom-designed low-cost wireless gait analysis sensor (WGAS), containing a basic inertial measurement unit (IMU), worn by each subject during the DGI tests. The raw gait data are wirelessly transmitted from the WGAS for real-time gait data collection and analysis. We demonstrate predictive classifiers that achieve high accuracy, sensitivity, and specificity in distinguishing abnormal from normal gaits. These results show that gait data collected from our very low-cost wearable wireless gait sensor can effectively differentiate patients with balance disorders from normal subjects in real time using various classifiers. Our ultimate goal is to use a remote sensor such as the WGAS to accurately stratify an individual's risk for falls.
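The abstract does not specify the extracted features or the classifiers used. As an illustrative sketch only (synthetic data, an assumed window shape, and an SVM as a stand-in for the authors' actual classifiers), the general pipeline of windowed IMU features feeding a supervised classifier might look like:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(window):
    """Simple statistical features from one window of 3-axis accelerometer data.

    window: (n_samples, 3) array of raw IMU readings. The feature set here is
    a generic assumption, not the paper's actual methodology.
    """
    return np.concatenate([
        window.mean(axis=0),                           # mean per axis
        window.std(axis=0),                            # variability per axis
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # mean jerk proxy per axis
    ])

# Synthetic stand-in data: 60 subjects, 200 samples per DGI trial, 3 axes.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(60, 200, 3))
y = rng.integers(0, 2, size=60)  # 0 = normal, 1 = balance-impaired (labels synthetic)

X = np.array([extract_features(w) for w in X_raw])
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(scores.mean())
```

With real labeled gait data in place of the synthetic arrays, the same structure supports swapping in any of the various classifiers the paper evaluates.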

    Gait rehabilitation monitor

    This work presents a simple, non-intrusive, affordable wearable mobile framework that allows doctors and physiotherapists to remotely monitor patients during gait rehabilitation. The system includes a set of two Bluetooth-enabled Shimmer3 9DoF Inertial Measurement Units (IMUs) from Shimmer and an Android smartphone for data collection, preliminary processing, and persistence in a local database. Algorithms with low computational load, based on Euler angles and on accelerometer, gyroscope, and magnetometer signals, were developed and used to classify and identify several gait disturbances. These algorithms include alignment of the IMU sensor data to a common temporal reference, as well as heel-strike and stride detection algorithms that help segment the remotely collected signals, identify gait strides, and extract relevant features to feed, train, and test a classifier that predicts gait abnormalities during gait sessions. Drivers from the Shimmer manufacturer connect the app to the set of IMUs over Bluetooth. The developed app allows users to collect data and train a classification model for identifying abnormal and normal gait types. The system provides a REST API on a backend server, along with Java and Python libraries and a PostgreSQL database. The machine learning is supervised, using the Extremely Randomized Trees method. Frequency-, time-, and time-frequency-domain features were extracted from the collected and processed signals to train the classifier. To test the framework, a set of abnormal and normal gaits was used to train a model and evaluate the classifier.
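The abstract names the Extremely Randomized Trees method trained on per-stride time- and frequency-domain features, but not the exact feature set. A minimal sketch under those assumptions (synthetic strides, an illustrative feature list) could look like:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def stride_features(stride):
    """Time- and frequency-domain features for one segmented stride.

    stride: 1-D array of one IMU channel over a single stride.
    The specific features below are illustrative; the thesis does not list
    its exact feature set here.
    """
    spectrum = np.abs(np.fft.rfft(stride))
    return np.array([
        stride.mean(),                  # time domain: mean level
        stride.std(),                   # time domain: variability
        stride.max() - stride.min(),    # time domain: range
        float(spectrum.argmax()),       # frequency domain: dominant bin
        spectrum.sum(),                 # frequency domain: total energy
    ])

rng = np.random.default_rng(1)
strides = rng.normal(size=(100, 128))  # 100 segmented strides, 128 samples each
labels = rng.integers(0, 2, size=100)  # normal vs. abnormal gait (synthetic)

X = np.array([stride_features(s) for s in strides])
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```

Extremely Randomized Trees differ from ordinary random forests in that split thresholds are drawn at random rather than optimized, which keeps training cheap — consistent with the framework's low-computational-load goal.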

    Multimodal radar sensing for ambient assisted living

    Data acquired from health and behavioural monitoring of daily life activities can be exploited to provide real-time medical and nursing services at affordable cost and higher efficiency. A variety of sensing technologies for this purpose have been developed and presented in the literature: wearable IMUs (Inertial Measurement Units) to measure a person's acceleration and angular speed, cameras to record images or video sequences, PIR (pyroelectric infrared) sensors to detect a person's presence via the pyroelectric effect, and radar to estimate a person's distance and radial velocity. Each sensing technology has pros and cons and may not be optimal for every task. It is possible to leverage the strengths of all these sensors through information fusion in a multimodal fashion. Fusion can take place at three different levels: i) signal level, where commensurate data are combined; ii) feature level, where the feature vectors of different sensors are concatenated; and iii) decision level, where the confidence levels or prediction labels of classifiers are used to generate a new output. For each level there are different fusion algorithms; the key challenge lies in choosing the best existing fusion algorithm and in developing novel fusion algorithms better suited to the application at hand. The fundamental contribution of this thesis is therefore exploring possible information fusion between radar, primarily FMCW (Frequency Modulated Continuous Wave) radar, and wearable IMUs; between distributed radar sensors; and between UWB (ultra-wideband) impulse radar and a pressure sensor array. The objective is to sense and classify daily activity patterns, gait styles, and micro-gestures, as well as to produce early warnings of high-risk events such as falls. Initially, only "snapshot" activities (a single activity within a short X-s measurement) were collected and analysed to verify the accuracy improvement due to information fusion. Then continuous activities (activities performed one after another with random durations and transitions) were collected to simulate real-world scenarios. To overcome the drawbacks of the conventional sliding-window approach on continuous data, a Bi-LSTM (Bidirectional Long Short-Term Memory) network is proposed to identify the transitions between daily activities. Meanwhile, a hybrid fusion framework is presented to exploit the power of soft and hard fusion. Moreover, a trilateration-based signal-level fusion method has been successfully applied to the range information of three UWB impulse radars, with results comparable to using micro-Doppler signatures at a much lower computational cost. For classifying "snapshot" activities, fusion between radar and wearable sensors shows approximately 12% higher accuracy than radar alone, whereas for classifying continuous activities and gaits, our proposed hybrid fusion and trilateration-based signal-level fusion improve accuracy by roughly 6.8 percentage points (from 89% to 95.8%) and 7.3 percentage points (from 85.4% to 92.7%), respectively.
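The trilateration idea above — fusing the range readings of three radars at the signal level to recover a position — can be sketched generically. The linearized least-squares formulation below is a standard textbook approach, not necessarily the thesis's exact method:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from range measurements to three known anchors.

    Subtracting the first range equation |p - a_0|^2 = r_0^2 from the others
    cancels the quadratic |p|^2 term and yields the linear system
        2*(a_i - a_0) . p = (r_0^2 - r_i^2) + (|a_i|^2 - |a_0|^2),
    solved here by least squares. Illustrative version of signal-level fusion
    of UWB radar ranges; the thesis's formulation may differ.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2) + (anchors[1:]**2).sum(axis=1) - (a0**2).sum()
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three radars at known positions; simulated target at (2, 1).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
target = np.array([2.0, 1.0])
ranges = [np.linalg.norm(target - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # recovers approximately [2. 1.]
```

Because only per-radar range profiles are needed (no micro-Doppler processing), this style of fusion is much cheaper computationally, which matches the trade-off reported in the abstract.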

    Human Activity Recognition and Control of Wearable Robots

    Wearable robotics has gained huge popularity in recent years due to its wide applications in rehabilitation, military, and industrial fields. Weakness of the skeletal muscles in the aging population, and neurological injuries such as stroke and spinal cord injury, seriously limit these individuals' ability to perform daily activities. There is therefore increasing interest in developing wearable robots to assist the elderly and patients with disabilities with motion assistance and rehabilitation. In the military and industrial sectors, wearable robots can increase the productivity of workers and soldiers. It is important for wearable robots to maintain smooth interaction with the user while operating in complex environments with minimum effort from the user. Recognizing the user's activities, such as walking or jogging, in real time therefore becomes essential for providing appropriate assistance based on the activity. This dissertation proposes two real-time human activity recognition algorithms, the intelligent fuzzy inference (IFI) algorithm and the amplitude omega (Aω) algorithm, to identify human activities, i.e., stationary and locomotion activities. The IFI algorithm uses knee angle and ground contact force (GCF) measurements from four inertial measurement units (IMUs) and a pair of smart shoes, whereas the Aω algorithm is based on thigh angle measurements from a single IMU. This dissertation also addresses the problem of online tuning of virtual impedance for an assistive robot based on real-time gait and activity measurement data, to personalize assistance for different users. An automatic impedance tuning (AIT) approach is presented for a knee assistive device (KAD), in which the IFI algorithm provides real-time activity measurements. This dissertation also proposes an adaptive oscillator method, the amplitude omega adaptive oscillator (AωAO) method, for HeSA (hip exoskeleton for superior augmentation) to provide bilateral hip assistance during human locomotion activities. The Aω algorithm is integrated into the adaptive oscillator method to make the approach robust across different locomotion activities. Experiments were performed on healthy subjects to validate the efficacy of the human activity recognition algorithms and control strategies proposed in this dissertation. Both activity recognition algorithms exhibited high classification accuracy with low update times. The results of AIT demonstrated that the KAD assistive torque was smoother and the EMG signal of the vastus medialis was reduced, compared to constant-impedance and finite state machine approaches. The AωAO method showed real-time learning of locomotion activity signals for three healthy subjects wearing HeSA. To understand the influence of assistive devices on the inherent dynamic gait stability of the human, a stability analysis was performed, using stability metrics derived from dynamical systems theory to evaluate unilateral knee assistance applied to healthy participants. Doctoral Dissertation, Aerospace Engineering, 201
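The abstract does not give the AωAO equations. A generic adaptive frequency oscillator of the Hopf type — a standard building block for this kind of exoskeleton synchronization, used here purely as an illustrative stand-in with assumed gains and a synthetic thigh-angle signal — adapts its intrinsic frequency toward that of a periodic input:

```python
import numpy as np

def adaptive_frequency_oscillator(signal, dt, omega0, eps=1.0, gamma=8.0, mu=1.0):
    """Hopf-type adaptive frequency oscillator (Euler-integrated).

    Entrains its state (x, y) to a periodic input F and adapts its intrinsic
    frequency `omega` toward the input's frequency. Generic sketch in the
    spirit of adaptive-oscillator gait assistance; the dissertation's AωAO
    formulation and gains are not reproduced here.
    """
    x, y, omega = 1.0, 0.0, omega0
    for F in signal:
        r = np.hypot(x, y)                               # oscillator amplitude
        dx = gamma * (mu - r**2) * x - omega * y + eps * F
        dy = gamma * (mu - r**2) * y + omega * x
        domega = -eps * F * y / r                        # frequency adaptation law
        x, y, omega = x + dx * dt, y + dy * dt, omega + domega * dt
    return omega

dt = 1e-3
t = np.arange(0, 100, dt)
thigh_angle = np.sin(2 * np.pi * 1.0 * t)  # synthetic 1 Hz "thigh angle" signal
omega = adaptive_frequency_oscillator(thigh_angle, dt, omega0=5.0)
print(omega)  # drifts from 5.0 toward 2*pi ~ 6.28 rad/s
```

Once the oscillator locks onto the gait frequency, its phase provides a continuous estimate of where the wearer is in the stride, which is what lets an exoskeleton time its assistive torque.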

    Sensing via signal analysis, analytics, and cyberbiometric patterns

    Includes bibliographical references. 2022 Fall. Internet-connected, or Internet of Things (IoT), sensor technologies have been increasingly incorporated into everyday technology and processes. Their functions are situationally dependent; they have been used for vital recordings such as electrocardiograms, gait analysis and step counting, fall detection, and environmental analysis. For instance, environmental sensors, which exist across various technologies, are used to monitor numerous domains, including but not limited to pollution, water quality, and the presence of biota. Past research into IoT sensors has varied by technology. For instance, previous environmental gas sensor IoT research has focused on (i) the development of these sensors for increased sensitivity and longer lifetimes, (ii) integration of these sensors into sensor arrays to combat cross-sensitivity and background interferences, and (iii) sensor network development, including communication between widely dispersed sensors in large-scale environments. IoT inertial measurement units (IMUs), such as accelerometers and gyroscopes, have been researched for gait analysis, movement detection, and gesture recognition, often in relation to human-computer interfaces (HCI). Methods of IoT device feature-based pattern recognition for machine learning (ML) and artificial intelligence (AI) are frequently investigated as well, including primitive classification methods and deep learning techniques. This research gives insight into each of these topics individually, e.g., using a specific sensor technology to detect carbon monoxide in an indoor environment, or using accelerometer readings for gesture recognition. Less research has been performed on analyzing the systems aspects of the IoT sensors themselves.
However, an important part of attaining overall situational awareness is authenticating the surroundings, which in the case of IoT means the individual sensors, humans interacting with the sensors, and other elements of the surroundings. There is a clear opportunity for the systematic evaluation of the identity and performance of an IoT sensor/sensor array within a system that is to be utilized for "full situational awareness". This awareness may include (i) non-invasive diagnostics (i.e., what is occurring inside the body), (ii) exposure analysis (i.e., what has gone into the body through both respiratory and eating/drinking pathways), and (iii) potential risk of exposure (i.e., what the body is exposed to environmentally). Simultaneously, the system has the capability to harbor security measures through the same situational assessment in the form of multiple levels of biometrics. Through the interconnective abilities of the IoT sensors, it is possible to integrate these capabilities into one portable, hand-held system. The system will exist within a "magic wand", which will be used to collect the various data needed to assess the environment of the user, both inside and outside of their bodies. The device can also be used to authenticate the user, as well as the system components, to discover potential deception within the system. This research introduces levels of biometrics for various scenarios through the investigation of challenge-based biometrics; that is, biometrics based upon how the sensor, user, or subject of study responds to a challenge. These will be applied to multiple facets surrounding "situational awareness" for living beings, non-human beings, and non-living items or objects (which we have termed "abiometrics"). Gesture recognition for intent of sensing was first investigated as a means of deliberate activation of sensors/sensor arrays for situational awareness while providing a level of user authentication through biometrics. 
Equine gait analysis was examined next, and the level of injury in the lame limbs of horses was quantitatively measured and classified using data from IoT sensors. Finally, a method of evaluating the identity and health of a sensor/sensor array was examined through different challenges to their environments.