
    Is the timed-up and go test feasible in mobile devices? A systematic review

    The number of older adults is increasing worldwide, and it is expected that by 2050 over 2 billion individuals will be more than 60 years old. Older adults are exposed to numerous pathological problems such as Parkinson’s disease, amyotrophic lateral sclerosis, post-stroke conditions, and orthopedic disturbances. Several physiotherapy methods that involve the measurement of movements, such as the Timed-Up and Go test, can support efficient and effective evaluation of pathological symptoms and the promotion of health and well-being. In this systematic review, the authors aim to determine how the inertial sensors embedded in mobile devices are employed for the measurement of the different parameters involved in the Timed-Up and Go test. The main contribution of this paper is the identification of the different studies that utilize the sensors available in mobile devices to measure the results of the Timed-Up and Go test. The results show that the motion sensors embedded in mobile devices can be used for these types of studies, and the most commonly used sensors are the magnetometer, accelerometer, and gyroscope available in off-the-shelf smartphones. The features analyzed in this paper are categorized as quantitative, quantitative + statistic, dynamic balance, gait properties, state transitions, and raw statistics. These features rely on the accelerometer and gyroscope sensors and facilitate the recognition of daily activities, accidents such as falls, and some diseases, as well as the measurement of the subject's performance during test execution.
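    As an illustration of how a smartphone's inertial sensors could support such a test, the sketch below estimates the Timed-Up and Go duration from accelerometer magnitude. The resting threshold and sampling rate are hypothetical values chosen for illustration and would need tuning per device; the reviewed studies use more elaborate feature pipelines.

```python
import numpy as np

def tug_duration(acc, fs=100.0, rest_thresh=0.5):
    """Estimate the Timed-Up and Go duration in seconds from a
    gravity-removed accelerometer trace.

    acc: (N, 3) array in m/s^2; rest_thresh (hypothetical) separates
    movement from quiet sitting/standing.
    """
    mag = np.linalg.norm(acc, axis=1)           # per-sample movement magnitude
    active = np.flatnonzero(mag > rest_thresh)  # samples above resting level
    if active.size == 0:
        return 0.0
    return (active[-1] - active[0]) / fs        # span of the active interval
```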

    Development of a real-time classifier for the identification of the Sit-To-Stand motion pattern

    The Sit-to-Stand (STS) movement has significant importance in clinical practice, since it is an indicator of lower limb functionality. As an optimal trade-off between cost and accuracy, accelerometers have recently been used to synchronously recognise the STS transition in various Human Activity Recognition-based tasks. However, beyond the mere identification of the entire action, a major challenge remains the recognition of clinically relevant phases inside the STS motion pattern, due to the intrinsic variability of the movement. This work presents the development process of a deep-learning model aimed at recognising specific clinically valid phases in the STS, relying on a pool of 39 young and healthy participants performing the task under self-paced (SP) and controlled speed (CT) conditions. The movements were registered using a total of 6 inertial sensors, and the accelerometric data was labelled into four sequential STS phases according to the ground reaction force (GRF) profiles acquired through a force plate. The optimised architecture combined convolutional and recurrent neural networks into a hybrid approach and was able to correctly identify the four STS phases, both under SP and CT movements, relying on the single sensor placed on the chest. The overall accuracy estimate (median [95% confidence intervals]) for the hybrid architecture was 96.09 [95.37–96.56] in SP trials and 95.74 [95.39–96.21] in CT trials. Moreover, the prediction delays (4533 ms) were compatible with the temporal characteristics of the dataset, sampled at 10 Hz (100 ms). These results support the implementation of the proposed model in the development of digital rehabilitation solutions able to synchronously recognise the STS movement pattern, with the aim of effectively evaluating and correcting its execution.
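    The GRF-based labelling step described above can be sketched as follows. The event rules (onset above 105% of body weight, peak loading, return to near body weight) are simplified assumptions for illustration, not the study's exact criteria.

```python
import numpy as np

def label_sts_phases(grf, bw):
    """Label each sample of a vertical GRF profile with one of four
    sequential sit-to-stand phases (0-3). The thresholds are
    hypothetical and chosen only to illustrate event-based labelling."""
    grf = np.asarray(grf, dtype=float)
    onset = int(np.argmax(grf > 1.05 * bw))                  # GRF leaves quiet sitting
    peak = int(np.argmax(grf))                               # peak loading / seat-off
    settle = peak + int(np.argmax(grf[peak:] < 1.02 * bw))   # back near body weight
    labels = np.zeros(len(grf), dtype=int)
    labels[onset:peak] = 1
    labels[peak:settle] = 2
    labels[settle:] = 3
    return labels
```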

    Deep HMResNet Model for Human Activity-Aware Robotic Systems

    Endowing robotic systems with cognitive capabilities for recognizing the daily activities of humans is an important challenge, which requires sophisticated and novel approaches. Most of the proposed approaches explore pattern recognition techniques which are generally based on hand-crafted or learned features. In this paper, a novel Hierarchical Multichannel Deep Residual Network (HMResNet) model is proposed for robotic systems to recognize daily human activities in ambient environments. The introduced model comprises multilevel fusion layers. The proposed Multichannel 1D Deep Residual Network model is, at the feature level, combined with a Bottleneck MLP neural network to automatically extract robust features regardless of the hardware configuration and, at the decision level, is fully connected with an MLP neural network to recognize daily human activities. Empirical experiments on real-world datasets and an online demonstration are used for validating the proposed model. Results demonstrate that the proposed model outperforms the baseline models in daily human activity recognition. Presented at AI-HRI AAAI-FSS, 2018 (arXiv:1809.06606).
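    The core building block of a 1D deep residual network — two convolutions plus an identity shortcut — can be sketched in plain NumPy. This is a heavily simplified single-channel illustration of the residual idea, not the HMResNet implementation.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded single-channel 1-D convolution (illustration only)."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

def residual_block(x, w1, w2):
    """Minimal 1-D residual block: conv -> ReLU -> conv, then add the
    identity shortcut and apply a final ReLU."""
    h = np.maximum(conv1d(x, w1), 0.0)  # first conv + ReLU
    h = conv1d(h, w2)                   # second conv
    return np.maximum(x + h, 0.0)       # shortcut connection + ReLU
```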

    Exploring the Application of Wearable Movement Sensors in People with Knee Osteoarthritis

    People with knee osteoarthritis have difficulty with functional activities, such as walking or getting into/out of a chair. This thesis explored the clinical relevance of biomechanics and how wearable sensor technology may be used to assess how people move when their clinician is unable to directly observe them, such as at home or work. The findings of this thesis suggest that artificial intelligence can be used to process data from sensors to provide clinically important information about how people perform troublesome activities.

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive human ambulation, activity, and wellness monitoring, together with data analysis using various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have resulted in a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive study of the literature in the area in order to identify research gaps and prioritize future research directions.

    Synthetic Sensor Data for Human Activity Recognition

    Human activity recognition (HAR) based on wearable sensors has emerged as an active topic of research in machine learning and human behavior analysis because of its applications in several fields, including health, security and surveillance, and remote monitoring. Machine learning algorithms are frequently applied in HAR systems to learn from labeled sensor data. The effectiveness of these algorithms generally relies on access to large amounts of accurately labeled training data. But labeled data for HAR is hard to come by and is often heavily imbalanced in favor of one or other dominant classes, which in turn leads to poor recognition performance. In this study we introduce a generative adversarial network (GAN)-based approach for HAR that we use to automatically synthesize balanced and realistic sensor data. GANs are robust generative networks, typically used to create synthetic images that cannot be distinguished from real images. Here we explore and construct a model for generating several types of human activity sensor data using a Wasserstein GAN (WGAN). We assess the synthetic data using two commonly used classifier models, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. We evaluate the quality and diversity of the synthetic data by training on synthetic data and testing on real sensor data, and vice versa. We then use the synthetic sensor data to oversample the imbalanced training set. We demonstrate the efficacy of the proposed method on two publicly available human activity datasets, the Sussex-Huawei Locomotion (SHL) and Smoking Activity Dataset (SAD). Training with WGAN-augmented data improves on the imbalanced case for both SHL (0.85 to 0.95 F1-score) and SAD (0.70 to 0.77 F1-score) when using a CNN activity classifier.
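    As a minimal sketch of the WGAN objective used to synthesize sensor data: the critic maximises the mean score gap between real and fake samples, and the original WGAN enforces its Lipschitz constraint by weight clipping (gradient-penalty variants also exist). These few lines show only the loss arithmetic, not a full training loop.

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    """Negative Wasserstein estimate: the critic minimises this,
    i.e. maximises E[f(real)] - E[f(fake)]."""
    return -(np.mean(real_scores) - np.mean(fake_scores))

def generator_loss(fake_scores):
    """The generator tries to raise the critic's score on fakes."""
    return -np.mean(fake_scores)

def clip_weights(weights, c=0.01):
    """Original-WGAN weight clipping to keep the critic ~Lipschitz."""
    return [np.clip(w, -c, c) for w in weights]
```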

    IoT in smart communities, technologies and applications.

    The Internet of Things is a system that integrates different devices and technologies, removing the necessity of human intervention. This enables the capacity of having smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living, increased comfort and productivity for citizens. The Internet of Things (IoT) for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, as well as the challenges that deployment of IoT systems for smart cities encounters and which need to be addressed for ubiquitous use of smart city applications. It also presents a coverage of optimization methods and applications from a smart city perspective enabled by the Internet of Things. Towards this end, a mapping is provided for the most encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization and differential evolution. For each application identified, the algorithms used, objectives considered, the nature of the formulation and the constraints taken into account have been specified and discussed. Lastly, the data setup used by each covered work is also mentioned and directions for future work have been identified. Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications.
    In particular, inertial sensor based systems have become increasingly popular because they do not restrict users’ movement and are also relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly aging world population and an inclination by the elderly to live alone, the need to incorporate dependable fall detection schemes in smart devices such as phones and watches has gained momentum. Therefore, differentiating between falls and activities of daily living (ADLs) has been the focus of researchers in recent years, with very good results. However, one aspect within fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls in people and notify medical personnel, it could be of added value to health professionals tending to a patient suffering from a fall to know the nature of the accident. In this regard, as a case study for smart health, four different experiments have been conducted for the task of fall detection with direction and severity consideration on two publicly available datasets. These experiments not only tackle the problem at an increasingly complicated level (the first considers a fall-only scenario and the others a combined activity-of-daily-living and fall scenario) but also present methodologies that outperform state-of-the-art techniques, as discussed. Lastly, future recommendations have also been provided for researchers.
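    A minimal threshold-based sketch of direction-aware fall detection from a tri-axial accelerometer follows. The impact threshold and axis conventions are hypothetical and far simpler than the learned models the work evaluates; it only illustrates attaching a direction estimate to a detected impact.

```python
import numpy as np

def detect_fall(acc, impact_g=2.5):
    """Crude direction-aware fall detector (illustrative thresholds).

    acc: (N, 3) array in g, columns assumed to be [x, y, z] with x the
    forward axis and y the lateral axis. Returns (fell, direction),
    where direction is read from the dominant horizontal axis at impact.
    """
    mag = np.linalg.norm(acc, axis=1)
    i = int(np.argmax(mag))            # candidate impact sample
    if mag[i] < impact_g:
        return False, None
    x, y, _ = acc[i]
    if abs(x) >= abs(y):
        return True, "forward" if x > 0 else "backward"
    return True, "left" if y > 0 else "right"
```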

    The Development of an assistive chair for elderly with sit to stand problems

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Standing up from a seated position, known as the sit-to-stand (STS) movement, is one of the most frequently performed activities of daily living (ADLs). However, older adults often encounter STS issues owing to declined motor function and reduced sensory capacity for postural control. The motivation for this work is rooted in the observation that the STS assistive devices currently on the market lack genuine interaction with elderly users. Prior to the software implementation, a robot chair platform with an integrated sensing footmat was developed with the STS biomechanics of the elderly in mind. The work places its main emphasis on recognising personalised behavioural patterns from elderly users’ STS movements, namely STS intentions and personalised STS feature prediction. The former is known as intention recognition while the latter is defined as assistance prediction, both achieved by innovative machine learning techniques. The proposed intention recognition performs well in multiple-subject scenarios with different postures involved, thanks to its ability to handle these uncertainties. To provide the assistance needed by the elderly user, a time series prediction model is presented, aiming to configure the personalised ground reaction force (GRF) curve over time that characterises a successful movement. This enables the computation of deficits between the predicted oncoming GRF curve and the personalised one. A multiple-steps-ahead prediction into the future is also implemented so that the completion time of actuation in reality is taken into account.
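    The multiple-steps-ahead idea can be illustrated with a simple autoregressive model that feeds its own predictions back in to extend the forecast horizon. The thesis uses a learned time-series model, so this is only a sketch of the rollout mechanism, not its method.

```python
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of a simple autoregressive (AR) model:
    predict each sample from the `order` samples before it."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, coef, steps):
    """Multiple-steps-ahead rollout: each prediction is appended to the
    history and used as input for the next step."""
    hist = list(series)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(hist[-len(coef):], coef))
        out.append(nxt)
        hist.append(nxt)
    return out
```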