
    Comparing clothing-mounted sensors with wearable sensors for movement analysis and activity classification

    Inertial sensors are a useful instrument for long-term monitoring in healthcare. In many cases, inertial sensor devices can be worn as an accessory or integrated into smart textiles. In some situations, it may be beneficial to have data from multiple inertial sensors rather than relying on a single worn sensor, since this may increase the accuracy of the analysis and better tolerate sensor errors. Integrating multiple sensors into clothing improves the feasibility and practicality of wearing multiple devices every day, in approximately the same location, with less likelihood of incorrect sensor orientation. To facilitate this, the current work investigates the consequences of attaching lightweight sensors to loose clothes. This paper discusses how data from these clothing sensors compare with similarly placed body-worn sensors, with additional consideration of the resulting effects on activity recognition. The study compares the similarity between the two signals (body-worn and clothing), collected from three different clothing types (slacks, pencil skirt and loose frock) across multiple daily activities (walking, running, sitting, and riding a bus), by calculating correlation coefficients for each sensor pair. Even though the two data streams are clearly different from each other, the results indicate good potential for achieving high classification accuracy when using inertial sensors in clothing.
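
    As an illustration of the signal-similarity step described above, the sketch below computes per-axis Pearson correlation coefficients between a body-worn and a clothing-mounted tri-axial accelerometer stream. The array names and the synthetic data are assumptions for illustration only; the paper does not publish its code.

```python
# A minimal sketch (not the authors' code): compare a body-worn and a
# clothing-mounted tri-axial accelerometer stream via per-axis Pearson
# correlation. Array names and the synthetic data are assumptions.
import numpy as np
from scipy.stats import pearsonr

def axis_correlations(body_worn: np.ndarray, clothing: np.ndarray) -> list[float]:
    """Pearson r for each axis of two time-aligned (N, 3) signals."""
    assert body_worn.shape == clothing.shape and body_worn.shape[1] == 3
    return [pearsonr(body_worn[:, axis], clothing[:, axis])[0] for axis in range(3)]

# Synthetic example: the clothing signal is modelled as an attenuated, noisier
# copy of the body-worn signal, loosely mimicking loose-fabric artefacts.
rng = np.random.default_rng(0)
body = rng.standard_normal((1000, 3))
cloth = 0.8 * body + 0.3 * rng.standard_normal((1000, 3))
print(axis_correlations(body, cloth))  # roughly 0.93-0.94 per axis
```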

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods for real-time, unobtrusive monitoring of human ambulation, activity, and wellness, together with algorithmic techniques for analysing the resulting data, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive survey of the literature in the area in order to identify research gaps and prioritize future research directions.

    User indoor localisation system enhances activity recognition: A proof of concept

    Older people would like to live independently in their own homes for as long as possible. However, polypharmacy, physical weakness and mental illness can increase the risk of domestic accidents (e.g. a fall). Changes in the behaviour of healthy older people can also be correlated with cognitive disorders; consequently, early intervention could delay the deterioration of the disease. Over the last few years, activity recognition systems have been developed to support the management of senior citizens' daily life. In this context, this paper aims to go beyond the state of the art by presenting a proof of concept in which information on body movement, vital signs and the user's indoor location is aggregated to improve the activity recognition task. The presented system has been tested in a realistic environment with three users in order to assess the feasibility of the proposed method. The results encourage the use of this approach in activity recognition applications; indeed, the overall accuracy values, amongst others, increase satisfactorily (+2.67% DT, +7.39% SVM, +147.37% NN).
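
    A minimal sketch of the general idea of fusing indoor-location information with inertial features: a one-hot location indicator is concatenated to the IMU feature vector, and two of the classifiers mentioned above (a Decision Tree and an SVM) are evaluated with and without it. The feature layout and synthetic data are hypothetical, not the paper's pipeline.

```python
# A minimal sketch (hypothetical, not the paper's pipeline): concatenate a
# one-hot indoor-location indicator to the inertial feature vector and compare
# a Decision Tree and an SVM with and without it. Data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 600
imu_features = rng.standard_normal((n, 12))       # e.g. per-axis window statistics
room = rng.integers(0, 4, size=n)                 # four hypothetical indoor zones
labels = (room + rng.integers(0, 2, size=n)) % 4  # activities loosely tied to location

fused = np.hstack([imu_features, np.eye(4)[room]])  # IMU features + one-hot location

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)), ("SVM", SVC())]:
    base = cross_val_score(clf, imu_features, labels, cv=5).mean()
    with_loc = cross_val_score(clf, fused, labels, cv=5).mean()
    print(f"{name}: {base:.2f} -> {with_loc:.2f}")  # location should help here
```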

    Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist

    In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions of predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data were collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making a cup of tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
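
    The sketch below illustrates the orientation-transition idea in simplified form: each accelerometer sample is quantised to the axis most aligned with gravity, and transitions between these standard orientations are looked up in a small table of elementary movements. The orientation labels and the transition table are illustrative assumptions, not the mapping used in the paper.

```python
# Simplified sketch of the underlying idea (assumed, not the paper's algorithm):
# quantise the gravity direction measured by a wrist accelerometer into a few
# standard orientations, then map orientation transitions to elementary arm
# movements. The orientation labels and transition table are illustrative only.
import numpy as np

def dominant_orientation(sample: np.ndarray) -> str:
    """Label the orientation by the axis most aligned with gravity (static case)."""
    axis = int(np.argmax(np.abs(sample)))
    sign = "+" if sample[axis] > 0 else "-"
    return f"{sign}{'xyz'[axis]}"

# Hypothetical mapping from orientation transitions to elementary movements.
TRANSITIONS = {
    ("-z", "+x"): "reach and retrieve",
    ("+x", "-y"): "lift cup to mouth",
    ("-y", "+y"): "rotation of the arm",
}

def recognise(stream: np.ndarray) -> list[str]:
    """Detect movements from consecutive orientation changes in an (N, 3) stream."""
    labels = [dominant_orientation(s) for s in stream]
    changes = [(a, b) for a, b in zip(labels, labels[1:]) if a != b]
    return [TRANSITIONS[t] for t in changes if t in TRANSITIONS]

# Three static samples (m/s^2): gravity along -z, then +x, then -y.
print(recognise(np.array([[0.1, 0.2, -9.8], [9.7, 0.3, -0.5], [0.2, -9.6, 0.4]])))
# -> ['reach and retrieve', 'lift cup to mouth']
```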

    Human activity recognition using wearable sensors: a deep learning approach

    In the past decades, Human Activity Recognition (HAR) has attracted considerable research attention from a wide range of pattern recognition and human–computer interaction researchers due to its prominent applications, such as smart-home health care. The wealth of information requires efficient classification and analysis methods, and deep learning represents a promising technique for large-scale data analytics. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among them, physical human activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. Much existing work relies on real-time processing, which increases the power consumption of mobile devices; since mobile phones are resource-limited, implementing and evaluating different recognition systems on them is a challenging task. This work proposes a Deep Belief Network (DBN) model for human activity recognition. Various experiments are performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN performs competitively in comparison with other algorithms and achieves satisfactory activity recognition performance. Some open problems and ideas are also presented that should be investigated in future research.
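
    As a rough, hedged sketch of a DBN-style pipeline (not the paper's model), the example below stacks two scikit-learn BernoulliRBM layers for unsupervised feature learning and feeds them into a logistic-regression classifier; a full DBN would additionally fine-tune all layers jointly. Layer sizes and the synthetic data are assumptions.

```python
# Rough DBN-style sketch (an approximation, not the paper's model): two stacked
# BernoulliRBM layers provide unsupervised pre-training and a logistic-regression
# head performs classification. Synthetic data are used only to show it runs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(2)
X = rng.standard_normal((800, 60))   # e.g. windowed accelerometer features (assumed)
y = rng.integers(0, 6, size=800)     # six hypothetical activity classes

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),       # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print(f"training accuracy: {dbn_like.score(X, y):.2f}")  # near chance on random labels
```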

    Multi-sensor fusion based on multiple classifier systems for human activity identification

    Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviour, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness and high-performance generalization, to provide diversity, and to tackle challenging issues that may be difficult to address with values from a single sensor. The aim of this study is to propose an innovative multi-sensor fusion framework that improves human activity detection performance and reduces the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision trees, logistic regression and k-Nearest Neighbors were used to implement diverse, flexible and dynamic human activity detection systems. To provide a compact feature vector representation, we studied a hybrid bio-inspired evolutionary search algorithm and a correlation-based feature selection method and evaluated their impact on the feature vectors extracted from each individual sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate. Our evaluation showed a 3% to 24% improvement in accuracy, recall, precision, F-measure and detection ability (AUC) compared to single sensors and feature-level fusion. The benefit of the proposed multi-sensor fusion is the ability to utilize the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests a promising potential of hybrid feature selection approaches and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems. © 2019, The Author(s). This research is supported by University of Malaya BKP Special Grant vote no. BKS006-2018.
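
    A minimal sketch, under assumptions, of the multi-view fusion recipe described above: SMOTE re-balances the training data, each sensor "view" gets its own computationally light classifier (decision tree, logistic regression, k-NN), and a soft-voting ensemble fuses their predictions. The two-sensor feature split and the synthetic data are illustrative, not the authors' datasets or framework.

```python
# Minimal multi-view ensemble sketch (an assumption, not the authors' framework):
# SMOTE re-balancing + per-sensor-view classifiers fused by soft voting.
import numpy as np
from imblearn.over_sampling import SMOTE              # pip install imbalanced-learn
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))         # columns 0-9: accelerometer, 10-19: gyroscope (assumed)
y = np.where(rng.random(500) < 0.8, 0, 1)  # imbalanced activity labels

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # re-balance minority class

def view(cols, estimator):
    """Pipeline that restricts the input to one sensor view before classifying."""
    return make_pipeline(FunctionTransformer(lambda Z, c=cols: Z[:, c]),
                         StandardScaler(), estimator)

ensemble = VotingClassifier(
    estimators=[("acc_dt", view(slice(0, 10), DecisionTreeClassifier(random_state=0))),
                ("gyr_lr", view(slice(10, 20), LogisticRegression(max_iter=500))),
                ("all_knn", view(slice(0, 20), KNeighborsClassifier()))],
    voting="soft",
)
ensemble.fit(X_bal, y_bal)
print(f"balanced training accuracy: {ensemble.score(X_bal, y_bal):.2f}")
```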