    A Novel Approach to Complex Human Activity Recognition

    Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, and location sensors, together with time information. Human activity recognition is important in many areas such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and social sciences. There are numerous ubiquitous and pervasive computing systems where users' activities play an important role. Human activity carries a lot of information about context and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity recognition is an important indicator of participation, quality of life, and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed. Addressing these issues would significantly improve the applications of human activity recognition in different areas. There has been considerable research on simple human activity recognition, whereas little research has been carried out on complex human activity recognition. However, there are many key aspects (recognition accuracy, computational cost, energy consumption, mobility) that need to be addressed in both areas to improve their viability. This dissertation aims to address the key aspects in both areas of human activity recognition and eventually focuses on recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter along with time in complex activity recognition. This work studies accelerometer sensor data to recognize simple human activity, and uses time, location, and simple activity to recognize complex activity.
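
    As a minimal illustration of the dissertation's premise that a complex activity can be inferred from time, location, and a recognized simple activity, the following Python sketch trains a decision tree on toy (hour, location, simple activity) triples. The sample data, encodings, and label set are illustrative assumptions, not the dissertation's actual pipeline.

        # Toy sketch: inferring a complex activity from time, location,
        # and a recognized simple activity. All data here is hypothetical.
        from sklearn.tree import DecisionTreeClassifier

        samples = [
            (7,  "kitchen",     "standing", "cooking"),
            (8,  "kitchen",     "sitting",  "eating"),
            (20, "living_room", "sitting",  "watching_tv"),
            (22, "bedroom",     "sitting",  "reading"),
            (13, "kitchen",     "standing", "cooking"),
            (21, "living_room", "sitting",  "watching_tv"),
        ]

        # Integer-code the categorical location and simple-activity columns;
        # the hour of day stays numeric.
        loc_codes = {"kitchen": 0, "living_room": 1, "bedroom": 2}
        act_codes = {"sitting": 0, "standing": 1}

        X = [[hour, loc_codes[loc], act_codes[act]] for hour, loc, act, _ in samples]
        y = [label for *_, label in samples]

        clf = DecisionTreeClassifier(random_state=0).fit(X, y)

        # Query: what is someone likely doing, standing in the kitchen at 19:00?
        print(clf.predict([[19, loc_codes["kitchen"], act_codes["standing"]]]))
        # -> likely "cooking" under this toy data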

    Human Localization and Activity Recognition Using Distributed Motion Sensors

    The purpose of this thesis is to localize a human and recognize his/her activities in indoor environments using distributed motion sensors. We use a test bed simulated as a mock apartment for conducting our experiments. The two parts of the thesis are localization and activity recognition of the elderly person. We explain the complete hardware and software setup used to provide these services. The hardware setup consists of two types of sensor end nodes and two sink nodes. The two types of end nodes are Passive Infrared (PIR) sensor nodes, which use PIR sensors for motion detection, and GridEye sensor nodes, which use thermal array sensors. Data from these sensors are acquired using Arduino boards and transmitted via Xbee modules to the sink nodes, which consist of receiver Xbee modules connected to a computer. The sensor nodes were strategically placed at different places inside the apartment. The thermal array sensor provides 64 pixel temperature values, while the PIR sensor provides binary information about motion in its field of view. Since the thermal array sensors provide more information, they were placed in large rooms such as the living room and bedroom, while PIR sensors were placed in the kitchen and bathroom. Initially, the GridEye sensors are calibrated to obtain the transformation between pixel and real-world coordinates. Data from these sensors were processed on a computer, and we were able to localize the human inside the apartment. We compared the location accuracy against ground truth data obtained from the OptiTrack system. GridEye sensors were also used for activity recognition. Basic human activities such as sitting, sleeping, standing, and walking were recognized. We used a Support Vector Machine (SVM) to recognize sitting and sleeping activities, and the gait speed of the human to recognize standing and walking activities. Experiments were performed to obtain the classification accuracy for these activities.
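
    The abstract names two mechanisms: an SVM over thermal-array frames for sitting vs. sleeping, and a gait-speed rule for standing vs. walking. The Python sketch below shows that two-part structure on synthetic 8x8 thermal frames; the frame generator, temperatures, and the 0.2 m/s speed threshold are illustrative assumptions, not the thesis's values.

        # Two-part classifier sketch: SVM on 64-pixel thermal frames, plus a
        # gait-speed threshold rule. Data and threshold are hypothetical.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def fake_frame(active_pixels):
            """Synthetic 8x8 (=64 pixel) thermal frame with a warm body blob."""
            frame = np.full(64, 22.0)                       # ~room temperature
            hot = rng.choice(64, size=active_pixels, replace=False)
            frame[hot] = 30.0 + rng.normal(0, 0.5, active_pixels)  # body heat
            return frame

        # Sleeping bodies occupy more pixels than sitting ones in this toy setup.
        X = np.array([fake_frame(6) for _ in range(40)] +
                     [fake_frame(18) for _ in range(40)])
        y = np.array(["sitting"] * 40 + ["sleeping"] * 40)

        svm = SVC(kernel="rbf").fit(X, y)
        print(svm.predict([fake_frame(17)]))  # -> likely "sleeping"

        def standing_or_walking(speed_m_s, threshold=0.2):
            """Gait-speed rule: below the (assumed) threshold, standing."""
            return "standing" if speed_m_s < threshold else "walking"

        print(standing_or_walking(1.1))  # -> "walking"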

    CHARM: A Hierarchical Deep Learning Model for Classification of Complex Human Activities Using Motion Sensors

    In this paper, we report a hierarchical deep learning model for classification of complex human activities using motion sensors. In contrast to traditional Human Activity Recognition (HAR) models used for event-based activity recognition, such as step counting, fall detection, and gesture identification, this new deep learning model, which we refer to as CHARM (Complex Human Activity Recognition Model), is aimed at recognizing high-level human activities that are composed of multiple different low-level activities in a non-deterministic sequence, such as meal preparation, house chores, and daily routines. CHARM not only quantitatively outperforms state-of-the-art supervised learning approaches for high-level activity recognition in terms of average accuracy and F1 scores, but also automatically learns to recognize low-level activities, such as manipulation gestures and locomotion modes, without any explicit labels for such activities. This opens new avenues for Human-Machine Interaction (HMI) modalities using wearable sensors, where the user can choose to associate an automated task with a high-level activity, such as controlling home automation (e.g., robotic vacuum cleaners, lights, and thermostats) or presenting contextually relevant information at the right time (e.g., reminders, status updates, and weather/news reports). In addition, the ability to learn low-level user activities when trained using only high-level activity labels may pave the way to semi-supervised learning of HAR tasks that are inherently difficult to label.
    Comment: 8 pages, 5 figures
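
    A minimal PyTorch sketch of the hierarchical idea the abstract describes: a low-level encoder embeds short sensor windows, and a high-level recurrent head classifies the sequence of window embeddings using only high-level labels. Layer choices, sizes, and names are assumptions for illustration, not the published CHARM architecture.

        # Hierarchical HAR sketch: per-window encoder + sequence-level head.
        import torch
        import torch.nn as nn

        class HierarchicalHAR(nn.Module):
            def __init__(self, n_channels=6, win_len=50, n_high_level=4, emb=32):
                super().__init__()
                # Low-level encoder: one embedding per short sensor window.
                self.low = nn.Sequential(
                    nn.Conv1d(n_channels, 16, kernel_size=5), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, emb),
                )
                # High-level head: GRU over the window embeddings of a recording.
                self.gru = nn.GRU(emb, 64, batch_first=True)
                self.head = nn.Linear(64, n_high_level)

            def forward(self, x):            # x: (batch, windows, channels, win_len)
                b, w, c, t = x.shape
                z = self.low(x.reshape(b * w, c, t)).reshape(b, w, -1)
                _, h = self.gru(z)           # h: (1, batch, 64)
                return self.head(h[-1])      # high-level activity logits

        model = HierarchicalHAR()
        dummy = torch.randn(2, 20, 6, 50)    # 2 recordings, 20 windows each
        print(model(dummy).shape)            # -> torch.Size([2, 4])

    Because only the sequence-level loss is supervised, the window embeddings are free to organize around recurring low-level patterns, which is the intuition behind CHARM's label-free low-level recognition.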

    Deep human activity recognition using wearable sensors

    This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health monitoring system for the elderly. We gathered hip motion signals from two different waist-mounted sensors and, for each individual sensor, converted the motion signal into a spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNNs), one for each of the image sequences generated from the two sensors. The outputs of the trained CNNs are then fused to predict the final class of the human activity. We evaluate the performance of the proposed method using a cross-subjects testing approach. Our method achieves a recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset, superior to that reported by another state-of-the-art method on the same dataset.
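
    The pipeline has three steps: signal to spectral image, one CNN per sensor, and fusion of the two outputs. The Python sketch below shows one plausible reading of that pipeline using a log-magnitude spectrogram and late fusion by averaging class probabilities; the toy CNN, sampling rate, and fusion rule are assumptions, not the authors' architecture.

        # Sketch: per-sensor spectral images -> two CNNs -> late fusion.
        import numpy as np
        import torch
        import torch.nn as nn
        from scipy.signal import spectrogram

        def to_spectral_image(signal, fs=50):
            """One motion-signal channel -> log-magnitude spectrogram 'image'."""
            _, _, sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=32)
            return np.log1p(sxx).astype(np.float32)

        class TinyCNN(nn.Module):
            def __init__(self, n_classes=6):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 8, 3), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, n_classes),
                )
            def forward(self, x):
                return self.net(x)

        fs, seconds = 50, 10
        sig_a = np.random.randn(fs * seconds)   # sensor 1 (synthetic)
        sig_b = np.random.randn(fs * seconds)   # sensor 2 (synthetic)

        img_a = torch.from_numpy(to_spectral_image(sig_a))[None, None]  # (1,1,F,T)
        img_b = torch.from_numpy(to_spectral_image(sig_b))[None, None]

        cnn_a, cnn_b = TinyCNN(), TinyCNN()     # trained independently in the paper
        probs = (cnn_a(img_a).softmax(-1) + cnn_b(img_b).softmax(-1)) / 2
        print(probs.argmax(-1))                 # fused activity prediction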

    Trajectory Forecasting with Loose Clothing Using Left-to-Right Hidden Markov Model

    Trajectory forecasting has become an interesting research area driven by advancements in wearable sensing technology. Sensors can be seamlessly integrated into clothing using cutting-edge electronic textile technology, allowing long-term recording of human movements outside the laboratory. Motivated by recent findings that clothing-attached sensors can achieve higher activity recognition accuracy than body-attached sensors, this work investigates motion prediction and trajectory forecasting using rigidly attached and clothing-attached sensors. The future trajectory is forecasted from a probabilistic trajectory model formulated as a left-to-right hidden Markov model (LR-HMM), and motion prediction accuracy is computed by a classification rule. Surprisingly, the results show that clothing-attached sensors can forecast the future trajectory and outperform body-attached sensors in terms of motion prediction accuracy. In some cases, the clothing-attached sensor can enhance accuracy by 45% compared to the body-attached sensor, and requires approximately 80% less of the historical trajectory to achieve the same level of accuracy as the body-attached sensor.
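
    A minimal numpy sketch of LR-HMM forecasting in the spirit of the abstract: decode the state belief from the observed trajectory prefix with the forward algorithm, then roll the belief through the left-to-right transition matrix and emit each state's expected mean. All parameters below are illustrative assumptions, not the paper's fitted model.

        # Left-to-right HMM forecasting sketch (hypothetical parameters).
        import numpy as np

        # 4-state LR-HMM: states can only hold or advance (no backward moves).
        A = np.array([[0.7, 0.3, 0.0, 0.0],
                      [0.0, 0.7, 0.3, 0.0],
                      [0.0, 0.0, 0.7, 0.3],
                      [0.0, 0.0, 0.0, 1.0]])
        means = np.array([0.0, 1.0, 2.0, 3.0])   # 1-D position per state
        sigma = 0.3

        def gauss(x, mu):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        def forward_belief(obs):
            """Forward algorithm: posterior over states after the observed prefix."""
            alpha = np.array([1.0, 0.0, 0.0, 0.0]) * gauss(obs[0], means)
            alpha /= alpha.sum()
            for x in obs[1:]:
                alpha = (alpha @ A) * gauss(x, means)
                alpha /= alpha.sum()
            return alpha

        def forecast(obs, horizon):
            """Expected future positions by propagating the belief through A."""
            belief, preds = forward_belief(obs), []
            for _ in range(horizon):
                belief = belief @ A
                preds.append(belief @ means)     # expected emission mean
            return preds

        observed = [0.05, 0.1, 0.9, 1.1]         # synthetic trajectory prefix
        print(forecast(observed, horizon=3))     # drifts toward later states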

    Smartphone Sensor-Based Activity Recognition by Using Machine Learning and Deep Learning Algorithms

    Article originally published in the International Journal of Machine Learning and Computing.
    Smartphones are widely used today, and it is possible to detect the user's environmental changes by using smartphone sensors, as demonstrated in this paper, where we propose a method to identify human activities with reasonably high accuracy using smartphone sensor data. The raw smartphone sensor data are collected from two categories of human activity: motion-based, e.g., walking and running; and phone-movement-based, e.g., left-right, up-down, clockwise, and counterclockwise movement. First, two types of feature extraction are designed from the raw sensor data, and activity recognition is analyzed using machine learning classification models based on these features. Second, activity recognition performance is analyzed through a Convolutional Neural Network (CNN) model using only the raw data. Our experiments show a substantial improvement in the results with the addition of features and the use of the CNN model based on smartphone sensor data with judicious learning techniques and good feature designs.
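
    The feature-based branch the paper compares against the raw-data CNN can be sketched as per-window statistics feeding a conventional classifier. In the Python sketch below, the windowing, feature list, synthetic signals, and choice of random forest are illustrative assumptions.

        # Sketch of the feature-extraction branch: window statistics + classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(win):
            """win: (samples, 3) accelerometer window -> feature vector."""
            mag = np.linalg.norm(win, axis=1)
            return np.concatenate([
                win.mean(axis=0), win.std(axis=0),               # per-axis stats
                [mag.mean(), mag.std(), mag.max() - mag.min()],  # magnitude stats
            ])

        rng = np.random.default_rng(1)

        def fake_window(kind):
            """Synthetic data: 'walking' oscillates, 'rotation' drifts."""
            t = np.linspace(0, 2, 100)[:, None]
            base = np.sin(8 * np.pi * t) if kind == "walking" else t
            return base + 0.1 * rng.standard_normal((100, 3))

        kinds = ["walking", "rotation"] * 30
        X = np.array([window_features(fake_window(k)) for k in kinds])
        clf = RandomForestClassifier(random_state=0).fit(X, kinds)
        print(clf.predict([window_features(fake_window("walking"))]))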

    Feature Engineering for Activity Recognition from Wrist-worn Motion Sensors

    With their integrated sensors, wrist-worn devices, such as smart watches, provide an ideal platform for human activity recognition. In particular, inertial sensors, such as the accelerometer and gyroscope, can efficiently capture the wrist and arm movements of the users. In this paper, we investigate the use of the accelerometer sensor for recognizing thirteen different activities. Specifically, we analyse how different sets of features extracted from acceleration readings perform in activity recognition. We categorize the set of features into three classes: motion-related features, orientation-related features, and rotation-related features, and we analyse recognition performance using motion, orientation, and rotation information both alone and in combination. We utilize a dataset collected from 10 participants and use different classification algorithms in the analysis. The results show that orientation features achieve the highest accuracies, both when used alone and in combination with the other feature sets. Moreover, using only raw acceleration performs slightly better than using linear acceleration, and similarly compared with the gyroscope.
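
    One common way to derive the three feature classes from inertial readings is to low-pass filter the accelerometer to separate gravity (orientation) from linear acceleration (motion), and take rotation statistics from the gyroscope. The Python sketch below shows that decomposition; the filter constant and the specific statistics are illustrative assumptions, not the paper's feature definitions.

        # Sketch: motion-, orientation-, and rotation-related feature groups.
        import numpy as np

        def split_gravity(acc, alpha=0.9):
            """Simple low-pass filter: separate gravity from linear acceleration."""
            gravity = np.empty_like(acc)
            gravity[0] = acc[0]
            for i in range(1, len(acc)):
                gravity[i] = alpha * gravity[i - 1] + (1 - alpha) * acc[i]
            return gravity, acc - gravity       # (orientation, motion) parts

        def feature_groups(acc, gyro):
            gravity, linear = split_gravity(acc)
            motion = [np.linalg.norm(linear, axis=1).mean(),
                      np.linalg.norm(linear, axis=1).std()]
            orientation = list(gravity.mean(axis=0) / 9.81)  # mean gravity direction
            rotation = [np.abs(gyro).mean(), np.abs(gyro).std()]
            return motion, orientation, rotation

        # Synthetic window: wrist roughly level, small movements and rotations.
        acc = np.tile([0.0, 0.0, 9.81], (200, 1)) + 0.2 * np.random.randn(200, 3)
        gyro = 0.05 * np.random.randn(200, 3)
        print(feature_groups(acc, gyro))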

    Leveraging Smartphone Sensor Data for Human Activity Recognition

    Using smartphones for human activity recognition (HAR) has a wide range of applications, including healthcare, daily fitness recording, and alerting in anomalous situations. This study focuses on human activity recognition based on smartphone-embedded sensors. The proposed human activity recognition system recognizes activities including walking, running, sitting, going upstairs, and going downstairs. Embedded sensors (a tri-axial accelerometer and a gyroscope) are employed for motion data collection. Both time-domain and frequency-domain features are extracted and analyzed. Our experimental results show that time-domain features are sufficient to recognize basic human activities. The system is implemented on the Android smartphone platform.

    While the focus has been on human activity recognition systems based on a supervised learning approach, an incremental clustering algorithm is also investigated. The proposed unsupervised (clustering) activity detection scheme works in an incremental manner with two stages; a sketch follows this abstract. In the first stage, streamed sensor data are processed by a single-pass clustering algorithm to generate pre-clustered results for the next stage. In the second stage, the pre-clustered results are refined to form the final clusters, that is, the clusters are built incrementally by adding one cluster at a time. Experiments on smartphone sensor data of five basic human activities show that the proposed scheme achieves results comparable to traditional clustering algorithms while working in a streaming, incremental manner.

    In order to develop activity recognition systems that are more accurate and independent of smartphone models, the effects of sensor differences across various smartphone models are investigated. We present the impairments that different smartphone-embedded sensor models introduce into HAR applications, and propose outlier removal, interpolation, and filtering in the pre-processing stage as mitigating techniques. Based on datasets collected from four distinct smartphones, the proposed mitigating techniques show positive effects on 10-fold cross validation, device-to-device validation, and leave-one-out validation, and improved performance for smartphone-based human activity recognition is observed.

    With these efforts, developing human activity recognition systems based on a supervised learning approach, investigating a clustering-based incremental activity recognition system with its potential applications, and applying techniques for alleviating sensor difference effects, a robust human activity recognition system can be trained in either a supervised or an unsupervised way and can be adapted to multiple devices while being less dependent on specific sensor characteristics.
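
    The following Python sketch shows a single-pass, incremental clustering scheme in the spirit of the first stage described above: each streamed feature vector either joins the nearest existing cluster or starts a new one. The distance threshold and running-mean update rule are illustrative assumptions, not the thesis's algorithm.

        # Single-pass (leader-style) incremental clustering sketch.
        import numpy as np

        class SinglePassClusterer:
            def __init__(self, threshold=1.0):
                self.threshold = threshold
                self.centers, self.counts = [], []

            def add(self, x):
                """Assign one streamed sample; open a new cluster if none is close."""
                x = np.asarray(x, dtype=float)
                if self.centers:
                    dists = [np.linalg.norm(x - c) for c in self.centers]
                    k = int(np.argmin(dists))
                    if dists[k] < self.threshold:
                        self.counts[k] += 1
                        # Incremental running-mean update of the cluster centre.
                        self.centers[k] += (x - self.centers[k]) / self.counts[k]
                        return k
                self.centers.append(x.copy())
                self.counts.append(1)
                return len(self.centers) - 1

        stream = [[0.1, 0.0], [0.2, 0.1], [5.0, 5.1], [4.9, 5.0], [0.0, 0.2]]
        clusterer = SinglePassClusterer(threshold=1.5)
        print([clusterer.add(x) for x in stream])   # -> [0, 0, 1, 1, 0]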

    Fusion of wearable and visual sensors for human motion analysis

    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics. Human motion analysis has applications within areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion, can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, which is the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and assess the surgical skill of medical staff.

    As health services strive to improve the quality of patient care and meet the growing demands required to care for expanding populations around the world, solutions that can improve patient care, diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study.

    Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indications of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor.
    To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a reduced subset of wearable inertial sensors. This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation errors as low as 1.6cm, 1.1cm, and 1.4cm for the hand, elbow, and shoulders, respectively, when compared with a gold-standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5cm. Unlike human motion capture systems that rely on vision and reflective reference markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from the accumulation of measurement errors, which become increasingly prevalent over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and motion tracking method is introduced that improves the average mean tracking accuracy to within 5.8cm in a 3m × 5m laboratory. To improve the accuracy of capturing body part positions and posture for human biomechanics, a camera is also utilised to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the impact of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration, is shown to be reduced.
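
    The regression idea above, mapping a reduced set of inertial readings to body landmark positions trained against optical motion capture, can be sketched on synthetic data as below. The sensor count, the 18-input/9-output layout, and the choice of a small neural regressor are illustrative assumptions, not the thesis's model.

        # Sketch: regression from few IMU readings to body landmark positions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)

        # Synthetic training set: 3 upper-body IMUs x (3 accel + 3 gyro) = 18
        # inputs, mapped to 9 outputs (x, y, z for hand, elbow, shoulder).
        X = rng.standard_normal((500, 18))
        true_map = rng.standard_normal((18, 9)) * 0.1
        Y = X @ true_map + 0.01 * rng.standard_normal((500, 9))  # mocap stand-in

        reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                           random_state=0).fit(X, Y)

        # Estimate landmark positions from a new frame of IMU readings.
        frame = rng.standard_normal((1, 18))
        hand, elbow, shoulder = reg.predict(frame)[0].reshape(3, 3)
        print(hand, elbow, shoulder)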