    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has become vital. Moreover, improving lifestyles, encouraging healthy behaviours, and reducing chronic disease are urgently needed. However, tracking and monitoring the critical conditions of elderly people and patients is a great challenge, and healthcare services for them are crucial to ensuring their safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of elderly people and patients. The main aim of this review is to describe the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
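    The recognition chain described above (sensing, preprocessing/segmentation, feature extraction, classification) can be sketched end-to-end. This is a minimal illustration with made-up sample values and a toy threshold rule, not any specific system from the review:

```python
from statistics import mean, stdev

def segment(stream, window=5):
    """Segmentation step: split a 1-D sensor stream into fixed-size windows."""
    return [stream[i:i + window] for i in range(0, len(stream) - window + 1, window)]

def extract_features(samples):
    """Feature-extraction step: simple time-domain features (mean, std dev)."""
    return (mean(samples), stdev(samples))

def classify(features, threshold=1.0):
    """Classification step: toy rule -- high variance means 'active'."""
    _, sd = features
    return "active" if sd > threshold else "rest"

# Illustrative accelerometer magnitudes: a quiet window, then vigorous motion.
accel = [0.1, 0.0, 0.2, 0.1, 0.0,
         2.0, -1.5, 1.8, -2.2, 1.9]
labels = [classify(extract_features(w)) for w in segment(accel)]
print(labels)  # ['rest', 'active']
```

Real systems replace the threshold rule with a trained classifier, but the staged structure of the chain is the same.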

    Leveraging Smartphone Sensor Data for Human Activity Recognition

    Using smartphones for human activity recognition (HAR) has a wide range of applications, including healthcare, daily fitness recording, and alerting in anomalous situations. This study focuses on human activity recognition based on smartphone-embedded sensors. The proposed human activity recognition system recognizes activities including walking, running, sitting, going upstairs, and going downstairs. Embedded sensors (a tri-axial accelerometer and a gyroscope) are employed for motion data collection. Both time-domain and frequency-domain features are extracted and analyzed. Our experimental results show that time-domain features are sufficient to recognize basic human activities. The system is implemented on the Android smartphone platform. While the focus has been on human activity recognition systems based on a supervised learning approach, an incremental clustering algorithm is also investigated. The proposed unsupervised (clustering) activity detection scheme works incrementally in two stages. In the first stage, streamed sensor data are processed by a single-pass clustering algorithm to generate pre-clustered results for the next stage. In the second stage, the pre-clustered results are refined to form the final clusters, which are built incrementally by adding one cluster at a time. Experiments on smartphone sensor data for five basic human activities show that the proposed scheme achieves results comparable to traditional clustering algorithms while working in a streaming, incremental manner. In order to develop activity recognition systems that are more accurate and independent of smartphone models, the effects of sensor differences across smartphone models are investigated. We characterize how differences among smartphone-embedded sensors impair HAR applications, and propose outlier removal, interpolation, and filtering in the pre-processing stage as mitigating techniques. 
    Based on datasets collected from four distinct smartphones, the proposed mitigating techniques show positive effects on 10-fold cross-validation, device-to-device validation, and leave-one-out validation, and improved performance for smartphone-based human activity recognition is observed. With the efforts of developing human activity recognition systems based on a supervised learning approach, investigating a clustering-based incremental activity recognition system and its potential applications, and applying techniques that alleviate sensor-difference effects, a robust human activity recognition system can be trained in either a supervised or an unsupervised way and can be adapted to multiple devices while being less dependent on specific sensor characteristics.
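    The first-stage single-pass clustering described above can be illustrated with a minimal "leader"-style sketch: each streamed sample either updates the nearest existing cluster or opens a new one. The distance threshold and 1-D data are illustrative assumptions, not the authors' exact algorithm:

```python
def single_pass_cluster(samples, radius=1.0):
    """One pass over a stream: assign each sample to the nearest cluster
    within `radius` (updating its centroid as a running mean), or start
    a new cluster otherwise."""
    centroids, counts = [], []
    for x in samples:
        # Find the nearest existing centroid, if any.
        best, best_d = None, None
        for i, c in enumerate(centroids):
            d = abs(x - c)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= radius:
            # Incremental centroid update -- no stored history needed.
            counts[best] += 1
            centroids[best] += (x - centroids[best]) / counts[best]
        else:
            # Sample is far from all clusters: open a new one.
            centroids.append(float(x))
            counts.append(1)
    return centroids

print(single_pass_cluster([0.1, 0.2, 5.0, 5.1, 0.15]))  # [0.15, 5.05]
```

Because each sample is seen exactly once and only centroids are retained, the scheme suits streamed sensor data; a second stage can then merge or refine these pre-clusters.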

    Intelligent Sensing and Decision Making in Smart Technologies


    A Multilayer Interval Type-2 Fuzzy Extreme Learning Machine for the recognition of walking activities and gait events using wearable sensors

    In this paper, a novel Multilayer Interval Type-2 Fuzzy Extreme Learning Machine (ML-IT2-FELM) for the recognition of walking activities and gait events is presented. The ML-IT2-FELM uses a hierarchical learning scheme that consists of multiple layers of IT2 Fuzzy Autoencoders (FAEs), followed by a final classification layer based on an IT2-FELM architecture. The core building block of the ML-IT2-FELM is the IT2-FELM, which is a generalised model of the Interval Type-2 Radial Basis Function Neural Network (IT2-RBFNN) and is functionally equivalent to a class of simplified IT2 Fuzzy Logic Systems (FLSs). Each FAE in the ML-IT2-FELM employs an output layer with a direct-defuzzification process based on the Nie-Tan algorithm, while the IT2-FELM classifier includes a Karnik-Mendel (KM) type-reduction method. Real data were collected using three inertial measurement units attached to the thigh, shank, and foot of twelve healthy participants. The ML-IT2-FELM method is validated in two experiments. The first experiment involves the recognition of three different walking activities: Level-Ground Walking (LGW), Ramp Ascent (RA), and Ramp Descent (RD). The second experiment consists of the recognition of stance and swing phases during the gait cycle. In addition, to compare the efficiency of the ML-IT2-FELM with other ML fuzzy methodologies, a kernel-based variant inspired by kernel learning, called KML-IT2-FELM, is also implemented. In the recognition of walking activities and gait events, the ML-IT2-FELM achieved average accuracies of 99.98% and 99.84% with decision times of 290.4 ms and 105 ms, respectively, while the KML-IT2-FELM achieved average accuracies of 99.98% and 99.93% with decision times of 191.9 ms and 94 ms. 
    The experiments demonstrate that the ML-IT2-FELM is not only an effective fuzzy-logic-based approach in the presence of sensor noise, but also a fast extreme learning machine for the recognition of different walking activities.
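    The Nie-Tan direct-defuzzification step mentioned above can be sketched briefly: the interval firing strength [f_lo, f_hi] of each IT2 fuzzy rule is collapsed to its midpoint before a weighted average over rule consequents. The rule consequents and firing intervals below are made-up illustrative numbers, not values from the paper:

```python
def nie_tan_defuzzify(firing_intervals, consequents):
    """Nie-Tan closed form: crisp output =
    sum_k y_k * (f_lo_k + f_hi_k) / sum_k (f_lo_k + f_hi_k).
    (The common factor 1/2 from averaging the bounds cancels.)"""
    weights = [lo + hi for lo, hi in firing_intervals]
    num = sum(w * y for w, y in zip(weights, consequents))
    return num / sum(weights)

# Two rules with interval firing strengths and crisp consequents.
firing = [(0.2, 0.4), (0.6, 0.8)]
y = [1.0, 3.0]
print(nie_tan_defuzzify(firing, y))  # (0.6*1.0 + 1.4*3.0) / 2.0 = 2.4
```

Unlike Karnik-Mendel type reduction, which iterates to find the interval endpoints, this closed form needs no iteration, which is why it suits fast per-layer defuzzification in the autoencoders.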

    A survey of machine and deep learning methods for privacy protection in the Internet of things

    Recent advances in hardware and information technology have accelerated the proliferation of smart and interconnected devices, facilitating the rapid development of the Internet of Things (IoT). IoT applications and services are widely adopted in environments such as smart cities, smart industry, autonomous vehicles, and eHealth. As such, IoT devices are ubiquitously connected, transferring sensitive and personal data without requiring human interaction. Consequently, it is crucial to preserve data privacy. This paper presents a comprehensive survey of recent Machine Learning (ML)- and Deep Learning (DL)-based solutions for privacy in IoT. First, we present an in-depth analysis of current privacy threats and attacks. Then, for each proposed ML architecture, we present its implementation details and published results. Finally, we identify the most effective solutions for the different threats and attacks. This work is partially supported by the Generalitat de Catalunya under grant 2017 SGR 962 and the HORIZON-GPHOENIX (101070586) and HORIZON-EUVITAMIN-V (101093062) projects.

    A study of deep neural networks for human activity recognition

    Human activity recognition and deep learning are two fields that have attracted attention in recent years: the former for its relevance in many application domains, such as ambient assisted living or health monitoring, and the latter for its recent and excellent performance achievements in domains such as image and speech recognition. In this article, an extensive analysis of the deep learning architectures best suited to activity recognition is conducted to compare their performance in terms of accuracy, speed, and memory requirements. In particular, convolutional neural networks (CNN), long short‐term memory networks (LSTM), bidirectional LSTMs (biLSTM), gated recurrent unit networks (GRU), and deep belief networks (DBN) have been tested on a total of 10 publicly available datasets with different sensors, sets of activities, and sampling rates. All tests have been designed under a multimodal approach to take advantage of synchronized raw sensor signals. Results show that CNNs are efficient at capturing local temporal dependencies of activity signals, as well as at identifying correlations among sensors. Their performance in activity classification is comparable with, and in most cases better than, that of recurrent models. Their faster response and lower memory footprint make them the architecture of choice for wearable and IoT devices.
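    The claim that CNNs capture local temporal dependencies can be made concrete with a bare-bones 1-D convolution: each output value is a weighted sum over a short sliding window of the signal. The kernel and signal values below are arbitrary illustrative choices, not taken from the article:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no padding, stride 1): each output
    mixes only the `len(kernel)` neighbouring samples under the window."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference kernel responds to local change, e.g. the onset of motion.
signal = [0.0, 0.0, 1.0, 1.0, 0.0]
print(conv1d(signal, [-1.0, 1.0]))  # [0.0, 1.0, 0.0, -1.0]
```

In a CNN for HAR, many such kernels are learned per sensor channel, so stacked layers detect progressively longer motion patterns without the sequential state that makes recurrent models slower.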