277 research outputs found

    Wearable Sensor Data Based Human Activity Recognition using Machine Learning: A new approach

    Recent years have witnessed the rapid development of human activity recognition (HAR) based on wearable sensor data. Many practical applications can be found in this area, especially in the field of health care. Machine learning algorithms such as Decision Trees, Support Vector Machines, Naive Bayes, K-Nearest Neighbors, and Multilayer Perceptrons have been used successfully in HAR. Although these methods are fast and easy to implement, they still perform poorly in a number of situations. In this paper, we propose a novel method based on ensemble learning to boost the performance of these machine learning methods for HAR.
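    The ensemble idea described above can be sketched as a simple majority vote over the outputs of several base classifiers. The snippet below is a minimal plain-Python illustration; the classifier names and toy predictions are hypothetical and not taken from the paper.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Fuse per-model predictions by majority vote.

    predictions_per_model: list of prediction lists, one per base
    classifier, all of the same length (one label per sample).
    """
    n_samples = len(predictions_per_model[0])
    fused = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        # most_common(1) gives the label with the highest vote count
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Hypothetical outputs of three base classifiers on four samples
svm_preds = ["walk", "sit", "run", "sit"]
knn_preds = ["walk", "sit", "walk", "sit"]
tree_preds = ["run", "sit", "run", "stand"]

print(majority_vote([svm_preds, knn_preds, tree_preds]))
# -> ['walk', 'sit', 'run', 'sit']
```

    In practice the vote can be weighted by each classifier's validation accuracy, but the unweighted version above already captures the core mechanism.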

    Feature fusion H-ELM based learned features and hand-crafted features for human activity recognition

    Recognizing human activities is one of the main goals of human-centered intelligent systems. Smartphone sensors produce a continuous sequence of observations that are noisy, unstructured, and high-dimensional, so efficient features have to be extracted in order to perform accurate classification. This paper proposes a combination of Hierarchical and Kernel Extreme Learning Machine (HK-ELM) methods to learn features and map them to specific classes in a short time. Moreover, a feature fusion approach is proposed to combine H-ELM learned features with hand-crafted ones. The proposed method was found to outperform state-of-the-art methods in both accuracy and training time, achieving 97.62% accuracy with 3.4 seconds of training time on an ordinary Central Processing Unit (CPU).
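    The feature-fusion step above amounts to concatenating ELM-style learned features with hand-crafted ones. A minimal NumPy sketch follows; the hidden-layer size, data shapes, and feature names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, n_hidden=32):
    """Random-projection hidden layer in the ELM style: input weights
    are random and fixed; only the output layer of a full ELM would be
    trained (typically via a pseudo-inverse)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

# Hypothetical data: 10 sensor windows, 6 raw values each
X_raw = rng.standard_normal((10, 6))
# Hypothetical hand-crafted features (e.g. per-axis mean, variance)
X_hand = rng.standard_normal((10, 4))

learned = elm_features(X_raw)          # shape (10, 32)
fused = np.hstack([learned, X_hand])   # shape (10, 36)
print(fused.shape)
```

    The fused matrix is then fed to the classifier, so hand-crafted domain knowledge and randomly projected learned features complement each other.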

    Single Input Single Head CNN-GRU-LSTM Architecture for Recognition of Human Activities

    Due to its applications for the betterment of human life, human activity recognition has attracted more researchers in the recent past. Anticipating the intention behind a motion and recognizing behaviour are active research applications within human activity recognition. Gyroscope, accelerometer, and magnetometer sensors are heavily used to obtain time-series data for every timestep. The selection of temporal features is required for the successful recognition of human motion primitives. Most past approaches relied on data pre-processing and feature extraction techniques that require sufficient domain knowledge; they depend heavily on the quality of handcrafted features, are time-consuming, and do not generalize well. In this paper, a single-head deep neural network is proposed that combines a convolutional neural network, a gated recurrent unit, and long short-term memory. The raw data from wearable sensors are used with minimal pre-processing and without any feature extraction method. Accuracies of 93.48% and 98.51% are obtained on the UCI-HAR and WISDM datasets, respectively. This single-head model shows higher classification performance than other deep neural network architectures.
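    The raw time-series input described above is typically segmented into fixed-length, overlapping windows, with one activity label per window, before being fed to the CNN-GRU-LSTM stack. A minimal plain-Python sketch of such windowing (window length and step are illustrative, not the paper's values):

```python
def sliding_windows(samples, window=4, step=2):
    """Split a time series into overlapping fixed-length windows,
    the usual input format for CNN/GRU/LSTM models on sensor data."""
    windows = []
    for start in range(0, len(samples) - window + 1, step):
        windows.append(samples[start:start + window])
    return windows

# Hypothetical 1-D accelerometer stream of 10 timesteps
stream = list(range(10))
print(sliding_windows(stream))
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

    With multi-axis sensors, each window element would be a vector (e.g. three accelerometer plus three gyroscope channels) rather than a scalar, but the segmentation logic is the same.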

    Deep CNN-LSTM With Self-Attention Model for Human Activity Recognition Using Wearable Sensor

    Human Activity Recognition (HAR) systems are devised for continuously observing human behavior, primarily in the fields of environmental compatibility, sports injury detection, senior care, rehabilitation, entertainment, and surveillance in intelligent home settings. Inertial sensors, e.g., accelerometers, linear acceleration sensors, and gyroscopes, are frequently employed for this purpose and are now compacted into smart devices such as smartphones. Since the use of smartphones is so widespread nowadays, activity data acquisition for HAR systems is a pressing need. In this article, we have conducted smartphone sensor-based raw data collection, named H-Activity, using an Android-OS-based application for the accelerometer, gyroscope, and linear acceleration. Furthermore, a hybrid deep learning model is proposed, coupling a convolutional neural network and a long short-term memory network (CNN-LSTM), empowered by a self-attention mechanism to enhance the predictive capabilities of the system. In addition to our collected dataset (H-Activity), the model has been evaluated on benchmark datasets, e.g., MHEALTH and UCI-HAR, to demonstrate its comparative performance. Compared to other models, the proposed model achieves an accuracy of 99.93% on our collected H-Activity data, and 98.76% and 93.11% on the MHEALTH and UCI-HAR datasets respectively, indicating its efficacy in human activity recognition. We hope that our developed model could be applicable in clinical settings and that the collected data could be useful for further research.
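    The self-attention component mentioned above can be sketched as scaled dot-product attention over the time axis. The NumPy illustration below is a single-head simplification under assumed shapes, not the paper's exact architecture:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention.

    X: (timesteps, features) sequence, e.g. CNN-LSTM outputs.
    Returns a sequence of the same shape in which every timestep is
    a softmax-weighted mix of all timesteps.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # (T, T) similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # each row sums to 1
    return weights @ X

rng = np.random.default_rng(0)
seq = rng.standard_normal((5, 8))   # 5 timesteps, 8 features
out = self_attention(seq)
print(out.shape)                    # (5, 8)
```

    In the full model, separate learned query, key, and value projections would replace the raw sequence used here for all three roles; the weighting mechanism is otherwise the same.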

    Human Activity Recognition using Inertial, Physiological and Environmental Sensors: a Comprehensive Survey

    In the last decade, Human Activity Recognition (HAR) has become a vibrant research area, especially due to the spread of electronic devices such as smartphones, smartwatches, and video cameras in our daily lives. In addition, advances in deep learning and other machine learning algorithms have allowed researchers to use HAR in various domains including sports, health, and well-being applications. For example, HAR is considered one of the most promising assistive technology tools for supporting the daily life of the elderly by monitoring their cognitive and physical function through daily activities. This survey focuses on the critical role of machine learning in developing HAR applications based on inertial sensors in conjunction with physiological and environmental sensors. Comment: Accepted for publication in IEEE Access, DOI: 10.1109/ACCESS.2020.303771

    A collaborative healthcare framework for shared healthcare plan with ambient intelligence

    The rapid proliferation of Internet of Things (IoT) devices has driven the development of collaborative healthcare frameworks to support the next-generation healthcare industry with quality medical care. This paper presents a generalized collaborative framework named the collaborative shared healthcare plan (CSHCP) for cognitive health and fitness assessment of people using ambient intelligence and machine learning techniques. CSHCP supports daily physical activity recognition, monitoring, and assessment, and generates a shared healthcare plan based on collaboration among different stakeholders: doctors, patient guardians, and close community circles. The proposed framework shows promising outcomes compared to existing studies. Furthermore, it enhances team communication, coordination, and long-term management of healthcare information, providing more efficient and reliable shared healthcare plans.

    A systematic review of physiological signals based driver drowsiness detection systems.

    Driving a vehicle is a complex, multidimensional, and potentially risky activity demanding full mobilization and utilization of physiological and cognitive abilities. Drowsiness, often caused by stress, fatigue, and illness, degrades the cognitive capabilities that underpin driving performance and causes many accidents. Drowsiness-related road accidents are associated with trauma, physical injuries, and fatalities, and are often accompanied by economic loss. Drowsiness-related crashes are most common among young people and night-shift workers. Real-time and accurate driver drowsiness detection is necessary to bring down the drowsy-driving accident rate. Many researchers have developed systems to detect drowsiness using features related to vehicles, drivers' behavior, and physiological measures. In view of the rising use of physiological measures, this study presents a comprehensive and systematic review of recent techniques for detecting driver drowsiness using physiological signals. Different sensors, augmented with machine learning, are utilized and subsequently yield better results. These techniques are analyzed with respect to several aspects such as the data collection sensor, the environment (controlled or dynamic), and the experimental setup (real traffic or driving simulators). Similarly, by investigating the types of sensors involved in experiments, this study discusses the advantages and disadvantages of existing studies and points out research gaps. Observations and recommendations are made to provide future research directions for drowsiness detection techniques based on physiological signals. [Abstract copyright: © The Author(s), under exclusive licence to Springer Nature B.V. 2022.]