
    Wearable-based human activity recognition: from a healthcare application to a kinetic energy harvesting approach

    Wearable technology is changing society by becoming an essential component of daily life. Human activity recognition (HAR) is one of the most prominent research areas in which wearable devices play a key role. The first major contribution of this dissertation is a smart physical workload tracking system that combines wearable-based HAR and heart rate tracking. The proposed system employs a concept from ergonomics, Frimat’s method, to compute the physical workload from heart rate measurements within a specified time window. The dissertation includes a case study in which tracking an individual over the course of 20 days corroborates the ability of the system to assess adaptation to an exercise routine. The second and third contributions address kinetic energy harvesting (KEH) in wearable environments. The second contribution is an energy logger for wrist-worn systems that tracks energy generation in KEH systems during daily activities, making it possible to determine whether the harvested energy is enough to power a conventional wearable device. The proposed system computes the harvested energy using the characteristics of the target load, which in this case is a battery charger. I carried out experiments with multiple subjects to examine the generation capabilities of a commercial harvester under the conditions of human motion. This study provides insights into the performance and limitations of kinetic harvesters as battery chargers. The third contribution is a KEH-based HAR system that uses deep learning, data augmentation and transfer learning to outperform existing classification approaches in the KEH domain. The proposed architecture combines convolutional neural networks (CNN) and long short-term memory networks (LSTM), a combination demonstrated to outperform other architectures found in the literature. Since deep learning classifiers require large amounts of data and KEH datasets are limited in size, the dissertation also proposes three data augmentation methods that synthesize KEH signals simulating new users. Finally, transfer learning is employed to build a system whose performance is maintained independent of device location or the subject wearing the device.
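    The dissertation's three augmentation methods are not spelled out in this abstract; the sketch below only illustrates generic time-series augmentations (jittering, amplitude scaling, and time-warping) of the kind used to synthesize "new user" traces from a small KEH dataset. Function names and parameter values are assumptions, not the author's actual methods.

```python
import numpy as np

def jitter(signal, sigma=0.02, rng=None):
    """Add Gaussian noise to a 1-D KEH voltage/power trace."""
    rng = rng or np.random.default_rng()
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def scale(signal, sigma=0.1, rng=None):
    """Scale the whole trace by a random factor (simulates stronger or weaker motion)."""
    rng = rng or np.random.default_rng()
    return signal * rng.normal(1.0, sigma)

def time_warp(signal, max_stretch=0.2, rng=None):
    """Stretch or compress the trace in time by resampling (simulates slower or faster gait)."""
    rng = rng or np.random.default_rng()
    factor = 1.0 + rng.uniform(-max_stretch, max_stretch)
    n = signal.shape[0]
    warped_idx = np.linspace(0, n - 1, int(round(n * factor)))
    warped = np.interp(warped_idx, np.arange(n), signal)
    # Resample back to the original length so the classifier input size stays fixed.
    return np.interp(np.linspace(0, len(warped) - 1, n), np.arange(len(warped)), warped)

def augment(signal, rng=None):
    """Compose the three transforms to synthesize one new trace."""
    return jitter(scale(time_warp(signal, rng=rng), rng=rng), rng=rng)
```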

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things enables sensor-based activity recognition applications. However, substantial challenges can affect the performance of recognition systems in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss the open issues and provide some insights for future directions.

    The Real-Time Classification of Competency Swimming Activity Through Machine Learning

    Every year, an average of 3,536 people die from drowning in America. The significant factors behind unintentional drowning are a lack of water safety awareness and of swimming proficiency. Current industry and research trends in swimming activity recognition and commercial motion sensors focus on lap swimming performed by expert swimmers and do not account for freeform activities. Enhancing swimming education through wearable technology can help people learn efficient and effective swimming techniques and water safety. We developed a novel wearable system capable of storing and processing sensor data to categorize competitive and survival swimming activities on a mobile device in real time. This paper discusses the sensor placement, the hardware and app design, and the research process used to achieve activity recognition. The data we gathered come from various swimming skill levels, from beginner to elite swimmers. Our wearable system uses novel angle-based features as inputs to machine learning algorithms to classify flip turns, traditional competitive strokes, and survival swimming strokes. The machine learning algorithm classified all activities with an F-measure of 0.935. Finally, we examined deep learning and created a CNN model that classifies competitive and survival swimming strokes with 95% accuracy in real time on a mobile device.
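    The paper's exact angle-based features and chosen classifiers are not detailed in the abstract; the following is a minimal sketch of one plausible pipeline, computing pitch and roll angles from a wrist-worn accelerometer window, summarizing them into a feature vector, and training an off-the-shelf classifier. The feature set, window handling, and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pitch_roll(acc):
    """acc: (n_samples, 3) accelerometer window in g. Returns pitch and roll in radians."""
    ax, ay, az = acc[:, 0], acc[:, 1], acc[:, 2]
    pitch = np.arctan2(ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, np.sqrt(ax**2 + az**2))
    return pitch, roll

def window_features(acc_window):
    """Summarize the angle trajectories of one window into a fixed-length feature vector."""
    pitch, roll = pitch_roll(acc_window)
    feats = []
    for angle in (pitch, roll):
        feats += [angle.mean(), angle.std(), angle.min(), angle.max(),
                  np.abs(np.diff(angle)).mean()]  # mean angular change per sample
    return np.array(feats)

def train_stroke_classifier(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: stroke class per window (hypothetical data)."""
    X = np.stack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```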

    A survey on wearable sensor modality centred human activity recognition in health care

    Increased life expectancy coupled with declining birth rates is leading to an aging population structure. Aging-related changes, such as physical or cognitive decline, can affect people's quality of life and result in injuries, mental health problems or a lack of physical activity. Sensor-based human activity recognition (HAR) is one of the most promising assistive technologies for supporting older people's daily life and has enabled enormous potential in human-centred applications. Recent surveys of HAR focus either only on deep learning approaches or on a single sensor modality. This survey aims to provide a more comprehensive introduction to HAR for newcomers and researchers. We first introduce the state-of-the-art sensor modalities in HAR. We then examine the techniques involved in each step of wearable-sensor-modality-centred HAR in terms of sensors, activities, data pre-processing, feature learning and classification, covering both conventional approaches and deep learning methods. In the feature learning section, we focus on both hand-crafted features and features learned automatically by deep networks. We also present ambient-sensor-based HAR, including camera-based systems, and systems that combine wearable and ambient sensors. Finally, we identify the corresponding challenges in HAR to pose research problems for further improvement.
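    As a companion to the conventional pipeline the survey describes (segmentation, hand-crafted features, classification), here is a minimal sliding-window sketch. The window length, overlap and statistical features are generic choices for illustration, not ones prescribed by the survey.

```python
import numpy as np

def sliding_windows(stream, win_len=128, overlap=0.5):
    """Segment a (n_samples, n_channels) sensor stream into overlapping windows."""
    step = int(win_len * (1 - overlap))
    return [stream[i:i + win_len]
            for i in range(0, len(stream) - win_len + 1, step)]

def handcrafted_features(window):
    """Typical time-domain statistics computed per channel."""
    feats = []
    for ch in window.T:
        feats += [ch.mean(), ch.std(), ch.min(), ch.max(),
                  np.sqrt(np.mean(ch**2))]  # RMS, a rough energy measure
    return np.array(feats)

# Feature matrix for a conventional classifier (e.g. SVM or random forest):
# X = np.stack([handcrafted_features(w) for w in sliding_windows(stream)])
```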

    Human Body Posture Recognition Approaches: A Review

    Human body posture recognition has become the focus of many researchers in recent years. Recognition of body posture is used in various applications, including surveillance, security, and health monitoring. However, systems that determine the body’s posture from video clips, images, or sensor data face many challenges when used in the real world. This paper provides an important review of how the most essential hardware technologies are used in posture recognition systems. These systems capture and collect datasets through accelerometer sensors or computer vision. In addition, this paper presents a comparison with the state of the art in terms of accuracy. We also present the advantages and limitations of each system and suggest promising future ideas that can increase the efficiency of existing posture recognition systems. Finally, the most common datasets used in these systems are described in detail. The review aims to be a resource for choosing a method for recognizing the posture of the human body and the techniques that suit each method. It analyzes more than 80 papers published between 2015 and 202

    A fast and robust deep convolutional neural networks for complex human activity recognition using smartphone

    Human activity recognition (HAR) techniques play a significant role in healthcare and sports applications by monitoring humans’ daily behavior. This has spurred the demand for intelligent sensors and has given rise to the explosive growth of wearable and mobile devices, which provide abundant human activity data (big data). Powerful algorithms are required to analyze these heterogeneous, high-dimensional streaming data efficiently. This paper proposes a novel fast and robust deep convolutional neural network structure (FR-DCNN) for human activity recognition using a smartphone. It enhances the effectiveness of the raw data collected from the inertial measurement unit (IMU) sensors, and extends the information they carry, by integrating a series of signal processing algorithms and a signal selection module. It enables fast computation when building the DCNN classifier by adding a data compression module. Experimental results on a sampled dataset of 12 complex activities show that the proposed FR-DCNN model is the best method for fast computation and high-accuracy recognition. The FR-DCNN model needs only 0.0029 s to predict an activity online with 95.27% accuracy. Meanwhile, it takes only 88 s on average to establish the DCNN classifier on the compressed dataset, with little precision loss (94.18%).
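    The FR-DCNN's actual signal processing, signal selection and compression modules are not specified in the abstract; the sketch below only illustrates the general idea of pairing a crude compression step (decimation) with a compact 1-D convolutional classifier. Layer sizes, window shapes and the compression factor are assumptions.

```python
from tensorflow.keras import layers, models

def compress(window, factor=2):
    """Crude compression by decimation: keep every `factor`-th sample of an IMU window."""
    return window[::factor]

def build_dcnn(input_len=64, n_channels=6, n_classes=12):
    """Compact 1-D CNN classifier for compressed IMU windows."""
    return models.Sequential([
        layers.Input(shape=(input_len, n_channels)),
        layers.Conv1D(32, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_dcnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(compressed_windows, labels, epochs=20)  # hypothetical training data
```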

    An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones

    Traditional pattern recognition approaches have gained considerable popularity, but they depend largely on manual feature extraction, which limits how well the resulting models generalize. With human activity recognition, sequences of accelerometer data recorded by smartphones can be classified into well-known movements. Given the high success and wide adoption of deep learning approaches for recognizing human activities, these techniques are widely used in wearable devices and smartphones. In this paper, convolutional layers are combined with long short-term memory (LSTM) layers in a deep neural network for human activity recognition (HAR). The proposed model extracts features automatically and categorizes them with a small set of model attributes. LSTM is a variant of the recurrent neural network (RNN) well suited to processing temporal sequences. The proposed architecture is evaluated on the UCI-HAR dataset for the Samsung Galaxy S2, covering various human activities. The CNN and LSTM models are arranged in series: the CNN is applied to each input segment, and its output is fed to the LSTM classifier as one time step. The most important hyperparameter is the number of filter maps used to encode different portions of the input. The observations are transformed using Gaussian standardization. The proposed CNN-LSTM model is efficient and lightweight and has shown higher robustness and better activity detection capability than traditional algorithms, with an accuracy of 97.89%.
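    A minimal sketch, assuming the arrangement the abstract describes: Gaussian (zero-mean, unit-variance) standardization of the inputs, a CNN applied per sub-window via TimeDistributed layers, and an LSTM consuming the per-step encodings. The sub-window split, layer sizes and the UCI-HAR shapes (128-sample windows, 9 channels, 6 classes) are assumptions, not the paper's reported configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

def standardize(X, mean=None, std=None):
    """Gaussian standardization per channel; fit the statistics on training data only."""
    mean = X.mean(axis=(0, 1)) if mean is None else mean
    std = X.std(axis=(0, 1)) + 1e-8 if std is None else std
    return (X - mean) / std, mean, std

def build_cnn_lstm(n_steps=4, step_len=32, n_channels=9, n_classes=6):
    """Each 128-sample window is split into n_steps sub-windows; a shared CNN encodes
    each sub-window, and the LSTM treats the encodings as a temporal sequence."""
    return models.Sequential([
        layers.Input(shape=(n_steps, step_len, n_channels)),
        layers.TimeDistributed(layers.Conv1D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling1D(2)),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(100),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# X has shape (n_windows, 128, 9); standardize it, then reshape to (n_windows, 4, 32, 9) for model.fit.
```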