
    User Activity Recognition Method based on Atmospheric Pressure Sensing

    Several studies have analysed smartphone sensor data for context recognition and for extracting users' hobbies and preferences. As a smartphone component, the barometer is expected to be useful for activity recognition because of its low power consumption. In this work, we propose an activity recognition method that classifies a user's state as indoor or outdoor and uses the barometer in each state. In the proposed method, the floor of a building on which a user is located is estimated from atmospheric pressure variations sensed in the indoor state, and the user's location is estimated from atmospheric pressure variations along the user's track in the outdoor state. In particular, this paper delineates the method of estimating the current floor on which the user is located. We confirmed that the current floor can be closely estimated as the user moves among eighteen floors of a building.
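    The floor-estimation idea in this abstract can be sketched with the standard international barometric formula; the reference pressure, storey height, and function names below are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical values: the paper's actual calibration is not given in the abstract.
P0 = 1013.25          # reference pressure at the ground floor, hPa
FLOOR_HEIGHT_M = 3.0  # assumed height of one storey, metres

def pressure_to_altitude(p_hpa, p0_hpa=P0):
    """International barometric formula: altitude in metres above the reference."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def estimate_floor(p_hpa, p_ground_hpa):
    """Estimate the floor number from the pressure difference to a ground-floor reading."""
    delta_h = pressure_to_altitude(p_hpa) - pressure_to_altitude(p_ground_hpa)
    return round(delta_h / FLOOR_HEIGHT_M)
```

A pressure drop of roughly 0.36 hPa corresponds to about one 3 m storey near sea level, so even a modest barometer resolution suffices to separate adjacent floors.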

    Toward a unified PNT, Part 1: Complexity and context: Key challenges of multisensor positioning

    The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own, so a multisensor solution is required. Known environmental features, such as signs, buildings, terrain height variation, and magnetic anomalies, may or may not be available for positioning. The system could be stationary, carried by a pedestrian, or on any type of land, sea, or air vehicle. Furthermore, for many applications, the environment and host behavior are subject to change. The expert knowledge problem is compounded by the fact that different modules in an integrated navigation system are often supplied by different organizations, which may be reluctant to share necessary design information if it is considered intellectual property that must be protected.

    Robust human locomotion and localization activity recognition over multisensory

    Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly Inertial Measurement Units (IMUs) and Ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This research paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, Ambient, GPS, and Audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects participating in a range of daily activities, and it captures data from various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of data samples from smartphone and smartwatch sensors, labeled with a wide array of human activities. Our study incorporates novel feature extraction techniques for signal, GPS, and audio sensor data. Specifically, for localization, GPS, audio, and IMU sensors are utilized, while IMU and Ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques, such as convolutional neural networks (CNN) and long short-term memory (LSTM), have been explored. For indoor/outdoor activities, CNNs are applied, while LSTMs are utilized for locomotion activity recognition. The proposed system has been evaluated using the k-fold cross-validation method, achieving accuracy rates of 97% and 89% for locomotion activity over the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity over the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities, showing its potential for real-world applications. 
    Moreover, the research paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.
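    As a rough sketch of the kind of pipeline such abstracts describe, sensor streams are typically segmented into fixed-length overlapping windows before any features (hand-crafted or learned) are extracted; the window length, step, and features below are common defaults, not this paper's actual configuration:

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a 1-D sensor stream into fixed-length overlapping windows."""
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

def hand_features(win):
    """A few classic time-domain HAR features per window."""
    return np.array([win.mean(), win.std(), win.min(), win.max()])

acc = np.sin(np.linspace(0, 20, 500))        # stand-in for one IMU axis
X = sliding_windows(acc, window=128, step=64)  # (n_windows, 128) segments
F = np.apply_along_axis(hand_features, 1, X)   # (n_windows, 4) feature matrix
```

Each row of `F` (or each raw window in `X`, for a CNN/LSTM) then becomes one training example for the activity classifier.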

    Context Determination for Adaptive Navigation using Multiple Sensors on a Smartphone

    Navigation and positioning are inherently dependent on context, which comprises both the operating environment and the behaviour of the host vehicle or user. No single technique can provide reliable and accurate positioning in all contexts. To operate reliably across different contexts, a multi-sensor navigation system must detect its operating context and reconfigure its techniques accordingly. This paper aims to determine the behavioural and environmental contexts together, building the foundation of a context-adaptive navigation system. Both behavioural and environmental context detection results are presented. A hierarchical behavioural recognition scheme is proposed, within which the broad classes of human activities and vehicle motions are detected from smartphone accelerometer, gyroscope, magnetometer, and barometer measurements using decision trees (DT) and relevance vector machines (RVM). The detection results are further improved by exploiting behavioural connectivity. Environmental contexts (e.g., indoor and outdoor) are detected from GNSS measurements using a hidden Markov model. The paper also investigates context association to further improve the reliability of context determination. Practical test results demonstrate the resulting improvements in environmental context detection.
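    A minimal sketch of the hidden-Markov-model idea for indoor/outdoor detection, using Viterbi decoding over a discretised GNSS observation (e.g. a binned satellite count); all probabilities here are illustrative assumptions, not the paper's trained values:

```python
import numpy as np

states = ["indoor", "outdoor"]
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1],    # contexts change rarely between epochs
                  [0.1, 0.9]])
emit = np.array([[0.9, 0.1],     # indoor: few satellites visible is likely
                 [0.2, 0.8]])    # outdoor: many satellites visible is likely

def viterbi(obs):
    """Most likely indoor/outdoor sequence for observations 0 (few sats) / 1 (many)."""
    v = start * emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] * trans           # scores[i, j]: prev state i -> state j
        back.append(scores.argmax(axis=0))    # best predecessor per current state
        v = scores.max(axis=0) * emit[:, o]
    path = [int(v.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]
```

The sticky transition matrix is what smooths out isolated misleading epochs, which is the main benefit of an HMM over classifying each GNSS epoch independently.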

    Comparing CNN and Human Crafted Features for Human Activity Recognition

    Deep learning techniques such as convolutional neural networks (CNNs) have shown good results in activity recognition. One advantage of these methods is their ability to generate features automatically. This ability greatly simplifies feature extraction, a task that usually requires domain-specific knowledge, especially with big data, where purely data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has analysed the quality of the extracted features, and more specifically how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for recognition of simple activities, applying this approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard human-crafted features (HCF); (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-score on the UCI-HAR dataset, using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
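    To illustrate what "automatically generated features" means here: even a single 1D convolutional layer followed by ReLU and global max pooling maps a raw sensor window to a fixed-length feature vector, one value per kernel. The kernel count and sizes below are arbitrary stand-ins, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution of a signal with a bank of kernels."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T            # shape: (positions, n_kernels)

def cnn_features(x, kernels):
    """ReLU + global max pooling: one learned feature per kernel."""
    return np.maximum(conv1d(x, kernels), 0).max(axis=0)

signal = np.sin(np.linspace(0, 8 * np.pi, 128))  # toy accelerometer window
kernels = rng.standard_normal((4, 32))           # 4 untrained kernels of size 32
feats = cnn_features(signal, kernels)            # 4-dimensional feature vector
```

During training the kernels are optimised end to end, which is precisely what replaces the hand-crafted statistics (mean, variance, spectral bins, etc.) that HCF pipelines compute explicitly.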