
    Group-In: Group Inference from Wireless Traces of Mobile Devices

    Full text link
    This paper proposes Group-In, a wireless scanning system to detect static or mobile groups of people in indoor or outdoor environments. Group-In collects only wireless traces from Bluetooth-enabled mobile devices for group inference. The key problem addressed in this work is to detect not only static groups but also moving groups with a multi-phased approach, based only on the noisy wireless Received Signal Strength Indicator (RSSI) values observed by multiple wireless scanners, without localization support. We propose new centralized and decentralized schemes to process the sparse and noisy wireless data, and leverage graph-based clustering techniques (sketched below) for group detection from both short-term and long-term perspectives. Group-In provides two outcomes: 1) group detection over short time intervals, such as two minutes, and 2) long-term linkages over periods such as a month. To verify the performance, we conduct two experimental studies. One consists of 27 controlled scenarios in lab environments. The other is a real-world scenario in which we place Bluetooth scanners in an office environment and employees carry beacons for more than one month. Both the controlled and real-world experiments result in high-accuracy group detection over short time intervals and sampling liberties, in terms of the Jaccard index and pairwise similarity coefficient.

    Comment: This work has been funded by the EU Horizon 2020 Programme under Grant Agreements No. 731993 (AUTOPILOT) and No. 871249 (LOCUS). The content of this paper does not reflect the official opinion of the EU. Responsibility for the information and views expressed therein lies entirely with the authors. Proc. of ACM/IEEE IPSN'20, 2020.
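
    The graph-based clustering step can be pictured with a minimal sketch, not the paper's actual implementation: treat each device as a node, weight edges by how similarly the scanners "see" a pair of devices within one short window, and report connected components as groups. The correlation measure, threshold, and names below are illustrative assumptions.

        import numpy as np
        import networkx as nx

        def detect_groups(rssi_by_device, threshold=0.8):
            """rssi_by_device: device id -> mean RSSI per scanner (one window).
            Returns the inferred groups as sets of device ids."""
            g = nx.Graph()
            devices = list(rssi_by_device)
            g.add_nodes_from(devices)
            for i, a in enumerate(devices):
                for b in devices[i + 1:]:
                    # Co-located devices yield similar RSSI fingerprints across
                    # the scanners, so correlate the two per-scanner vectors.
                    sim = np.corrcoef(rssi_by_device[a], rssi_by_device[b])[0, 1]
                    if sim >= threshold:
                        g.add_edge(a, b)
            return [set(c) for c in nx.connected_components(g)]

        # Toy window: dev_a and dev_b sit near scanner 1, dev_c near scanner 3.
        window = {"dev_a": np.array([-45.0, -70.0, -90.0]),
                  "dev_b": np.array([-48.0, -68.0, -88.0]),
                  "dev_c": np.array([-90.0, -75.0, -50.0])}
        print(detect_groups(window))  # [{'dev_a', 'dev_b'}, {'dev_c'}]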

    AI Modeling Approaches for Detecting, Characterizing, and Predicting Brief Daily Behaviors such as Toothbrushing using Wrist Trackers

    Get PDF
    Continuous advancements in wrist-worn sensors have opened up exciting possibilities for real-time monitoring of individuals' daily behaviors, with the aim of promoting healthier, more organized, and more efficient lives. Understanding the duration of specific daily behaviors has become of interest to individuals seeking to optimize their lifestyles. However, there is still a research gap when it comes to monitoring short-duration behaviors that have a significant impact on health using wrist-worn inertial sensors in natural environments. These behaviors often involve repetitive micro-events that last only a few seconds or even less, making their detection and analysis challenging. Furthermore, these micro-events are often surrounded by non-repetitive boundary events, further complicating identification. Effective detection and timely intervention during these short-duration behaviors are crucial for designing personalized interventions that can positively impact individuals' lifestyles.

    To address these challenges, this dissertation introduces three models: mORAL, mTeeth, and Brushing Prompt. These models leverage wrist-worn inertial sensors to accurately infer short-duration behaviors, identify repetitive micro-behaviors, and provide timely interventions related to oral hygiene.

    The dissertation's contributions extend beyond the development of these models. First, precise and detailed labels for each brief, micro-repetitive behavior were acquired to train and validate the models effectively. This involved meticulously marking the exact start and end times of each event, including any intervening pauses, at second-level granularity; a comprehensive scientific research study was conducted to collect such data from participants in their free-living natural environments. Second, a solution is proposed for sensor placement variability: given the different possible positions of the sensor within a wristband and variations in wristband placement on the wrist, the model must accurately determine the relative configuration of the inertial sensor with respect to the wrist, which is crucial for inferring the orientation of the hand. Additionally, time synchronization errors between the sensor data and the associated video, despite both being collected on the same smartphone, are addressed through an algorithm that tightly synchronizes the two data sources without relying on an explicit anchor event. Furthermore, an event-based approach is introduced to identify candidate segments of data for applying machine learning models, outperforming the traditional fixed-window approach (a minimal sketch follows this abstract); these candidate segments enable reliable detection of brief daily behaviors in a computationally efficient manner suitable for real-time use. The dissertation also presents a computationally lightweight method for identifying anchor events using wrist-worn inertial sensors; anchor events play a vital role in assigning unambiguous labels in fixed-length-window data segmentation and in demarcating transitions between micro-repetitive events. Significant features are extracted, and explainable machine learning models are developed to ensure reliable detection of brief daily and micro-repetitive behaviors. Lastly, the dissertation addresses the crucial question of the opportune moment for intervention during brief daily behaviors using wrist-worn inertial sensors.

    By leveraging these sensors, users can receive timely and personalized interventions to enhance their performance and improve their lifestyles. Overall, this dissertation makes substantial contributions to the field of real-time monitoring of short-duration behaviors. It tackles various technical challenges, provides innovative solutions, and demonstrates the potential of wrist-worn sensors to facilitate effective interventions and promote healthier behaviors. By advancing our understanding of these behaviors and optimizing intervention strategies, this research has the potential to significantly impact individuals' well-being and contribute to the development of personalized health solutions.
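
    The event-based segmentation idea can be illustrated with a minimal sketch, assuming a simple energy test over the accelerometer magnitude; the sampling rate, threshold, and minimum duration below are illustrative, not the dissertation's parameters.

        import numpy as np

        def candidate_segments(acc, fs=50, thresh=1.2, min_dur_s=0.5):
            """acc: (n, 3) wrist accelerometer samples in g; fs in Hz.
            Returns (start, end) sample indices of candidate activity bursts,
            so only these segments are classified rather than every
            fixed-length window."""
            mag = np.linalg.norm(acc, axis=1)   # orientation-free magnitude
            active = mag > thresh               # crude per-sample activity test
            segments, start = [], None
            for i, flag in enumerate(active):
                if flag and start is None:
                    start = i                   # burst begins
                elif not flag and start is not None:
                    if i - start >= min_dur_s * fs:   # drop spurious blips
                        segments.append((start, i))
                    start = None
            if start is not None and len(active) - start >= min_dur_s * fs:
                segments.append((start, len(active)))
            return segments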

    Augmenting Vision-Based Human Pose Estimation with Rotation Matrix

    Full text link
    Fitness applications are commonly used to monitor activities within the gym, but they often fail to track indoor gym activities automatically. This study proposes a model that combines pose estimation with a novel data augmentation method, the rotation matrix (illustrated below), aiming to enhance the classification accuracy of activity recognition based on pose estimation data. In our experiments, we compare different classification algorithms along with image augmentation approaches. Our findings demonstrate that an SVM with SGD optimization, trained with rotation-matrix data augmentation, yields the most accurate results, achieving 96% accuracy in classifying five physical activities. Without the data augmentation techniques, the baseline accuracy remains at a modest 64%.

    Comment: 24 pages
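
    Rotation-matrix augmentation of pose keypoints can be sketched as follows; the normalized keypoint layout, the angles, and the function names are assumptions for illustration, not the paper's exact setup.

        import numpy as np

        def rotate_keypoints(kps, degrees, center=(0.5, 0.5)):
            """kps: (n_joints, 2) normalized (x, y) pose keypoints.
            Returns the pose rotated by `degrees` around `center`."""
            theta = np.deg2rad(degrees)
            rot = np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])  # 2-D rotation matrix
            return (kps - np.asarray(center)) @ rot.T + np.asarray(center)

        # Expand one labelled pose into several rotated training variants.
        pose = np.random.rand(17, 2)               # e.g. 17 COCO-style joints
        augmented = [rotate_keypoints(pose, a) for a in (-15, -5, 5, 15)]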

    Unobtrusive and pervasive video-based eye-gaze tracking

    Get PDF
    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Reconstructing Human Motion

    Get PDF
    This thesis presents methods for reconstructing human motion in a variety of applications, and begins with an introduction to the general motion capture hardware and processing pipeline. Then, a data-driven method for the completion of corrupted marker-based motion capture data is presented. The approach is especially suitable for challenging cases, e.g., when complete marker sets of multiple body parts are missing over a long period of time. Using a large motion capture database, and without the need for extensive preprocessing, the method is able to fix missing markers across different actors and motion styles. The approach can be used with incrementally growing prior databases, as the underlying search technique for similar motions scales well to huge databases. The resulting clean motion database can then be used in the next application: a generic data-driven method for recognizing human full-body actions from live motion capture data originating from various sources. The method queries an annotated motion capture database for similar motion segments (a toy sketch follows below) and is able to handle temporal deviations from the original motion. The approach is online-capable, works in real time, requires virtually no preprocessing, and is shown to work with a variety of feature sets extracted from the input data, including positional data, sparse accelerometer signals, skeletons extracted from depth sensors, and even video data. Evaluation is done by comparing against a frame-based Support Vector Machine approach on a freely available motion database as well as a database containing Judo referee signal motions. In the last part, a method to indirectly reconstruct the effects of the human heart's pumping motion from video data of the face is applied in the context of epileptic seizures. These episodes usually feature characteristic heart rate patterns, such as a significant increase at seizure onset and seizure-type-dependent drop-offs near the end. The pulse detection method is evaluated for applicability to seizure detection in a multitude of scenarios, ranging from videos recorded in a controlled clinical environment to patient-supplied videos of seizures filmed with smartphones.
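
    As a rough sketch of the data-driven retrieval underlying the completion and recognition methods (the fixed-length statistical features and Euclidean metric here are simplifying assumptions, not the thesis's actual search technique):

        import numpy as np

        class MotionDatabase:
            """Toy nearest-neighbour index over annotated motion segments."""
            def __init__(self):
                self.features, self.labels = [], []

            def add(self, segment, label):
                # segment: (frames, channels) array, e.g. stacked marker positions.
                self.features.append(self._featurize(segment))
                self.labels.append(label)

            def query(self, segment, k=3):
                """Return the annotations of the k most similar stored segments."""
                q = self._featurize(segment)
                dists = [np.linalg.norm(q - f) for f in self.features]
                return [self.labels[i] for i in np.argsort(dists)[:k]]

            @staticmethod
            def _featurize(segment):
                # Per-channel mean and standard deviation give a crude,
                # duration-invariant descriptor of the motion.
                return np.concatenate([segment.mean(axis=0), segment.std(axis=0)])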

    As You Are, So Shall You Move Your Head: A System-Level Analysis between Head Movements and Corresponding Traits and Emotions

    Full text link
    Identifying physical traits and emotions from system-sensed physical activities is a challenging problem in the realm of human-computer interaction. Our work contributes in this context by investigating an underlying connection between head movements and corresponding traits and emotions. To do so, we utilize a head movement measuring device called eSense, which provides the acceleration and rotation of the head. First, we conduct a thorough study of head movement data collected from 46 people using eSense while inducing five different emotional states in them in isolation. Our analysis reveals several new findings on head movement, which, in turn, lead us to a novel unified solution for identifying different human traits and emotions by exploiting machine learning techniques over head movement data (sketched below). Our analysis confirms that the proposed solution achieves high accuracy on the collected data. Accordingly, we develop an integrated unified solution for real-time emotion and trait identification from head movement data, leveraging the outcomes of our analysis.

    Comment: 9 pages, 7 figures, NSysS 2019
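
    A hedged sketch of the pipeline such a system implies: the window length, the statistical features, the emotion labels, and the random-forest classifier below are assumptions for illustration, not the paper's exact design.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(imu, fs=50, win_s=2.0):
            """imu: (n, 6) eSense accel xyz + gyro xyz. Yields one feature
            vector (per-channel mean, std, range) per non-overlapping window."""
            win = int(win_s * fs)
            for start in range(0, len(imu) - win + 1, win):
                w = imu[start:start + win]
                yield np.concatenate([w.mean(0), w.std(0), np.ptp(w, 0)])

        # Toy stand-ins: rows of X are windows, y holds the induced emotion
        # label recorded for each window during data collection.
        X = np.array(list(window_features(np.random.randn(5000, 6))))
        y = np.random.choice(["anger", "fear", "joy", "sadness", "neutral"], len(X))
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)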