
    Employing Environmental Data and Machine Learning to Improve Mobile Health Receptivity

    Behavioral intervention strategies can be enhanced by recognizing human activities with eHealth technologies. As a thorough literature review shows, activity spotting and the insights it adds can be used to detect daily routines and thereby infer receptivity for mobile notifications, similar to just-in-time support. Towards this end, this work develops a machine learning model to analyze the motivation of digital mental health users who answer self-assessment questions in their everyday lives through an intelligent mobile application. A uniform and extensible sequence prediction model combining environmental data with everyday activities has been created and validated as a proof of concept through an experiment. We find that the reported receptivity is not sequentially predictable on its own; the mean error and standard deviation are only slightly below those of a by-chance comparison. Nevertheless, predicting the upcoming activity covers about 39% of the day (up to 58% in the best case) and can be linked to users' individual intervention preferences to indirectly find an opportune moment of receptivity. Therefore, we introduce an application that incorporates the influence of sensor data on activities and intervention thresholds, and allows preferred events to be set on a weekly basis. Combining these approaches opens promising avenues for innovative behavioral assessments. Identifying and segmenting the appropriate set of activities is key. Consequently, deliberate and thoughtful design lays the foundation for further development within research projects, for example by extending the activity weighting process or introducing model reinforcement.
    Funding: BMBF, 13GW0157A, joint project Self-administered Psycho-TherApy-SystemS (SELFPASS), subproject Data Analytics and Prescription for SELFPASS; TU Berlin, open-access funds - 201
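    The abstract does not include the model itself; as a minimal, hedged sketch of next-activity sequence prediction, the snippet below trains a first-order Markov predictor on daily activity sequences. The activity labels, example data, and transition-count approach are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical activity labels; the real label set would come from the study data.
ACTIVITIES = ["sleep", "commute", "work", "exercise", "leisure"]

class NextActivityPredictor:
    """First-order Markov model: predict the most likely next activity
    from observed transition counts in a user's activity history."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequences):
        # sequences: iterable of per-day activity-label lists
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, current):
        successors = self.counts.get(current)
        if not successors:
            return None  # unseen activity: no prediction possible
        return max(successors, key=successors.get)

# Usage with made-up daily sequences
days = [
    ["sleep", "commute", "work", "commute", "leisure", "sleep"],
    ["sleep", "commute", "work", "exercise", "leisure", "sleep"],
]
model = NextActivityPredictor()
model.fit(days)
print(model.predict("work"))  # -> "commute" (most frequent observed successor)
```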

    Mobile phones: a trade-off between speech intelligibility and exposure to noise levels and to radio-frequency electromagnetic fields

    When making phone calls, cellphone and smartphone users are exposed to radio-frequency (RF) electromagnetic fields (EMFs) and sound pressure simultaneously. Speech intelligibility during mobile phone calls is related to the sound pressure level of speech relative to potential background sounds, and also to the RF-EMF exposure, since signal quality is correlated with RF-EMF strength. Additionally, speech intelligibility, sound pressure level, and exposure to RF-EMFs depend on how the call is made (on speaker, held at the ear, or with a headset). This study determines the relationship between speech intelligibility, sound exposure, and exposure to RF-EMFs. To this aim, the transmitted RF-EMF power was recorded during phone calls made by 53 subjects in three different, controlled exposure scenarios: calling with the phone at the ear, calling in speaker mode, and calling with a headset. This emitted power is directly proportional to the exposure to RF-EMFs and is translated into specific absorption rate using numerical simulations. Simultaneously, sound pressure levels were recorded and speech intelligibility was assessed during each phone call. The results show that exposure to RF-EMFs, quantified as the specific absorption in the head, is reduced when speaker mode or a headset is used, in comparison to calling next to the ear. Personal exposure to sound pressure is also found to be highest when the phone is held next to the ear. On the other hand, speech perception is found to be best when calling with the phone next to the ear, in comparison to the other studied conditions, when background noise is present.

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, one for each type of sensor. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs of the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing the eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is, as expected, reduced for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase itself, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
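    No code accompanies the abstract; as an illustrative sketch of the fixed-rule fusion scheme described above, the snippet below combines per-class probability vectors from three hypothetical mono-modal classifiers using the Sum, Product, and Majority Voting rules. The function names, probability values, and class ordering are assumptions for demonstration only, not the authors' implementation.

```python
import numpy as np

# The eight SHL transportation classes, in an assumed fixed order.
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse_sum(probs):
    # probs: list of per-classifier probability vectors, each of shape (n_classes,)
    return int(np.argmax(np.sum(probs, axis=0)))

def fuse_product(probs):
    return int(np.argmax(np.prod(probs, axis=0)))

def fuse_majority(probs):
    votes = [int(np.argmax(p)) for p in probs]
    return max(set(votes), key=votes.count)  # ties resolved arbitrarily

# Made-up softmax outputs for a single analysis window
motion = np.array([0.05, 0.10, 0.02, 0.03, 0.40, 0.25, 0.10, 0.05])
sound  = np.array([0.02, 0.05, 0.01, 0.02, 0.55, 0.20, 0.10, 0.05])
vision = np.array([0.10, 0.05, 0.05, 0.05, 0.30, 0.35, 0.05, 0.05])

for rule in (fuse_sum, fuse_product, fuse_majority):
    print(rule.__name__, CLASSES[rule([motion, sound, vision])])
```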

    A Home Security System Based on Smartphone Sensors

    Several new smartphones are released every year. Many people upgrade to new phones, and their old phones are not put to any further use. In this paper, we explore the feasibility of using such retired smartphones and their on-board sensors to build a home security system. We observe that door-related events such as opening and closing have unique vibration signatures compared to many types of environmental vibrational noise. These events can be captured by the accelerometer of a smartphone when the phone is mounted on a wall near a door. The rotation of a door can also be captured by the magnetometer of a smartphone when the phone is mounted on the door. We design machine learning and threshold-based methods to detect door opening events from accelerometer and magnetometer data, and build a prototype home security system that can detect door openings and notify the homeowner via email, SMS and phone calls upon break-in detection. To further augment our security system, we explore using the smartphone's built-in microphone to detect door and window openings across multiple doors and windows simultaneously. Experiments in a residential home show that the accelerometer-based detection can detect door open events with an accuracy higher than 98%, and magnetometer-based detection has 100% accuracy. By using the magnetometer method to automate the training phase of a neural network, we find that sound-based detection of door openings has an accuracy of 90% across multiple doors.
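    As a hedged illustration of the threshold-based magnetometer detection described above (no implementation is given in the abstract), the sketch below flags a door opening when the magnetic-field magnitude deviates from a door-closed baseline by more than a fixed margin. The baseline window length, threshold value, and synthetic readings are illustrative assumptions.

```python
import math

def field_magnitude(sample):
    # sample: (x, y, z) magnetometer reading in microtesla
    return math.sqrt(sum(v * v for v in sample))

def detect_door_open(samples, baseline_n=50, threshold_ut=15.0):
    """Return indices where the field magnitude deviates from the
    door-closed baseline by more than threshold_ut microtesla."""
    baseline = sum(field_magnitude(s) for s in samples[:baseline_n]) / baseline_n
    events = []
    for i, s in enumerate(samples[baseline_n:], start=baseline_n):
        if abs(field_magnitude(s) - baseline) > threshold_ut:
            events.append(i)
    return events

# Example with synthetic readings: a closed door around (20, 5, 40) uT,
# then an opening that shifts the measured field.
closed = [(20.0, 5.0, 40.0)] * 60
opened = [(35.0, 10.0, 55.0)] * 5
print(detect_door_open(closed + opened))  # indices of the "door open" samples
```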

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications; Analysis explores different approaches to emotion recognition and visualization based on different types of collected data; and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and finally outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.
