Employing Environmental Data and Machine Learning to Improve Mobile Health Receptivity
Behavioral intervention strategies can be enhanced by recognizing human activities using eHealth technologies. A thorough literature review shows that activity spotting and the insights derived from it can be used to detect daily routines and thereby infer receptivity for mobile notifications, similar to just-in-time support. Towards this end, this work develops a machine learning model to analyze the motivation of digital mental health users who answer self-assessment questions in their everyday lives through an intelligent mobile application. A uniform and extensible sequence prediction model combining environmental data with everyday activities has been created and validated as a proof of concept through an experiment. We find that the reported receptivity is not sequentially predictable on its own; the mean error and standard deviation are only slightly below the by-chance comparison. Nevertheless, predicting the upcoming activity covers about 39% of the day (up to 58% in the best case) and can be linked to individual users' intervention preferences to indirectly find an opportune moment of receptivity. We therefore introduce an application that combines the influences of sensor data on activities with intervention thresholds and allows preferred events to be set on a weekly basis. Combining these approaches opens promising avenues for innovative behavioral assessments; identifying and segmenting the appropriate set of activities is key. Deliberate and thoughtful design thus lays the foundation for further development within research projects, for example by extending the activity weighting process or introducing model reinforcement.
Funding: BMBF, 13GW0157A, joint project: Self-administered Psycho-TherApy-SystemS (SELFPASS), subproject: Data Analytics and Prescription for SELFPASS; TU Berlin, open-access funds - 201
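The abstract's combination of next-activity prediction with user-specific intervention thresholds can be illustrated with a small sketch. The Python snippet below is not the authors' model: it assumes a first-order transition model over activities keyed by a coarse time-of-day bucket (a stand-in for the environmental data), and all names (NextActivityPredictor, opportune_moment, the activity labels) are hypothetical.

```python
# Hypothetical sketch of the kind of next-activity predictor the abstract
# describes: a first-order transition model over daily activities, keyed by
# a coarse context bucket (time of day standing in for environmental data),
# combined with per-user intervention thresholds. The modeling choice and
# all names are illustrative assumptions, not the authors' implementation.
from collections import Counter, defaultdict

class NextActivityPredictor:
    def __init__(self):
        # (context, current_activity) -> Counter of observed next activities
        self.transitions = defaultdict(Counter)

    @staticmethod
    def context(hour):
        # Coarse context bucket; real environmental features would go here.
        return "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"

    def fit(self, days):
        # days: list of [(hour, activity), ...] sequences, one per day
        for day in days:
            for (h, cur), (_, nxt) in zip(day, day[1:]):
                self.transitions[(self.context(h), cur)][nxt] += 1

    def predict(self, hour, activity):
        counts = self.transitions.get((self.context(hour), activity))
        return counts.most_common(1)[0][0] if counts else None

def opportune_moment(predicted, preferences, threshold=0.5):
    # Link the predicted activity to user-specific intervention preferences:
    # deliver a prompt only when the preference weight clears the threshold.
    return preferences.get(predicted, 0.0) >= threshold

# Usage: train on logged days, then gate a notification.
model = NextActivityPredictor()
model.fit([[(8, "commute"), (9, "work"), (12, "lunch"), (13, "work")],
           [(8, "commute"), (9, "work"), (12, "lunch"), (14, "meeting")]])
nxt = model.predict(9, "work")  # -> "lunch"
print(nxt, opportune_moment(nxt, {"lunch": 0.8, "work": 0.2}))
```

A richer model would replace the time-of-day bucket with the weighted environmental features and add the weekly preferred-event schedule the abstract mentions.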
Fireground location understanding by semantic linking of visual objects and building information models
This paper presents an outline for improved localization and situational awareness in fire emergency situations, based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety and fire incident management are addressed. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, the positions of cameras, objects, and events can be estimated in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has investigated linking BIM with low-cost point sensors for fireground understanding, but those approaches did not take into account the benefits of video analysis or recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which together make it possible to automatically create situational awareness, improve localization, and facilitate overall fire understanding.
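As a rough illustration of the semantic-linking step, the sketch below matches detection labels against BIM element metadata and votes for the containing room as a camera-location estimate. It is a hypothetical reconstruction under stated assumptions, not the paper's pipeline; the BIM records, label synonyms, and function names are invented.

```python
# Illustrative sketch (not the paper's code) of the semantic-linking idea:
# detected object labels from a video stream are matched against
# higher-level metadata descriptors of BIM elements, and the room whose
# elements best explain the detections is taken as the camera-location
# estimate. The BIM records and label vocabulary are invented for the example.
from collections import Counter

# Minimal stand-in for BIM metadata: element class, label synonyms, containing room.
BIM_ELEMENTS = [
    {"class": "FireDoor",     "labels": {"door", "fire door"},   "room": "corridor_2"},
    {"class": "Extinguisher", "labels": {"fire extinguisher"},   "room": "corridor_2"},
    {"class": "Staircase",    "labels": {"stairs", "staircase"}, "room": "stairwell_A"},
]

def link_detections(detections):
    """Map visual/thermal detection labels to BIM element records."""
    matches = []
    for label in detections:
        for elem in BIM_ELEMENTS:
            if label.lower() in elem["labels"]:
                matches.append(elem)
    return matches

def estimate_camera_room(detections):
    """Vote for the room that contains the most matched BIM elements."""
    votes = Counter(m["room"] for m in link_detections(detections))
    return votes.most_common(1)[0][0] if votes else None

# Example: two recognized objects place the camera in corridor_2.
print(estimate_camera_room(["door", "fire extinguisher"]))  # -> corridor_2
```

In a full system the simple label-set lookup would be replaced by matching against the BIM's semantic metadata descriptors, and the room vote by a geometric estimate of camera, object, and event positions.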
From Personalized Medicine to Population Health: A Survey of mHealth Sensing Techniques
Mobile sensing apps have been widely used as a practical approach to collect behavioral and health-related information from individuals and to provide timely interventions that promote health and well-being, such as mental health and chronic care. As the objectives of mobile sensing can be either (a) personalized medicine for individuals or (b) public health for populations, in this work we review the design of these mobile sensing apps and propose to categorize them into two paradigms: (i) Personal Sensing and (ii) Crowd Sensing. While both paradigms may incorporate common ubiquitous sensing technologies, such as wearable sensors, mobility monitoring, mobile data offloading, and/or cloud-based data analytics to collect and process sensing data from individuals, we present a novel taxonomy that specifies and classifies apps/systems along three stages of the mHealth sensing life-cycle: (1) Sensing Task Creation & Participation, (2) Health Surveillance & Data Collection, and (3) Data Analysis & Knowledge Discovery. With respect to the different goals of the two paradigms, this work systematically reviews the field and summarizes the design of typical apps/systems in terms of the configurations and interactions between these components. Beyond this summary, the proposed taxonomy also helps identify potential directions of mobile sensing for health from both the personalized-medicine and population-health perspectives.
Comment: Submitted to a journal for review
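To make the taxonomy concrete, the following sketch encodes the two paradigms and three life-cycle stages from the abstract as a small data structure. The stage option values and example apps are invented placeholders, not drawn from the survey.

```python
# A minimal sketch of the surveyed taxonomy as a data structure: each
# app/system is classified by its sensing paradigm and by how it handles
# the three life-cycle stages named in the abstract. Only the paradigm
# and stage names come from the abstract; the rest is hypothetical.
from dataclasses import dataclass
from enum import Enum

class Paradigm(Enum):
    PERSONAL_SENSING = "personal sensing"  # personalized medicine for individuals
    CROWD_SENSING = "crowd sensing"        # public health for populations

@dataclass
class MHealthSensingApp:
    name: str
    paradigm: Paradigm
    task_creation: str     # (1) Sensing Task Creation & Participation
    data_collection: str   # (2) Health Surveillance & Data Collection
    data_analysis: str     # (3) Data Analysis & Knowledge Discovery

# Hypothetical entries showing how two apps would be placed in the taxonomy.
apps = [
    MHealthSensingApp("MoodTracker", Paradigm.PERSONAL_SENSING,
                      "self-initiated", "wearable + EMA prompts",
                      "on-device model"),
    MHealthSensingApp("FluWatch", Paradigm.CROWD_SENSING,
                      "campaign-based recruitment", "opportunistic mobility traces",
                      "cloud-based population analytics"),
]
for app in apps:
    print(f"{app.name}: {app.paradigm.value} / {app.task_creation}")
```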