5,263 research outputs found
Employing Environmental Data and Machine Learning to Improve Mobile Health Receptivity
Behavioral intervention strategies can be enhanced by recognizing human activities with eHealth technologies. As a thorough literature review shows, activity spotting and related insights can be used to detect daily routines and infer receptivity for mobile notifications, similar to just-in-time support. Towards this end, this work develops a machine-learning model to analyze the motivation of digital mental health users who answer self-assessment questions in their everyday lives through an intelligent mobile application. A uniform and extensible sequence prediction model combining environmental data with everyday activities has been created and validated as a proof of concept through an experiment. We find that the reported receptivity is not sequentially predictable on its own: the mean error and standard deviation are only slightly below the by-chance comparison. Nevertheless, predicting the upcoming activity covers about 39% of the day (up to 58% in the best case) and can be linked to individual intervention preferences to indirectly find an opportune moment of receptivity. Therefore, we introduce an application comprising the influences of sensor data on activities and intervention thresholds, as well as allowing for preferred events on a weekly basis. Combining these approaches opens promising avenues for innovative behavioral assessments; identifying and segmenting the appropriate set of activities is key. Consequently, deliberate and thoughtful design lays the foundation for further development within research projects, for example by extending the activity weighting process or introducing model reinforcement.
BMBF, 13GW0157A, Verbundprojekt: Self-administered Psycho-TherApy-SystemS (SELFPASS) - Teilvorhaben: Data Analytics and Prescription for SELFPASS. TU Berlin, Open-Access-Mittel - 201
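A minimal sketch of the kind of next-activity sequence prediction the abstract describes, using a first-order Markov-style transition table. The activity names and the counting model are illustrative assumptions, not the paper's actual method:

```python
from collections import Counter, defaultdict

def train_transitions(activity_log):
    """Count how often each activity follows another in a daily log."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(activity_log, activity_log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Most likely upcoming activity given the current one, or None if unseen."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Hypothetical day-to-day routine data.
log = ["sleep", "commute", "work", "lunch", "work", "commute", "sleep"]
model = train_transitions(log)
print(predict_next(model, "lunch"))  # → work
```

The predicted upcoming activity could then be matched against per-user intervention preferences (e.g. "prefer prompts before lunch") to pick an opportune notification moment, as the paper proposes.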
The Oneiric Reality of Electronic Scents
This paper investigates the ‘oneiric’ dimension of scent by suggesting a new design, wearable as a fashion accessory or integrated into textile technologies, to subtly alter reality and go beyond our senses. It fuses wearable ‘electronic scent’ delivery systems with pioneering biotechnologies as a ground-breaking ‘science fashion’ enabler. The purpose is to enhance wellbeing by reaching a day‐dream state of being through the sense of smell.
The sense of smell (or olfaction) is a chemical sense and part of the limbic system which regulates emotion and memory within the brain. The power of scent makes content extremely compelling by offering a heightened sense of reality which is intensified by emotions such as joy, anger and fear. Scent helps us appreciate all the senses as we embark on a sensory journey unlike any other; it enhances mood, keeps us in the moment, diverts us from distractions, reduces boredom and encourages creativity.
This paper highlights the importance of smell, the forgotten sense, and also identifies how we as humans have grown to underuse our senses. It endeavours to show how the reinvention of our sensory faculties is possible through advances in biotechnology. It introduces the new ‘data senses’ as a wearable sensory platform that triggers and fine-tunes the senses with fragrances. It puts forward a new design process that is currently being developed in clothing elements, jewellery and textile technologies, offering a new method to deliver scent electronically and intelligently in fashion and everyday consumer products. It creates a personal ‘scent wave’ around the wearer, to allow the mind to wander, to give a deeper sense of life or ‘lived reality’ (versus fantasy), a new-found satisfaction and confidence, and to reach new heights of creativity.
By combining biology with wearable technologies, we propose a biotechnological solution that can be translated into sensory fashion elements. This is a new trend in 21st century ‘data sensing’, based on holographic biosensors that sense the human condition, aromachology (the science of the effects of fragrance on behaviour), colour-therapy, and smart polymer science. The use of biosensors in the world of fashion and textiles enables us to act on visual cues or detect scent signals and rising stress levels, putting immediate information to hand.
An ‘oneiric’ mood is triggered by a spectrum of scents which is encased in a micro-computerised ‘scent‐cell’ and integrated into clothing elements or jewellery. When we inhale an unexpected scent, it takes us by surprise; the power of fragrance fills us with pleasurable ripples of multi‐sensations and dream‐like qualities. The aromas create a near trance‐like experience that induces a daydream state of (immediate) satisfaction, or a ‘revived reality’ in our personal scent bubble of reality.
The products and jewellery items were copyrighted and designed by Slim Barrett; the technology input was from EG Technology and Epigem.
CaloriNet: From silhouettes to calorie estimation in private environments
We propose a novel deep fusion architecture, CaloriNet, for the online estimation of energy expenditure for free-living monitoring in private environments, where RGB data is discarded and replaced by silhouettes. Our fused convolutional neural network architecture is trainable end-to-end to estimate calorie expenditure, using temporal foreground silhouettes alongside accelerometer data. The network is trained and cross-validated on a publicly available dataset, SPHERE_RGBD + Inertial_calorie. Results show state-of-the-art minimum error on the estimation of energy expenditure (calories per minute), outperforming alternative, standard and single-modal techniques.
Comment: 11 pages, 7 figures
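A minimal NumPy sketch of the two-branch fusion idea the abstract describes: one branch per modality (silhouette and accelerometer features), concatenated and mapped to a calorie estimate. The feature dimensions and the linear-plus-ReLU branches are illustrative assumptions standing in for the paper's convolutional sub-networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    """Stand-in for one CNN branch: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

# Hypothetical dimensions: 64-d silhouette features, 12-d accelerometer features.
w_sil = rng.normal(size=(64, 32))
w_acc = rng.normal(size=(12, 32))
w_head = rng.normal(size=(64, 1))  # fused 32 + 32 features -> calories per minute

def calorinet_sketch(silhouette_feats, accel_feats):
    """Fuse both modality branches and regress a per-window calorie estimate."""
    fused = np.concatenate([branch(silhouette_feats, w_sil),
                            branch(accel_feats, w_acc)], axis=1)
    return fused @ w_head

batch = calorinet_sketch(rng.normal(size=(8, 64)), rng.normal(size=(8, 12)))
print(batch.shape)  # → (8, 1): one calorie estimate per window
```

In the real architecture both branches are convolutional and the whole pipeline is trained end-to-end; the point here is only the mid-level concatenation of modality-specific features.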
Machine learning to model health with multimodal mobile sensor data
The widespread adoption of smartphones and wearables has led to the accumulation of rich datasets, which could aid the understanding of behavior and health in unprecedented detail. At the same time, machine learning and specifically deep learning have reached impressive performance in a variety of prediction tasks, but their use on time-series data appears challenging. Existing models struggle to learn from this unique type of data due to noise, sparsity, long-tailed distributions of behaviors, lack of labels, and multimodality.
This dissertation addresses these challenges by developing new models that leverage multi-task learning for accurate forecasting, multimodal fusion for improved population subtyping, and self-supervision for learning generalized representations. We apply our proposed methods to challenging real-world tasks of predicting mental health and cardio-respiratory fitness through sensor data.
First, we study the relationship of passive data as collected from smartphones (movement and background audio) to momentary mood levels. Our new training pipeline, which combines different sensor data into a low-dimensional embedding and clusters longitudinal user trajectories as the outcome, outperforms traditional approaches based solely on psychology questionnaires. Second, motivated by mood instability as a predictor of poor mental health, we propose encoder-decoder models for time-series forecasting which exploit the bi-modality of mood with multi-task learning.
Next, motivated by the success of general-purpose models in vision and language tasks, we propose a self-supervised neural network ready to use as a feature extractor for wearable data. To this end, we set the heart rate responses as the supervisory signal for activity data, leveraging their underlying physiological relationship, and show that the resulting task-agnostic embeddings can generalize in predicting structurally different downstream outcomes through transfer learning (e.g. BMI, age, energy expenditure), outperforming unsupervised autoencoders and biomarkers. Finally, acknowledging fitness as a strong predictor of overall health, which, however, can only be measured with expensive instruments (e.g., a VO2max test), we develop models that enable accurate prediction of fine-grained fitness levels with wearables in the present and, more importantly, of its direction and magnitude almost a decade later.
All proposed methods are evaluated on large longitudinal datasets with tens of thousands of participants in the wild. The models developed and the insights drawn in this dissertation provide evidence for a better understanding of high-dimensional behavioral and physiological data, with implications for large-scale health and lifestyle monitoring.
The Department of Computer Science and Technology at the University of Cambridge through the EPSRC DTP Grant (EP/N509620/1), and the Embiricos Trust Scholarship of Jesus College, Cambridge
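A toy sketch of the self-supervised idea described above: fit a model whose only supervisory signal is the heart-rate response to activity, then reuse the learned weighting as a representation for downstream tasks. Ordinary least squares stands in for the neural network, and all data here is synthetic; none of this is the dissertation's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: per-window wearable activity features and a heart-rate
# response that depends on them (the physiological link the pretext task uses).
X = rng.normal(size=(200, 16))                      # activity features
true_w = rng.normal(size=16)
hr = X @ true_w + rng.normal(scale=0.1, size=200)   # supervisory signal: heart rate

# "Pretrain" on the heart-rate pretext task — no downstream labels involved.
w, *_ = np.linalg.lstsq(X, hr, rcond=None)

# The learned per-feature weighting acts as a task-agnostic representation
# that downstream models (BMI, age, energy expenditure) could build on.
embeddings = X * w
print(embeddings.shape)  # → (200, 16)
```

The design choice mirrored here is that the pretext target (heart rate) is freely available from the same device, so representations can be learned at scale without outcome labels.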
DeepMood: Modeling Mobile Phone Typing Dynamics for Mood Detection
The increasing use of electronic forms of communication presents new opportunities in the study of mental health, including the ability to investigate the manifestations of psychiatric diseases unobtrusively and in the setting of patients' daily lives. A pilot study to explore the possible connections between bipolar affective disorder and mobile phone usage was conducted. In this study, participants were provided a mobile phone to use as their primary phone. This phone was loaded with a custom keyboard that collected metadata consisting of keypress entry time and accelerometer movement. Individual character data, with the exceptions of the backspace key and space bar, were not collected due to privacy concerns. We propose an end-to-end deep architecture based on late fusion, named DeepMood, to model the multi-view metadata for the prediction of mood scores. Experimental results show that 90.31% prediction accuracy on the depression score can be achieved based on session-level mobile phone typing dynamics, where a session typically lasts less than one minute. This demonstrates the feasibility of using mobile phone metadata to infer mood disturbance and severity.
Comment: KDD 201
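A minimal sketch of the late-fusion pattern the abstract names: each metadata view (e.g. keypress timing, accelerometer movement) gets its own sub-model, and only their outputs are combined at the end. The feature sizes, logistic sub-models, and fixed fusion weights are illustrative assumptions, not DeepMood's actual layers:

```python
import numpy as np

rng = np.random.default_rng(2)

def view_score(x, w):
    """Stand-in for one per-view sub-network: a logistic score."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Hypothetical views: keypress timing (5 features), accelerometer (3 features).
w_time = rng.normal(size=5)
w_accel = rng.normal(size=3)
fusion_w = np.array([0.6, 0.4])  # learned jointly with the views in the real model

def deepmood_sketch(timing, accel):
    """Late fusion: combine per-view predictions, not raw features."""
    scores = np.stack([view_score(timing, w_time), view_score(accel, w_accel)])
    return float(fusion_w @ scores)  # fused mood score in [0, 1]

pred = deepmood_sketch(rng.normal(size=5), rng.normal(size=3))
print(0.0 < pred < 1.0)  # → True
```

Fusing late keeps each view's model free to handle its own sampling rate and noise profile, which is the usual motivation for this design with heterogeneous typing metadata.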