Beyond mobile apps: a survey of technologies for mental well-being
Mental health problems are on the rise globally and strain national health systems worldwide. Mental disorders are closely associated with fear of stigma, structural barriers such as financial burden, and a lack of available services and resources, which often prohibit the delivery of frequent clinical advice and monitoring. Technologies for mental well-being exhibit a range of attractive properties that facilitate the delivery of state-of-the-art clinical monitoring. This review article provides an overview of traditional techniques, followed by their technological alternatives: sensing devices, behaviour-changing tools, and feedback interfaces. The challenges these technologies present are then discussed, with data collection, privacy, and battery life being some of the key issues that must be carefully considered for the successful deployment of mental health toolkits. Finally, the opportunities this growing research area presents are discussed, including the use of portable tangible interfaces combining sensing and feedback technologies. Capitalising on the data these ubiquitous devices can record, state-of-the-art machine learning algorithms can lead to the development of robust clinical decision support tools for the diagnosis and improvement of mental well-being delivery in real time.
The Constructivistly-Organised Dimensional-Appraisal (CODA) Model and Evidence for the Role of Goal-directed Processes in Emotional Episodes Induced by Music
The study of affective responses to music is a flourishing field. Advancements in the study of this phenomenon have been complemented by the introduction of several music-specific models of emotion, two of the most well-cited being the BRECVEMA and the Multifactorial Process Model. These two models have undoubtedly contributed to the field. However, contemporary developments in the wider affective sciences (broadly described as the ‘rise of affectivism’) have yet to be incorporated into the music emotion literature. These developments in the affective sciences may aid in addressing remaining gaps in the music literature, in particular in acknowledging individual and contextual differences.
The first aim of this thesis was to outline contemporary theories from the wider affective sciences and subsequently critique current popular models of musical emotions through the lens of these advancements. The second aim was to propose a new model based on this critique: the Constructivistly-Organised Dimensional-Appraisal (CODA) model. This CODA model draws together multiple competing models into a single framework centralised around goal-directed appraisal mechanisms, which are key to the wider affective sciences but are a less commonly acknowledged component of musical affect. The third aim was to empirically test some of the core hypotheses of the CODA model, in particular examining goal-directed mechanisms, their validity in a musical context, and their ability to address individual and contextual differences in musically induced affect. Across four experiments, ranging from exploratory and lab-based designs through to real-world applications, the results support the role of goal-directed mechanisms in musically induced emotional episodes. Experiment one presents a first test battery of multiple appraisal dimensions developed for music. The results show that several of the hypothesised appraisal dimensions are valid dimensions in a musical context. Moreover, these mechanisms cluster into goal-directed latent variables. Experiment two develops a new set of stimuli annotations relating to musical goals, showing that music can be more or less appropriate for different musical goals (functions). Experiment three, using the new stimuli set from experiment two, tests the effects of different goals, with more or less appropriate music, on musically induced affect. These results show that goal-directed mechanisms can change induced core affect (valence and arousal) and intensity, even for the same piece of music. Experiment four extends the study of goal-directed mechanisms into a real-world context through an interdisciplinary and cross-cultural design.
The final experiment demonstrates how goal-directed mechanisms can be manipulated through different algorithms to induce negative affect in a Colombian population.
The main conclusion of this thesis is that the CODA model, and more specifically its goal-directed mechanisms, provides a valuable, non-reductive, and more efficient approach to addressing individual and contextual differences in musically induced emotional episodes in the new era of affectivism.
Grounding semantic cognition using computational modelling and network analysis
The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them on real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals.
We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec exhibits promising human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and concurrently lead to advancements in artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
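The abstract does not spell out how scene2vec is constructed, but the description ("starting with object co-occurrences") suggests a count-based embedding. The following is a minimal illustrative sketch of that idea only: the scene labels, the PPMI-plus-SVD recipe, and the embedding dimensionality are all assumptions for illustration, not the thesis's actual pipeline.

```python
from itertools import combinations
import numpy as np

# Hypothetical toy corpus of "scenes": each scene is the set of objects
# annotated in one naturalistic photograph (labels are illustrative).
scenes = [
    {"dog", "ball", "grass"},
    {"dog", "grass", "tree"},
    {"car", "road", "tree"},
    {"car", "road", "sign"},
    {"dog", "ball"},
]

vocab = sorted({obj for scene in scenes for obj in scene})
index = {obj: i for i, obj in enumerate(vocab)}

# Symmetric object-by-object co-occurrence counts across scenes.
counts = np.zeros((len(vocab), len(vocab)))
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

# PPMI weighting followed by truncated SVD yields dense "scene-grounded"
# vectors -- one standard count-based recipe, assumed here for illustration.
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts * total) / (row @ row.T))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

u, s, _ = np.linalg.svd(ppmi)
vectors = u[:, :3] * s[:3]  # 3-dimensional embeddings, one per object

def similarity(a, b):
    """Cosine similarity between two object embeddings."""
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Under this sketch, objects that share scenes (e.g. "dog" and "ball") end up closer in the embedding space than objects that never co-occur (e.g. "dog" and "road"), which is the basic grounding intuition the abstract describes.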
Tangible fidgeting interfaces for mental wellbeing recognition using deep learning applied to physiological sensor data
The momentary assessment of an individual's affective state is critical to the monitoring of mental wellbeing and the ability to instantly apply interventions. This thesis introduces the concept of tangible fidgeting interfaces for affective recognition, from design and development through to evaluation. Tangible interfaces expand upon the affordances of familiar physical objects, as the ability to touch and fidget may help tap into individuals' psychological need to feel occupied and engaged. Embedding digital technologies within interfaces capitalises on motor and perceptual capabilities and allows for the direct manipulation of data, offering people the potential for new modes of interaction when experiencing mental wellbeing challenges.
Tangible interfaces present an ideal opportunity to digitally enable physical fidgeting interactions along with physiological sensor monitoring to unobtrusively and comfortably measure non-visible changes in affective state. This opportunity motivated an investigation of the factors involved in designing more effective intelligent solutions, using participatory design techniques to engage people in designing solutions relevant to themselves.
Adopting an artificial intelligence approach using physiological signals creates the possibility of quantifying affect with high levels of accuracy. However, labelling is an indispensable stage of data pre-processing required before classification, and it can be extremely challenging with multi-modal sensor data. New techniques are introduced for labelling at the point of collection, coupled with a pilot study and a systematic performance comparison of five custom-built labelling interfaces.
When classifying labelled physiological sensor data, individual differences between people limit the generalisability of models. To address this challenge, a transfer learning approach has been developed that personalises affective models using few labelled samples. This personalisation, which improves cross-domain performance, is completed on-device, automating the traditionally manual process and saving time and labour. Furthermore, monitoring trajectories over long periods of time entails some critical limitations relating to the size of the training dataset. This shortcoming may hinder the development of reliable and accurate machine learning models. A second framework has been developed to overcome the limitation of small training datasets using an image-encoding transfer learning approach.
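The abstract names an "image-encoding transfer learning approach" without specifying the encoder. One widely used way to turn a 1-D physiological trace into an image that a pretrained vision network can ingest is the Gramian Angular Field; the sketch below assumes that encoder purely for illustration (the synthetic signal and sizes are also assumptions, and the thesis may use a different encoding entirely).

```python
import numpy as np

def gramian_angular_field(signal):
    """Encode a 1-D signal as a Gramian Angular Summation Field image.

    This is one common image encoding for time series, used here as an
    illustrative stand-in; the thesis's exact encoder is not specified.
    """
    x = np.asarray(signal, dtype=float)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j): each pixel captures the angular
    # relationship between two time points.
    return np.cos(phi[:, None] + phi[None, :])

# A synthetic "physiological" trace: a 64-sample sine wave with drift.
t = np.linspace(0, 4 * np.pi, 64)
image = gramian_angular_field(np.sin(t) + 0.1 * t)
```

The resulting 64x64 image can then be fed to a network pretrained on natural images, which is what makes the encoding useful for transfer learning when the labelled physiological dataset itself is small.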
This research offers the first attempt at developing tangible interfaces that use artificial intelligence towards building a real-world continuous affect recognition system, in addition to offering real-time feedback to act as interventions. This exploration of affective interfaces has many potential applications to help improve quality of life for the wider population.
Lifelog access modelling using MemoryMesh
Very recently, we have observed a convergence of technologies that has led to the emergence of lifelogging as a technology for personal data applications. Lifelogging will become ubiquitous in the near future, not just for memory enhancement and health management, but also in various other domains. While there are many devices available for gathering massive amounts of lifelogging data, there are still challenges in modelling large volumes of multi-modal lifelog data. In this thesis, we explore and address the problem of how to model lifelogs in order to make them more accessible to users, from the perspectives of collection, organization and visualization. To subdivide our research targets, we designed and followed these steps:
1. Lifelog activity recognition. We use multiple sources of sensor data to analyse various daily-life activities, ranging from accelerometer data collected by mobile phones to images captured by wearable cameras. We propose a semantic, density-based algorithm to cope with concept selection issues for lifelogging sensory data.
2. Visual discovery of lifelog images. Most of the lifelog information we capture every day is in the form of images, so images contain significant information about our lives. Here we conduct experiments on visual content analysis of lifelog images, covering both image content and image metadata.
3. Linkage analysis of lifelogs. By exploring linkage analysis of lifelog data, we can connect all lifelog images using linkage models into a concept called the MemoryMesh. The thesis includes experimental evaluations using real-life data collected from multiple users and shows the performance of our algorithms in detecting the semantics of daily-life concepts and their effectiveness in activity recognition and lifelog retrieval.
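The abstract does not specify the linkage model behind the MemoryMesh. As an illustrative stand-in only, one simple way to link lifelog images is by overlap between their detected concept sets; the image names, concept labels, and Jaccard threshold below are all hypothetical.

```python
from itertools import combinations

# Hypothetical concept annotations for a handful of lifelog images
# (labels are illustrative; the thesis's actual concept detectors differ).
images = {
    "img_001": {"desk", "laptop", "coffee"},
    "img_002": {"desk", "laptop", "notebook"},
    "img_003": {"street", "bus", "coffee"},
    "img_004": {"street", "bus", "rain"},
}

def jaccard(a, b):
    """Overlap between two concept sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Link any pair of images whose concept sets overlap enough -- a minimal
# stand-in for the linkage models that build the MemoryMesh graph.
THRESHOLD = 0.3
edges = {
    frozenset((i, j))
    for (i, ca), (j, cb) in combinations(images.items(), 2)
    if jaccard(ca, cb) >= THRESHOLD
}
```

With these toy annotations, the two desk scenes link to each other and the two street scenes link to each other, while the single shared concept "coffee" is below the threshold, so the graph splits into two small clusters of related moments.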