
    Schizophrenia – time to commit to policy change

    Schizophrenia is recognised as one of the most complex and profound mental health conditions, steeped in both myth and reality. Efforts need to be multifaceted, spanning policy development, treatment guidance and scientific innovation, with all stakeholders working together to ensure meaningful progress. This report delves into the unique needs of people with schizophrenia, exploring supportive measures for their well-being and offering practical, attainable recommendations for change. The message to all nations, policy makers, payers and healthcare professionals is clear: strive for excellence, but most importantly, start somewhere.

    A Spark Of Emotion: The Impact of Electrical Facial Muscle Activation on Emotional State and Affective Processing

    Facial feedback, which involves the brain receiving information about the activation of facial muscles, has the potential to influence our emotional states and judgments. The extent to which this applies is still a matter of debate, particularly considering a failed replication of a seminal study. One factor contributing to the lack of replication in facial feedback effects may be the imprecise manipulation of facial muscle activity in terms of both degree and timing. To overcome these limitations, this thesis proposes a non-invasive method for inducing precise facial muscle contractions, called facial neuromuscular electrical stimulation (fNMES). I begin by presenting a systematic literature review that lays the groundwork for standardising the use of fNMES in psychological research, by evaluating its application in existing studies. This review highlights two issues: the limited use of fNMES in psychological research and the inconsistent reporting of stimulation parameters. I provide practical recommendations for researchers interested in implementing fNMES. Subsequently, I conducted an online experiment to investigate participants' willingness to participate in fNMES research. This experiment revealed that concerns over potential burns and involuntary muscle movements are significant deterrents to participation. Understanding these anxieties is critical for participant management and expectation setting. Two laboratory studies are then presented that investigated the facial feedback hypothesis (FFH) using fNMES. The first study showed that feelings of happiness and sadness, and changes in peripheral physiology, can be induced by stimulating the corresponding facial muscles with 5 seconds of fNMES. The second experiment showed that fNMES-induced smiling alters the perception of ambiguous facial emotions, creating a bias towards happiness, and alters neural correlates of face processing, as measured with event-related potentials (ERPs). In summary, the thesis presents promising results for testing the facial feedback hypothesis with fNMES and provides practical guidelines and recommendations for researchers interested in using fNMES for psychological research.
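
    One concrete way to act on the parameter-reporting recommendation is to record stimulation settings in a structured form. Below is a minimal sketch in Python; the schema and field names are illustrative assumptions, not prescribed by the thesis.

        from dataclasses import dataclass

        @dataclass
        class FNMESParameters:
            """Stimulation settings an fNMES study should report (illustrative)."""
            muscle: str          # e.g. "zygomaticus major" for smile induction
            pulse_width_us: int  # pulse width in microseconds
            frequency_hz: float  # stimulation frequency in hertz
            amplitude_ma: float  # current amplitude in milliamperes
            duration_s: float    # stimulation duration in seconds

        # Hypothetical settings for a 5-second smile induction.
        params = FNMESParameters("zygomaticus major", 200, 50.0, 5.0, 5.0)
        print(params)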

    Speech-based automatic depression detection via biomarkers identification and artificial intelligence approaches

    Depression has become one of the most prevalent mental health issues, affecting more than 300 million people worldwide. However, due to factors such as limited medical resources and access to health care, a large number of patients remain undiagnosed. In addition, traditional approaches to depression diagnosis have limitations: they are usually time-consuming and depend on clinical experience that varies across clinicians. From this perspective, automatic depression detection can make the diagnosis process much faster and more accessible. In this thesis, we present the possibility of using speech for automatic depression detection. This is based on findings in neuroscience that depressed patients have abnormal cognitive mechanisms which cause their speech to differ from that of healthy people. We show two ways of benefiting from automatic depression detection, i.e., identifying speech markers of depression and constructing novel deep learning models to improve detection accuracy. The identification of speech markers aims to capture measurable traces that depression leaves in speech. From this perspective, speech markers such as speech duration, pauses and correlation matrices are proposed. Speech duration and pauses take speech fluency into account, while correlation matrices represent the relationships between acoustic features and aim to capture psychomotor retardation in depressed patients. Experimental results demonstrate that these proposed markers are effective at improving performance in recognizing depressed speakers. In addition, such markers show statistically significant differences between depressed patients and non-depressed individuals, which supports the use of these markers for depression detection and further confirms that depression leaves detectable traces in speech. Beyond this, we propose an attention mechanism, Multi-local Attention (MLA), to emphasize depression-relevant information locally, and analyse its effect on performance and efficiency. According to the experimental results, such a model can significantly improve performance and confidence in the detection while reducing the time required for recognition. Furthermore, we propose Cross-Data Multilevel Attention (CDMA) to emphasize different types of depression-relevant information, i.e., information specific to each type of speech and information common to both, by using multiple attention mechanisms. Experimental results demonstrate that the proposed model is effective at integrating different types of depression-relevant information in speech, significantly improving depression detection performance.
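
    To illustrate the kind of markers described above, the sketch below derives pause statistics from a speech waveform and a correlation matrix over frame-level acoustic features. It is a minimal sketch assuming the librosa package, a hypothetical input file, and MFCCs as the acoustic features; the thesis' exact features and thresholds may differ.

        import numpy as np
        import librosa

        y, sr = librosa.load("speech.wav", sr=16000)  # hypothetical input file

        # Pause-related markers: split at silences 30 dB below the peak.
        voiced = librosa.effects.split(y, top_db=30)
        speech_time = sum(end - start for start, end in voiced) / sr
        total_time = len(y) / sr
        pause_ratio = 1.0 - speech_time / total_time  # fraction of time in pauses

        # Frame-level acoustic features: MFCCs, one row per coefficient.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        # Correlation matrix between feature dimensions, capturing how features
        # co-vary over time (the structure linked to psychomotor retardation).
        corr = np.corrcoef(mfcc)  # shape (13, 13)

        print(f"pause ratio: {pause_ratio:.2f}")
        print(f"feature correlation matrix shape: {corr.shape}")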

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, immersive technologies have seen slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". There is therefore a need to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel with the evolution of VR headsets, 360° cameras have also evolved: they are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D), at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it: photo-based VR. This represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today’s graphical visual experience, then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, to support which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object sizes. We call it: true-dimensional visualization. The presented work contributes to the unexplored fields of photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications: five conference papers in Springer and IEEE symposia [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6].
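
    At the core of photo-based VR is the mapping of an equirectangular 360° photograph onto the viewing sphere of the headset. Below is a minimal sketch of that projection, assuming a standard longitude/latitude equirectangular layout; this is generic texture-mapping maths offered for illustration, not the thesis' specific pipeline.

        import numpy as np

        def direction_to_pixel(d, width, height):
            """Map a unit view direction to (u, v) pixel coordinates in an
            equirectangular 360-degree image (longitude/latitude layout)."""
            x, y, z = d
            lon = np.arctan2(x, -z)             # longitude in [-pi, pi]
            lat = np.arcsin(np.clip(y, -1, 1))  # latitude in [-pi/2, pi/2]
            u = (lon / (2 * np.pi) + 0.5) * (width - 1)
            v = (0.5 - lat / np.pi) * (height - 1)
            return u, v

        # Looking straight ahead (-z) lands at the centre of the image.
        print(direction_to_pixel(np.array([0.0, 0.0, -1.0]), 4096, 2048))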

    Linking language and emotion: how emotion is understood in language comprehension, production and prediction using psycholinguistic methods

    Emotions are an integral part of why and how we use language in everyday life. We communicate our concerns, express our woes, and share our joy through the use of non-verbal and verbal language. Yet there is a limited understanding of when and how emotional language is processed differently to neutral language, or of how emotional information facilitates or inhibits language processing. Indeed, various efforts have been made to bring emotions back into the discipline of psycholinguistics in the last decade. This can be seen in many interdisciplinary models focusing on the role played by emotion in each aspect of linguistic experience. In this thesis, I answer this call and pursue questions that remain unanswered in psycholinguistics regarding its interaction with emotion. My general approach to bringing emotion into psycholinguistic research is straightforward: where applicable and relevant, I use well-established tasks or paradigms to investigate the effects of emotional content in language processing. I focus on three main areas of language processing: comprehension, production and prediction. The first experimental chapter includes a series of experiments utilising the Modality Switching Paradigm to investigate whether sentences describing emotional states are processed differently from sentences describing cognitive states. No switching effects were found consistently across my three experiments. My results suggest that these distinct classes of interoceptive concepts, such as ‘thinking’ or ‘being happy’, are not processed differently from each other, suggesting that people do not switch attention between different interoceptive systems when comprehending emotional or cognitive sentences. I discuss the implications for grounded cognition theory in the embodiment literature. In my second experimental chapter, I used the Cumulative Semantic Interference Paradigm to investigate two questions: (1) whether emotion concepts interfere with one another when repeatedly retrieved (emotion label objects), and (2) whether similar interference occurs for concrete objects that share a similar valence association (emotion-laden objects). This could indicate that people use information such as valence and arousal to group objects in semantic memory. I found that interference occurs when people retrieve direct emotion labels repeatedly (e.g., “happy” and “sad”) but not when they retrieve the names of concrete objects that have similar emotion connotations (e.g., “puppy” and “rainbow”). I discuss my findings in terms of the different types of information that support the representation of abstract vs. concrete concepts. In my final experimental chapter, I used the Visual World Paradigm to investigate whether the emotional state of an agent is used to inform predictions during sentence processing. I found that people do use the description of an agent’s emotional state (e.g., “The boy is happy”) to predict the cause of that affective state during sentence processing (e.g., “because he was given an ice-cream”). A key result here is that people were more likely to fixate on emotionally congruent objects (e.g., ice-cream) than on incongruent objects (e.g., broccoli). This suggests that people rapidly and automatically inform predictions about upcoming sentence information based on the emotional state of the agent. I discuss these findings as a novel contribution to the Visual World literature.
I conducted a diverse set of experiments using a range of established psycholinguistic methods to investigate the roles of emotional information in language processing. I found clear results in the eye-tracking study but inconsistent effects in both the switching and interference studies. I interpret these mixed findings as follows: emotional content does not always affect language processing, and effects are most likely in tasks that explicitly require participants to simulate emotional states in some way. Regardless, not only was I successful in finding novel results by extending previous tasks, but I was also able to show that this is an avenue that can be explored further to advance the field of affective psycholinguistics.
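
    Visual World results like the one above are typically quantified as the proportion of fixations on each object type. A minimal sketch of such an analysis, assuming a hypothetical trial-level data frame; the column names and values are illustrative, not taken from the thesis.

        import pandas as pd

        # Hypothetical eye-tracking samples: one row per gaze sample per trial.
        samples = pd.DataFrame({
            "trial": [1, 1, 1, 1, 2, 2, 2, 2],
            "object": ["congruent", "congruent", "incongruent", "distractor",
                       "congruent", "incongruent", "incongruent", "congruent"],
        })

        # Proportion of samples on each object type per trial, then averaged.
        props = (samples.groupby("trial")["object"]
                        .value_counts(normalize=True)
                        .rename("prop")
                        .reset_index()
                        .groupby("object")["prop"].mean())
        print(props)  # higher "congruent" values would mirror the reported bias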

    Know Thyself: Improving Interoceptive Ability Through Ambient Biofeedback in the Workplace

    Interoception, the perception of the body’s internal state, is intimately connected to self-regulation and wellbeing. Grounded in the affective science literature, we design an ambient biofeedback system called Soni-Phy and a lab study to investigate whether, when and how an unobtrusive biofeedback system can be used to improve interoceptive sensibility and accuracy by amplifying a user’s internal state. This research has practical significance for the design and improvement of assistive technologies for the workplace.

    Meaning-making and creativity in musical entrainment

    In this paper we suggest that basic forms of musical entrainment may be considered intrinsically creative, enabling further creative behaviors that may flourish at different levels and timescales. Rooted in an agent's capacity to form meaningful couplings with their sonic, social, and cultural environment, musical entrainment favors processes of adaptation and exploration, where innovative and functional aspects are cultivated via active, bodily experience. We explore these insights through a theoretical lens that integrates findings from enactive cognitive science and creative cognition research. We center our examination on the realms of groove experience and the communicative and emotional dimensions of music, aiming to present a novel preliminary perspective on musical entrainment rooted in the fundamental concepts of meaning-making and creativity. To do so, we draw from a suite of approaches that place particular emphasis on the role of situated experience and review a range of recent empirical work on entrainment (in musical and non-musical settings), emphasizing its biological and cognitive foundations. We conclude that musical entrainment may be regarded as a building block for the different musical creativities that shape one's musical development, offering a concrete example of how this theory could be empirically tested in the future.

    Innermost Echoes: Integrating Real-Time Physiology into Live Music Performances

    In this paper, we propose a method for using musical artifacts and physiological data to create a new form of live music experience that is rooted in the physiology of the performers and audience members. By capturing physiological data (namely Electrodermal Activity (EDA) and Heart Rate Variability (HRV)) and applying it to musical artifacts including a robotic koto (a traditional 13-string Japanese instrument fitted with solenoids and linear actuators), a Eurorack synthesizer, and Max/MSP software, we aim to develop a new form of semi-improvisational and significantly indeterminate performance practice. The approach has since evolved into a multi-modal methodology that honors improvisational performance practices and uses physiological data to offer both performers and audiences an ever-changing and intimate experience. In our first exploratory phase, we focused on developing a means of controlling a bespoke robotic koto in conjunction with a Eurorack synthesizer system, with Max/MSP software handling the incoming data. We integrated physiological data to infuse a more directly human element into this artifact system. This allows a significant portion of the decision-making to be directly controlled by the incoming physiological data in real time, thereby affording a sense of performativity within this non-living system. Our aim is to continue developing this method to strike a novel balance between intentionality and impromptu performative results.
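
    A common way to stream physiology into a Max/MSP patch is Open Sound Control (OSC). Below is a minimal sketch assuming the python-osc package; the OSC addresses, port, and scaling ranges are hypothetical illustrations, not the authors' actual mappings.

        import time
        from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

        client = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical Max/MSP OSC port

        def send_physiology(eda_microsiemens, hrv_rmssd_ms):
            """Scale raw physiology into 0-1 control values and send via OSC."""
            # Scaling ranges are illustrative; real systems calibrate per performer.
            eda_norm = min(max((eda_microsiemens - 1.0) / 19.0, 0.0), 1.0)
            hrv_norm = min(max(hrv_rmssd_ms / 200.0, 0.0), 1.0)
            client.send_message("/koto/strike_rate", eda_norm)    # solenoid tempo
            client.send_message("/synth/filter_cutoff", hrv_norm)

        # Stream a couple of fake samples at 4 Hz.
        for eda, hrv in [(5.2, 42.0), (7.8, 55.0)]:
            send_physiology(eda, hrv)
            time.sleep(0.25)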

    Deep Learning Techniques for Electroencephalography Analysis

    In this thesis we design deep learning techniques for training deep neural networks on electroencephalography (EEG) data, focusing on two problems, namely EEG-based motor imagery decoding and EEG-based affect recognition, and addressing the challenges associated with them. Regarding the problem of motor imagery (MI) decoding, we first consider the various kinds of domain shifts in EEG signals caused by inter-individual differences (e.g. brain anatomy, personality and cognitive profile). These domain shifts render multi-subject training a challenging task and impede robust cross-subject generalization. We build a two-stage model ensemble architecture and propose two objectives to train it, combining the strengths of curriculum learning and collaborative training. Our subject-independent experiments on the large Physionet and OpenBMI datasets verify the effectiveness of our approach. Next, we explore the utilization of the spatial covariance of EEG signals through alignment techniques, with the goal of learning domain-invariant representations. We introduce a Riemannian framework that concurrently performs covariance-based signal alignment and data augmentation while training a convolutional neural network (CNN) on EEG time-series. Experiments on the BCI IV-2a dataset show that our method outperforms traditional alignment by inducing regularization on the weights of the CNN. We also study the problem of EEG-based affect recognition, inspired by works suggesting that emotions can be expressed in relative terms, i.e. through ordinal comparisons between different affective state levels. We propose treating data samples in a pairwise manner to infer the ordinal relation between their corresponding affective state labels, as an auxiliary training objective. We incorporate this objective in a deep network architecture that we jointly train on the tasks of sample-wise classification and pairwise ordinal ranking. We evaluate our method on the affective datasets DEAP and SEED and obtain performance improvements over deep networks trained without the additional ranking objective.
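
    The covariance-based alignment idea can be illustrated with Euclidean alignment, a common baseline in which each subject's trials are whitened by the inverse square root of that subject's mean covariance. A minimal numpy sketch follows; the thesis' Riemannian framework goes further by combining alignment with augmentation during CNN training.

        import numpy as np

        def euclidean_align(trials):
            """Align EEG trials (n_trials, n_channels, n_samples) by whitening
            with the inverse square root of the subject's mean covariance."""
            covs = np.array([x @ x.T / x.shape[1] for x in trials])
            mean_cov = covs.mean(axis=0)
            # Inverse matrix square root via eigendecomposition.
            vals, vecs = np.linalg.eigh(mean_cov)
            inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
            return np.array([inv_sqrt @ x for x in trials])

        # After alignment the mean covariance is the identity, which reduces
        # inter-subject distribution shift before classifier training.
        rng = np.random.default_rng(0)
        trials = rng.standard_normal((10, 22, 250))
        aligned = euclidean_align(trials)
        mean_cov = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
        print(np.allclose(mean_cov, np.eye(22), atol=1e-6))  # True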