15,448 research outputs found

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment on the capture and interpretation of multimodal signals from human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion, and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention, and emotion, the multimodal approach reaches up to 93% accuracy in determining a player's chess expertise, whereas the unimodal approaches reach at most 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
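
    The fusion result above suggests a late-fusion design: one classifier per modality, with their predictions combined before the final expertise decision. The following is a minimal, hypothetical sketch of that idea, not the authors' pipeline; the feature dimensions, equal fusion weights, and synthetic data are all illustrative assumptions.

```python
# Late fusion over three modalities (posture, gaze, emotion): train one
# classifier per modality, then average class probabilities at prediction
# time. All data below is synthetic; shapes and weights are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # toy participants/trials
X_posture = rng.normal(size=(n, 8))
X_gaze = rng.normal(size=(n, 12))
X_emotion = rng.normal(size=(n, 6))
y = rng.integers(0, 2, size=n)  # 0 = novice, 1 = expert (toy labels)

models = [LogisticRegression(max_iter=1000).fit(X, y)
          for X in (X_posture, X_gaze, X_emotion)]

def fuse(models, Xs):
    # Average predicted probabilities across modalities (equal weights assumed).
    probs = np.mean([m.predict_proba(X) for m, X in zip(models, Xs)], axis=0)
    return probs.argmax(axis=1)

y_hat = fuse(models, (X_posture, X_gaze, X_emotion))
```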

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper aims to give a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used, and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?
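
    Work in this area typically scores continuous predictions (e.g., a valence trace on the [-1, +1] scale) against annotator traces with agreement metrics such as Lin's concordance correlation coefficient. The snippet below is our own minimal illustration of that metric on synthetic traces, not code from the paper.

```python
# Concordance correlation coefficient (CCC), a standard agreement metric for
# continuous affect prediction. Traces here are synthetic for illustration.
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient between two 1-D series."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

valence_gold = np.clip(np.random.randn(100) * 0.3, -1, 1)  # annotator trace
valence_pred = np.clip(valence_gold + 0.1 * np.random.randn(100), -1, 1)
print(f"valence CCC: {ccc(valence_gold, valence_pred):.3f}")
```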

    Personalized Prediction of Recurrent Stress Events Using Self-Supervised Learning on Multimodal Time-Series Data

    Chronic stress can significantly affect physical and mental health. The advent of wearable technology allows for the tracking of physiological signals, potentially leading to innovative stress prediction and intervention methods. However, challenges such as label scarcity and data heterogeneity render stress prediction difficult in practice. To counter these issues, we have developed a multimodal personalized stress prediction system using wearable biosignal data. We employ self-supervised learning (SSL) to pre-train the models on each subject's data, allowing the models to learn the baseline dynamics of each participant's biosignals prior to fine-tuning on the stress prediction task. We test our model on the Wearable Stress and Affect Detection (WESAD) dataset, demonstrating that our SSL models outperform non-SSL models while utilizing less than 5% of the annotations. These results suggest that our approach can personalize stress prediction for each user with minimal annotations. This paradigm has the potential to enable personalized prediction of a variety of recurring health events using complex multimodal data streams.
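
    The two-stage recipe described above (pre-train on a subject's unlabeled biosignals, then fine-tune on scarce stress labels) can be sketched as follows. This is not the authors' code; the masked-reconstruction pretext task, window size, masking rate, and architecture are illustrative assumptions.

```python
# Per-subject SSL: stage 1 pre-trains an encoder by reconstructing masked
# biosignal windows; stage 2 fine-tunes a small head on the few labels.
import torch
import torch.nn as nn

WIN = 64                                # samples per biosignal window (assumed)
unlabeled = torch.randn(512, WIN)       # one subject's unlabeled windows
labeled_x = torch.randn(24, WIN)        # ~5% labeled windows
labeled_y = torch.randint(0, 2, (24,))  # 0 = baseline, 1 = stress

encoder = nn.Sequential(nn.Linear(WIN, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Linear(32, WIN)

# Stage 1: self-supervised pre-training (reconstruct masked windows).
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(100):
    mask = (torch.rand_like(unlabeled) > 0.25).float()  # drop 25% of samples
    recon = decoder(encoder(unlabeled * mask))
    loss = ((recon - unlabeled) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on the scarce stress labels.
head = nn.Linear(32, 2)
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)
ce = nn.CrossEntropyLoss()
for _ in range(50):
    loss = ce(head(encoder(labeled_x)), labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```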

    Acute Stroke Multimodal Imaging: Present and Potential Applications toward Advancing Care.

    In the past few decades, the field of acute ischemic stroke (AIS) has experienced significant advances in clinical practice. A core driver of this success has been the utilization of acute stroke imaging, with an increasing focus on advanced methods including multimodal imaging. Such imaging techniques not only provide a richer understanding of AIS in vivo but, in doing so, also better inform clinical assessments in management and treatment toward achieving the best outcomes. As a result, advanced stroke imaging methods are now a mainstay of routine AIS practice and reflect best-practice delivery of care. Furthermore, these imaging methods hold great potential to continue to advance the understanding of AIS and its care in the future.

    Communication interventions in adult and pediatric oncology: A scoping review and analysis of behavioral targets

    Background: Improving communication requires that clinicians and patients change their behaviors. Interventions might be more successful if they incorporate principles from behavioral change theories. We aimed to determine which behavioral domains are targeted by communication interventions in oncology. Methods: Systematic search of literature indexed in Ovid Medline, Embase, Scopus, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, and Clinicaltrials.gov (2000-October 2018) for intervention studies targeting communication behaviors of clinicians and/or patients in oncology. Two authors extracted the following information: population, number of participants, country, number of sites, intervention target, type and context, and study design. All included studies were coded based on which behavioral domains were targeted, as defined by the Theoretical Domains Framework. Findings: Eighty-eight studies met the inclusion criteria. Interventions varied widely in which behavioral domains they engaged. Knowledge and skills were engaged most frequently (85%, 75/88 and 73%, 64/88, respectively). Fewer than 5% of studies engaged social influences (3%, 3/88) or environmental context/resources (5%, 4/88). No studies engaged reinforcement. Overall, 7/12 behavioral domains were engaged by fewer than 30% of the included studies. We identified methodological concerns in many studies. These 88 studies reported 188 different outcome measures, of which 156 were reported by individual studies. Conclusions: Most communication interventions target few behavioral domains. Increased engagement of behavioral domains in future studies could support communication needs in feasible, specific, and sustainable ways. This study is limited by only including interventions that directly facilitated communication interactions, which excluded stand-alone educational interventions and decision aids. Also, we applied stringent coding criteria to allow for reproducible, consistent coding, potentially leading to underrepresentation of behavioral domains.
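
    The percentages reported above (e.g., knowledge engaged by 75/88 studies = 85%) come from tallying the coded domains across studies. A toy sketch of that counting step, with an entirely fabricated study-to-domain map:

```python
# Tally how many studies engage each Theoretical Domains Framework domain.
# The mapping below is toy data; only the counting logic is the point.
from collections import Counter

studies = {
    "study_01": {"knowledge", "skills"},
    "study_02": {"knowledge", "social influences"},
    "study_03": {"skills", "environmental context/resources"},
}
counts = Counter(d for domains in studies.values() for d in domains)
n = len(studies)
for domain, k in counts.most_common():
    print(f"{domain}: {k}/{n} ({100 * k / n:.0f}%)")
```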

    What Twitter Profile and Posted Images Reveal About Depression and Anxiety

    Previous work has found strong links between the choice of social media images and users' emotions, demographics, and personality traits. In this study, we examine which attributes of profile and posted images are associated with depression and anxiety of Twitter users. We used a sample of 28,749 Facebook users to build a language prediction model of survey-reported depression and anxiety, and validated it on Twitter on a sample of 887 users who had taken anxiety and depression surveys. We then applied it to a different set of 4,132 Twitter users to impute language-based depression and anxiety labels, and extracted interpretable features of posted and profile pictures to uncover the associations with users' depression and anxiety, controlling for demographics. For depression, we find that profile pictures suppress positive emotions rather than display more negative emotions, likely because of social media self-presentation biases. They also tend to show the single face of the user (rather than showing her in groups of friends), marking an increased focus on the self, emblematic of depression. Posted images are dominated by grayscale and low aesthetic cohesion across a variety of image features. Profile images of anxious users are similarly marked by grayscale and low aesthetic cohesion, but less so than those of depressed users. Finally, we show that image features can be used to predict depression and anxiety, and that multitask learning that includes a joint modeling of demographics improves prediction performance. Overall, we find that the image attributes that mark depression and anxiety offer a rich lens into these conditions, largely congruent with the psychological literature, and that images on Twitter allow inferences about the mental health status of users.
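
    The multitask setup mentioned in the final sentence can be pictured as a shared trunk over image features with separate heads for depression, anxiety, and the auxiliary demographic targets, trained jointly. A hedged sketch, where feature dimensions, loss weights, and all data are assumptions rather than values from the paper:

```python
# Multitask regression: one shared representation, three output heads.
import torch
import torch.nn as nn

x = torch.randn(256, 40)     # interpretable image features (toy)
y_dep = torch.rand(256, 1)   # imputed depression score
y_anx = torch.rand(256, 1)   # imputed anxiety score
y_demo = torch.rand(256, 2)  # e.g. age, gender (auxiliary task)

trunk = nn.Sequential(nn.Linear(40, 64), nn.ReLU())
heads = {"dep": nn.Linear(64, 1), "anx": nn.Linear(64, 1), "demo": nn.Linear(64, 2)}
params = [*trunk.parameters()] + [p for h in heads.values() for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()
for _ in range(200):
    z = trunk(x)
    loss = (mse(heads["dep"](z), y_dep) + mse(heads["anx"](z), y_anx)
            + 0.5 * mse(heads["demo"](z), y_demo))  # auxiliary weight assumed
    opt.zero_grad(); loss.backward(); opt.step()
```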

    Assessing the Feasibility of a Multimodal Approach to Pain Evaluation in Early Stages after Spinal Cord Injury.

    This research evaluates the feasibility of a multimodal pain assessment protocol during rehabilitation following spinal cord injury (SCI). The protocol combines a clinical workup (CW), quantitative sensory testing (QST), and psychosocial factors (PSF) administered at 4 (T1), 12 (T2), and 24 (T3) weeks post injury and at discharge (T4). Molecular blood biomarkers (BB) were evaluated via gene expression and proteomic assays at T1 and T4. Different pain trajectories and temporal changes were identified using QST, with inflammation- and pain-related biomarkers recorded. Higher concentrations of osteopontin and cystatin-C were found in SCI patients compared to healthy controls, indicating their potential as biomarkers. Altered inflammatory responses were observed, and a slight increase in ICAM-1 and CCL3 was noted, pointing toward changes in cellular adhesion linked with spinal injury and a possible connection with neuropathic pain. Although the small patient sample prevented correlation analyses of the feasibility data, descriptive statistical analyses were conducted on stress, depression, anxiety, quality of life, and pain interference. The SCI Pain Instrument (SCIPI) was effective in distinguishing between nociceptive and neuropathic pain, showing a progressive increase in severity over time. The findings emphasize the need for careful consideration of the recruitment setting and protocol adjustments to enhance the feasibility of multimodal pain evaluation studies post SCI. They also shed light on potential early adaptive mechanisms in SCI pathophysiology, warranting further exploration of prognostic and preventive strategies for chronic pain in the SCI population.

    Stressful first impressions in job interviews

    Stress can impact many aspects of our lives, such as the way we interact and work with others, or the first impressions that we make. In the past, stress has most commonly been assessed through self-reported questionnaires; however, advancements in wearable technology have enabled the measurement of physiological symptoms of stress in an unobtrusive manner. Using a dataset of job interviews, we investigate whether first impressions of stress (from annotations) are equivalent to physiological measurements of electrodermal activity (EDA). We examine the use of automatically extracted nonverbal cues stemming from both the visual and audio modalities, as well as EDA stress measurements, for the inference of stress impressions obtained from manual annotations. Stress impressions were found to be significantly negatively correlated with hireability ratings, i.e., individuals who were perceived to be more stressed were more likely to obtain lower hireability scores. The analysis revealed a significant relationship for the audio and visual features, but the EDA features showed low predictability and no significant effects. While some nonverbal cues were more clearly related to stress, the physiological cues were less reliable and warrant further investigation into the use of wearable sensors for stress detection.
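
    The core correlational analysis (annotated stress impressions vs. hireability ratings, and the null result for EDA) can be illustrated as below. All arrays are synthetic and constructed only to mimic the pattern of results the abstract reports.

```python
# Pearson correlations on synthetic data shaped like the reported findings:
# a negative stress-hireability relationship and a near-zero EDA relationship.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
stress_impression = rng.uniform(1, 5, 60)  # annotator scores (toy)
hireability = 5.5 - 0.6 * stress_impression + rng.normal(0, 0.5, 60)
eda_feature = rng.normal(0, 1, 60)         # unrelated, as the study found

r, p = pearsonr(stress_impression, hireability)
print(f"stress vs hireability: r={r:.2f}, p={p:.3g}")  # expect negative r
r, p = pearsonr(stress_impression, eda_feature)
print(f"stress vs EDA feature: r={r:.2f}, p={p:.3g}")  # expect r near 0
```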