
    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy, in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
    Dissertation/Thesis, Doctoral Dissertation, Computer Science, 201
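    The human-in-the-loop idea sketched above can be illustrated with a small uncertainty-driven labeling loop, in which the clinician is asked to label the video segments the model is least sure about. This is only a minimal sketch under assumed inputs (precomputed feature vectors, a generic random-forest classifier, and a fixed query budget); it is not the dissertation's actual system.

        # Minimal sketch of an uncertainty-driven human-in-the-loop labeling loop.
        # Feature matrices, classifier choice, and query budget are illustrative
        # assumptions, not the system described in the dissertation.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def human_in_the_loop(X_labeled, y_labeled, X_pool, ask_clinician, budget=10):
            """Iteratively ask a clinician to label the most uncertain pool samples."""
            model = RandomForestClassifier(n_estimators=200, random_state=0)
            for _ in range(budget):
                model.fit(X_labeled, y_labeled)
                proba = model.predict_proba(X_pool)
                # Margin between the two most probable classes; a small margin
                # signals high model uncertainty for that sample.
                sorted_p = np.sort(proba, axis=1)
                margin = sorted_p[:, -1] - sorted_p[:, -2]
                idx = int(np.argmin(margin))
                new_label = ask_clinician(idx)  # the clinician supplies the label
                X_labeled = np.vstack([X_labeled, X_pool[idx:idx + 1]])
                y_labeled = np.append(y_labeled, new_label)
                X_pool = np.delete(X_pool, idx, axis=0)
            return model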

    Human-Machine Communication: Complete Volume. Volume 3. Diffusion of Human-Machine Communication During and After the COVID-19 Pandemic

    This is the complete volume of HMC Volume 3, Diffusion of Human-Machine Communication During and After the COVID-19 Pandemic.

    Psychiatry in the Digital Age: A Blessing or a Curse?

    Social distancing and the shortage of healthcare professionals during the COVID-19 pandemic, the impact of population aging on the healthcare system, and the rapid pace of digital innovation are catalyzing the development and implementation of new technologies and digital services in psychiatry. Is this transformation a blessing or a curse for psychiatry? To answer this question, we conducted a literature review covering a broad range of new technologies and eHealth services, including telepsychiatry; computer-, internet-, and app-based cognitive behavioral therapy; virtual reality; digital applied games; a digital medicine system; omics; neuroimaging; machine learning; precision psychiatry; clinical decision support; electronic health records; physician charting; digital language translators; and online mental health resources for patients. We found that eHealth services provide effective, scalable, and cost-efficient options for the treatment of people with limited or no access to mental health care. This review highlights innovative technologies that are paving the way to more effective and safer treatments. We identified artificially intelligent tools that relieve physicians of routine tasks, allowing them to focus on collaborative doctor-patient relationships. The transformation of traditional clinics into digital ones is outlined, and the challenges associated with the successful deployment of digitalization in psychiatry are highlighted.

    Special Issue on Assistive and Rehabilitation Robotics


    L'empathie et la vidéoconférence en séances simulées de téléthérapie (Empathy and Videoconferencing in Simulated Teletherapy Sessions)

    Teletherapy, defined as the use of a communication medium such as videoconferencing (VC) to conduct psychotherapy sessions at a distance, is increasingly used by therapists and clients. Its use surged during the COVID-19 pandemic as a means to comply with recommended social distancing measures. Although teletherapy has produced outcomes comparable to traditional in-person therapy and is considered a suitable modality for establishing the therapeutic alliance, concerns remain that empathy could be altered in teletherapy through VC. The available data, although limited, suggest that empathy felt by therapists and perceived by clients could be lower in VC than in in-person sessions, but this had yet to be tested experimentally. The relative loss of nonverbal cues in VC, notably the alteration of eye contact, could account for this potential discrepancy in empathy. Given that empathy is a predictor of therapeutic outcome, there is a need 1) to describe how the VC medium influences the mechanisms underlying empathy, 2) to quantitatively compare the levels of empathy in VC sessions with those in in-person sessions, and 3) to design procedures to enhance empathy in teletherapy. These objectives are addressed in the four chapters of this thesis. The first chapter develops a conceptual framework of online empathy, describing the filter effect exerted by online environments on nonverbal signals and its likely adverse influence on empathy in VC. Chapter 2 reports two studies showing that empathy felt by therapists and perceived by clients is lower in simulated VC sessions than in in-person sessions; the results also reveal a significant correlation between reported empathy and telepresence, the impression for clients and therapists of being there together during the VC interaction. Chapters 3 and 4 investigate the impact of eye contact on perceived empathy in teletherapy. Chapter 3 describes the development of a simple methodology that preserves the perception of eye contact in VC by decreasing the gaze angle between the webcam and the eyes of the other interactant on the screen. Chapter 4 uses this methodology to create two experimental conditions, with or without eye contact, during simulated clinical sessions. The results show that, contrary to the initial hypothesis, facilitating eye contact does not increase the levels of empathy and telepresence reported by clients. Eye-tracking data collected during the sessions show that clients did not look more at the therapist's eyes and face when eye contact was facilitated. However, a positive association was observed between the time spent looking at the therapist's eyes and the empathy reported by clients, but only in the condition where eye contact was facilitated. These results indicate that clients can perceive empathy in VC whether or not eye contact is possible. Overall, the thesis demonstrates that empathy can be altered in VC sessions, but also highlights the capacity of clients to adjust to the alteration of nonverbal cues when assessing therapist empathy. These results are encouraging given the growing use of this treatment modality during the global COVID-19 pandemic.
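    The gaze-angle manipulation described in Chapter 3 can be approximated with simple trigonometry: the perceived deviation from eye contact depends on the distance between the webcam and the interlocutor's on-screen eyes relative to the viewing distance. The sketch below uses illustrative distances, not the thesis's experimental values.

        # Back-of-the-envelope gaze-angle estimate for a videoconference setup.
        # The distances below are illustrative assumptions only.
        import math

        def gaze_angle_deg(offset_cm: float, viewing_distance_cm: float) -> float:
            """Angle between the webcam and the interlocutor's eyes shown on screen."""
            return math.degrees(math.atan2(offset_cm, viewing_distance_cm))

        # Eyes rendered 12 cm below a top-mounted webcam, viewer seated 60 cm away:
        print(gaze_angle_deg(12, 60))  # about 11.3 degrees
        # Moving the video window so the eyes sit 3 cm from the camera shrinks the angle:
        print(gaze_angle_deg(3, 60))   # about 2.9 degrees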

    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 adults (94 percent) believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been challenging to measure these conditions accurately by relying on objective means alone. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that recognize mental states based on individual characteristics rather than relying solely on general-purpose solutions. This thesis focuses on conducting experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a broadly applicable mobile application (compatible with both Android and iOS) called STAND is introduced. This application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirects them to a screen consisting of three components: a question prompting participants to indicate their current levels of stress and anxiety, a rating scale from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis. This analysis explores the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis; this data holds potential for further analysis related to stress and anxiety. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games. These recorded videos are then analyzed to extract pertinent features that can be used for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants to be selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging approach to influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity during the last few years, making interventions more enjoyable, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition. The outcomes of the emotion recognition experiment highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training purposes, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracy results for these models were 93%, 95%, and 87%, respectively.
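    The personalized approach described above, training and evaluating a separate model within each participant's own data, can be sketched as follows. Feature extraction from the Emognition recordings is assumed to have been done beforehand; the split ratio and model hyperparameters are illustrative, not those used in the experiment.

        # Sketch of per-participant (personalized) emotion recognition with the
        # three model families named in the abstract. Features and labels per
        # subject are assumed to be precomputed; settings are illustrative.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neural_network import MLPClassifier

        MODELS = {
            "KNN": KNeighborsClassifier(n_neighbors=5),
            "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
        }

        def personalized_accuracy(per_subject_data):
            """per_subject_data: dict of subject id -> (feature matrix, emotion labels)."""
            scores = {name: [] for name in MODELS}
            for subject, (X, y) in per_subject_data.items():
                X_tr, X_te, y_tr, y_te = train_test_split(
                    X, y, test_size=0.3, stratify=y, random_state=0)
                for name, model in MODELS.items():
                    model.fit(X_tr, y_tr)  # refit from scratch for each subject
                    scores[name].append(model.score(X_te, y_te))
            return {name: float(np.mean(acc)) for name, acc in scores.items()}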

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, building on recent findings from neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories, research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on the two EU ICT H2020 projects "weDRAW" and "TELMI", on which I worked during my PhD.
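    One standard computational account of sensory integration in this literature is reliability-weighted (maximum-likelihood) cue combination, in which each modality's estimate is weighted by its inverse variance. The sketch below illustrates that general textbook model only; it is not the computational model developed in the thesis.

        # Reliability-weighted cue combination: each unimodal estimate is weighted
        # by its inverse variance. A textbook illustration, not the thesis model.
        import numpy as np

        def integrate_cues(estimates, variances):
            """Fuse unimodal estimates into a single multisensory estimate."""
            estimates = np.asarray(estimates, dtype=float)
            precisions = 1.0 / np.asarray(variances, dtype=float)
            weights = precisions / precisions.sum()
            fused = float(np.dot(weights, estimates))
            fused_variance = float(1.0 / precisions.sum())
            return fused, fused_variance

        # A reliable visual cue (variance 1) dominates a noisier haptic cue (variance 4):
        print(integrate_cues([10.0, 14.0], [1.0, 4.0]))  # (10.8, 0.8)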