
    Being in-sync: A multimodal framework on the emotional and cognitive synchronization of collaborative learners

    Collaborative learners share an experience when focusing on a task together and coevally influence each other’s emotions and motivations. Continuous emotional synchronization relates to how learners co-regulate their cognitive resources, especially regarding their joint attention and transactive discourse. “Being in-sync” then refers to multiple emotional and cognitive group states and processes, raising the question: to what extent, and when, is being in-sync beneficial, and when is it not? In this article, we propose a multimodal learning analytics framework that addresses the synchronization of collaborative learners across emotional and cognitive dimensions and different modalities. To exemplify this framework and approach the question of how emotions and cognitions intertwine in collaborative learning, we present contrasting cases of learners in a tabletop environment who were or were not instructed to coordinate their gaze. Qualitative analysis of multimodal data incorporating eye-tracking and electrodermal sensors shows that the gaze instruction facilitated being emotionally, cognitively, and behaviorally “in-sync” during peer collaboration. Identifying and analyzing moments of shared emotional shifts shows how learners establish shared understanding of both the learning task and their relationship when they are emotionally “in-sync”.
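    A minimal sketch of how such emotional synchronization could be quantified from the electrodermal data: a sliding-window Pearson correlation between two learners’ EDA traces. This is an illustrative stand-in, not the authors’ analysis pipeline; the sampling rate, window length, and synthetic signals are all assumptions.

```python
# Sliding-window Pearson correlation as a rough physiological-synchrony index.
import numpy as np

def windowed_synchrony(eda_a, eda_b, fs=4, window_s=10, step_s=1):
    """Windowed Pearson correlation between two equal-length EDA traces."""
    win, step = int(fs * window_s), int(fs * step_s)
    scores = []
    for start in range(0, len(eda_a) - win + 1, step):
        a = eda_a[start:start + win]
        b = eda_b[start:start + win]
        # Guard against flat segments, where correlation is undefined.
        if a.std() == 0 or b.std() == 0:
            scores.append(0.0)
        else:
            scores.append(np.corrcoef(a, b)[0, 1])
    return np.asarray(scores)

# Example: two minutes of synthetic EDA sampled at 4 Hz.
rng = np.random.default_rng(0)
shared = np.cumsum(rng.normal(size=480))           # shared arousal drift
eda_a = shared + rng.normal(scale=2.0, size=480)   # learner A
eda_b = shared + rng.normal(scale=2.0, size=480)   # learner B
sync = windowed_synchrony(eda_a, eda_b)
print(f"mean windowed synchrony: {sync.mean():.2f}")
```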

    How the Eyes Tell Lies: Social Gaze During a Preference Task

    Social attention is thought to require detecting the eyes of others and following their gaze. To be effective, observers must also be able to infer the person’s thoughts and feelings about what he or she is looking at, but this has only rarely been investigated in laboratory studies. In this study, participants’ eye movements were recorded while they chose which of four patterns they preferred. New observers were subsequently able to reliably guess the preference response by watching a replay of the fixations. Moreover, when asked to mislead the person guessing, participants changed their looking behavior and guessing success was reduced. In a second experiment, naïve participants could also guess the preference of the original observers but were unable to identify which trials were lies. These results confirm that people can spontaneously use the gaze of others to infer their judgments, but also that these inferences are open to deception.
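    To illustrate the kind of cue the guessers may have exploited, here is a hedged sketch that infers a preference from fixation data by total dwell time, breaking ties toward the last-fixated pattern (people tend to look at the chosen item last). The fixation encoding is hypothetical, not the study’s format.

```python
# Guess a preference from fixations: longest total dwell wins, with the
# final fixation as a tiebreaker.
from collections import defaultdict

def guess_preference(fixations):
    """fixations: list of (pattern_id, duration_ms) in chronological order."""
    dwell = defaultdict(float)
    for pattern, duration in fixations:
        dwell[pattern] += duration
    last = fixations[-1][0]
    # Tuple key: compare by dwell time first, then prefer the last-fixated item.
    return max(dwell, key=lambda p: (dwell[p], p == last))

trial = [("A", 220), ("C", 310), ("B", 180), ("C", 450), ("C", 260)]
print(guess_preference(trial))  # -> "C"
```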

    GIMO: Gaze-Informed Human Motion Prediction in Context

    Predicting human motion is critical for assistive robots and AR/VR applications, where the interaction with humans needs to be safe and comfortable. Meanwhile, an accurate prediction depends on understanding both the scene context and human intentions. Even though many works study scene-aware human motion prediction, the latter is largely underexplored due to the lack of ego-centric views that disclose human intent and the limited diversity in motion and scenes. To reduce the gap, we propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze, which serves as a surrogate for inferring human intent. By employing inertial sensors for motion capture, our data collection is not tied to specific scenes, which further boosts the motion dynamics observed from our subjects. We perform an extensive study of the benefits of leveraging eye gaze for ego-centric human motion prediction with various state-of-the-art architectures. Moreover, to realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches. Our network achieves the top performance in human motion prediction on the proposed dataset, thanks to the intent information from the gaze and the denoised gaze feature modulated by the motion. The proposed dataset and our network implementation will be publicly available.
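    As a rough illustration of what “bidirectional communication between the gaze and motion branches” might look like, the sketch below fuses the two streams with mutual cross-attention. This is an assumed stand-in, not GIMO’s published architecture; all dimensions, module names, and shapes are invented.

```python
# Mutual cross-attention between motion and gaze token streams (illustrative).
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Motion tokens query gaze tokens, and vice versa.
        self.motion_from_gaze = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gaze_from_motion = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_m = nn.LayerNorm(dim)
        self.norm_g = nn.LayerNorm(dim)

    def forward(self, motion, gaze):
        # motion: (B, T_m, dim) pose features; gaze: (B, T_g, dim) gaze features.
        m2g, _ = self.motion_from_gaze(motion, gaze, gaze)    # intent cue for motion
        g2m, _ = self.gaze_from_motion(gaze, motion, motion)  # motion refines gaze
        return self.norm_m(motion + m2g), self.norm_g(gaze + g2m)

fusion = BidirectionalFusion()
motion = torch.randn(2, 60, 256)  # 60 past pose frames, batch of 2
gaze = torch.randn(2, 60, 256)    # matching gaze embeddings
motion_out, gaze_out = fusion(motion, gaze)
print(motion_out.shape, gaze_out.shape)  # torch.Size([2, 60, 256]) each
```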

    Therapeutic Alliance as Active Inference: The Role of Therapeutic Touch and Synchrony

    Recognizing and aligning individuals’ unique adaptive beliefs or “priors” through cooperative communication is critical to establishing a therapeutic relationship and alliance. Using active inference, we present an empirical integrative account of the biobehavioral mechanisms that underwrite therapeutic relationships. A significant mode of establishing cooperative alliances—and potential synchrony relationships—is through ostensive cues generated by repetitive coupling during dynamic touch. Established models speak to the unique role of affectionate touch in developing communication, interpersonal interactions, and a wide variety of therapeutic benefits for patients of all ages, both neurophysiologically and behaviorally. The purpose of this article is to argue for the importance of therapeutic touch in establishing a therapeutic alliance and, ultimately, synchrony between practitioner and patient. We briefly review the importance and role of the therapeutic alliance in prosocial and clinical interactions. We then discuss how cooperative communication and mental state alignment—in intentional communication—are accomplished using active inference. We argue that alignment through active inference facilitates synchrony and communication. The ensuing account is extended to include the role of (C-) tactile afferents in realizing the beneficial effect of therapeutic synchrony. We conclude by proposing a method for synchronizing the effects of touch using the concept of active inference.
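    For readers unfamiliar with active inference, a minimal numeric sketch of its central quantity may help: the variational free energy F = KL(q(s) || p(s)) − E_q[log p(o|s)], which an agent reduces by aligning its beliefs with its observations. The two-state “rapport” example and all numbers below are invented for illustration, not drawn from the article.

```python
# Variational free energy for a discrete hidden state (illustrative numbers).
import numpy as np

def free_energy(q, prior, likelihood_o):
    """q, prior: beliefs over hidden states; likelihood_o: p(observation | s)."""
    q, prior, likelihood_o = map(np.asarray, (q, prior, likelihood_o))
    complexity = np.sum(q * np.log(q / prior))   # KL(q || p): divergence from prior
    accuracy = np.sum(q * np.log(likelihood_o))  # E_q[log p(o|s)]: fit to observation
    return complexity - accuracy

prior = [0.5, 0.5]       # hypothetical prior over states (rapport, no rapport)
likelihood = [0.9, 0.2]  # hypothetical p(soothing touch observed | state)
# A belief aligned with the observation yields lower free energy than a flat one.
print(free_energy([0.8, 0.2], prior, likelihood))  # ~0.60
print(free_energy([0.5, 0.5], prior, likelihood))  # ~0.86
```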

    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the Proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focuses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training.

    Evaluation of the parents’ fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos.

    The relationship between the parent and the clinician is important. The clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.

    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
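    A hedged sketch of the human-in-the-loop idea described above: the model flags its most uncertain video-probe segments (highest predictive entropy) and routes them to the clinician for labeling before retraining. The feature representation and classifier here are placeholders, not the dissertation’s actual pipeline.

```python
# Uncertainty-driven sample selection for clinician labeling (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def most_uncertain(model, unlabeled_X, k=5):
    """Return indices of the k samples with the highest predictive entropy."""
    probs = model.predict_proba(unlabeled_X)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))     # multimodal features per labeled segment
y_train = rng.integers(0, 2, size=40)  # fidelity labels (0/1)
X_pool = rng.normal(size=(200, 8))     # unlabeled video-probe segments

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
ask_clinician = most_uncertain(model, X_pool)
print("segments to send for expert labeling:", ask_clinician)
```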