4 research outputs found

    Multimodality in Pervasive Environment

    Future pervasive environments are expected to immerse users in a consistent world of probes, sensors, and actuators. Multimodal interfaces, combined with social-computing interactions and high-performance networking, can foster a new generation of pervasive environments. However, much work is still needed to harness the full potential of multimodal interaction. In this paper we discuss short-term research goals, including advanced techniques for joining and correlating multiple data flows, each with its own approximations and uncertainty models. We also discuss longer-term objectives, such as providing users with a mental model of their own multimodal "aura", enabling them to collaborate with the network infrastructure toward inter-modal correlation of multimodal inputs, much as the human brain extracts a single self-conscious experience from multiple sensorial data flows.
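
    As a hedged illustration of the stream-correlation goal, the sketch below joins two timestamped modality streams by nearest-timestamp alignment and fuses each matched pair by inverse-variance weighting, so the less uncertain modality dominates. The stream names, the alignment window, and the per-reading variance model are illustrative assumptions, not the paper's method.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Reading:
        t: float      # timestamp in seconds
        value: float  # scalar estimate from one modality
        var: float    # variance of that estimate (its uncertainty model)

    def join_streams(a, b, window=0.1):
        """Pair each reading in `a` with the nearest-in-time reading in `b`
        within `window` seconds; unmatched readings are dropped."""
        pairs = []
        for ra in a:
            nearest = min(b, key=lambda rb: abs(rb.t - ra.t), default=None)
            if nearest is not None and abs(nearest.t - ra.t) <= window:
                pairs.append((ra, nearest))
        return pairs

    def fuse(ra, rb):
        """Inverse-variance weighted fusion of two correlated readings:
        the lower-variance modality dominates the combined estimate."""
        wa, wb = 1.0 / ra.var, 1.0 / rb.var
        return (wa * ra.value + wb * rb.value) / (wa + wb), 1.0 / (wa + wb)

    # Hypothetical audio and gesture streams observing the same quantity.
    audio = [Reading(0.00, 1.2, 0.4), Reading(0.50, 1.4, 0.4)]
    gesture = [Reading(0.05, 0.9, 0.1), Reading(0.52, 1.1, 0.1)]
    for ra, rb in join_streams(audio, gesture):
        print(fuse(ra, rb))  # fused estimate and its (smaller) variance
    ```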

    Toward Sensor-Based Context Aware Systems

    This paper proposes a methodology for sensor data interpretation that combines sensor outputs with contexts represented as sets of annotated business rules. Sensor readings are interpreted to generate events labeled with the appropriate type and level of uncertainty; then the appropriate context is selected. Reconciliation of different uncertainty types is achieved by a simple technique that moves uncertainty from events to business rules by generating combs of standard Boolean predicates. Finally, context rules are evaluated together with the events to reach a decision. The feasibility of our idea is demonstrated via a case study in which a context-reasoning engine has been connected to simulated heartbeat sensors using prerecorded experimental data. We use sensor outputs to identify the proper context of operation of a system and to trigger decision-making based on context information.
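
    The comb idea can be sketched as follows: an uncertain scalar reading is expanded into several crisp samples across its uncertainty interval, a standard Boolean predicate from the selected context's business rule is evaluated on every sample, and the fraction of samples that satisfy it becomes the rule's confidence. The thresholds, contexts, and decision cutoff below are illustrative assumptions, not the paper's actual rules.

    ```python
    def comb(reading, spread, teeth=5):
        """Expand an uncertain scalar reading into `teeth` crisp samples
        spanning [reading - spread, reading + spread]."""
        step = (2 * spread) / (teeth - 1)
        return [reading - spread + i * step for i in range(teeth)]

    def rule_confidence(samples, predicate):
        """Evaluate a standard Boolean predicate on every tooth of the comb;
        the fraction of teeth that satisfy it is the rule's confidence."""
        hits = sum(1 for s in samples if predicate(s))
        return hits / len(samples)

    # Context selection: each context carries its own annotated business rule.
    CONTEXTS = {
        "resting":  lambda bpm: bpm > 100,   # tachycardia at rest
        "exercise": lambda bpm: bpm > 170,   # abnormal even under load
    }

    def decide(bpm_reading, spread, context):
        samples = comb(bpm_reading, spread)
        confidence = rule_confidence(samples, CONTEXTS[context])
        return ("alert" if confidence >= 0.5 else "ok"), confidence

    print(decide(bpm_reading=104, spread=6, context="resting"))   # ('alert', 0.8)
    print(decide(bpm_reading=104, spread=6, context="exercise"))  # ('ok', 0.0)
    ```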

    Effects of avatar character performances in virtual reality dramas used for teachers’ education

    Virtual reality drama offers the benefit of enhanced immersion, which earlier e-Learning systems lacked. Moreover, it can stand in for dangerous or expensive educational content while stimulating users' interest. In this study, we investigate the effects of avatar performance in virtual reality drama. The hypothesis that the psychical distance between virtual characters and their viewers changes with the size of video shots is tested with an autonomic nervous system function test. Eighty-four college students were randomly assigned to three groups. The virtual reality drama, used to train teachers in school bullying prevention, deals with dialogue between teachers and students. Group 1 was shown full-shot video clips, Group 2 was shown a range of clips from full shots to extreme close-ups, and Group 3 was shown close-up shots. We found that viewers' levels of stimulation changed in relation to shot size: the R-R intervals (between successive R waves) of the electrocardiograms (ECGs, bio-signal feedback) became significantly narrower as the shot size became smaller.
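
    For reference, the measurement behind this finding can be sketched in a few lines: the R-R interval series is the successive differences between R-peak timestamps, and a smaller mean interval indicates a faster heart rate, i.e., stronger stimulation. The timestamps below are fabricated illustrative values, not the study's data.

    ```python
    def rr_intervals(r_peaks):
        """Successive differences between R-peak times (seconds)."""
        return [b - a for a, b in zip(r_peaks, r_peaks[1:])]

    def mean_rr(r_peaks):
        intervals = rr_intervals(r_peaks)
        return sum(intervals) / len(intervals)

    full_shot = [0.00, 0.92, 1.85, 2.77]   # hypothetical R-peak times
    close_up = [0.00, 0.78, 1.55, 2.34]    # narrower spacing -> faster heart rate

    print(f"full shot mean R-R: {mean_rr(full_shot):.3f} s")
    print(f"close-up  mean R-R: {mean_rr(close_up):.3f} s")
    ```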

    Emotional state inference using face related features

    Obtaining reliable and complete systems able to extract human emotional status from streaming video is of paramount importance for Human Machine Interaction (HMI) applications. Side views, unnatural postures, and context remain challenges. This paper presents a semi-supervised fuzzy emotional classification system based on Russell's circumplex model. The emotional inference system relies only on face-related features codified with the Facial Action Coding System (FACS). These features are provided by a morphable 3D tracking system robust to posture, occlusion, and illumination changes.
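
    As a hedged sketch of the general approach (not the paper's classifier), the code below projects FACS Action Unit intensities onto the valence/arousal plane of Russell's circumplex model and assigns each emotion label a fuzzy membership that decays with angular distance. The AU-to-axis weights and label prototype angles are illustrative assumptions.

    ```python
    import math

    # Hypothetical contribution of a few AUs to (valence, arousal).
    AU_WEIGHTS = {
        "AU12": (0.9, 0.3),   # lip corner puller (smile)
        "AU4":  (-0.7, 0.5),  # brow lowerer (frown)
        "AU5":  (0.0, 0.8),   # upper lid raiser (surprise/alertness)
    }

    # Prototype angles on the circumplex (degrees), illustrative only.
    LABELS = {"happy": 20, "angry": 140, "sad": 220, "calm": 320}

    def circumplex_point(au_intensities):
        """Weighted sum of AU intensities in the valence/arousal plane."""
        v = sum(AU_WEIGHTS[au][0] * x for au, x in au_intensities.items())
        a = sum(AU_WEIGHTS[au][1] * x for au, x in au_intensities.items())
        return v, a

    def fuzzy_memberships(v, a):
        """Each label's membership decays with angular distance from it."""
        angle = math.degrees(math.atan2(a, v)) % 360
        out = {}
        for label, proto in LABELS.items():
            d = min(abs(angle - proto), 360 - abs(angle - proto))
            out[label] = max(0.0, 1.0 - d / 180.0)
        return out

    v, a = circumplex_point({"AU12": 0.8, "AU5": 0.3})
    print(fuzzy_memberships(v, a))  # highest membership near "happy"
    ```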