
    Multimodal Interaction in a Haptic Environment

    In this paper we investigate the introduction of haptics in a multimodal tutoring environment. In this environment a haptic device is used to control a virtual piece of sterile cotton and a virtual injection needle. Speech input and output are provided to interact with a virtual tutor, available as a talking head, and with a virtual patient. We introduce the haptic tasks and explain how different agents in the multi-agent system are made responsible for them. Notes are provided on the way we introduce an affective model in the tutor agent.

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment the user also displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and the human participants. It is therefore useful for the profile to contain a physical representation of the user, obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, discuss how remote meeting participants can take part in meeting activities, and make some observations on translating these research results to smart home environments.

    Classifying motor imagery in presence of speech

    In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environments. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate the desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in the presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not significantly affect classification accuracy: regardless of the presence of speech, the proposed methods separated left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47%, and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.
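    To make the classification task above concrete, here is a minimal, hedged sketch of left/right motor-imagery separation from band power. It is not the paper's pipeline: the mu-band (8–12 Hz) feature, the synthetic single-channel signals, and the two-class-means threshold (a degenerate LDA) are all assumptions introduced for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bandpower(signal, fs, band):
        """Estimate power in a frequency band via a simple periodogram."""
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()

    def make_trial(attenuated, fs=256, secs=2):
        """Synthetic motor-cortex trial: a ~10 Hz mu rhythm plus noise.
        Hand imagery attenuates the contralateral mu amplitude (ERD)."""
        t = np.arange(fs * secs) / fs
        amp = 0.3 if attenuated else 1.0
        return amp * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

    # Feature: log mu-band power per trial; labels 0 = left, 1 = right imagery.
    y = np.array([0] * 40 + [1] * 40)
    X = np.array([[np.log(bandpower(make_trial(lbl == 1), 256, (8, 12)))]
                  for lbl in y])

    # Minimal linear classifier: threshold at the midpoint of the class means.
    m0, m1 = X[y == 0].mean(), X[y == 1].mean()
    threshold = (m0 + m1) / 2.0
    pred = (X[:, 0] < threshold).astype(int) if m1 < m0 else (X[:, 0] > threshold).astype(int)
    accuracy = (pred == y).mean()
    print(f"left/right accuracy on synthetic trials: {accuracy:.2f}")
    ```

    On these clean synthetic signals the accuracy is much higher than the 60% reported on real EEG; the gap reflects the noise and speech artifacts the paper contends with.
    
    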

    Haptic Water; Haptics on an Animated Surface

    Haptic rendering is becoming an important element of multimodal interaction. Often a real-time coupling between haptics and visualization is required, based upon an underlying physical model. In this paper we study haptic rendering and visualization of the generation of waves in shallow water. For applications it is usually more important to produce a believable simulation than a physically accurate one. Our focus was therefore on obtaining suitable simplifications of the Kass–Miller model and on incorporating them into a multimodal environment, aiming at haptic rendering and real-time visualization of waves. The result has been implemented and tested using a Haptic Master device, produced by FCS Control Systems.
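    A rough sketch of the kind of height-field wave propagation that shallow-water models of the Kass–Miller family make cheap enough for real-time use. The grid size, depth, time step, and damping factor below are illustrative assumptions, not parameters from the paper; the scheme is an explicit finite-difference integration of the 1D wave equation h_tt = g·d·h_xx.

    ```python
    import numpy as np

    # Illustrative 1D height-field wave simulation (not the paper's parameters).
    N = 64            # grid columns
    g, d = 9.81, 0.5  # gravity, mean water depth (wave speed^2 = g * d)
    dx, dt = 0.1, 0.01
    damping = 0.998   # slight velocity damping for a believable, stable result

    h = np.zeros(N)   # surface height deviation per column
    v = np.zeros(N)   # vertical velocity per column
    h[N // 2] = 1.0   # initial disturbance, e.g. a haptic probe dipping in

    def step(h, v):
        """One explicit time step of h_tt = g*d*h_xx with reflecting walls."""
        lap = np.empty_like(h)
        lap[1:-1] = h[:-2] - 2 * h[1:-1] + h[2:]
        lap[0] = h[1] - h[0]          # reflecting (Neumann) boundary
        lap[-1] = h[-2] - h[-1]
        v = damping * (v + dt * g * d * lap / dx**2)
        return h + dt * v, v

    for _ in range(200):
        h, v = step(h, v)

    print(f"total height deviation after 200 steps: {h.sum():.4f}")
    ```

    The reflecting-boundary Laplacian sums to zero, so the total column height is conserved, which keeps the water volume visually plausible. For haptics, the force fed back to the device can then be derived from the local height gradient under the probe.
    
    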

    Maps, agents and dialogue for exploring a virtual world

    In previous years we have been involved in several projects in which users (or visitors) had to find their way in information-rich virtual environments. 'Information-rich' means that users do not know beforehand what is available in the environment or where to go to find the information; moreover, they do not necessarily know exactly what they are looking for. It also means that the information may change over time: a second visit to the same environment will require different behavior from the visitor to obtain information similar to that available during a previous visit. In this paper we report on two projects and discuss our attempts to generalize from the different approaches and application domains to obtain a library of methods and tools for designing and implementing intelligent agents that inhabit virtual environments and support the navigation of the user/visitor.

    A Demonstration of Continuous Interaction with Elckerlyc

    We discuss behavior planning in the style of the SAIBA framework for continuous (as opposed to turn-based) interaction. Such interaction requires real-time application of minor shape or timing modifications to running behavior, and anticipation of the behavior of a (human) interaction partner. We discuss how behavior (re)planning and on-the-fly parameter modification fit into the current SAIBA framework, and what type of language or architecture extensions might be necessary. Our BML realizer Elckerlyc provides flexible mechanisms for both the specification and the execution of modifications to running behavior. We show how these mechanisms are used in a virtual trainer and in two turn-taking scenarios.

    Toward Affective Dialogue Modeling using Partially Observable Markov Decision Processes

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's emotional state and to act appropriately. The dialogue model uses a Partially Observable Markov Decision Process (POMDP) approach, with observations composed of the observed user's emotional state and action. A simple route-navigation example is worked out to clarify our approach, and preliminary results and future plans are briefly discussed.
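    The core mechanism such a POMDP dialogue model relies on is the belief update over the user's hidden emotional state after each system action and observation. A toy sketch follows; the two-state emotion set, the dialogue actions, and every probability below are hypothetical numbers chosen for illustration, not values from the paper.

    ```python
    import numpy as np

    # Hidden state: user's emotional state, 0 = "calm", 1 = "stressed".
    # T[a][s, s']: transition probabilities under the system's dialogue action a.
    T = {
        "reassure": np.array([[0.95, 0.05],
                              [0.60, 0.40]]),   # reassuring tends to calm the user
        "repeat":   np.array([[0.80, 0.20],
                              [0.10, 0.90]]),   # repeating a prompt may add stress
    }
    # O[s', o]: probability of observing emotion cue o when the next state is s'.
    O = np.array([[0.8, 0.2],    # a calm user mostly looks calm
                  [0.3, 0.7]])   # a stressed user mostly looks stressed

    def belief_update(b, a, o):
        """Bayes filter: b'(s') proportional to O(o|s') * sum_s T(s'|s,a) * b(s)."""
        b_next = O[:, o] * (b @ T[a])
        return b_next / b_next.sum()

    b = np.array([0.5, 0.5])                 # uniform prior over emotional states
    b = belief_update(b, "repeat", o=1)      # system repeats; user looks stressed
    print(np.round(b, 3))
    ```

    The dialogue policy then maps this belief (rather than a single guessed state) to the next system action, which is what lets the model hedge between, say, reassuring and continuing when the emotion recognition is uncertain.
    
    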