    A generic architecture and dialogue model for multimodal interaction

    This paper presents a generic architecture and a dialogue model for multimodal interaction. Both the architecture and the model are transparent and have been used for different task domains. In this paper the emphasis is on their use for the navigation task in a virtual environment. The dialogue model is based on the information state approach and the recognition of dialogue acts. We explain how pairs of backward and forward looking tags, together with the preference rules of the dialogue act determiner, determine the structure of the dialogues that the system can handle. The system's action selection mechanism and the problem of reference resolution are discussed in detail.
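    The combination of forward/backward looking tags and preference rules described above can be illustrated with a minimal sketch. The tag inventory, the surface cues, and the single preference rule below are all hypothetical placeholders, not the paper's actual scheme: each utterance receives a forward-looking tag (what it sets up) and a backward-looking tag (how it links to the previous act), and a preference rule chooses among competing candidate interpretations.

```python
# Hypothetical sketch of a dialogue act determiner in the information-state
# style. Tag names, surface cues, and the preference rule are illustrative
# assumptions, not the inventory used in the paper.

def determine_act(utterance, prev_forward):
    """Return a (forward_tag, backward_tag) pair for an utterance,
    given the forward-looking tag of the previous utterance."""
    text = utterance.lower().strip()
    candidates = []
    if text.endswith("?"):
        candidates.append(("question", "none"))
    if text.startswith(("go ", "turn ", "show ")):
        candidates.append(("request", "none"))
    if prev_forward == "question":
        # The previous utterance opened a question; this one may answer it.
        candidates.append(("statement", "answer"))
    if not candidates:
        candidates.append(("statement", "none"))
    # Preference rule: an interpretation that links back to an open
    # dialogue act is preferred over one that starts a fresh exchange.
    candidates.sort(key=lambda fb: 0 if fb[1] != "none" else 1)
    return candidates[0]
```

    In this toy version, "to the left" after a question is tagged as an answering statement, while the same words after a statement would start a new exchange; the preference ordering, not the surface form alone, decides.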

    Reference resolution in multi-modal interaction: Preliminary observations

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need to devote more research to reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply more than one modality in conveying his or her message to the environment, in which a computer detects and interprets signals from different modalities. We show some naturally arising problems but do not give general solutions. Rather, we decide to perform more detailed research on reference resolution in uni-modal contexts to obtain methods generalizable to multi-modal contexts. Since we aim to build applications for a Dutch audience, and since hardly any research has been done on reference resolution for Dutch, we give results on the resolution of anaphoric and deictic references in Dutch texts. We hope to be able to extend these results to our multimodal contexts later.

    Follow-up question handling in the IMIX and Ritel systems: A comparative study

    One of the basic topics of question answering (QA) dialogue systems is how follow-up questions should be interpreted by a QA system. In this paper, we discuss our experience with the IMIX and Ritel systems, for both of which a follow-up question handling scheme has been developed and corpora have been collected. These two systems are each other's opposites in many respects: IMIX is multimodal, non-factoid, black-box QA, while Ritel is spoken, factoid, keyword-based QA. Nevertheless, we show that they are quite comparable, and that it is fruitful to examine the similarities and differences. We look at how the systems are composed, and how real, non-expert users interact with them. We also provide comparisons with systems from the literature where possible, and indicate where open issues lie and in what areas existing systems may be improved. We conclude that most systems share a common architecture with a set of common subtasks, in particular detecting follow-up questions and finding referents for them. We characterise these tasks using the techniques typically applied to them and data from our corpora. We also identify a special type of follow-up question, the discourse question, which is asked when the user is trying to understand an answer, and propose some basic methods for handling it.

    Reference Resolution in Multi-modal Interaction: Position paper

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need to devote more research to reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply more than one modality in conveying his or her message to the environment, in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.

    A holistic multimodal approach to the non-invasive analysis of watercolour paintings

    A holistic approach using non-invasive multimodal imaging and spectroscopic techniques to study the materials (pigments, drawing materials and paper) and painting techniques of watercolour paintings is presented. The non-invasive imaging and spectroscopic techniques include VIS-NIR reflectance spectroscopy and multispectral imaging, micro-Raman spectroscopy, X-ray fluorescence spectroscopy (XRF) and optical coherence tomography (OCT). The three spectroscopic techniques complement each other in pigment identification. Multispectral imaging (near-infrared bands), OCT and micro-Raman complement each other in the visualisation and identification of the drawing material. OCT probes the microstructure and light scattering properties of the substrate, while XRF detects the elemental composition that indicates the sizing methods and the filler content. The multiple techniques were applied in a study of forty-six 19th-century Chinese export watercolours from the Victoria & Albert Museum (V&A) and the Royal Horticultural Society (RHS) to examine to what extent the non-invasive analysis techniques employed complement each other and how much useful information about the paintings can be extracted to address questions in art conservation and history.

    Automated multimodal volume registration based on supervised 3D anatomical landmark detection

    We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two imaging modalities. Experiments are carried out with this method on a dataset of pelvis CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results competitive with a manually assisted state-of-the-art rigid registration algorithm.
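    The least-squares rigid fitting step described above can be sketched with the standard Kabsch/Procrustes solution: given corresponding landmark positions predicted in the two modalities, it recovers the rotation and translation that best align them in the least-squares sense. This is a generic sketch of that classical technique, not the paper's specific implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) aligning src to dst.

    src, dst: (N, 3) arrays of corresponding landmark positions,
    e.g. landmarks detected in CBCT (src) and CT (dst).
    Returns rotation R (3x3) and translation t (3,) such that
    dst ≈ src @ R.T + t.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centred landmark sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_mean - src_mean @ R.T
    return R, t
```

    In the pipeline sketched by the abstract, `src` and `dst` would come from the per-modality landmark detectors; because the fit is least-squares over all landmarks, a few noisy detections are averaged out rather than dominating the transform.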