    An information assistant system for the prevention of tunnel vision in crisis management

    In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes that often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event. All information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or remedies the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.
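    The abstract describes a mediating pipeline: enhance quality, reduce quantity, then present. The minimal sketch below illustrates that flow; all names, thresholds, and data fields are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    text: str
    confidence: float   # estimated source reliability in [0, 1] (assumed)

def enhance_quality(reports):
    """Quality step: drop low-confidence reports and duplicate texts."""
    seen = set()
    for r in sorted(reports, key=lambda r: -r.confidence):
        if r.confidence >= 0.5 and r.text not in seen:   # threshold assumed
            seen.add(r.text)
            yield r

def reduce_quantity(reports, budget=5):
    """Quantity step: cap how many items reach the crisis manager per cycle."""
    return list(reports)[:budget]

def present(reports):
    """Presentation step: keep the source visible next to each item so
    alternative viewpoints stay side by side, countering tunnel vision."""
    for r in reports:
        print(f"[{r.source}] ({r.confidence:.1f}) {r.text}")

incoming = [
    Report("field team", "Fire spreading to sector B", 0.9),
    Report("social media", "Fire spreading to sector B", 0.4),
    Report("sensor grid", "Smoke density rising at exit 3", 0.8),
]
present(reduce_quantity(enhance_quality(incoming)))
```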

    GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base able to interact with users. A chatbot’s appearance can be very sophisticated, with 3D avatars and speech processing modules; however, the interaction between the system and the user is typically performed only through textual areas for inputs and replies. An interaction that adds graphical widgets to natural language could be more effective. Conversely, a graphical interaction that also involves natural language can be more comfortable for the user than one relying on graphical widgets alone. In many applications, multi-modal communication is preferable when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML that allows merging both interaction modalities. In this context, a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that best suit the data types exchanged between the user and the system, according to the context of the dialogue.
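    The abstract does not reproduce GAIML's tag set, so the element names in the sketch below are hypothetical; it only illustrates the core idea of a chatbot reply that interleaves verbal output with declared graphical widgets, parsed here with Python's standard xml.etree.ElementTree.

```python
import xml.etree.ElementTree as ET

# Hypothetical GAIML-like reply: the tag names below are invented for
# illustration and do not come from the GAIML specification.
reply = """
<response>
  <say>Here are paintings matching your query.</say>
  <widget type="image-gallery">
    <item src="gioconda.jpg" label="La Gioconda"/>
    <item src="annunciazione.jpg" label="Annunciazione"/>
  </widget>
</response>
"""

root = ET.fromstring(reply)
for node in root:
    if node.tag == "say":          # verbal channel: plain chatbot text
        print("BOT:", node.text.strip())
    elif node.tag == "widget":     # graphical channel: widget declaration
        labels = [item.get("label") for item in node]
        print(f"render {node.get('type')} widget with items {labels}")
```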

    Towards an engineering approach for advanced interaction techniques in 3D environments

    In recent years, Virtual Environments have appeared in new areas such as mass-market, web, or mobile contexts. In parallel, advanced forms of interaction are emerging, such as tactile, mixed, tangible, or spatial user interfaces, promoting ease of learning and use. To contribute to the democratization of 3D Virtual Environments (3DVE) and their use by people who are not 3D experts and by occasional users, Computer Graphics and Human-Computer Interaction design considerations must be taken into account simultaneously. In this position paper, we first provide an overview of a new analytical framework for the design of advanced interaction techniques for 3D Virtual Environments. It consists of identifying links that support the interaction and connect the user’s tasks to be performed in a 3DVE with the targeted scene graph. We relate our work to existing modelling approaches and discuss our expectations with regard to the engineering of advanced interaction techniques.
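    The central notion is a link that connects a user task to the scene-graph nodes it operates on. The toy sketch below makes that concrete under invented names and structures; the paper itself does not prescribe any such representation.

```python
# Toy sketch: interaction links connecting user tasks to scene-graph nodes.
scene_graph = {
    "room":  {"children": ["table", "lamp"]},
    "table": {"children": ["vase"]},
    "vase":  {"children": [], "transform": (0.0, 1.0, 0.0)},
    "lamp":  {"children": [], "transform": (2.0, 0.0, 0.0)},
}

# Each link ties one user task to the nodes it targets and, where relevant,
# to the scene-graph attribute the interaction technique manipulates.
links = [
    {"task": "move object",   "targets": ["vase"],         "attribute": "transform"},
    {"task": "select object", "targets": ["vase", "lamp"], "attribute": None},
]

def affected_nodes(task_name):
    """Scene-graph nodes a given user task can reach through its links."""
    return [target for link in links if link["task"] == task_name
            for target in link["targets"]]

print(affected_nodes("move object"))     # -> ['vase']
print(affected_nodes("select object"))   # -> ['vase', 'lamp']
```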

    Multi-modal usability evaluation.

    Research into the usability of multi-modal systems has tended to be device-led, with a resulting lack of theory about multi-modal interaction and how it might differ from more conventional interaction. This is compounded by confusion over the precise definition of modality across the various disciplines within the HCI community, over how modalities can be effectively classified, and over their usability properties. There is a consequent lack of appropriate methodologies and notations to model such interactions and assess the usability implications of these interfaces. The role of expertise and craft skill in using HCI techniques is also poorly understood. This thesis proposes a new definition of modality, and goes on to identify issues of importance to multi-modal usability, culminating in the development of a new methodology to support the identification of such usability issues. It additionally explores the role of expertise and craft skill in using usability modelling techniques to assess usability issues. By analysing the problems inherent in current definitions and approaches, as well as issues relevant to cognitive science, a clear understanding of both the requirements for a suitable definition of modality and the salient usability issues is obtained. A novel definition of modality, based on the three elements of sense, information form, and temporal nature, is proposed. Further, an associated taxonomy is produced, which categorises modalities within the sensory dimension as visual, acoustic, or haptic; within the information form dimension as lexical, symbolic, or concrete; and within the temporal form dimension as discrete, continuous, or dynamic. This results in a twenty-seven-cell taxonomy, with each cell representing one taxon, indicating one particular type of modality. This is a faceted classification system, with each modality named after the intersection of the categories, building the category names into a compound modality name (see the sketch after this abstract). The issues surrounding modality are examined and refined into the concepts of modality types, properties, and clashes. Modalities are identified as belonging to either the system or the user, and as being expressive or receptive in type. Various properties are described based on issues of granularity and redundancy, and five different types of clash are described. Problems relating to the modelling of multi-modal interaction are examined by means of a motivating case study based on a portion of an interface for a robotic arm. The effectiveness of five modelling techniques, STN, CW, CPM-GOMS, PUM and Z, in representing multi-modal issues is assessed. From this, and using the collated definition, taxonomy, and theory, a new methodology, Evaluating Multi-modal Usability (EMU), is developed. This is applied to the previous case study of the robotic arm to assess its application and coverage. Both the definition and EMU are used by students in a case study to test the definition's and methodology's effectiveness, and to examine the leverage such an approach may give. The results show that modalities can be successfully identified within an interactive context, and that usability issues can be described. Empirical video data of the robotic arm in use is used to confirm the issues identified by the previous analyses, and to identify new issues.
A rational re-analysis of the six approaches (STN, CW, CPM-GOMS, PUM, Z, and EMU) is conducted in order to distinguish between issues identified through craft skill, based on general HCI expertise and familiarity with the problem, and issues identified by the core of the method for each approach. This is done to gain a realistic understanding of the validity of claims made by each method, to identify how else issues might be identified, and to draw out the consequent implications. Craft skill is found to have a wider role than anticipated, and the importance of expertise in using such approaches is emphasised. From the case study and the re-analyses, the implications for EMU are examined, and suggestions are made for future refinement. The main contributions of this thesis are the new definition, taxonomy, and theory, which significantly advance the theoretical understanding of multi-modal usability, helping to resolve existing confusion in this area. The new methodology, EMU, is a useful technique for examining interfaces for multi-modal usability issues, although some refinement is required. The importance of craft skill in the identification of usability issues has been explicitly explored, with implications for future work on usability modelling and the training of practitioners in such techniques.
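    The three-dimensional definition above fixes the taxonomy completely, so it can be enumerated mechanically, as in the sketch below. The hyphenated compound-name format is an assumption; the abstract does not specify how the category names are joined.

```python
from itertools import product

# The three dimensions of the proposed definition of modality (from the abstract).
senses = ["visual", "acoustic", "haptic"]
information_forms = ["lexical", "symbolic", "concrete"]
temporal_forms = ["discrete", "continuous", "dynamic"]

# Each cell of the faceted taxonomy is one taxon; its compound name is built
# from the intersection of the three category names (join format assumed).
taxonomy = [f"{sense}-{form}-{time}"
            for sense, form, time in product(senses, information_forms, temporal_forms)]

assert len(taxonomy) == 27          # the twenty-seven-cell taxonomy
print(taxonomy[0])                  # -> visual-lexical-discrete
```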

    Bridging the Gap between a Behavioural Formal Description Technique and User Interface description language: Enhancing ICO with a Graphical User Interface markup language

    In recent years, User Interface Description Languages (UIDLs) have appeared as a suitable solution for developing interactive systems. In order to implement reliable and efficient applications, we propose to employ a formal description technique called ICO (Interactive Cooperative Object) that has been developed to cope with the complex behaviours of interactive systems, including event-based and multimodal interactions. So far, ICO is able to describe most parts of an interactive system, from functional core concerns to fine-grained interaction techniques, but, even though it addresses parts of the rendering, it still lacks the means to describe the effective rendering of such interactive systems. This paper presents a solution to overcome this gap using markup languages. A first technique is based on the Java technology JavaFX, and a second is based on the emerging UsiXML language for describing user interface components for multi-target platforms. The proposed approach offers a bridge between markup-language-based descriptions of the user interface components and a robust technique for describing behaviour using ICO modelling. Furthermore, this paper highlights how it is possible to take advantage of both behavioural and markup language description techniques to propose a new model-based approach for prototyping interactive systems. The proposed approach is fully illustrated by a case study using an interactive application embedded in interactive aircraft cockpits.
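    The bridge pairs a declarative description of a widget with a separate behavioural model that consumes its events. The rough Python analogue below stands in for that split; ICO is Petri-net based, so the tiny state machine here is only a placeholder, and every name is invented.

```python
# Rough analogue of the bridge: a markup-like widget description (rendering
# side) wired to a small state model standing in for the ICO behavioural model.
ui_description = {
    "type": "button",
    "id": "engage_autopilot",   # invented example widget
    "label": "Engage",
    "event": "click",
}

class Behaviour:
    """Minimal stand-in for an ICO model: states plus event-driven transitions
    (the real ICO formalism uses high-level Petri nets, not a state machine)."""
    def __init__(self):
        self.state = "disengaged"
        self.transitions = {("disengaged", "click"): "engaged",
                            ("engaged", "click"): "disengaged"}

    def fire(self, event):
        self.state = self.transitions.get((self.state, event), self.state)

model = Behaviour()

def dispatch(widget, event):
    """The bridge itself: rendering-layer events feed the behavioural model,
    whose new state would in turn drive re-rendering of the widget."""
    if event == widget["event"]:
        model.fire(event)
        print(f"{widget['id']} -> state: {model.state}")

dispatch(ui_description, "click")   # engage_autopilot -> state: engaged
```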

    Comprehensive Framework for MultiModal Meaning Representation

    Most existing content representation language designs are anchored to a specific modality (such as speech). This paper describes the rationale behind the definition of MMIL, the interface language intended to be used as the exchange format between modules in the MIAMM project, in which we are building a jukebox incorporating multiple modalities such as speech, haptics, and graphics. This interface language has been conceived as a general format for representing multimodal content at both lower (e.g. linguistic analysis) and higher (e.g. communication within the dialogue manager) levels of representation.
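    The abstract does not give MMIL's actual schema, so the envelope below is purely illustrative: it only shows the idea of one exchange format that all modules share and that can carry content at different levels of representation.

```python
import json

# Illustrative message only; field names and values are assumptions, not MMIL.
message = {
    "level": "dialogue",              # or "linguistic" for lower-level content
    "modality": "speech",             # speech, haptics, or graphics in MIAMM
    "content": {                      # a higher-level dialogue act
        "act": "request",
        "predicate": "play",
        "arguments": {"track": "unspecified"},
    },
    "source": "speech_interpreter",   # sending module (invented name)
    "target": "dialogue_manager",     # receiving module (invented name)
}

print(json.dumps(message, indent=2))
```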