
    COMM Notation for Specifying Collaborative and MultiModal Interactive Systems

    Multi-user multimodal interactive systems involve multiple users who can use multiple interaction modalities. Although multi-user multimodal systems are becoming more prevalent (especially multimodal systems involving multi-touch surfaces), their design is still ad hoc, without properly keeping track of the design process. Addressing this lack of design tools for multi-user multimodal systems, we present the COMM (Collaborative and MultiModal) notation and its on-line editor for specifying multi-user multimodal interactive systems. Extending the CTT notation, the salient features of the COMM notation include the concepts of interactive role and modal task, as well as a refinement of the temporal operators applied to tasks using the Allen relationships. A multimodal military command post for the control of unmanned aerial vehicles (UAV) by two operators is used to illustrate the discussion.
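
    COMM itself is a graphical notation, so the sketch below is only an illustrative data-structure rendering of its core concepts (interactive role, modal task, and Allen-relation temporal operators between tasks), loosely based on the two-operator UAV scenario mentioned in the abstract. The class and relation names are hypothetical and do not reproduce COMM's actual syntax.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class AllenRelation(Enum):
    # Allen's interval relations, used here as temporal operators between tasks
    BEFORE = "before"
    MEETS = "meets"
    OVERLAPS = "overlaps"
    DURING = "during"
    STARTS = "starts"
    FINISHES = "finishes"
    EQUALS = "equals"

@dataclass
class InteractiveRole:
    """A role played by one or more users of the system (COMM concept)."""
    name: str

@dataclass
class ModalTask:
    """An elementary task performed through a specific interaction modality."""
    name: str
    role: InteractiveRole
    modality: str  # e.g. "speech", "touch", "joystick"

@dataclass
class CompositeTask:
    """A task decomposed into subtasks related by an Allen temporal operator."""
    name: str
    operator: AllenRelation
    subtasks: List[object] = field(default_factory=list)

# Hypothetical fragment of the two-operator UAV command-post scenario
commander = InteractiveRole("mission commander")
pilot = InteractiveRole("UAV operator")

designate = ModalTask("designate target on map", commander, "touch")
confirm = ModalTask("confirm target by voice", commander, "speech")
steer = ModalTask("steer UAV towards target", pilot, "joystick")

assign_target = CompositeTask("assign target", AllenRelation.MEETS, [designate, confirm])
mission_step = CompositeTask("engage target", AllenRelation.BEFORE, [assign_target, steer])
```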

    e-COMM, an Editor for Specifying Multimodal and Multi-user Interaction

    This article presents the e-COMM editor for specifying multimodal and multi-user interactive systems with the COMM notation. Most notations dedicated to the design of groupware offer limited means for describing multimodal interaction. The COMM notation fills this gap by introducing new concepts (modal task and interactive role) to link these two aspects: multimodal and multi-user interaction. The e-COMM editor pursues two goals: to provide an easily accessible tool and to help the designer concentrate on the editing task by relying as much as possible on direct manipulation.

    Task Description with a Multi-user and Multimodal Interactive System: A Comparative Study of Notations

    Multi-user multimodal interactive systems involve multiple users who can use multiple interaction modalities. A system is multimodal when a user can interact with it through several interaction modalities (input or output), whether in parallel or not. Multi-user multimodal systems are becoming more prevalent, especially systems built around large shared multi-touch surfaces or video game consoles such as the Wii or Xbox. In this article we address the description of user tasks with such interactive systems. We review existing notations for describing single-user and multi-user tasks, with particular attention to task-tree-based notations. For elementary tasks (i.e. single-modal or multimodal user actions), we also consider notations that describe multimodal interaction. The contribution is a comparison of existing notations against an analysis grid that groups concepts general to any notation together with concepts specific to human-computer interaction, to multi-user interaction and to multimodal interaction.

    Enhanced Task Modelling for Systematic Identification and Explicit Representation of Human Errors

    Task models, produced from task analysis, are a very important element of user-centred design (UCD) approaches, as they support the description of users' goals and activities, allowing human factors specialists to ensure and assess the effectiveness of interactive applications. As user errors are not part of a user goal, they are usually omitted from task descriptions. However, in the field of Human Reliability Assessment, task descriptions (including task models) are central artefacts for the analysis of human errors. Several methods (such as HET, CREAM and HERT) require task models in order to systematically analyze all the potential errors and deviations that may occur. During this systematic analysis, however, potential human errors are gathered and recorded separately and not connected to the task models. Such a lack of integration raises issues of completeness (i.e. ensuring that all potential human errors have been identified) and of combined-error identification (i.e. identifying deviations resulting from a combination of errors). We argue that representing human errors explicitly and systematically within task models contributes to the design and evaluation of error-tolerant interactive systems. However, as demonstrated in the paper, existing task modelling notations, even those used in the methods mentioned above, do not have sufficient expressive power to allow systematic and precise description of potential human errors. Based on the analysis of existing human error classifications, we propose several extensions to existing task modelling techniques to represent explicitly all the types of human error and to support their systematic task-based identification. These extensions are integrated within the tool-supported notation called HAMSTERS and are illustrated on a case study from the avionics domain.
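
    HAMSTERS is a graphical, tool-supported notation and the concrete extensions are defined in the paper itself; the sketch below is only a rough, hypothetical illustration of the underlying idea: attaching potential human errors (here a deliberately coarse slip/lapse/mistake classification) directly to task-model nodes so that the completeness of error identification can be checked systematically. All names are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ErrorType(Enum):
    # Coarse classification; real human error classifications are richer
    SLIP = "slip"          # right intention, wrong execution
    LAPSE = "lapse"        # step omitted or forgotten
    MISTAKE = "mistake"    # wrong intention or wrong plan

@dataclass
class PotentialError:
    error_type: ErrorType
    description: str

@dataclass
class Task:
    """A task-model node carrying the human errors identified for it."""
    name: str
    potential_errors: List[PotentialError] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

def uncovered_tasks(task: Task) -> List[str]:
    """Completeness check: leaf tasks for which no potential error was recorded."""
    if not task.subtasks:
        return [] if task.potential_errors else [task.name]
    names: List[str] = []
    for sub in task.subtasks:
        names.extend(uncovered_tasks(sub))
    return names

# Hypothetical avionics-flavoured example
set_altitude = Task("enter target altitude",
                    [PotentialError(ErrorType.SLIP, "transposed digits")])
confirm = Task("confirm clearance readback")  # no error recorded yet
procedure = Task("altitude change", subtasks=[set_altitude, confirm])

print(uncovered_tasks(procedure))  # -> ['confirm clearance readback']
```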

    Virtual Dance and Motion-Capture

    The first part of this article presents a general view of the various ways in which virtual dance can be understood. It then appraises the uses of the term "virtual" in previous studies of digital dance. A more in-depth view of virtual dance as it relates to motion-capture is offered, and key issues are discussed regarding computer animation, digital imaging, motion signature, virtual reality and interactivity. The paper proposes that some forms of virtual dance be defined in relation to both digital technologies and contemporary theories of virtuality.

    A Model-Based Approach for Gesture Interfaces

    The description of a gesture requires temporal analysis of the values generated by input sensors, and it does not fit well with the observer pattern traditionally used by frameworks to handle the user's input. The current solution is to embed particular gesture-based interactions into frameworks, notifying only when a gesture has been detected completely. This approach suffers from a lack of flexibility unless the programmer performs explicit temporal analysis of raw sensor data. This thesis proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by composing sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity in event notification. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed supporting multi-touch gestures on iOS and full-body gestures with Microsoft Kinect. In addition to the solution for the event-granularity problem, this thesis discusses how to separate the definition of the gesture from the user interface behaviour using the proposed compositional approach. The gesture description meta-model has been integrated into MARIA, a model-based user interface description language, extending it with the description of full-body gesture interfaces.
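
    The thesis validates the meta-model through its own proof-of-concept library, whose API is not reproduced here. Purely as an illustrative sketch with hypothetical names, the compositional idea (basic traits combined by operators, with behaviour attachable to any sub-gesture as well as to the whole gesture) might look like this:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Gesture:
    """A node of a compositional gesture definition.

    A basic trait notifies a single feature change (e.g. a touch moved);
    composite gestures combine sub-gestures with an operator. Handlers may be
    attached to any node, so behaviour can react to sub-gestures as well as to
    the complete gesture (the event-granularity idea from the abstract).
    """
    name: str
    operator: Optional[str] = None          # e.g. "sequence", "parallel", "iterative"
    children: List["Gesture"] = field(default_factory=list)
    handlers: List[Callable[[str], None]] = field(default_factory=list)

    def on_recognised(self, handler: Callable[[str], None]) -> "Gesture":
        self.handlers.append(handler)
        return self

    def fire(self) -> None:
        """Simulate recognition of this node (a real engine would drive a Petri net)."""
        for handler in self.handlers:
            handler(self.name)

# Hypothetical "pinch" built from two basic traits composed in parallel
finger1_move = Gesture("finger 1 moves").on_recognised(lambda n: print("feedback on", n))
finger2_move = Gesture("finger 2 moves")
pinch = Gesture("pinch", operator="parallel",
                children=[finger1_move, finger2_move]).on_recognised(
                    lambda n: print("zoom on", n))

finger1_move.fire()   # intermediate feedback while the gesture is still in progress
pinch.fire()          # behaviour bound to the whole gesture
```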

    Designing usable mobile interfaces for spatial data

    This dissertation deals mainly with the discipline of Human-Computer Interaction (HCI), with particular attention to the role it plays in the domain of modern mobile devices. Mobile devices today offer crucial support for a plethora of daily activities for nearly everyone. Ranging from checking business mail while traveling, to accessing social networks in a mall, to carrying out business transactions while out of the office, to using all kinds of online public services, mobile devices play the important role of connecting people who are physically apart. Modern mobile interfaces are therefore expected to improve the user's interaction experience with the surrounding environment and offer different adaptive views of the real world. The goal of this thesis is to enhance the usability of mobile interfaces for spatial data. Spatial data are data in which the spatial component plays an important role in clarifying the meaning of the data themselves. Nowadays, this kind of data is widespread in mobile applications: it appears in games, map applications, mobile community applications and office automation. In order to enhance the usability of spatial data interfaces, my research investigates two major issues: 1) enhancing the visualization of spatial data on small screens, and 2) enhancing text-input methods. I selected the Design Science Research approach to investigate these research questions. The idea underlying this approach is "you build an artifact to learn from it"; in other words, researchers clarify what is new in their design. The new knowledge derived from the artifact is presented in the form of interaction design patterns in order to support developers in dealing with issues of mobile interfaces. The thesis is organized as follows. Initially I present the broader context, the research questions and the approaches used to investigate them. The results are then split into two main parts. In the first part I present the visualization technique called Framy, designed to support users in visualizing geographical data in mobile map applications. I also introduce a multimodal extension of Framy obtained by adding sounds and vibrations, and then present the process that turned the multimodal interface into a means of allowing visually impaired users to interact with Framy. Some projects involving the design principles of Framy are shown in order to demonstrate the adaptability of the technique to different contexts. The second part concerns text-input methods. In particular, I focus on work done in the area of virtual keyboards for mobile devices: a new kind of virtual keyboard called TaS provides users with an input system more efficient and effective than the traditional QWERTY keyboard. Finally, in the last chapter, the knowledge acquired is formalized in the form of interaction design patterns. [edited by author]

    Cognitive architecture of multimodal multidimensional dialogue management

    Numerous studies show that participants in real-life dialogues often engage in rather dynamic, non-sequential interactions. This challenges dialogue system designs based on a reactive-interlocutor paradigm and calls for dialogue systems that can be characterised as proactive learners, accomplished multitasking planners and adaptive decision makers. Addressing this call, the thesis brings an innovative integration of cognitive models into human-computer dialogue systems. This work utilises recent advances in Instance-Based Learning of Theory of Mind skills as well as established Cognitive Task Analysis and ACT-R models. Cognitive Task Agents, producing detailed simulations of human learning, prediction, adaptation and decision making, are integrated into the multi-agent Dialogue Manager. The manager operates on a multidimensional information state enriched with representations based on domain- and modality-specific semantics and performs context-driven interpretation and generation of dialogue acts. A flexible technical framework for modular, distributed dialogue system integration is designed and tested. The implemented multitasking Interactive Cognitive Tutor is evaluated as showing human-like proactive and adaptive behaviour in setting goals, choosing appropriate strategies and monitoring processes across contexts, and in encouraging the user to exhibit similar metacognitive competences.
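
    As a purely illustrative sketch (hypothetical names, not the thesis framework), the arrangement described above, in which cognitive task agents propose acts to a multi-agent dialogue manager operating on a multidimensional information state, might be outlined as follows:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InformationState:
    """Multidimensional information state: one slot of context per dimension."""
    dimensions: Dict[str, dict] = field(default_factory=dict)

    def update(self, dimension: str, facts: dict) -> None:
        self.dimensions.setdefault(dimension, {}).update(facts)

class CognitiveTaskAgent:
    """Stand-in for a cognitive agent predicting the user's next step."""
    def __init__(self, name: str):
        self.name = name

    def propose(self, state: InformationState) -> dict:
        # A real agent would run instance-based learning / ACT-R simulation here.
        return {"act": "hint", "dimension": "task", "agent": self.name}

class DialogueManager:
    """Multi-agent manager: collects proposals and picks one dialogue act."""
    def __init__(self, agents: List[CognitiveTaskAgent]):
        self.agents = agents
        self.state = InformationState()

    def handle_user_act(self, user_act: dict) -> dict:
        self.state.update("task", {"last_user_act": user_act})
        proposals = [agent.propose(self.state) for agent in self.agents]
        return proposals[0] if proposals else {"act": "wait"}

manager = DialogueManager([CognitiveTaskAgent("tutor-planner")])
print(manager.handle_user_act({"act": "answer", "content": "42"}))
```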

    Chatbots for Modelling, Modelling of Chatbots

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 28-03-202