
    Emotive computing may have a role in telecare

    This brief paper sets out arguments for the introduction of new technologies into telecare and lifestyle monitoring that can detect and monitor the emotive state of patients. The significantly increased use of computers by older people will enable elements of emotive computing to be integrated with features such as keyboards and webcams, to provide additional information on emotional state. When this is combined with other data, there will be significant opportunities for system enhancement and for the identification of changes in user status, and hence of need. The ubiquity of home computing makes the keyboard a very attractive, economical and non-intrusive means of data collection and analysis.
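    As an illustrative sketch only (not from the paper), keyboard-based data collection of the kind described above could start from simple keystroke-timing features; the class, feature names and thresholds below are assumptions.

```python
import time
from statistics import mean, stdev

class KeystrokeMonitor:
    """Illustrative sketch: collect inter-key intervals and summarise them
    into simple timing features that a telecare system might log."""

    def __init__(self):
        self.last_press = None
        self.intervals = []  # seconds between consecutive key presses

    def on_key_press(self):
        now = time.monotonic()
        if self.last_press is not None:
            self.intervals.append(now - self.last_press)
        self.last_press = now

    def features(self):
        # Hypothetical summary features; a real system would compare them
        # against the user's own baseline rather than against fixed values.
        if len(self.intervals) < 2:
            return None
        return {
            "mean_interval_s": mean(self.intervals),
            "interval_stdev_s": stdev(self.intervals),
            "keys_logged": len(self.intervals) + 1,
        }
```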

    Description des tâches avec un système interactif multiutilisateur et multimodal : Etude comparative de notations

    Multi-user multimodal interactive systems involve multiple users who can use multiple interaction modalities. Multi-user multimodal systems are becoming more prevalent, especially systems based on large shared multi-touch surfaces or video game consoles such as the Wii or Xbox. In this article we address the description of tasks with such interactive systems. We review existing notations for the description of tasks with a multi-user multimodal interactive system and focus particularly on tree-based notations. For elementary tasks (e.g. actions), we also consider the notations that describe multimodal interaction. The contribution is then a comparison of existing notations based on a set of organized concepts. While some concepts are general to any notation, other concepts are specific to human-computer interaction, to multi-user interaction and, finally, to multimodal interaction.
    Many interactive systems, professional as well as consumer-oriented, jointly support multi-user and multimodal interaction. An interactive system is multimodal when a user can interact with the system through several interaction modalities (as input or output), whether in parallel or not. We observe that more and more multi-user systems, or groupware, are multimodal, such as those built around an interactive surface or game consoles such as the Wii or Xbox. In this article we address the description of user tasks with such multi-user, multimodal interactive systems. Specifically, we give an overview of existing notations for describing single-user or multi-user tasks, with particular attention to task-tree-based notations. We also focus on elementary tasks, i.e. single-modal or multimodal user actions, by considering notations for describing multimodal interaction. To this end, we propose a comparative study of a set of description notations against an analysis grid that groups concepts general to interaction with concepts specific to multi-user and multimodal interaction.
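    As a hedged illustration of the kind of tree-based task notation surveyed here (not one of the notations actually compared in the article), a minimal task tree with temporal operators and per-user, per-modality annotations might look like the sketch below; all class names and operators are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical temporal operators, loosely inspired by task-tree notations.
ENABLING = ">>"        # left subtask must finish before the right one starts
INTERLEAVING = "|||"   # subtasks can be performed in any order

@dataclass
class Task:
    name: str
    user: Optional[str] = None        # which user performs the task (multi-user)
    modality: Optional[str] = None    # e.g. "speech", "touch" (multimodal)
    operator: Optional[str] = None    # temporal operator combining the children
    children: List["Task"] = field(default_factory=list)

# Two users move a shared object on a multi-touch surface: one points by touch,
# the other names the object by speech, in any order, then the first confirms.
move_object = Task(
    "Move shared object", operator=ENABLING, children=[
        Task("Select object", operator=INTERLEAVING, children=[
            Task("Point at object", user="user1", modality="touch"),
            Task("Name the object", user="user2", modality="speech"),
        ]),
        Task("Confirm placement", user="user1", modality="touch"),
    ])
```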

    Bridging the Gap between a Behavioural Formal Description Technique and User Interface description language: Enhancing ICO with a Graphical User Interface markup language

    In recent years, User Interface Description Languages (UIDLs) have emerged as a suitable solution for developing interactive systems. In order to implement reliable and efficient applications, we propose to employ a formal description technique called ICO (Interactive Cooperative Object), which has been developed to cope with the complex behaviours of interactive systems, including event-based and multimodal interactions. So far, ICO is able to describe most parts of an interactive system, from functional core concerns to fine-grain interaction techniques, but, even if it addresses parts of the rendering, it still lacks the means to describe the effective rendering of such an interactive system. This paper presents a solution to overcome this gap using markup languages. A first technique is based on the Java technology JavaFX and a second technique is based on the emerging UsiXML language for describing user interface components for multi-target platforms. The proposed approach offers a bridge between markup-language-based descriptions of user interface components and a robust technique for describing behaviour using ICO modelling. Furthermore, this paper highlights how it is possible to take advantage of both behavioural and markup-language description techniques to propose a new model-based approach for prototyping interactive systems. The proposed approach is fully illustrated by a case study using an interactive application embedded in interactive aircraft cockpits.
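    A minimal sketch, assuming a hypothetical event bridge: the general idea of pairing a markup description of widgets with a separate behavioural model could be wired roughly as below. The XML vocabulary and class names are illustrative only and are not the actual ICO, JavaFX or UsiXML APIs.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup for two cockpit widgets (not actual UsiXML).
UI_MARKUP = """
<ui>
  <button id="engage" label="Engage autopilot"/>
  <label id="status" text="Autopilot off"/>
</ui>
"""

class BehaviouralModel:
    """Stands in for a formal behaviour description (ICO-like): it reacts to
    user events and asks the rendering layer to update the markup-defined UI."""

    def __init__(self, renderer):
        self.renderer = renderer
        self.engaged = False

    def handle_event(self, widget_id, event):
        if widget_id == "engage" and event == "click":
            self.engaged = not self.engaged
            text = "Autopilot on" if self.engaged else "Autopilot off"
            self.renderer.set_property("status", "text", text)

class Renderer:
    """Builds its widget set from the markup and applies rendering updates."""

    def __init__(self, markup):
        self.widgets = {w.get("id"): dict(w.attrib)
                        for w in ET.fromstring(markup)}

    def set_property(self, widget_id, prop, value):
        self.widgets[widget_id][prop] = value

renderer = Renderer(UI_MARKUP)
model = BehaviouralModel(renderer)
model.handle_event("engage", "click")
print(renderer.widgets["status"]["text"])  # -> "Autopilot on"
```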

    Fusion multimodale pour les systèmes d'interaction

    Researchers in computer science and computer engineering devote a significant part of their efforts to communication and interaction between humans and machines. Indeed, with the advent of real-time multimodal and multimedia processing, the computer is no longer considered merely a computing tool, but a machine for processing, communication, collection and control, a machine that accompanies, assists and supports many activities of daily life. A multimodal interface allows more flexible and natural interaction between human and machine, increasing the capacity of multimodal systems to better match human needs. In this type of interaction, a fusion engine is a fundamental component that interprets several communication sources, such as voice commands, gestures, a stylus, etc., making human-machine interaction richer and more effective. Our research project will provide a better understanding of fusion and multimodal interaction through the construction of a fusion engine using Semantic Web technologies. The objective is to develop an expert system for multimodal human-machine interaction that will lead to the design of a monitoring tool for elderly people, in order to provide them with assistance and self-confidence, both at home and outside.
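    As an illustrative sketch only (the abstract names Semantic Web technologies, which are not used here), a time-window-based fusion step that pairs a speech command with a pointing gesture could look like this; the event fields and the window size are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str     # e.g. "speech" or "gesture"
    content: str      # recognised command or target
    timestamp: float  # seconds

FUSION_WINDOW_S = 1.5  # assumed maximum gap between events to fuse them

def fuse(events):
    """Pair each speech command with the closest-in-time pointing gesture."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    fused = []
    for s in speech:
        candidates = [g for g in gestures
                      if abs(g.timestamp - s.timestamp) <= FUSION_WINDOW_S]
        if candidates:
            g = min(candidates, key=lambda g: abs(g.timestamp - s.timestamp))
            fused.append((s.content, g.content))
    return fused

events = [
    ModalityEvent("speech", "turn on the light", 10.2),
    ModalityEvent("gesture", "points at living-room lamp", 10.8),
]
print(fuse(events))  # -> [('turn on the light', 'points at living-room lamp')]
```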

    Muecas: a multi-sensor robotic head for affective human robot interaction and imitation

    This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the RoboComp robotics framework, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.
    Work funded by the Ministerio de Ciencia e Innovación (project TIN2012-38079-C03-1) and the Gobierno de Extremadura (project GR10144).
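    A minimal sketch, assuming hypothetical joint names and gains: driving a FACS-controllable head amounts to mapping Action Unit intensities to joint targets, roughly as below. This is not the actual Muecas or RoboComp interface.

```python
# Hypothetical mapping from FACS Action Units to head joints and gains
# (degrees of joint travel per unit of AU intensity in [0, 1]).
AU_TO_JOINTS = {
    "AU1":  [("inner_brow_left", 12.0), ("inner_brow_right", 12.0)],    # inner brow raiser
    "AU12": [("mouth_corner_left", 8.0), ("mouth_corner_right", 8.0)],  # lip corner puller
}

def au_frame_to_joint_targets(au_intensities):
    """Convert one frame of AU intensities into joint angle targets (degrees)."""
    targets = {}
    for au, intensity in au_intensities.items():
        for joint, gain in AU_TO_JOINTS.get(au, []):
            targets[joint] = targets.get(joint, 0.0) + gain * intensity
    return targets

# A mild smile with slightly raised inner brows.
print(au_frame_to_joint_targets({"AU1": 0.3, "AU12": 0.7}))
```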

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Consider a variant of Tetris played on a board of width $w$ and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of $O(n)$ rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction to the Multiphase problem [Pătraşcu, 2010] that on a board of width $w = \Theta(n)$, if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time $O(n^{1/2-\epsilon})$ simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in $O(n^{1/2}\log^{3/2} n)$ time on boards of width $n^{O(1)}$, matching the lower bound up to an $n^{o(1)}$ factor.
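    For intuition only, a naive skyline-based baseline for the two operations described above (query where a rectangle lands lowest, then insert it) runs in time linear in the board width per operation; the sublinear data structure in the paper is far more involved. This sketch is not from the paper.

```python
class GreedyTetrisBoard:
    """Naive baseline for the data structure problem: maintain column heights,
    answer greedy drop queries, and apply drops."""

    def __init__(self, width):
        self.heights = [0] * width  # current skyline of the board

    def query_greedy_drop(self, rect_width):
        """Return the leftmost column where the rectangle ends up lowest."""
        best_x, best_top = 0, None
        for x in range(len(self.heights) - rect_width + 1):
            top = max(self.heights[x:x + rect_width])
            if best_top is None or top < best_top:
                best_x, best_top = x, top
        return best_x

    def drop(self, x, rect_width, rect_height):
        """Insert a rectangle dropped at column x; return the new maximum height."""
        rest = max(self.heights[x:x + rect_width])
        for c in range(x, x + rect_width):
            self.heights[c] = rest + rect_height
        return max(self.heights)

board = GreedyTetrisBoard(width=6)
x = board.query_greedy_drop(rect_width=3)
print(board.drop(x, rect_width=3, rect_height=2))  # -> 2
```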

    Piavca: a framework for heterogeneous interactions with virtual characters

    This paper presents a virtual character animation system for real-time multimodal interaction in an immersive virtual reality setting. Human-to-human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gestures and gaze. This multimodality means that, in order to simulate social interaction, our characters must be able to handle many different types of interaction, and many different types of animation, simultaneously. Our system is based on a model of animation that represents different types of animations as instantiations of an abstract function representation. This makes it easy to combine different types of animation. It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations. The model has been implemented in Piavca, an open-source character animation system.
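    A minimal sketch, assuming hypothetical function and joint names (this is not the Piavca API): when every animation is treated as a function of time, composition operators such as blending and sequencing are easy to define.

```python
import math

# An "animation" is any function from time (seconds) to a pose value.
def nod(t):
    return {"head_pitch": 15.0 * math.sin(2.0 * math.pi * t)}

def wave(t):
    return {"arm_swing": 30.0 * math.sin(4.0 * math.pi * t)}

def blend(anim_a, anim_b, weight=0.5):
    """Combine two animations by a weighted sum of the joints they drive."""
    def combined(t):
        pose = {}
        for anim, w in ((anim_a, weight), (anim_b, 1.0 - weight)):
            for joint, value in anim(t).items():
                pose[joint] = pose.get(joint, 0.0) + w * value
        return pose
    return combined

def sequence(anim_a, anim_b, switch_time):
    """Play one animation, then the other, as a single new animation."""
    return lambda t: anim_a(t) if t < switch_time else anim_b(t - switch_time)

greeting = sequence(blend(nod, wave, 0.7), wave, switch_time=2.0)
print(greeting(0.25))  # pose produced by the blended phase
```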

    Vibrotactile Jacket For Perception Enhancement

    By nature, human beings perceive their environment mostly through sight and hearing. Vibrotactile feedback has shown satisfying results in the domain of simple multimodal interaction, for immersion and navigation purposes. The scope of this research is to evaluate the added value of tactile feedback on the user's upper body for conveying 3D directional information. This paper presents the development of a vibrotactile jacket and its software interface. A concept-validation test bench has been set up to measure the effect of our vibrotactile device on the response time to localize a target in a virtual environment, compared with visual and auditory cues. Early results are encouraging, showing clear benefits of our vibrotactile system as the complexity of the multimodal environment increases.
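    As a hedged sketch of the kind of software interface such a jacket needs (the actuator layout and count are assumptions, not the paper's hardware), a 3D target direction can be mapped to the actuator whose placement best matches its horizontal bearing:

```python
import math

# Hypothetical ring of 8 actuators around the torso, identified by the
# horizontal bearing (degrees) of their position on the jacket.
ACTUATOR_BEARINGS = {i: i * 45.0 for i in range(8)}  # 0 = front, 90 = right, 180 = rear

def direction_to_actuator(dx, dy):
    """Pick the actuator closest to the bearing of a 3D target direction
    (dx forward, dy to the right); the vertical component is ignored here."""
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    def angular_gap(a):
        diff = abs(ACTUATOR_BEARINGS[a] - bearing) % 360.0
        return min(diff, 360.0 - diff)
    return min(ACTUATOR_BEARINGS, key=angular_gap)

# Target behind and slightly to the left of the user.
print(direction_to_actuator(dx=-1.0, dy=-0.4))  # -> 4 (the rear actuator)
```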