9 research outputs found

    A Framework for a Priori Evaluation of Multimodal User Interfaces Supporting Cooperation

    No full text
    In this short paper we present our latest research on a new framework being developed to help novice designers of highly interactive, cooperative, multimodal systems make expert decisions when choosing interaction modalities, depending on the type of activity and its cooperative nature. Our research is conducted within the field of maritime surveillance and next-generation distributed multimodal work support.

    Cooperative speed assistance: interaction and persuasion design

    Get PDF

    Multimodal information presentation for high-load human computer interaction

    Get PDF
    This dissertation addresses the question: given an application and an interaction context, how can interfaces present information to users in a way that improves the quality of interaction (e.g. better user performance, lower cognitive demand, and greater user satisfaction)? Information presentation is critical to the quality of interaction because it guides, constrains and even determines cognitive behavior. A good presentation is particularly desirable in high-load human-computer interaction, such as when users are under time pressure, under stress, or multi-tasking. Under a high mental workload, users may not have the spare cognitive capacity to cope with the unnecessary workload induced by a poor presentation. In this dissertation the major presentation factor of interest is modality. We conducted theoretical studies in the cognitive-psychology domain in order to understand the role of presentation modality in different stages of human information processing. Guided by this theory, we conducted a series of user studies investigating the effect of information presentation (modality and other factors) in several high-load task settings. The two task domains are crisis management and driving. Using crisis scenarios, we investigated how to present information to facilitate time-limited visual search and time-limited decision making. In the driving domain, we investigated how to present highly urgent danger warnings and how to present informative cues that help drivers manage their attention between multiple tasks. The outcomes of this dissertation have useful implications for the design of cognitively compatible user interfaces, and are not limited to high-load applications.
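
    Purely as a hedged illustration of the kind of policy this line of work studies (not code from the dissertation), modality choice under load can be pictured as a mapping from urgency and channel load to a presentation channel; the names `Context` and `choose_modality` below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Context:
    visual_load: float    # 0..1, e.g. demand of the driving scene
    auditory_load: float  # 0..1, e.g. ongoing conversation or alarms
    urgency: float        # 0..1, how time-critical the message is

def choose_modality(ctx: Context) -> str:
    """Pick a presentation modality that avoids the most loaded channel.

    Highly urgent messages are presented multimodally so they are hard to miss;
    otherwise the message is routed to the less busy sensory channel.
    """
    if ctx.urgency > 0.8:
        return "audio+visual"              # redundant presentation for danger warnings
    if ctx.visual_load > ctx.auditory_load:
        return "audio"                     # eyes are busy: speak or play an earcon
    return "visual"                        # ears are busy: show an icon or text

print(choose_modality(Context(visual_load=0.9, auditory_load=0.2, urgency=0.3)))  # -> audio
```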

    On the critical role of the sensorimotor loop on the design of interaction techniques and interactive devices

    Get PDF
    People interact with their environment thanks to their perceptual and motor skills. This is the way they both use the objects around them and perceive the world around them. Interactive systems are examples of such objects. Therefore, to design such objects, we must understand how people perceive and manipulate them. For example, haptics is related both to the human sense of touch and to what I call the motor ability. I address a number of research questions related to the design and implementation of haptic, gestural, and touch interfaces and present examples of contributions on these topics. More interestingly, perception, cognition, and action are not separate processes, but an integrated combination of them called the sensorimotor loop. Interactive systems follow the same overall scheme, with differences that create the complementarity of humans and machines. The interaction phenomenon is a set of connections between human sensorimotor loops and interactive systems' execution loops. It connects inputs with outputs, users with systems, and the physical world with cognition and computing, in what I call the Human-System loop. This model provides a complete overview of the interaction phenomenon. It helps to identify the limiting factors of interaction that we can address to improve the design of interaction techniques and interactive devices.
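
    As a loose, hypothetical sketch of this coupled-loop idea (not the author's model), the snippet below pairs a collapsed human perceive-decide-act loop with a system sense-compute-render loop, so each side's output becomes the other's input; all names are invented for illustration.

```python
def run_human_system_loop(steps: int = 3) -> None:
    """Toy illustration of coupled loops: the system's output is the human's
    perceptual input, and the human's motor action is the system's next input."""
    system_state = "cursor at (0, 0)"
    for step in range(steps):
        # Human side: perceive -> decide -> act (sensorimotor loop, collapsed to one line)
        action = f"move right (after seeing '{system_state}')"
        # System side: sense input -> compute -> render output (execution loop)
        system_state = f"cursor at ({step + 1}, 0)"
        print(f"step {step}: human does '{action}', system shows '{system_state}'")

run_human_system_loop()
```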

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Get PDF
    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode, where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it moves from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We build this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first study investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations, beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration stemming from our work.
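
    The guided-versus-recall distinction can be sketched roughly as follows; this is a hypothetical toy, not the thesis's experimental software. A remembered stroke executes directly (recall), while omitting the stroke reveals the guidance, optionally after a delay that penalizes the guided mode as discussed above.

```python
import time
from typing import Optional

# Hypothetical command set; in a marking menu each command maps to a stroke direction.
COMMANDS = {"N": "copy", "E": "paste", "S": "cut", "W": "undo"}

def invoke(stroke: Optional[str], guided_delay: float = 0.0) -> str:
    """Recall (expert) path: a remembered stroke executes immediately.
    Guided (novice) path: no stroke is given, so the menu is revealed after an
    optional delay that penalizes guidance and nudges users toward recall."""
    if stroke is not None:
        return COMMANDS.get(stroke, "unknown gesture")
    time.sleep(guided_delay)                      # forced wait before guidance appears
    print("menu:", ", ".join(f"{d} -> {c}" for d, c in COMMANDS.items()))
    return "awaiting selection"

print(invoke("E"))                                # expert/recall use
print(invoke(None, guided_delay=0.5))             # novice/guided use with a penalty
```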

    Designing Text Entry Methods for Non-Verbal Vocal Input

    Get PDF
    Department of Computer Graphics and Interaction (Katedra počítačové grafiky a interakce)

    Design, prototyping, and evaluation of a system for audio-tactile and spatial exploration of web pages by blind users

    Get PDF
    RÉSUMÉ : L’accĂšs Ă  l’information a drastiquement changĂ© depuis l’apparition des nouvelles technologies et du monde en ligne. Il est maintenant possible d’accĂ©der Ă  une multitude d’informations en tout temps et en tout lieu. Cette apparente facilitĂ© d’accĂšs Ă  l’information est cependant trĂšs loin de la rĂ©alitĂ© des personnes ayant un handicap pour qui l’arrivĂ©e des nouvelles technologies et du monde en ligne a crĂ©Ă© de nouvelles situations de handicap. La prĂ©sente thĂšse se concentre sur les situations de handicap rencontrĂ©es par les personnes non-voyantes, au cours de l’exploration de pages Web. Heureusement, des technologies adaptĂ©es sont actuellement disponibles pour les personnes non-voyantes qui dĂ©sirent accĂ©der au monde du Web : les lecteurs d’écran. Ceux-ci permettent une exploration linĂ©aire de la page Ă  l’aide d’un retour sonore gĂ©nĂ©rĂ© par une synthĂšse vocale. Cette adaptation amĂ©liore grandement l’accĂšs au monde du Web mais engendre son lot de frustrations. Celles-ci sont principalement liĂ©es au non-respect des lignes de conduite d’accessibilitĂ© dans plusieurs sites Web et Ă  la prĂ©sentation strictement linĂ©aire de l’information par les lecteurs d’écran. L’objectif de notre recherche est d’amĂ©liorer l’accĂšs au Web pour les utilisateurs non-voyants, en leur proposant une alternative Ă  l’exploration linĂ©aire : l’exploration spatiale de pages Web, c.-Ă -d. Ă  l’aide de retours tactiles et sonores. Par l’intermĂ©diaire de l’exploration spatiale, nous souhaitons passer outre certains problĂšmes d’accessibilitĂ© dans les pages Web, en donner une meilleure idĂ©e globale, et permettre de mieux associer les informations de cette page entre elles. Notre hypothĂšse principale est la suivante : L’exploration spatiale d’un site Web est plus efficace et plus efficiente que l’exploration linĂ©aire pour des utilisateurs non-voyants. L’exploration spatiale se fait par l’intermĂ©diaire du fureteur d’écran multimodal TactoWeb que nous avons dĂ©veloppĂ©. TactoWeb est contrĂŽlĂ© par le Tactograph, un appareil gĂ©nĂ©rant des sensations tactiles sous forme de vibrations et d’ondulations qui dĂ©pendent de l’emplacement du curseur sur la page Web. Les retours sonores sont une combinaison entre une synthĂšse vocale et un ensemble d’audicĂŽnes. Cette approche multimodale permet de recrĂ©er les liens entre les diffĂ©rents Ă©lĂ©ments d’une page Web qui pourraient disparaĂźtre au cours de la linĂ©arisation de l’information. Avant de concevoir notre systĂšme, nous avons identifiĂ© les points forts et les points faibles des lecteurs d’écrans. La thĂšse prĂ©sente ensuite le processus de conception de notre fureteur multimodal du point de vue logiciel, ainsi que l’amĂ©lioration du matĂ©riel utilisĂ©. Un tutoriel d’apprentissage a Ă©tĂ© conçu afin de guider les utilisateurs non-voyants dans l’exploration spatiale. Nous procĂ©dons ensuite Ă  l’évaluation de notre systĂšme Ă  l’aide d’une Ă©tude comparative. Cette Ă©tude prend la forme d’une expĂ©rimentation comparant JAWS, un lecteur d’écran permettant l’exploration linĂ©aire, avec TactoWeb, notre systĂšme d’exploration spatiale. Il est Ă©vident que les deux outils ont diffĂ©rents degrĂ©s de maturitĂ©. En effet, JAWS existe depuis presque 20 ans alors que TactoWeb n’en est qu’à sa premiĂšre version. Cette diffĂ©rence de maturitĂ© peut clairement influer sur les rĂ©sultats de notre Ă©valuation en faveur de l’exploration linĂ©aire avec JAWS. 
À cela s’ajoute le fait que nos participants avaient plusieurs annĂ©es d’expĂ©rience avec JAWS alors qu’ils faisaient de l’exploration spatiale avec TactoWeb pour la premiĂšre fois et pour une courte pĂ©riode. L’étude porte sur 14 participants non-voyants et met en relief les diffĂ©rents aspects de l’exploration spatiale qui permettent de rĂ©duire les situations de handicap rencontrĂ©es au cours de l’exploration de pages Web. Pour ce faire, nous avons Ă©tudiĂ© les taux de succĂšs, la difficultĂ© rencontrĂ©e et le temps d’exĂ©cution pour huit tĂąches effectuĂ©es selon les deux types d’exploration. Ces tĂąches sont divisĂ©es en deux types en fonction de leur but : rechercher une information dans un site Web et remplir un formulaire. De plus, chacun de ces sous-ensembles de tĂąches contient deux tĂąches Ă  effectuer dans des sites accessibles, et deux tĂąches Ă  effectuer dans des sites non-accessibles. Chaque participant a donc effectuĂ© quatre tĂąches avec chacun des deux outils (JAWS et TactoWeb), soit une tĂąche de chaque type, dans des sites accessibles ou non. Cela nous a permis d’étudier les deux types d’exploration dans quatre situations diffĂ©rentes et d’observer leurs avantages et inconvĂ©nients dans chacune de ces situations. De plus, nous avons observĂ© s’il y avait des diffĂ©rences dans l’utilisation des deux types d’exploration en fonction de la pĂ©riode d’apparition de la cĂ©citĂ© chez nos participants (cĂ©citĂ© de naissance ou tardive). L’étude a infirmĂ© notre hypothĂšse principale. Les deux types d’exploration sont aussi efficaces l’un que l’autre pour ce qui est de la capacitĂ© des participants Ă  rĂ©aliser les tĂąches et l’exploration linĂ©aire est plus efficiente avec un temps d’exĂ©cution plus court. Tel que mentionnĂ© ci-dessus, cette diffĂ©rence d’efficience peut s’expliquer par le fait que les participants avaient beaucoup d’expĂ©rience avec l’outil d’exploration linĂ©aire et aucune avec l’outil d’exploration spatiale, Ă  part la durĂ©e de notre tutoriel (moyenne de 38 min) et la durĂ©e de l’expĂ©rience elle-mĂȘme (entre deux et trois heures). De plus, la diffĂ©rence de maturitĂ© entre les deux outils est Ă  considĂ©rer. Notre hypothĂšse principale est infirmĂ©e en tenant compte de ces biais mais il pourrait en ĂȘtre autrement lorsque TactoWeb aura une plus grande maturitĂ© et lorsque les participants auront plus d’expĂ©rience avec notre outil. NĂ©anmoins, malgrĂ© les diffĂ©rences d’expĂ©rience des participants et de maturitĂ© entre les deux outils, nous obtenons une efficacitĂ© similaire, ce qui est un gros point positif pour l’exploration spatiale. Cette derniĂšre semble donc demander beaucoup moins de temps d’apprentissage que l’exploration linĂ©aire. De plus, l’exploration spatiale gĂ©nĂšre moins de difficultĂ© dans les tĂąches de remplissage de formulaire et a un temps d’exĂ©cution similaire Ă  l’exploration linĂ©aire dans les tĂąches de ce type, effectuĂ©es dans des sites non-accessibles. D’ailleurs, les Ă©carts de temps d’exĂ©cution entre les deux types d’exploration sont globalement plus rĂ©duits dans les tĂąches effectuĂ©es dans des sites non-accessibles, par rapport Ă  celles effectuĂ©es dans des sites accessibles. MĂȘme si l’exploration spatiale amĂ©liore l’accĂšs aux sites Web pour les personnes non-voyantes dans certaines situations de handicap (non-respect de l’ordre logique de lecture dans le code, non-respect de l’association entre un champ et son Ă©tiquette dans les formulaires), notre outil TactoWeb peut ĂȘtre amĂ©liorĂ©. 
L’exploration horizontale devrait ĂȘtre plus guidĂ©e et la qualitĂ© de la synthĂšse vocale grandement amĂ©liorĂ©e. Enfin, si on regarde globalement les rĂ©sultats de notre expĂ©rimentation, on se rend compte que l’exploration linĂ©aire semble plus pertinente lorsqu’il s’agit de naviguer entre les diffĂ©rentes pages d’un mĂȘme site Web, et que l’exploration spatiale semble plus adaptĂ©e lorsqu’il faut explorer dans une mĂȘme page Web. Les deux types d’explorations semblent donc complĂ©mentaires.----------ABSTRACT : The emergence of new technologies and the online world changed the way we access information. It is now possible to access any information, at any time, and in any place. This apparent ease of access to information is however far from the reality of people with disabilities. The emergence of new technologies and the online world have created new situations of handicap for them. This thesis focuses on situations of handicap faced by blind people when they browse Web pages. Fortunately, appropriate technologies are currently available for blind people wishing to access the World Wide Web: screen readers. These systems allow a linear exploration of a Web page, using audio feedback generated by speech synthesis. This adaptation greatly improves Web accessibility but also creates a lot of frustration. This frustration is mainly produced by non-compliance with accessibility guidelines in several Web sites, as well as the strictly linear presentation of information by screen readers. The goal of our research is to improve access to the Web for blind users, offering them an alternative to linear exploration: spatial exploration of Web pages, i.e. with tactile and audible feedbacks. Through spatial exploration, we want to override some accessibility issues in Web pages, give a better overall picture of the pages, and give a better connection between linked information in these pages. Our main hypothesis is: Spatial exploration of a Web site is more effective and more efficient than linear exploration for blind users. Spatial exploration is done through TactoWeb, a multimodal Web browser we developed. TactoWeb is controlled by the Tactograph, a tactile feedback device producing undulations and vibrations, depending on where the cursor is on the Web page. Audio feedback is a combination between speech synthesis and a set of earcons. This multimodal approach allows the user to recreate connections between the different elements composing a Web page that could have disappeared during the linearization of the information. Before designing our system, we identified the strengths and weaknesses of screen readers. The thesis presents the process of designing our multimodal Web browser, and improving the hardware we used. A training tutorial was designed to guide blind users in spatial exploration. Then, we evaluate our system using a comparative study. This study takes the form of an experiment comparing JAWS, a screen reader using linear exploration, with TactoWeb, our browser allowing spatial exploration. It is obvious the degree of maturity of each tool is different. JAWS actually exists since 1995 whereas we are still using the first version of TactoWeb. This difference of maturity could affect the results of our evaluation in favour of the linear exploration with JAWS. 
In addition, one must remind that our participants have several years of experience with JAWS whereas they will use space exploration with TactoWeb for the first time during the tutorial (average of 38 min) and the experiment (between two and three hours). The study involves 14 blind participants and highlights the different aspects of space exploration that reduce handicap situations encountered when browsing Web pages. To do so, we studied the success rate, the difficulty and the execution time for eight tasks performed with both types of exploration. These tasks are divided into two types according to their purpose: finding information in a Web site and filling out a form. Moreover, each of these subsets includes two tasks performed in accessible Web sites and two in non-accessible Web sites. Each type of task, in an accessible Web site or not, has been performed by each participant, with each tool (JAWS and TactoWeb). This allowed us to study the two types of exploration in four different situations, observing their advantages and disadvantages for each of these situations. Moreover, we observed whether there were differences between congenital and late blind, depending of the type of exploration used. The study invalidated our main hypothesis. The two types of exploration are in fact as effective as each other, but linear exploration is more efficient thanks to a shorter execution time. This efficiency difference could be explained by the fact that participants had much more experience with the linear exploration tool than with the spatial exploration tool, which is limited to the time they used our tutorial (average of 38 min) and to the duration of the experiment. Moreover, one must take into account the difference of maturity between the two tools. Our main hypothesis is not validated but it could be different when TactoWeb will have greater maturity and when participants will have more experience with our tool. However we obtain a similar effectiveness despite the difference of user experience among the participants and the difference of maturity between the two tools, and this is a major positive point for spatial exploration. Learning spatial exploration seems to take a lot less time than learning linear exploration. Moreover, spatial exploration generates less difficulty when filling out a form, as well as a similar execution time as the linear exploration for tasks of this type is made in non-accessible Web sites. Also, differences in execution time between the two types of exploration are generally smaller in the tasks performed in non-accessible Web sites than those made in accessible Web sites. Even if spatial exploration improves Web accessibility for blind people in some situations of handicap (non-compliance with the logical reading order in the code, no association between a field and its label in forms), the TactoWeb browser can be improved. The horizontal exploration could be more guided and the quality of speech synthesis greatly improved. Finally, if we look at the overall results of our experiment, we realize that linear exploration seems more relevant when it comes to navigating between the different pages of the same Web site, and spatial exploration seems more relevant when exploring in a single Web page. So the two types of exploration seem to be complementary
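
    To make the spatial-exploration idea concrete, here is a hypothetical sketch (not TactoWeb's actual code) of how a cursor position over a page element could be mapped to combined tactile, earcon, and speech feedback; the element kinds, feedback mappings, and function names are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Element:
    kind: str    # e.g. "link", "heading", "form_field"
    label: str   # text announced by speech synthesis
    x: int
    y: int
    w: int
    h: int       # bounding box of the element on the rendered page

# Hypothetical mappings from element kind to feedback; the real TactoWeb mappings may differ.
EARCONS = {"link": "short chirp", "heading": "low tone", "form_field": "double click"}
TACTILE = {"link": "vibration", "heading": "strong undulation", "form_field": "fine undulation"}

def feedback_at(cursor: Tuple[int, int], page: List[Element]) -> str:
    """Return the combined tactile + audio feedback for the element under the cursor."""
    cx, cy = cursor
    for el in page:
        if el.x <= cx < el.x + el.w and el.y <= cy < el.y + el.h:
            return (f"tactile: {TACTILE[el.kind]}, earcon: {EARCONS[el.kind]}, "
                    f"speech: '{el.label}'")
    return "tactile: none (blank area of the page)"

page = [Element("heading", "News", 0, 0, 200, 40), Element("link", "Contact", 0, 50, 80, 20)]
print(feedback_at((10, 60), page))   # cursor over the 'Contact' link
```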

    Design space for multimodal interaction

    No full text
    Abstract: One trend in Human-Computer Interaction is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of humans. Although the multiplicity of modalities opens a vast world of experience, our understanding of how they relate to each other is still unclear and the terminology is unstable. In this paper we present our definitions and existing frameworks useful for the design of multimodal interaction. Key words: Multimodal UI, I/O devices, Interaction Languages, Combination.
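
    As a loose illustration of the vocabulary in this abstract (devices, interaction languages, combination), and not the paper's own formalization, one can model a modality as a device-language pair and enumerate possible pairings and combinations; the inventories below are invented.

```python
from itertools import product

# Hypothetical inventories; the paper's actual taxonomy may differ.
devices = ["microphone", "touchscreen", "keyboard"]
languages = ["natural language", "direct manipulation", "command syntax"]

# A modality can be modeled as a (device, interaction language) pair.
modalities = list(product(devices, languages))

# Combination: ordered pairs of distinct modalities that could be used together,
# e.g. speech ("put that") combined with a touch gesture ("there").
combinations = [(a, b) for a, b in product(modalities, modalities) if a != b]

print(len(modalities), "modalities,", len(combinations), "ordered combinations")
print("example:", modalities[0], "+", modalities[4])
```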