5 research outputs found

    Investigating Phicon feedback in non-visual tangible user interfaces

    No full text
    We investigated ways that users could interact with Phicons in non-visual tabletop tangible user interfaces (TUIs). We carried out a brainstorming and rapid prototyping session with a blind usability expert, using two different non-visual TUI scenarios to quickly explore the design space. From this, we derived a basic set of guidelines and interactions that are common to both scenarios, and which we believe apply to most non-visual tabletop TUI applications. Future work will focus on validating our findings in a fully functioning system.

    Practical, appropriate, empirically-validated guidelines for designing educational games

    Get PDF
    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for doing so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. We intend this paper both to focus educational games designers on the features of games that are genuinely useful for education and to introduce a successful form of teaching that this audience may not yet be familiar with.

    Improving command selection in smart environments by exploiting spatial constancy

    Get PDF
    With a steadily increasing number of digital devices, our environments are becoming smarter: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, no interaction paradigm for smart environments supports these kinds of interactions. In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. 
An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction addresses many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance among average users.
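The abstract does not detail how a pointing gesture is resolved to a proxy object, but the core idea (selecting the real-world object whose direction best matches the pointing ray) can be sketched with a simple angular-distance comparison. This is a minimal illustration, not the dissertation's implementation; it assumes tracked 3D positions for the hand and the proxy objects, and all names here are hypothetical:

```python
import math

def select_proxy(user_pos, pointing_dir, proxies):
    """Return the proxy object whose direction from the user deviates
    least from the full-arm pointing ray (smallest angle wins).

    user_pos:     (x, y, z) tracked position of the pointing hand
    pointing_dir: (x, y, z) direction vector of the pointing gesture
    proxies:      dict mapping object name -> (x, y, z) room position
    """
    def norm(v):
        m = math.sqrt(sum(c * c for c in v)) or 1.0
        return tuple(c / m for c in v)

    ray = norm(pointing_dir)
    best_name, best_angle = None, math.inf
    for name, pos in proxies.items():
        to_obj = norm(tuple(p - u for p, u in zip(pos, user_pos)))
        # angle between the pointing ray and the direction to the object
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(ray, to_obj))))
        angle = math.acos(dot)
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# hypothetical room layout: a lamp overhead, a TV on a side wall
proxies = {"ceiling_lamp": (0.0, 0.0, 2.5), "tv": (3.0, 0.0, 1.2)}
selected = select_proxy((0.0, 0.0, 1.5), (0.0, 0.0, 1.0), proxies)
print(selected)  # pointing straight up selects the ceiling lamp
```

A real system would additionally need a rejection threshold (no selection when the closest angle is still large) and some tolerance model for pointing inaccuracy, which the studies in the dissertation measure.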

    Tabletop tangible maps and diagrams for visually impaired users

    Get PDF
    Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are inaccessible to visually impaired people because they cannot be explored using the sense of touch. 
The gap between sighted and visually impaired people's access to graphical representations is constantly growing due to the increasing development and availability of online and dynamic representations that not only give sighted people the opportunity to access large amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the intervention of tactile graphics specialists and result in non-interactive tactile representations. However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of interactive and updatable representations. A number of research prototypes have already been developed. Some rely on digital representation only, and although they have the great advantage of being instantly updatable, they provide very limited tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs). 
In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical interactive maps and diagrams that support two-handed exploration while being updatable and affordable. To do so, we build on previous research on Tangible User Interfaces (TUIs), and particularly on (actuated) tabletop TUIs, two fields of research that have surprisingly received very little attention concerning visually impaired users. Based on the design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and zooming. From a theoretical perspective, we contribute to research on accessible graphical representations by highlighting how research on maps can feed research on diagrams and vice versa. We also propose a classification and comparison of existing prototypes to deliver a structured overview of current research.
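The abstract mentions panning and zooming tangible maps but does not specify how BotMap positions its physical landmarks. As a rough, generic illustration (an assumption, not the thesis's actual design), a viewport transform deciding where on the table a landmark's tangible should sit, given the current pan offset and zoom factor, might look like:

```python
def geo_to_table(x_map, y_map, pan, zoom, table_size=(1.0, 1.0)):
    """Map a point in map coordinates to a physical tabletop position.

    pan:        (x, y) offset of the viewport origin, in map units
    zoom:       scale factor from map units to table units
    table_size: usable tabletop area, in table units (e.g. metres)

    Returns the (x, y) table position, or None when the point falls
    outside the table, i.e. the landmark's tangible should not be
    placed (or an actuated robot should move it off the display area).
    """
    x = (x_map - pan[0]) * zoom
    y = (y_map - pan[1]) * zoom
    w, h = table_size
    if 0.0 <= x <= w and 0.0 <= y <= h:
        return (x, y)
    return None

# panning shifts which landmarks are visible without moving the user
on_table = geo_to_table(2.35, 48.85, pan=(2.0, 48.5), zoom=1.0)
off_table = geo_to_table(5.00, 48.85, pan=(2.0, 48.5), zoom=1.0)
```

For a non-visual interface, the interesting design problem (addressed in the thesis's studies) is how users keep track of landmarks that leave the viewport; this sketch only covers the coordinate bookkeeping.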