
    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities that support designers in putting together, that is, phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require menus and tool palettes, techniques originally designed for the mouse rather than for pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We also explore the use of wearables to identify which user, and which hand, is touching, in order to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based on human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
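
    As a rough illustration of what phrasing a modal operation without menus can look like in code, the sketch below implements a spring-loaded mode held by a non-dominant-hand touch while the dominant hand draws with the pen. The class and method names are assumptions for illustration, not the environment described in this abstract.

        # Illustrative sketch: a spring-loaded ("phrased") mode controller.
        # The non-dominant hand holds a mode by touching; the dominant hand's
        # pen strokes are interpreted under that mode; lifting the finger
        # restores the default mode, so no menu or tool palette is needed.
        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class PhrasedModeController:
            default_mode: str = "ink"
            _held_mode: Optional[str] = field(default=None, init=False)

            def nondominant_touch_down(self, mode: str) -> None:
                # Touching an on-canvas handle enters that mode for as long
                # as the finger stays down.
                self._held_mode = mode

            def nondominant_touch_up(self) -> None:
                # Lifting the finger ends the phrase.
                self._held_mode = None

            def current_mode(self) -> str:
                return self._held_mode or self.default_mode

            def pen_stroke(self, points: list) -> str:
                # The dominant hand's stroke is interpreted under the held mode.
                return f"{self.current_mode()}: {len(points)} pts"

        ctl = PhrasedModeController()
        ctl.nondominant_touch_down("select")         # left hand holds "select"
        print(ctl.pen_stroke([(0, 0), (10, 5)]))     # select: 2 pts
        ctl.nondominant_touch_up()
        print(ctl.pen_stroke([(0, 0)]))              # ink: 1 pts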

    Partially-indirect Bimanual Input with Gaze, Pen, and Touch for Pan, Zoom, and Ink Interaction

    Bimanual pen and touch UIs are mainly based on the direct manipulation paradigm. Alternatively, we propose partially-indirect bimanual input, where direct pen input is used with the dominant hand and indirect touch input with the non-dominant hand. As direct and indirect inputs do not overlap, users can interact in the same space without interference. We investigate two indirect-touch techniques combined with direct pen input: the first redirects touches to the user's gaze position, and the second redirects touches to the pen position. In this paper, we present an empirical user study comparing both partially-indirect techniques to direct pen and touch input in bimanual pan, zoom, and ink tasks. Our experimental results show that users are comparably fast with the indirect techniques, but more accurate, as they can dynamically change the zoom target during indirect zoom gestures. Further, our studies reveal that direct and indirect zoom gestures have distinct characteristics regarding spatial use, gestural use, and bimanual parallelism.
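
    The following minimal sketch shows the zoom-anchor redirection idea in isolation: an indirect pinch supplies the zoom factor, while the anchor point is taken from the current gaze estimate or from the pen tip. The viewport math is standard; the names are illustrative assumptions, not the study's implementation.

        # Sketch: zoom about an anchor that is redirected to gaze or pen.
        from dataclasses import dataclass

        @dataclass
        class Viewport:
            offset_x: float = 0.0
            offset_y: float = 0.0
            scale: float = 1.0

        def zoom_about(view, factor, anchor):
            # Zoom by `factor` while keeping the screen-space `anchor` fixed.
            ax, ay = anchor
            return Viewport(offset_x=ax - (ax - view.offset_x) * factor,
                            offset_y=ay - (ay - view.offset_y) * factor,
                            scale=view.scale * factor)

        def indirect_zoom(view, factor, gaze=None, pen=None):
            # The non-dominant hand's pinch supplies `factor`; the anchor is
            # redirected to the gaze estimate if available, else to the pen tip,
            # so the zoom target can change mid-gesture without moving the hand.
            anchor = gaze if gaze is not None else pen
            return zoom_about(view, factor, anchor)

        print(indirect_zoom(Viewport(), 1.5, gaze=(400.0, 300.0)))
        # Viewport(offset_x=-200.0, offset_y=-150.0, scale=1.5)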

    Understanding and Rejecting Errant Touches on Multi-touch Tablets

    Given the pervasiveness of multi-touch tablets, pen-based applications have rapidly moved onto this new platform. Users draw both with bare fingers and with capacitive pens, as they would on paper. Unlike paper, these tablets cannot distinguish legitimate finger or pen input from accidental touches by other parts of the user's hand; in this thesis, we refer to this as the errant touch rejection problem, since users may unintentionally touch the screen with other parts of their hand. In this thesis, I design, implement, and evaluate a new approach, bezel-focus rejection, for preventing errant touches on multi-touch tablets. I began the research by conducting a formal study to collect and characterize errant touches. I analyzed the data collected from the study, and the results guided the design of the rejection techniques. I conclude this research by developing bezel-focus rejection and evaluating its performance. The results show that bezel-focus rejection yields a high rejection rate for errant touches and makes users more inclined to rest their hands on the tablet than the comparison techniques. This research makes two major contributions to the Human-Computer Interaction (HCI) community. First, the proposed errant touch rejection approaches can be applied to other pen-based note-taking applications. Second, the experimental results can serve as a guide to others developing similar techniques.
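
    To make the errant touch problem concrete, the sketch below shows a deliberately generic palm-rejection heuristic of the kind such systems are compared against. It is not the bezel-focus technique, whose design this abstract does not detail, and its thresholds and fields are assumptions.

        # Generic errant-touch filter (illustrative baseline, not bezel-focus).
        from dataclasses import dataclass
        import math

        @dataclass
        class Touch:
            x: float
            y: float
            major_axis_mm: float        # reported contact size
            pen_in_range: bool          # is the pen hovering over the digitizer?
            pen_x: float = 0.0
            pen_y: float = 0.0

        def is_errant(t, max_contact_mm=12.0, palm_radius_px=250.0):
            # Reject large contacts, and contacts close to a hovering pen,
            # which are likely the resting palm of the writing hand.
            if t.major_axis_mm > max_contact_mm:
                return True
            if t.pen_in_range and math.hypot(t.x - t.pen_x, t.y - t.pen_y) < palm_radius_px:
                return True
            return False

        print(is_errant(Touch(x=500, y=600, major_axis_mm=20, pen_in_range=False)))  # True
        print(is_errant(Touch(x=50, y=50, major_axis_mm=6, pen_in_range=False)))     # False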

    Investigating Precise Control in Spatial Interactions: Proxemics, Kinesthetics, and Analytics

    Augmented and Virtual Reality (AR/VR) technologies have reshaped the way in which we perceive the virtual world. In fact, recent technological advancements provide experiences that make the physical and virtual worlds almost indistinguishable. However, the physical world affords subtle sensorimotor cues which we subconsciously utilize to perform simple and complex tasks in our daily lives. The lack of this affordance in existing AR/VR systems makes it difficult for them to gain mainstream adoption over conventional 2D user interfaces. As a case in point, existing spatial user interfaces (SUIs) lack the intuition to perform tasks in a manner that is perceptually familiar to the physical world. The broader goal of this dissertation lies in facilitating an intuitive spatial manipulation experience, specifically for motor control. We begin by investigating the role of proximity to an action in precise motor control for spatial tasks. We do so by introducing a new SUI called the Clock-Maker's Work-Space (CMWS), with the goal of enabling precise actions close to the body, akin to the physical world. On evaluating our setup against conventional mixed-reality interfaces, we find that the CMWS affords precise actions for bimanual spatial tasks. We further compare our SUI with a physical manipulation task and observe similarities in user behavior across both tasks. We subsequently narrow our focus to studying precise spatial rotation. We utilize haptics, specifically force feedback (kinesthetics), to augment fine motor control in spatial rotation tasks. By designing three kinesthetic rotation metaphors, we evaluate precise rotational control with and without haptic feedback for 3D shape manipulation. Our results show that haptics-based rotation algorithms allow for precise motor control in 3D space and also help reduce hand fatigue. In order to understand precise control in its truest form, we investigate orthopedic surgery training by analyzing bone-drilling tasks. We designed a hybrid physical-virtual simulator for bone-drilling training and collected physical data for analyzing precise drilling action. We also developed a Laplacian-based performance metric to help expert surgeons evaluate residents' training progress across successive years of orthopedic residency.
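
    As a hedged illustration of what a Laplacian-based trajectory metric could look like, the sketch below scores a sampled drill-tip path by the magnitude of its discrete second differences (lower means a steadier hand). The dissertation's actual metric may be defined differently.

        # Discrete-Laplacian roughness of a sampled 3D drill-tip trajectory.
        import math

        def laplacian_roughness(samples):
            # Sum of second-difference magnitudes; lower means a steadier hand.
            total = 0.0
            for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
                lap = tuple(p - 2 * c + n for p, c, n in zip(prev, cur, nxt))
                total += math.sqrt(sum(v * v for v in lap))
            return total

        steady = [(0.0, 0.0, float(z)) for z in range(10)]         # straight plunge
        shaky = [(math.sin(z), 0.0, float(z)) for z in range(10)]  # wobbling plunge
        print(laplacian_roughness(steady) < laplacian_roughness(shaky))  # True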

    Tabletop tangible maps and diagrams for visually impaired users

    Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are inaccessible to visually impaired people because they cannot be explored using the sense of touch.
    The gap between sighted and visually impaired people's access to graphical representations is constantly growing due to the increasing development and availability of online and dynamic representations that not only give sighted people the opportunity to access large amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the intervention of tactile graphics specialists and result in non-interactive tactile representations. However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of interactive and updatable representations. A number of research prototypes have already been developed. Some rely on digital representation only, and although they have the great advantage of being instantly updatable, they provide very limited tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs). In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical interactive maps and diagrams that support two-handed exploration, while at the same time being updatable and affordable. To do so, we build on previous research on Tangible User Interfaces (TUIs) and particularly on (actuated) tabletop TUIs, two fields of research that have surprisingly received very little attention concerning visually impaired users. Based on the design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and zooming. From a theoretical perspective, we contribute to the research on accessible graphical representations by highlighting how research on maps can feed research on diagrams and vice versa. We also propose a classification and comparison of existing prototypes to deliver a structured overview of current research.
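
    For the pan-and-zoom part, the sketch below shows the kind of viewport arithmetic a tangible map needs in order to reposition physical markers after each pan or zoom step. It is illustrative only; the class and field names are assumptions rather than the BotMap implementation.

        # Viewport arithmetic for a tangible pan-and-zoom map (illustrative).
        from dataclasses import dataclass

        @dataclass
        class MapViewport:
            center_x: float            # map coordinate shown at the table centre
            center_y: float
            scale: float               # table units per map unit
            table_w: float = 800.0
            table_h: float = 600.0

            def map_to_table(self, mx, my):
                # Where a landmark should sit on the table for the current view.
                return (self.table_w / 2 + (mx - self.center_x) * self.scale,
                        self.table_h / 2 + (my - self.center_y) * self.scale)

            def visible(self, mx, my):
                tx, ty = self.map_to_table(mx, my)
                return 0 <= tx <= self.table_w and 0 <= ty <= self.table_h

        vp = MapViewport(center_x=10.0, center_y=20.0, scale=4.0)
        print(vp.map_to_table(12.0, 20.0))   # (408.0, 300.0)
        print(vp.visible(200.0, 200.0))      # False: this landmark left the view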

    Presentation adaptation for multimodal interface systems: Three essays on the effectiveness of user-centric content and modality adaptation

    The use of devices is becoming increasingly ubiquitous and the contexts of their users more and more dynamic. This often leads to situations where one communication channel is rather impractical. Text-based communication is particularly inconvenient when the hands are already occupied with another task. Audio messages induce privacy risks and may disturb other people if used in public spaces. Multimodal interfaces thus offer users the flexibility to choose between multiple interaction modalities. While the choice of a suitable input modality lies in the hands of the users, they may also require output in a different modality depending on their situation. To adapt the output of a system to a particular context, rules are needed that specify how information should be presented given the users' situation and state. Therefore, this thesis tests three adaptation rules that, based on observations from cognitive science, have the potential to improve the interaction with an application by adapting the presented content or its modality. Following modality alignment, the output (audio versus visual) of a smart home display is matched with the user's input (spoken versus manual) to the system. Experimental evaluations reveal that preferences for an input modality are initially too unstable to infer a clear preference for either interaction modality; the data shows no clear relation between the users' modality choice for the first interaction and their attitude towards output in different modalities. To apply multimodal redundancy, information is displayed in multiple modalities. An application of this rule in a video conference reveals that captions can significantly reduce confusion. However, the effect is limited to confusion resulting from language barriers, whereas contradictory auditory reports leave the participants in a state of confusion regardless of whether captions are available. We therefore suggest activating captions only when the facial expression of a user, captured by action units, expressions of positive or negative affect, and a reduced blink rate, implies that the captions would effectively improve comprehension. Content filtering in movies puts into the spotlight the character that the users prefer, as inferred from the distribution of their gaze across elements in the previous scene. If preferences are predicted with machine-learning classifiers, this has the potential to significantly improve the users' involvement compared to scenes centered on elements that they do not prefer. Focused attention is additionally higher compared to scenes in which multiple characters take a lead role.
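
    A minimal sketch of the suggested caption-activation rule follows, assuming three facial cues (an action-unit intensity, a negative-affect score, and the blink rate) and simple thresholds; the feature names and thresholds are illustrative assumptions, not the study's fitted model.

        # Sketch of a caption-activation rule driven by facial cues.
        from dataclasses import dataclass

        @dataclass
        class FacialState:
            brow_lowerer_au4: float    # action-unit intensity, 0..1
            negative_affect: float     # 0..1
            blink_rate_hz: float       # blinks per second

        def should_show_captions(state, baseline_blink_hz=0.3):
            # Require converging evidence of confusion before turning captions on.
            cues = 0
            if state.brow_lowerer_au4 > 0.5:
                cues += 1
            if state.negative_affect > 0.5:
                cues += 1
            if state.blink_rate_hz < 0.5 * baseline_blink_hz:   # reduced blink rate
                cues += 1
            return cues >= 2

        print(should_show_captions(FacialState(0.7, 0.6, 0.1)))   # True
        print(should_show_captions(FacialState(0.1, 0.2, 0.3)))   # False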

    The student-produced electronic portfolio in craft education

    The authors studied primary school students' experiences of using an electronic portfolio in their craft education over four years. A stimulated recall interview was applied to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool to support learning: it makes the learning process visible and in that way helps students focus on and improve the quality of their learning. © ISLS. Peer reviewed.

    Diagnostic CALL tool for Arabic learners


    Physical Diagnosis and Rehabilitation Technologies

    The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all of the papers were contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.