    Use of Landmarks to Improve Spatial Learning and Revisitation in Computer Interfaces

    Efficient learning and remembering of spatial locations are just as important in two-dimensional Graphical User Interfaces (GUIs) as in real environments where locations are revisited multiple times. Rapid spatial memory development in GUIs, however, can be difficult because these interfaces often lack the kind of landmarks that people predominantly use to learn and recall real-life locations. In the absence of sufficient landmarks in GUIs, artificially created visual objects (i.e., artificial landmarks) could serve as landmarks to support the development of spatial memory. To understand how spatial memory development occurs in GUIs and to explore ways to assist users in efficiently learning and recalling locations, I carried out five studies on the use of landmarks in GUIs: one study that investigated the interfaces of four standard desktop applications (Microsoft Word, Facebook, Adobe Photoshop, and Adobe Reader), and four that tested two prototype desktop GUIs augmented with artificial landmarks, a command selection interface and a linear document viewer, against non-landmarked versions; in addition, I tested the use of landmarks in variants of these interfaces that varied in command-set size (small, medium, and large) and linear document type (textual and video). Results indicate that a GUI's existing features and design elements can be reliable landmarks that provide spatial benefits similar to those of real environments. I also show that artificial landmarks can significantly improve spatial memory development in GUIs, supporting rapid learning and remembering of spatial locations. Overall, this dissertation reveals that landmarks can be a valuable addition to graphical systems, improving the memorability and usability of GUIs.

    Supporting Transitions To Expertise In Hidden Toolbars

    Hidden toolbars are becoming common on mobile devices. These techniques maximize the space available for application content by keeping tools off-screen until needed. However, current designs require several actions to make a selection, and they do not provide shortcuts for users who have become familiar with the toolbar. To better understand the performance capabilities and tradeoffs involved in hidden toolbars, we outline a design space that captures the key elements of these controls and report on an empirical evaluation of four designs. Two of our designs provide shortcuts that are based on the user's spatial memory of item locations. The study found that toolbars with spatial-memory shortcuts had significantly better performance (700 ms faster) than standard designs currently in use. Participants quickly learned the shortcut selection method, although switching to a memory-based method led to higher error rates than the visually guided techniques. Participants strongly preferred the shortcut method that allowed selections by swiping across the screen bezel at the location of the desired item. This work shows that shortcut techniques are feasible and desirable on touch devices, and that spatial memory can provide a foundation for designing shortcuts.
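    As a rough illustration of the bezel-swipe shortcut described above, the following sketch (Python; the bottom-bezel layout, the evenly spaced items, and all names are assumptions for illustration, not the study's implementation) maps the horizontal position at which a swipe crosses the bezel to a toolbar item index.

    # Hypothetical sketch: map a bezel-crossing swipe to a hidden-toolbar item.
    # Assumes a toolbar anchored along the bottom bezel with evenly spaced items.

    def item_from_bezel_swipe(x_norm: float, num_items: int) -> int:
        """Return the index of the toolbar item at the horizontal position
        (0.0 to 1.0) where the swipe crossed the bezel."""
        x_norm = min(max(x_norm, 0.0), 1.0)   # clamp to the screen bounds
        index = int(x_norm * num_items)       # even partition of the bezel
        return min(index, num_items - 1)      # guard the right edge

    # Example: a swipe crossing at 62% of the screen width on an 8-item toolbar
    # selects item 4 without opening the toolbar first.
    print(item_from_bezel_swipe(0.62, 8))  # -> 4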

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market-ready, only a few solutions for everyday VR (online shopping, games, or movies) exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. We summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Enabling Expressive Keyboard Interaction with Finger, Hand, and Hand Posture Identification

    The input space of conventional physical keyboards is largely limited by the number of keys. To enable more actions than simply entering the symbol represented by a key, standard keyboards use combinations of modifier keys such as command, alternate, or shift to re-purpose the standard text entry behaviour. To explore alternatives to conventional keyboard shortcuts and enable more expressive keyboard interaction, this thesis first presents Finger-Aware Shortcuts, which encode information from finger, hand, and hand posture identification as keyboard shortcuts. By detecting the hand and finger used to press a key, and an open or closed hand posture, a key press can have multiple command mappings. A formative study revealed the performance and preference patterns when using different fingers and postures to press a key. The results were used to develop a computer vision algorithm to identify fingers and hands on a keyboard captured by a built-in laptop camera and a reflector. This algorithm was built into a background service to enable system-wide Finger-Aware Shortcut keys in any application. A controlled experiment used the service to compare the performance of Finger-Aware Shortcuts with existing methods. The results showed that Finger-Aware Shortcuts are comparable to a common class of shortcuts using multiple modifier keys. Several application demonstrations illustrate different use cases and mappings for Finger-Aware Shortcuts. To further explore how introducing finger awareness can help foster the learning and use of keyboard shortcuts, an interview study was conducted with expert computer users to identify the likely causes that hinder the adoption of keyboard shortcuts. Based on this, the concept of Finger-Aware Shortcuts is extended and two guided keyboard shortcut techniques are proposed: FingerArc and FingerChord. The two techniques provide dynamic visual guidance on the screen when users press and hold an alphabetical key semantically related to a set of commands. FingerArc differentiates these commands by examining the angle between the thumb and index finger; FingerChord differentiates these commands by allowing users to press different key areas using a second finger. The thesis contributes comprehensive evaluations of Finger-Aware Shortcuts and proof-of-concept demonstrations of FingerArc and FingerChord. Together, they contribute a novel interaction space that expands the conventional keyboard input space with greater expressivity.
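    To make concrete how a single key press could carry several command mappings, here is a minimal sketch in Python; the binding table, the recognizer output format, and all names are hypothetical, and the thesis's actual system obtains the hand, finger, and posture from a computer vision service rather than the plain data object used here.

    # Hypothetical sketch: one physical key, several commands, selected by the
    # hand, finger, and posture reported by an external recognizer.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class KeyContext:
        key: str      # e.g. "s"
        hand: str     # "left" or "right"
        finger: str   # "index", "middle", ...
        posture: str  # "open" or "closed"

    # Example bindings for the "s" key; real bindings would be user-configurable.
    BINDINGS = {
        KeyContext("s", "left", "index", "open"): "save",
        KeyContext("s", "left", "middle", "open"): "save_as",
        KeyContext("s", "right", "index", "closed"): "screenshot",
    }

    def dispatch(ctx: KeyContext) -> str:
        # Fall back to ordinary text entry when no shortcut is bound.
        return BINDINGS.get(ctx, f"type '{ctx.key}'")

    print(dispatch(KeyContext("s", "left", "middle", "open")))  # -> save_as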

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result that was confirmed in another study that used the augmented touches for a screen lock application.
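    As a minimal sketch of how such contact data might be discretized, the code below assumes the touchscreen reports a contact ellipse (major and minor axis lengths plus an orientation angle); the two shape classes and three orientation bins mirror the levels reported in the study, but the cut-off values and names are illustrative assumptions.

    # Hypothetical sketch: discretize a finger-contact ellipse into one of two
    # shapes and one of three orientation bins. Thresholds are illustrative only.

    def classify_touch(major_mm: float, minor_mm: float, angle_deg: float):
        # Shape: elongated contacts (flat finger) vs. round contacts (fingertip).
        shape = "flat" if major_mm / minor_mm > 1.5 else "tip"

        # Orientation: fold the angle into [0, 180) and split it into three bins.
        a = angle_deg % 180.0
        if a < 60.0:
            orientation = "left-leaning"
        elif a < 120.0:
            orientation = "upright"
        else:
            orientation = "right-leaning"
        return shape, orientation

    print(classify_touch(major_mm=18.0, minor_mm=9.0, angle_deg=95.0))
    # -> ('flat', 'upright')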

    An architecture for the effective use of mobile devices in supporting contact learning

    The features and capacities of mobile devices offer a wide range of significant opportunities for providing learning content in workplaces and educational institutions. This approach to teaching, called mobile learning, allows learning content to be delivered on the move at any time. Mobile learning supports learning by delivering content to learners in a modern and widely accepted way. The number of mobile learning applications has increased rapidly in educational environments. There are, however, few mobile learning applications that take advantage of mobile devices to support contact learning in the classroom environment. The aim of this research was to design a mobile learning architecture to effectively support contact learning in the classroom. The researcher investigated the historical and theoretical background of mobile learning and reported these findings, including an overview of existing mobile learning architectures. After identifying their limitations, the researcher designed the Contact Instruction Mobile Learning Architecture (CIMLA) to facilitate the use of mobile devices in the classroom. The researcher then developed the LiveLearning prototype based on the proposed architecture as a proof of concept and conducted a usability evaluation of it. The results indicated that the LiveLearning prototype is effective in supporting contact learning in the classroom.

    Spatial Hypermedia as a programming environment

    This thesis investigates the possibilities opened to a programmer when their programming environment not only utilises Spatial Hypermedia functionality, but embraces it as a core component. Designed and built to explore these possibilities, SpIDER (standing for Spatial Integrated Development Environment Research) is an IDE featuring not only traditional functionality such as content assist and debugging support but also multimedia integration and free-form spatial code layout. Such functionality allows programmers to visually communicate aspects of the intent and structure of their code that would be tedious—and in some cases impossible—to achieve in conventional IDEs. Drawing from literature on Spatial Memory, the design of SpIDER has been driven by the desire to improve the programming experience while also providing a flexible authoring environment for software development. The programmer’s use of Spatial Memory is promoted, in particular, by: utilising fixed sized authoring canvases; providing the capacity for landmarks; exploiting a hierarchical linking system; and having well defined occlusion and spatial stability of authored code. The key challenge in implementing SpIDER was to devise an algorithm to bridge the gap between spatially expressed source code, and the serial text forms required by compilers. This challenge was met by developing an algorithm that we have called the flow walker. We validated this algorithm through user testing to establish that participants’ interpretation of the meaning of spatially laid out code matched the flow walker’s implementation. SpIDER can be obtained at: https://sourceforge.net/projects/spatial-ide-research-spide
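    The sketch below illustrates one plausible linearization rule, grouping fragments into rows by vertical position and reading each row left to right; it is an assumption for illustration only, not the actual flow walker algorithm, and all names are hypothetical.

    # Hypothetical sketch: linearize spatially placed code fragments into serial
    # source text. This is only one plausible ordering rule; SpIDER's flow walker
    # is more sophisticated.
    from dataclasses import dataclass

    @dataclass
    class Fragment:
        x: float   # canvas position in pixels
        y: float
        code: str

    def linearize(fragments: list[Fragment], row_height: float = 40.0) -> str:
        # Group fragments into rows by vertical position, then read each row
        # left to right, mirroring how a reader scans the canvas.
        ordered = sorted(fragments, key=lambda f: (round(f.y / row_height), f.x))
        return "\n".join(f.code for f in ordered)

    canvas = [
        Fragment(10, 100, "return total"),
        Fragment(10, 12, "total = 0"),
        Fragment(10, 55, "for n in values:"),
        Fragment(60, 58, "    total += n"),
    ]
    print(linearize(canvas))  # prints the four fragments in reading order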

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users obtain expertise with these gestures, interaction designers often deploy a guided novice mode, in which users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and associated command, they can perform it without guidance, relying instead on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it shifts from novice use toward more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations, beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration spawning from our work.

    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming increasingly smart: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to have to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that supports these kinds of interactions. In my dissertation, I introduce Room-based Interaction to address this problem in HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance for the average user.
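    As a rough sketch of the pointing-based selection step, the code below picks the registered proxy object whose direction from the user lies closest to a mid-air pointing ray and returns the command bound to it; the proxy registry, the angular threshold, and all names are hypothetical rather than taken from the dissertation.

    # Hypothetical sketch: select the real-world proxy object closest to a
    # mid-air pointing ray, then return the digital command bound to it.
    import math

    PROXIES = {
        "floor lamp": ((2.0, 1.2, 0.5), "lights/toggle"),
        "speaker":    ((-1.5, 1.0, 2.0), "music/play_pause"),
        "window":     ((0.0, 1.5, 3.0), "blinds/toggle"),
    }

    def angle_between(u, v) -> float:
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

    def select_command(origin, direction, max_error_deg: float = 15.0):
        best, best_err = None, max_error_deg
        for position, command in PROXIES.values():
            to_proxy = tuple(p - o for p, o in zip(position, origin))
            err = angle_between(direction, to_proxy)
            if err < best_err:
                best, best_err = command, err
        return best  # None if the pointing gesture matches no proxy

    # Pointing roughly toward the floor lamp from shoulder height at the origin:
    print(select_command(origin=(0.0, 1.4, 0.0), direction=(2.0, -0.2, 0.5)))
    # -> lights/toggle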