
    Literature Survey on Interaction Techniques for Large Displays

    When designing for large screen displays, designers must contend with cursor tracking, interaction over distance, and space management. Because the screen can cover a large portion of the user’s visual angle, users may find it hard to complete even basic visual search tasks, such as locating the cursor or an icon. In addition, moving over long distances and acquiring small targets understandably takes more time than the same interactions on normally sized screens. To address these issues, large-display researchers have developed increasingly unconventional devices, widgets, and interaction methods, along with systems for space and task management. For cursor tracking, some techniques manipulate the size, shape, or “density” of the cursor, while others direct the user’s attention to it. For target acquisition, many researchers have tried to augment existing 2D GUI metaphors, aiming to improve performance under Fitts’ law: some techniques enlarge the targets, others enlarge the cursor itself, and still others shorten the distances to be crossed. However, many researchers argue that existing 2D metaphors do not and will not work for large screens, and that the community should move to more unconventional devices and metaphors, including eye tracking, laser pointing, hand tracking, two-handed touchscreen techniques, and other high-degree-of-freedom devices. In the end, many of these techniques do provide effective means of interaction on large displays, but their benefits need to be quantified and better understood; the better we understand their advantages and disadvantages, the easier it will be to employ them in working large-screen systems. We also need to establish an interaction standard for large-screen systems. This could mean simply supporting desktop events such as pointing and clicking, or it could mean identifying the needs of each domain in which large screens are used and tailoring the interaction techniques to that domain.
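
    The target-acquisition work surveyed above rests on Fitts’ law, which predicts movement time from the distance to a target and its width. A minimal Python sketch of the standard Shannon formulation (the intercept and slope constants below are illustrative placeholders, not fitted values) shows why both enlarging targets and shortening distances help on large displays:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts' law: MT = a + b * log2(D/W + 1).
    The intercept a and slope b are illustrative placeholders; real
    values must be fit per device and user population."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# On a wall-sized display, distances grow while icons stay small:
print(fitts_mt(distance=400, width=32))    # desktop-scale pointing
print(fitts_mt(distance=4000, width=32))   # same icon on a large display
print(fitts_mt(distance=4000, width=128))  # enlarging the target wins back time
```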

    Steering in layers above the display surface

    Interaction techniques that use the layers above the display surface to extend the functionality of pen-based digitizing surfaces continue to emerge. In such techniques, stylus movement is constrained by the bounds of a layer within which the interaction is active, as well as by constraints on the direction of movement within the layer. The problem addressed in this thesis is that designers currently have no model to predict movement time (MT), or to quantify difficulty, for movement (steering) in layers above the display surface constrained by the thickness of the layer, its height above the display, and the width and length of the path. The problem has two main parts: first, how to model steering in layers, and second, how to visualize the layers to provide feedback for the steering task. The solution described is a model that predicts movement time and quantifies the difficulty of steering through constrained and unconstrained paths in layers above the display surface. Through a series of experiments we validated the derivation and applicability of the proposed models. A predictive model is necessary because it serves as the basis for designing interaction techniques in this design space, and because predictive models allow researchers to evaluate potential solutions quantitatively, independent of experimental conditions. Addressing the second part of the problem, we describe four visualization designs using cursors and evaluate their effectiveness in a controlled experiment.
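
    Models of this kind typically extend the classic Accot–Zhai steering law for on-surface tunnels. As background, a minimal Python sketch of that law for a straight tunnel (the layer-thickness and height-above-display terms of the thesis’s own model are not reproduced here, and the constants are illustrative):

```python
def steering_id_straight(length, width):
    """Index of difficulty for steering through a straight tunnel of
    the given length and width (Accot-Zhai steering law): ID = A / W."""
    return length / width

def steering_mt(length, width, a=0.2, b=0.05):
    """Predicted movement time MT = a + b * ID. The constants a and b
    are illustrative placeholders, not empirically fitted values."""
    return a + b * steering_id_straight(length, width)

# Narrower paths are predicted to take proportionally longer:
print(steering_mt(length=300, width=30))  # wide tunnel
print(steering_mt(length=300, width=10))  # narrow tunnel, higher ID
```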

    Predicting Endpoint of Goal-Directed Motion in Modern Desktop Interfaces using Motion Kinematics

    Researchers who study pointing facilitation have identified the ability to determine, during motion, the likely target of a user's pointing gesture as a necessary precursor to pointing facilitation in modern computer interfaces. To address this need, we develop and analyze how an understanding of the underlying characteristics of motion can enhance our ability to predict the target or endpoint of a goal-directed movement in graphical user interfaces. Using established laws of motion and an analysis of users' kinematic profiles, we demonstrate that the initial 90% of motion is primarily ballistic and that corrective submovements are limited to the last 10% of the gesture. Through experimentation, we demonstrate that target constraint and the intended use of a target either have a minimal effect on the motion profile or affect only the last 10% of motion; therefore, any technique that models the initial 90% of gesture motion will not be affected by target constraint or intended use. Given these results, we develop a technique that models the initial ballistic motion to predict user endpoint, adopting the minimum-jerk principle. From this principle we derive an equation that models the initial ballistic phase of movement in order to predict movement distance and direction. We demonstrate through experimentation that we can successfully model pointing motion to identify a region of likely targets on the computer display. Next, we characterize the effects of target size and target distance on prediction accuracy, demonstrating a linear relationship between prediction accuracy and target distance that can be leveraged to create a probabilistic model for each target on the display, and we show how these probabilities could be used to enable pointing facilitation in modern computer interfaces. Finally, we show that the results of our evaluation are supported by the current motor control literature, and that our technique achieves its best accuracy when prediction of motion endpoint is performed using only the ballistic components of motion, before 90% of the motion distance has been covered.
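
    The ballistic model referenced above is the minimum-jerk profile (Flash and Hogan). A minimal Python sketch shows how partial observations of the ballistic phase can be extrapolated to an endpoint; the grid-search fitting procedure and its ranges are illustrative assumptions, not the dissertation’s actual method:

```python
import numpy as np

def min_jerk(t, T, x0, xf):
    """Minimum-jerk position profile: the ballistic phase of aimed
    movement is well described by
    x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5), tau = t/T."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def predict_endpoint(times, xs, x0=0.0):
    """Fit movement duration T and endpoint xf to the observed early
    samples by brute-force grid search (illustrative placeholder for a
    proper least-squares fit)."""
    best_err, best_xf = np.inf, None
    for T in np.linspace(times[-1], 4 * times[-1], 60):
        for xf in np.linspace(xs[-1], 4 * max(xs[-1], 1e-6), 120):
            err = np.sum((min_jerk(times, T, x0, xf) - xs) ** 2)
            if err < best_err:
                best_err, best_xf = err, xf
    return best_xf

# Simulate the first 90% of a 1 s movement toward xf = 500 px, then predict:
t = np.linspace(0, 0.9, 30)
observed = min_jerk(t, 1.0, 0.0, 500.0)
print(predict_endpoint(t, observed))  # approximately 500 on noise-free data
```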

    Redirected Touching

    In immersive virtual environments, virtual objects cannot be touched. One solution is passive haptics: physical props to which virtual objects are registered. The result is compelling; when a user reaches out with a virtual hand to touch a virtual object, her real hand touches and feels a real object. However, for every virtual object to be touched, there must be an analogous physical prop; in the limit, an entire real-world infrastructure would need to be built and changed whenever a virtual scene changed. Virtual objects and passive haptics have historically been mapped one-to-one. I demonstrate that the mapping need not be one-to-one: one can make a single passive real object provide useful haptic feedback for many virtual objects by exploiting human perception. I developed and investigated three categories of such techniques:

    1. Move the virtual world to align different virtual objects in turn with the same real object.
    2. Move a virtual object into alignment with a real object.
    3. Map real hand motion to different virtual hand motion, e.g., when the real hand traces a real object, the virtual hand traces a differently shaped virtual object.

    The first two techniques were investigated for feasibility, and the third was explored more deeply. The first technique (Redirected Passive Haptics) enables users to touch multiple instances of a virtual object, with haptic feedback provided by a single real object. The second technique (The Haptic Hand) attaches a larger-than-hand virtual user interface to the non-dominant hand, mapping the currently relevant part of the interface onto the palm. The third technique (Redirected Touching) warps virtual space to map many differently shaped virtual objects onto a single real object, introducing a discrepancy between real and virtual hand motions. Two studies investigated the technique's effect on task performance and its potential for use in aircraft cockpit procedures training. Users adapt rather quickly to the real-virtual discrepancy, and after adaptation they perform no worse with discrepant virtual objects than with one-to-one virtual objects. Redirected Touching shows promise for training and entertainment applications.
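
    As an illustration of the third category, here is a minimal Python sketch of one common space-warping scheme from the haptic retargeting literature (a generic blend-based warp, not necessarily this dissertation’s exact mapping): the real-to-virtual offset is blended in as the reach progresses, so the virtual hand lands on the virtual object at the same moment the real hand lands on the physical prop.

```python
import numpy as np

def warped_virtual_hand(real_hand, real_target, virtual_target, start, eps=1e-9):
    """Blend the real-to-virtual target offset into the virtual hand
    position as the reach progresses, so real and virtual hands arrive
    at their respective targets simultaneously. Illustrative sketch of
    a blend-based warp, not the dissertation's full space warp."""
    total = np.linalg.norm(real_target - start) + eps
    progress = np.clip(1 - np.linalg.norm(real_target - real_hand) / total, 0, 1)
    offset = virtual_target - real_target
    return real_hand + progress * offset

start = np.array([0.0, 0.0, 0.0])
real_target = np.array([0.4, 0.0, 0.3])      # the single physical prop
virtual_target = np.array([0.3, 0.1, 0.35])  # a differently placed virtual object

for frac in (0.0, 0.5, 1.0):                 # real hand moving along the reach
    hand = start + frac * (real_target - start)
    print(warped_virtual_hand(hand, real_target, virtual_target, start))
```

    At the start of the reach the virtual hand coincides with the real hand; at the end it coincides with the virtual object, while the real hand rests on the prop.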

    Designing Engaging Learning Experiences in Programming

    In this paper we describe work investigating the creation of engaging programming learning experiences. Background research informed the design of four fieldwork studies exploring how programming tasks could be framed to motivate learners. Our empirical findings from these four field studies are summarized here, with a particular focus on one, Whack a Mole, which compared a physical interface with an equivalent screen-based interface to gain insight into what makes a learning experience engaging. Emotions reported by two sets of participating undergraduate students were analyzed, identifying links between the emotions experienced during programming and their origins. We collected evidence of the very positive emotions experienced by learners programming with a physical interface (Arduino) compared with a similar program developed using an equivalent screen-based interface. A follow-up study provided further evidence of the motivational value of personalized design when programming tangible physical artefacts. Collating all the evidence led to the design of a set of ‘Learning Dimensions’ that may give educators insights to support key design decisions in creating engaging programming learning experiences.

    Suitability of Virtual Physics and Touch Gestures in Touchscreen User Interfaces for Critical Tasks

    The goal of this research was to examine whether modern touchscreen interaction concepts established on consumer electronic devices such as smartphones can be used in time-critical and safety-critical applications such as machine control or healthcare appliances. Several prevalent interaction concepts, with and without touch gestures and virtual physics, were tested experimentally in common use cases to assess their efficiency, error rate, and user satisfaction during task completion. Based on the results, design recommendations for list scrolling and horizontal dialog navigation are given.
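
    The “virtual physics” tested here typically means inertial flick scrolling: after the finger lifts, the list keeps moving while its velocity decays under simulated friction. A minimal Python sketch; the friction coefficient, frame rate, and stop threshold are illustrative, not the values evaluated in the study:

```python
def inertial_scroll(position, velocity, friction=0.95, dt=1 / 60, stop=5.0):
    """Simulate flick scrolling with virtual physics: after the finger
    lifts, velocity decays exponentially each frame until it falls
    below a stop threshold (all constants are illustrative)."""
    trajectory = [position]
    while abs(velocity) > stop:    # velocity in px/s
        position += velocity * dt  # integrate one 60 Hz frame
        velocity *= friction       # exponential friction decay
        trajectory.append(position)
    return trajectory

path = inertial_scroll(position=0.0, velocity=2000.0)
print(len(path), round(path[-1], 1))  # frames simulated, final scroll offset
```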