    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse designed to provide fine-grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation of touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design, combining many Rizzos that have been designed not only to provide mouse capabilities but also to act as zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction with both the 3D windowing environment and the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
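
    As a rough illustration of the general idea (not the actual Rizzo implementation, whose internals are not given in the abstract), the TypeScript sketch below forwards touch input on a virtual-mouse widget to an underlying mouse-based visualization as synthetic mouse events; the element names and event choices are assumptions.

        // Minimal sketch: a virtual-mouse widget re-dispatches touch input as
        // synthetic mouse events so a mouse-based visualization keeps working.
        function emulateMouse(widget: HTMLElement, target: HTMLElement): void {
          let last: Touch | null = null;

          widget.addEventListener("touchmove", (e: TouchEvent) => {
            e.preventDefault();                  // keep the browser from panning
            const touch = e.touches[0];
            last = touch;
            // Forward the touch position as a mousemove on the visualization.
            target.dispatchEvent(new MouseEvent("mousemove", {
              clientX: touch.clientX,
              clientY: touch.clientY,
              bubbles: true,
            }));
          });

          widget.addEventListener("touchend", () => {
            if (last === null) return;
            // A lifted finger becomes a click at the last cursor position.
            target.dispatchEvent(new MouseEvent("click", {
              clientX: last.clientX,
              clientY: last.clientY,
              bubbles: true,
            }));
          });
        }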

    Reusable Multi-selection in Touch-Screen User Interfaces

    Multi-selection is the act of selecting a set of elements in a graphical user interface in order to perform an operation on that set. Examples of multi-selection are selecting thumbnails in an image gallery or files in a file explorer. Whether and how multi-selection is supported varies widely across applications, which leaves user experiences wanting. Järvi and Parent recently introduced an abstract model of multi-selection that helps programmers implement multi-selection uniformly and correctly in desktop GUIs. This paper adapts the model to touch-screen devices. We present the rationale for choosing particular gestures for selection commands and explain how they map to the original model. A user study comparing our selection model with the established multi-selection features of major Android and iOS applications shows that our selection feature allows for the fastest and most accurate selection.
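
    The abstract does not spell out the exact gesture set, but the gesture side of such an adaptation might look like the TypeScript sketch below, which classifies a touch by duration and movement and maps it to an abstract selection command; the thresholds and command names are illustrative assumptions, not the paper's.

        // Hypothetical mapping from touch gestures to abstract selection
        // commands; thresholds and names are illustrative.
        type Command = "replaceWith" | "toggle" | "extendTo";

        const LONG_PRESS_MS = 500;  // assumed long-press threshold
        const DRAG_PX = 10;         // assumed movement tolerance

        function classify(durationMs: number, movedPx: number): Command {
          if (movedPx > DRAG_PX) return "extendTo";          // drag extends a range
          if (durationMs >= LONG_PRESS_MS) return "toggle";  // long-press toggles one item
          return "replaceWith";                              // plain tap replaces the selection
        }

        // classify(120, 2)  === "replaceWith"
        // classify(700, 3)  === "toggle"
        // classify(300, 40) === "extendTo"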

    A Text Selection Technique using Word Snapping

    The conventional copy-and-paste technique for touch-screen devices uses region handles to specify a text snippet. The handles appear so as to select the initially tapped word, and the user then drags them to adjust the selection. Most text-selection work happens at word boundaries; however, the minimum movement unit of a region handle is still a single character. We propose a context-sensitive text-selection method for tablet OSs. As an initial investigation, we studied a word-snapping method that treats a word as the minimum movement unit. Our experiment confirmed that word snapping can significantly reduce text-selection time when the target text consists of one or two words and contains no line breaks. (KES-2014: 18th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, September 15-17, 2014, Gdynia, Poland)
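
    A minimal TypeScript sketch of word snapping, assuming whitespace-delimited words (the paper's implementation details are not given in the abstract): selection endpoints snap to the nearest word boundary instead of moving character by character.

        // Offsets at which any word starts or ends, plus the string's ends.
        function wordBoundaries(text: string): number[] {
          const bounds = new Set<number>([0, text.length]);
          for (const m of text.matchAll(/\S+/g)) {
            bounds.add(m.index!);
            bounds.add(m.index! + m[0].length);
          }
          return [...bounds].sort((a, b) => a - b);
        }

        // Snap a raw character offset to the nearest word boundary.
        function snapToWord(text: string, offset: number): number {
          let best = 0;
          for (const b of wordBoundaries(text)) {
            if (Math.abs(b - offset) < Math.abs(best - offset)) best = b;
          }
          return best;
        }

        // snapToWord("select whole words", 9) === 7 (the start of "whole")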

    Probe-based visual analysis of geospatial simulations

    This work documents the design, development, refinement, and evaluation of probes as an interaction technique for expanding both the usefulness and usability of geospatial visualizations, specifically those of simulations. Existing applications that allow the visualization of, and interaction with, geospatial simulations and their results generally restrict the user to a single perspective. When zoomed out, local trends and anomalies are suppressed and lost; when zoomed in, spatial awareness and comparison between regions are limited. The probe-based interaction model integrates coordinated visualizations within individual probe interfaces, which depict the local data in user-defined regions of interest. It is especially useful for complex simulations or analyses in which behavior in one locality differs from that in other localities and from the system as a whole. The technique has been incorporated into a number of geospatial simulations and visualization tools. In these applications, and in general, probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users. The great freedom afforded to users in defining regions of interest can, however, allow modifiable areal unit problems to undermine the reliability of analyses without the user's knowledge, leading to misleading results. By automatically alerting users to these potential issues and providing tools to help adjust their selections, such problems can be revealed and even corrected.
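
    As a sketch of the core mechanism, assuming point-based data and rectangular regions (both simplifications, with hypothetical names): each probe filters the global dataset down to its region of interest and refreshes its coordinated views with only the local data.

        // Illustrative probe structure, not the authors' implementation.
        interface RegionOfInterest { x: number; y: number; width: number; height: number; }

        interface Probe<Datum> {
          roi: RegionOfInterest;
          views: Array<(local: Datum[]) => void>;  // coordinated visualizations
        }

        function updateProbe<Datum extends { x: number; y: number }>(
            probe: Probe<Datum>, data: Datum[]): void {
          const { x, y, width, height } = probe.roi;
          // Keep only the data falling inside the probe's region of interest...
          const local = data.filter(d =>
            d.x >= x && d.x < x + width && d.y >= y && d.y < y + height);
          // ...and refresh every coordinated view with the local subset.
          for (const render of probe.views) render(local);
        }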

    One Way to Select Many

    Selecting items from a collection is one of the most common tasks users perform with graphical user interfaces. Practically every application supports this task with a selection feature different from that of any other application. Defects are common, especially in manipulating selections of non-adjacent elements, and flexible selection features are often missing when they would clearly be useful. As a consequence, user effort is wasted. The loss of productivity is experienced in small doses, but all computer users are impacted. This undesirable state of support for multi-element selection prevails because the same selection features are redesigned and reimplemented repeatedly. This article seeks to establish common abstractions for multi-selection. It gives generic but precise meanings to selection operations and makes multi-selection reusable; a JavaScript implementation is described. Application vendors benefit from reduced development effort. Users benefit because correct and consistent multi-selection becomes available in more contexts.
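
    The article's JavaScript implementation is not reproduced here; the TypeScript sketch below merely illustrates the kind of precise, reusable meanings involved, using the familiar click / Ctrl+click / Shift+click semantics over element indices.

        // One plausible set of precise selection semantics. Applications
        // disagree on details (e.g. whether Shift+click adds to or replaces
        // the selection); this sketch chooses "add", which is exactly the
        // kind of decision a shared abstraction pins down once.
        class MultiSelection {
          private selected = new Set<number>();
          private anchor = 0;

          click(i: number): void {            // replace the whole selection
            this.selected = new Set([i]);
            this.anchor = i;
          }

          ctrlClick(i: number): void {        // toggle one element, move anchor
            if (this.selected.has(i)) this.selected.delete(i);
            else this.selected.add(i);
            this.anchor = i;
          }

          shiftClick(i: number): void {       // add the range from the anchor
            const lo = Math.min(this.anchor, i), hi = Math.max(this.anchor, i);
            for (let k = lo; k <= hi; k++) this.selected.add(k);
          }

          items(): number[] { return [...this.selected].sort((a, b) => a - b); }
        }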

    Literature Survey on Interaction Techniques for Large Displays

    When designing for large screen displays, designers must deal with cursor tracking, interaction over distances, and space management. Because the screen can cover a large portion of the user's visual field, it may be hard for users to begin and complete search tasks for basic items such as cursors or icons. In addition, maneuvering over long distances and acquiring small targets understandably takes more time than the same interactions on normally sized screens. To deal with these issues, large-display researchers have developed increasingly unconventional devices, methods, and widgets for interaction, and systems for space and task management. For cursor tracking, there are techniques that deal with the size, shape, and “density” of the cursor, and others that help direct the user's attention to it. For target acquisition on large screens, many researchers have tried to augment existing 2D GUI metaphors, typically by optimizing for Fitts' law: some techniques enlarge targets, others enlarge the cursor itself, and still others close the distances involved. However, many researchers feel that existing 2D metaphors do not and will not work for large screens, and that the community should move to more unconventional devices and metaphors, including eye-tracking, laser-pointing, hand-tracking, two-handed touchscreen techniques, and other high-DOF devices. In the end, many of these techniques do provide effective means for interaction on large displays, but their benefits need to be quantified and better understood. The more we understand the advantages and disadvantages of these techniques, the easier it will be to employ them in working large-screen systems. We also need to establish an interaction standard for these systems. This could mean simply supporting desktop events such as pointing and clicking, or it may mean identifying the needs of each domain in which large screens are used and tailoring the interaction techniques to that domain.
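
    The Fitts'-law framing can be made concrete. In the Shannon formulation, predicted movement time is MT = a + b * log2(D/W + 1) for a target at distance D with width W; enlarging targets or the cursor raises the effective W, while distance-closing techniques shrink D. A small TypeScript sketch, with illustrative (not measured) constants:

        // Fitts' law, Shannon formulation. The constants a and b are normally
        // fitted per device and user population; the defaults here are only
        // placeholders for illustration.
        function fittsMovementTimeMs(distance: number, width: number,
                                     a = 100, b = 150): number {
          const indexOfDifficulty = Math.log2(distance / width + 1); // bits
          return a + b * indexOfDifficulty;                          // milliseconds
        }

        // Doubling target width on a wall display cuts the predicted time:
        // fittsMovementTimeMs(2000, 32) ≈ 998 ms
        // fittsMovementTimeMs(2000, 64) ≈ 852 ms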

    Feel the Noise: Mid-Air Ultrasound Haptics as a Novel Human-Vehicle Interaction Paradigm

    Focussed ultrasound can be used to create the sensation of touch in mid-air. Combined with gestures, this can provide haptic feedback to guide users, thereby overcoming the lack of agency associated with pure gestural interfaces and reducing the need for vision; it is therefore particularly well suited to the driving domain. In a counter-balanced 2×2 driving simulator study, a traditional in-vehicle touchscreen was compared with a virtual mid-air gestural interface, both with and without ultrasound haptics. Forty-eight experienced drivers (28 male, 20 female) undertook representative in-vehicle tasks – discrete target selections and continuous slider-bar manipulations – whilst driving. Results show that haptifying gestures with ultrasound was particularly effective in reducing visual demand (number of long glances and mean off-road glance time) and increasing performance (shortest interaction times, highest number of correct responses, and fewest ‘overshoots’) for continuous tasks. In contrast, for discrete target selections, the touchscreen enabled the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. Subjectively, the gesture interfaces invited higher ratings of arousal than the more familiar touch-surface technology, and participants indicated the lowest levels of workload (highest performance, lowest frustration) with the gesture-haptics interface. In addition, gestures were preferred by participants for continuous tasks. The study shows practical utility and clear potential for the use of haptified gestures in the automotive domain.