11,435 research outputs found
Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table
We present the Rizzo, a multi-touch virtual mouse designed to provide fine-grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation offered by touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design, combining many Rizzos, that not only provides mouse capabilities but also acts as a set of zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction both with the 3D windowing environment and with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities in all visualizations written with the popular prefuse visualization toolkit.
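The core idea of driving an unmodified mouse-based visualization from touch input can be pictured with a minimal sketch; the names (VirtualMouse, TouchEvent, send_mouse_event) and the scaling rule are illustrative assumptions, not the Rizzo implementation described above.

    # Minimal sketch, not the Rizzo implementation: a virtual-mouse widget
    # converts touches on its body into emulated mouse events so that an
    # unmodified mouse-based visualization can be driven from a touch table.
    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        x: float          # touch position in table coordinates
        y: float
        is_down: bool     # True on touch-down/move, False on touch-up

    class VirtualMouse:
        def __init__(self, send_mouse_event, zoom=4.0):
            self.send_mouse_event = send_mouse_event  # callback into the mouse-based app
            self.zoom = zoom                          # lens factor: finger motion -> fine cursor motion
            self.cursor = (0.0, 0.0)
            self.last_touch = None

        def on_touch(self, ev: TouchEvent):
            if ev.is_down and self.last_touch is not None:
                # Dragging on the widget moves the cursor, scaled down for precision.
                dx = (ev.x - self.last_touch[0]) / self.zoom
                dy = (ev.y - self.last_touch[1]) / self.zoom
                self.cursor = (self.cursor[0] + dx, self.cursor[1] + dy)
                self.send_mouse_event("move", *self.cursor)
            elif not ev.is_down:
                # Simplification: every touch-up is forwarded as a click at the cursor.
                self.send_mouse_event("click", *self.cursor)
            self.last_touch = (ev.x, ev.y) if ev.is_down else None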
Design and User Satisfaction of Interactive Maps for Visually Impaired People
Multimodal interactive maps are a solution for presenting spatial information to visually impaired people. In this paper, we present an interactive multimodal map prototype that is based on a tactile paper map, a multi-touch screen and audio output. We first describe the different steps for designing an interactive map: drawing and printing the tactile paper map, choice of multi-touch technology, interaction techniques and the software architecture. Then we describe the method used to assess user satisfaction. We provide data showing that an interactive map, although based on a single, elementary, double-tap interaction, has been met with a high level of user satisfaction. Interestingly, satisfaction is independent of a user's age, previous visual experience or Braille experience. This prototype will be used as a platform to design advanced interactions for spatial learning.
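The double-tap interaction described in this abstract can be illustrated with a small sketch, assuming (this is not stated in the abstract) that the prototype stores map regions with spoken labels and that the multi-touch screen beneath the tactile paper reports tap coordinates; all names and the timing threshold are hypothetical.

    # Illustrative sketch only: a double tap on the tactile overlay triggers
    # audio output for the map element under the finger.
    import time

    DOUBLE_TAP_WINDOW = 0.4   # seconds between taps (assumed threshold)

    # Hypothetical map description: (x_min, y_min, x_max, y_max) -> spoken label
    regions = {
        (10, 10, 60, 40): "Town hall",
        (70, 20, 120, 80): "Railway station",
    }

    _last_tap_time = 0.0

    def speak(text):
        print("AUDIO:", text)   # stand-in for a text-to-speech call

    def on_tap(x, y):
        global _last_tap_time
        now = time.time()
        if now - _last_tap_time <= DOUBLE_TAP_WINDOW:
            for (x0, y0, x1, y1), label in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    speak(label)
                    break
        _last_tap_time = now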
Issues and techniques for collaborative music making on multi-touch surfaces
A range of systems exist for collaborative music making on multi-touch surfaces. Some of them have been highly successful, but there is currently no systematic way of designing them to maximise collaboration for a particular user group. We are particularly interested in systems that will engage both novices and experts. We designed a simple application in an initial attempt to clearly analyse some of the issues. Our application allows groups of users to express themselves in collaborative music making using pre-composed materials. User studies were video recorded and analysed using two techniques derived from Grounded Theory and Content Analysis. A questionnaire was also conducted and evaluated. Findings suggest that the application affords engaging interaction. Enhancements for collaborative music making on multi-touch surfaces are discussed. Finally, future work on the prototype is proposed to maximise engagement.
Gestures in the wild: studying multi-touch gesture sequences on interactive tabletop exhibits
In this paper we describe our findings from a field study conducted at the Vancouver Aquarium to investigate how visitors interact with a large interactive table exhibit using multi-touch gestures. Our findings show that the choice and use of multi-touch gestures are influenced not only by general preferences for certain gestures but also by the interaction context and social context in which they occur. We found that gestures are not executed in isolation but linked into sequences, where previous gestures influence the formation of subsequent gestures. Furthermore, gestures were used beyond the manipulation of media items to support social encounters around the tabletop exhibit. Our findings indicate the importance of versatile many-to-one mappings between gestures and their actions which, unlike one-to-one mappings, can support fluid transitions between gestures as part of sequences and facilitate social information exploration.
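The many-to-one mapping argued for here can be pictured as a small dispatch table in which several gestures resolve to the same action, so users can switch gestures mid-sequence without changing the effect; the sketch below is illustrative and uses made-up gesture and action names, not the study's software.

    # Illustrative many-to-one gesture-to-action mapping: several gestures
    # trigger the same action, supporting fluid transitions within a sequence.
    GESTURE_TO_ACTION = {
        "one_finger_drag":   "move_item",
        "two_finger_drag":   "move_item",    # same action, different gesture
        "flick":             "move_item",
        "pinch":             "resize_item",
        "two_finger_spread": "resize_item",
        "two_finger_rotate": "rotate_item",
    }

    def dispatch(gesture, item):
        action = GESTURE_TO_ACTION.get(gesture)
        if action is not None:
            print(f"{action} applied to {item}")

    dispatch("flick", "photo_42")              # -> move_item applied to photo_42
    dispatch("two_finger_spread", "photo_42")  # -> resize_item applied to photo_42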
Poking fun at the surface: exploring touch-point overloading on the multi-touch tabletop with child users
In this paper a collaborative game for children is used to explore touch-point overloading on a multi-touch tabletop. Understanding the occurrence of new interactional limitations, such as touch-point overloading in a multi-touch interface, is highly relevant for interaction designers working with emerging technologies. The game was designed for the Microsoft Surface 1.0, and during gameplay the number of simultaneous touch-points required gradually increases beyond the physical capacity of the users. Studies were carried out involving a total of 42 children (from two different age groups) playing in groups of five to seven, and all interactions were logged. From quantitative analysis of the logged interactions and from observations made during gameplay, we explore the impact of overloading and identify other salient findings. This paper also highlights the need for empirical evaluation of the physical and cognitive limitations of interaction with emerging technologies.
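One way to analyse such interaction logs for overloading is to reconstruct how many touch-points were active at each moment and flag instants where demand exceeds some capacity limit; the sketch below assumes a simple (time, touch_id, down/up) log format and an illustrative limit, neither of which is taken from the paper.

    # Sketch: count simultaneous touch-points from a (time, id, event) log and
    # flag moments where demand exceeds an assumed tracking/physical capacity.
    CAPACITY_LIMIT = 10   # illustrative threshold, not a value from the study

    log = [
        (0.0, 1, "down"), (0.1, 2, "down"), (0.5, 1, "up"),
        (0.6, 3, "down"), (0.9, 2, "up"),
    ]

    def overload_moments(events, limit):
        active, peaks = set(), []
        for t, touch_id, kind in sorted(events):
            if kind == "down":
                active.add(touch_id)
            else:
                active.discard(touch_id)
            if len(active) > limit:
                peaks.append((t, len(active)))
        return peaks

    print(overload_moments(log, CAPACITY_LIMIT))   # [] for this tiny sample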
Multi-Touch Attribution Based Budget Allocation in Online Advertising
Budget allocation in online advertising deals with distributing campaign (insertion order) level budgets to different sub-campaigns, which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser- or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is closely related to the action-attribution problem, i.e. identifying the set of ads shown to a user, and hence the sub-campaigns that served them, to which an action should be attributed. For this purpose, we employ both last-touch (the last ad gets all the credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on large advertiser performance datasets. We conclude the paper with an empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.
Comment: This paper has been published in ADKDD 2014, August 24, New York City, New York, U.S.
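The contrast between the two attribution schemes compared in the paper can be shown with a toy example; the equal-split rule below is a generic stand-in for a multi-touch model, not the specific algorithm deployed at Turn, and the path data are made up.

    # Toy attribution example: last-touch vs. an equal-credit multi-touch rule.
    from collections import defaultdict

    # Each conversion path lists the sub-campaigns whose ads the user saw, in order.
    conversion_paths = [
        ["display_A", "display_B", "search_C"],
        ["display_B", "search_C"],
        ["display_A"],
    ]

    def last_touch(paths):
        credit = defaultdict(float)
        for path in paths:
            credit[path[-1]] += 1.0          # the final ad gets all the credit
        return dict(credit)

    def multi_touch_equal(paths):
        credit = defaultdict(float)
        for path in paths:
            share = 1.0 / len(path)          # credit shared by all ads on the path
            for sub_campaign in path:
                credit[sub_campaign] += share
        return dict(credit)

    print(last_touch(conversion_paths))       # {'search_C': 2.0, 'display_A': 1.0}
    print(multi_touch_equal(conversion_paths))
    # display_A ~ 1.33, display_B ~ 0.83, search_C ~ 0.83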
Assessing the effectiveness of multi-touch interfaces for DP operation
Navigating a vessel using dynamic positioning (DP) systems close to offshore installations is a challenge. The operator's only means of manipulating the system is through its interface, which can be characterized by the physical appearance of the equipment and the visualization of the system. Are there forms of interaction between the operator and the system that can reduce strain and cognitive load during DP operations? Can parts of the system (e.g. displays) be physically brought closer to the user to enhance the feeling of control when operating the system? Can these changes make DP operations more efficient and safe? These questions inspired this research project, which investigates the use of multi-touch and hand gestures known from consumer products to directly manipulate the visualization of a vessel in the 3D scene of a DP system. Usability methodologies and evaluation techniques that are widely used in consumer market research were applied to investigate how these interaction techniques, which are new to the maritime domain, could make interaction with the DP system more efficient and transparent during both standard and safety-critical operations. After investigating which gestures felt natural to use by running user tests with a paper prototype, the gestures were implemented in a Rolls-Royce DP system and tested in a static environment. The results showed that the test participants performed significantly faster using direct gesture manipulation than using traditional button/menu interaction. To corroborate these results, further tests were carried out to investigate how gestures are performed in a moving environment, using a motion platform to simulate rough sea conditions. The key results and lessons learned from the four user experiments, together with a discussion of the choice of evaluation techniques, are presented in this paper.
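The kind of direct manipulation examined in this study can be sketched as a mapping from two-finger gestures to the camera of a 3D vessel view; the class names, scaling rule and example values below are assumptions for illustration, not the Rolls-Royce DP system's code.

    # Illustrative mapping of pinch/spread and two-finger rotation onto a 3D
    # scene camera; all names and factors are assumed, not taken from the DP system.
    import math

    class SceneCamera:
        def __init__(self):
            self.zoom = 1.0       # scene scale factor
            self.heading = 0.0    # rotation about the vertical axis, in degrees

    def apply_two_finger_gesture(cam, p1_old, p2_old, p1_new, p2_new):
        """Update zoom and heading from the old/new positions of two touches."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        def angle(a, b):
            return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

        cam.zoom *= dist(p1_new, p2_new) / dist(p1_old, p2_old)      # pinch / spread
        cam.heading += angle(p1_new, p2_new) - angle(p1_old, p2_old) # two-finger rotate
        return cam

    cam = apply_two_finger_gesture(SceneCamera(), (0, 0), (10, 0), (0, 0), (0, 20))
    print(round(cam.zoom, 2), round(cam.heading, 1))   # 2.0 90.0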
- …
