
    The inspection of very large images by eye-gaze control

    The increasing availability and accuracy of eye gaze detection equipment has encouraged its use for both investigation and control. In this paper we present novel methods for navigating and inspecting extremely large images solely or primarily using eye gaze control. We investigate the relative advantages and comparative properties of four related methods: Stare-to-Zoom (STZ), in which control of the image position and resolution level is determined solely by the user's gaze position on the screen; Head-to-Zoom (HTZ) and Dual-to-Zoom (DTZ), in which gaze control is augmented by head or mouse actions; and Mouse-to-Zoom (MTZ), using conventional mouse input as an experimental control. The need to inspect large images occurs in many disciplines, such as mapping, medicine, astronomy and surveillance. Here we consider the inspection of very large aerial images, of which Google Earth is both an example and the one employed in our study. We perform comparative search and navigation tasks with each of the methods described, and record user opinions using the Swedish User-Viewer Presence Questionnaire. We conclude that, while gaze methods are effective for image navigation, they, as yet, lag behind more conventional methods, and interaction designers may well consider combining these techniques for greatest effect.
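    The abstract does not give the STZ control law; as a rough illustration of the idea it describes (gaze position alone driving both pan and zoom), a per-frame controller might look like the sketch below, where the function name, view fields, and rate constants are all assumptions rather than the authors' implementation.

```python
import math

def stz_update(gaze_xy, view, dt, dwell_radius=60.0, zoom_rate=0.8, pan_rate=1.5):
    # One frame of a hypothetical Stare-to-Zoom controller: a steady gaze
    # near the view centre zooms in; gaze toward the periphery pans the
    # view in that direction and eases the zoom back out.
    cx, cy = view["width"] / 2.0, view["height"] / 2.0
    gx, gy = gaze_xy
    dist = math.hypot(gx - cx, gy - cy)
    if dist < dwell_radius:
        view["scale"] *= math.exp(zoom_rate * dt)                 # zoom in toward gaze
    else:
        view["x"] += (gx - cx) * pan_rate * dt / view["scale"]    # pan toward gaze
        view["y"] += (gy - cy) * pan_rate * dt / view["scale"]
        view["scale"] *= math.exp(-0.5 * zoom_rate * dt)          # drift back out
    return view
```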

    Personalised Focus-Metaphor Interfaces: An Eye Tracking Study on User Confusion

    Personalised web interfaces are expected to improve user interaction with web content. But since the delivery of personalised web content is currently not reliable, a key question is how much users may be confused and slowed down when personalised delivery goes wrong. The aim of the study reported in this paper was to investigate a worst-case scenario of failed personalised content presentation: a presentation in which content units were delivered dynamically but selected at random. We employed eye-tracking to monitor the differences in users' attention and navigation when interacting with this "dysfunctional" dynamic interface and with a static version. We found that subjects who interacted with the dysfunctional version took 10% longer to read their material than those with static content, and displayed a different strategy in scanning the interface. The relatively small difference in navigation time for first-time viewers of dynamically presented content, together with the eye-tracking patterns, suggests that users are not significantly confused and slowed down by dynamic presentation of content when using a Focus-Metaphor interface.

    Bi-manual pointing with the CubTile in a 2D focus+context space

    8 pages, long paper. National audience. The CubTile is a cubic-shaped device with 5 multi-touch tactile faces, initially designed for 3D interaction. In this article we explore its use for navigation and pointing in a 2D space. To this end, we propose a bi-manual interaction technique, based on the manipulation of two faces of the CubTile, in a focus+context interface: the non-dominant hand pans the focus (the detailed view within the information space) with one face of the CubTile, while the dominant hand points inside the focus area using another face. The results of a first exploratory user experiment indicate that an asymmetric tuning, with low amplification of the non-dominant hand's gestures and high amplification of the dominant hand's gestures, provides the best performance in a pointing task. These results are a first step towards optimal tuning of the tactile faces for asymmetric bi-manual interaction. Keywords: tactile interaction, multi-surface interaction, bi-manual interaction, 2D/3D interaction, focus+context visualisation. [Figure 1: bi-manual interaction with the CubTile. (a) 2D navigation. (b) 3D manipulation.]
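    The abstract reports only that low non-dominant and high dominant amplification worked best, not the gains themselves; a minimal sketch of such an asymmetric mapping, with placeholder gain values and field names, could be:

```python
# Assumed gain values; the paper's calibrated constants are not given in the abstract.
GAIN_NON_DOMINANT = 0.5   # low amplification: coarse panning of the focus
GAIN_DOMINANT = 2.0       # high amplification: pointing inside the focus

def apply_cubtile_motion(face, dx, dy, view):
    # Map a finger displacement (dx, dy) on one CubTile face to either
    # focus panning or cursor movement, scaled by a hand-specific gain.
    if face == "non_dominant":
        view["focus_x"] += GAIN_NON_DOMINANT * dx
        view["focus_y"] += GAIN_NON_DOMINANT * dy
    else:
        view["cursor_x"] += GAIN_DOMINANT * dx
        view["cursor_y"] += GAIN_DOMINANT * dy
    return view
```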

    An Investigation of Target Acquisition with Visually Expanding Targets in Constant Motor-space

    Target acquisition is a core part of modern computer use. Fitts' law has frequently been proven to predict performance of target acquisition tasks, even with targets that change size as the cursor approaches. Research into expanding targets has focussed on targets that expand in both visual- and motor-space. We investigate whether a visual expansion with no change in motor-space offers any performance benefit. We investigate constant motor-space visual expansion in both abstract pointing tasks (based on the ISO 9241-9 standard) and in a realistic deployment of the technique within fisheye menus. Our fisheye menu system eliminates the 'hunting effect' of target acquisition observed in Bederson's initial proposal of fisheye menus, and in an evaluation we show that it allows faster selection times and is subjectively preferred to Bederson's menus. We also show that visually expanding targets can improve selection times in target acquisition tasks, particularly with small targets.
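    For context, the Shannon formulation of Fitts' law commonly used with ISO 9241-9 style tasks predicts movement time MT from target distance D and width W; a purely visual expansion leaves D and W unchanged in motor-space, so the model predicts no change, and any measured benefit must be perceptual:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```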

    Context-Preserving Visual Analytics of Multi-Scale Spatial Aggregation.

    Spatial datasets (e.g., location-based social media, crime incident reports, and demographic data) often exhibit varied distribution patterns at multiple spatial scales. Examining these patterns across different scales enhances understanding from global to local perspectives and offers new insights into the nature of various spatial phenomena. Conventional navigation techniques in such multi-scale data-rich spaces are often inefficient, require users to choose between an overview or detailed information, and do not support identifying spatial patterns at varying scales. In this work, we present a context-preserving visual analytics technique that aggregates spatial datasets into hierarchical clusters and visualizes the multi-scale aggregates in a single visual space. We design a boundary distortion algorithm to minimize the visual clutter caused by overlapping aggregates, and explore visual encoding strategies including color, transparency, shading, and shapes in order to illustrate the hierarchical and statistical patterns of the multi-scale aggregates. We also propose a transparency-based technique that maintains a smooth visual transition as users navigate across adjacent scales. To further support effective semantic exploration in the multi-scale space, we design a set of text-based encoding and layout methods that draw textual labels along the boundary of, or filled within, the aggregates. The text itself not only summarizes the semantics at each scale, but also indicates the spatial coverage of the aggregates and their hierarchical relationships. We demonstrate the effectiveness of the proposed approaches through real-world application examples and user studies.
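    The abstract does not detail the transparency-based transition; one plausible reading is a linear cross-fade of aggregate opacity between adjacent hierarchy levels as the user zooms, sketched below under that assumption (function and parameter names are ours):

```python
def level_alphas(zoom, level_zooms):
    # level_zooms: ascending nominal zoom factor at which each hierarchy
    # level is shown at full opacity. Between two adjacent levels the
    # coarser one fades out linearly while the finer one fades in.
    alphas = [0.0] * len(level_zooms)
    if zoom <= level_zooms[0]:
        alphas[0] = 1.0
    elif zoom >= level_zooms[-1]:
        alphas[-1] = 1.0
    else:
        for i in range(len(level_zooms) - 1):
            lo, hi = level_zooms[i], level_zooms[i + 1]
            if lo <= zoom <= hi:
                t = (zoom - lo) / (hi - lo)   # 0 at level i, 1 at level i+1
                alphas[i] = 1.0 - t
                alphas[i + 1] = t
                break
    return alphas
```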

    Handling ambiguous user input on touchscreen kiosks

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 87-94). Touchscreen kiosks are becoming an increasingly popular means of providing a wide array of services to the public. However, the principal drawback of these systems lies in the elevated error rates caused by finger imprecision and screen miscalibration. These issues become worrisome considering the greater responsibilities and reliance placed upon touchscreens. This thesis investigates two novel techniques that attempt to alleviate these interaction problems. The first technique, predictive pointing, incorporates information about past interactions and an area cursor (which maps the user's touch to a circular area rather than a single point) to provide a better estimate of the intended selection. The second technique, gestural drawing, allows users to draw particular shapes onscreen to execute actions, an alternative means of input that is largely unaffected by miscalibration. Results from a user study indicated that both techniques provided significant advantages over traditional target selection, not only lowering error rates but also improving task completion times. by Christopher K. Leung. M.Eng.
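    The thesis text here does not give the scoring rule; a minimal sketch of predictive pointing as described (an area cursor plus a prior from past interactions) might look like the following, with the radius and weighting purely illustrative:

```python
import math

def predict_target(touch_xy, targets, history, radius=24.0, prior_weight=0.3):
    # Map the touch to a circular area cursor of the given radius, then
    # score every target inside it by proximity to the touch point,
    # blended with how often that target was selected in the past.
    total = sum(history.values()) or 1
    best, best_score = None, -1.0
    for t in targets:
        d = math.hypot(t["x"] - touch_xy[0], t["y"] - touch_xy[1])
        if d > radius:
            continue
        proximity = 1.0 - d / radius               # 1 at the centre, 0 at the rim
        prior = history.get(t["id"], 0) / total    # past-selection frequency
        score = (1.0 - prior_weight) * proximity + prior_weight * prior
        if score > best_score:
            best, best_score = t, score
    return best  # None if no target falls under the area cursor
```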

    Intelligent Selection Techniques For Virtual Environments

    Selection in 3D games and simulations is a well-studied problem. Many techniques have been created to address many of the typical scenarios a user could experience. For any single scenario with consistent conditions, there is likely a technique which is well suited; if there isn't, then there is an opportunity for one to be created to best suit the expected conditions of that new scenario. It is critical that the user be given an appropriate technique to interact with their environment. Without it, the entire experience is at risk of becoming burdensome and not enjoyable. With all of the different possible scenarios, it can become problematic when two or more are part of the same program. If they are placed closely together, or even intertwined, the developer is often forced to pick a single technique that works adequately in both but is optimal for neither, or at best for only one of them. In that case, the user is left to perform selections with a technique that is lacking in one way or another, which can increase errors and frustration.

    In our research, we have outlined different selection scenarios, all of which were classified by their level of object density (number of objects in the scene) and object velocity. We then performed an initial study on how these factors impact the performance of various selection techniques, including a new selection technique that we developed just for this test, called Expand. Our results showed, among other things, that a standard Raycast technique works well in slow-moving and sparse environments, while our new Expand technique works well in denser environments.

    With the results from our first study, we sought to develop something that would bridge the gap in performance between the selection techniques tested. Our idea was a framework that could harness several different selection techniques and determine which was optimal at any given time. Each selection technique would report how effective it was given the current scenario conditions, and the framework was responsible for activating the appropriate selection technique when the user made a selection attempt. With this framework in hand, we performed two additional user studies to determine how effective it could be in actual use, and to identify its strengths and weaknesses. Each study compared several selection techniques individually against the framework which utilized them collectively, picking the most suitable; the same scenarios from our first study were reused. From these studies, we gained a deeper understanding of the many challenges associated with automatic selection technique determination. The results showed that transitioning between techniques was potentially viable, but rife with design challenges that made its optimization quite difficult.

    In an effort to sidestep some of the issues surrounding the switching of discrete techniques, we attacked the problem from the other direction and made a single technique act like two, adjusting dynamically to conditions. We performed a user study to analyze the performance of such a technique, with promising results. While the quantitative differences were small, user feedback indicated that users preferred this technique over the others, which were static in nature.

    Finally, we sought to gain a deeper understanding of existing selection techniques that were dynamic in nature, to study how they were designed, and to see how they could be improved. We scrutinized the attributes of each technique that were already adjusted dynamically, or that could be, and devised new ways in which each technique could be improved. Within this analysis, we also considered how each technique could best be integrated into the Auto-Select framework we proposed earlier. This overall analysis of the latest selection techniques left us with an array of new variants that warrant being created and tested against their existing versions.

    Our overall research goal was to perform an analysis of selection techniques that intelligently adapt to their environment. We believe that we achieved this through several iterative development cycles, including user studies, ultimately leading to innovation in the field of selection. We conclude our research with yet more questions left to be answered; we intend to pursue further research regarding some of these questions, as time permits.
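    The abstract describes the framework only at the level of techniques self-reporting their fitness for the current scene conditions; a minimal sketch of that contract, with invented class names and toy scoring functions, might be:

```python
class SelectionTechnique:
    # Interface each technique implements; names and scoring here are
    # our assumptions, not the thesis's actual API.
    def effectiveness(self, density: float, velocity: float) -> float:
        raise NotImplementedError
    def select(self, pointer_ray):
        raise NotImplementedError

class Raycast(SelectionTechnique):
    def effectiveness(self, density, velocity):
        return 1.0 - 0.5 * density - 0.3 * velocity   # favours sparse, slow scenes
    def select(self, pointer_ray):
        pass  # intersect the ray with scene geometry

class Expand(SelectionTechnique):
    def effectiveness(self, density, velocity):
        return 0.4 + 0.5 * density                    # favours dense scenes
    def select(self, pointer_ray):
        pass  # coarse pick, then disambiguate via an expanded layout

def auto_select(techniques, density, velocity, pointer_ray):
    # Activate whichever technique reports the highest fitness for the
    # current scene conditions (density and velocity normalised to 0..1).
    best = max(techniques, key=lambda t: t.effectiveness(density, velocity))
    return best.select(pointer_ray)
```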

    The calibration and evaluation of speed-dependent automatic zooming interfaces.

    Speed-Dependent Automatic Zooming (SDAZ) is an exciting new navigation technique that couples the user's rate of motion through an information space with the zoom level: the faster a user scrolls in the document, the 'higher' they fly above the work surface. At present, there are few guidelines for the calibration of SDAZ. Previous work by Igarashi & Hinckley (2000) and Cockburn & Savage (2003) fails to give values for the predefined constants governing their automatic zooming behaviour. The absence of formal guidelines means that SDAZ implementers are forced to adjust the properties of the automatic zooming by trial and error. This thesis aids calibration by identifying the low-level components of SDAZ. Base calibration settings for these components are then established using a formal evaluation recording participants' comfortable scrolling rates at different magnification levels. To ease our experiments with SDAZ calibration, we implemented a new system that provides a comprehensive graphical user interface for customising SDAZ behaviour. The system was designed to simplify future extensions: for example, new components such as interaction techniques and rendering methods can easily be added with little modification to existing code. This system was used to configure three SDAZ interfaces: a text document browser, a flat map browser and a multi-scale globe browser. The three calibrated SDAZ interfaces were evaluated against three equivalent interfaces with rate-based scrolling and manual zooming. The evaluation showed that SDAZ is 10% faster for acquiring targets in a map than rate-based scrolling with manual zooming, and 4% faster for acquiring targets in a text document. Participants also preferred automatic zooming over manual zooming. No difference was found for the globe browser in acquisition time or preference. However, in all interfaces participants commented that automatic zooming was less physically and mentally draining than manual zooming.
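    The thesis's calibrated constants are not quoted in the abstract; as an illustration of the speed-to-zoom coupling it studies, a simple clamped-linear mapping (all constants are placeholders of the kind the thesis sets out to calibrate) could be:

```python
def sdaz_zoom_out(scroll_speed, v0=200.0, k=0.004, max_zoom_out=8.0):
    # Returns a zoom-out factor >= 1; the document is rendered at
    # 1/factor scale, so faster scrolling flies the user 'higher'.
    # Below the threshold speed v0 (px/s) the view stays at 1:1; above
    # it the factor grows linearly with speed, clamped at max_zoom_out.
    if scroll_speed <= v0:
        return 1.0
    return min(1.0 + k * (scroll_speed - v0), max_zoom_out)
```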

    Helping users learn about social processes while learning from users: developing a positive feedback in social computing

    Advisors: Philippe J. Giabbanelli. Social computing is concerned with the interaction of social behavior and computational systems. From its early days, social computing has had two foci: one was the development of technology and interfaces to support online communities; the other was the use of computational techniques to study society and assess the expected impact of policies. This thesis seeks to develop systems for social computing, both in the context of online communities and the study of societal processes, that allow users to learn while in turn learning from users. Communities are approached through the problem of Massive Open Online Courses (MOOCs), via a complementary use of network analysis and text mining. In particular, we show that an efficient system can be designed such that instructors do not need to categorize the interactions of all students to assess their learning experience. The thesis explores the study of societal processes by showing how text analytics, visual analytics, and fuzzy cognitive maps (FCMs) can collectively help an analyst understand complex scenarios such as obesity. Overall, this work had two key limitations: one was the dataset we used, which was small and did not show all possible interactions; the other was the scalability of our systems. Future work can include the use of non-n-gram features to improve our MOOC system and the use of graph layouts for our visualization system. M.S. (Master of Science)

    Using hover detection to improve text entry on smartphones

    Interaction with smartphones can be challenging for some users, especially with regard to text entry. In these handheld devices the available screen space limits the size of user interface elements. This problem is exacerbated when many UI elements have to be displayed simultaneously, for example in an on-screen keyboard. Users with limitations such as decreased vision or impaired motor control in particular can have a hard time using these devices, effectively excluding them from a part of modern social life. In this work we evaluated whether hover detection can be used to improve the usability of text entry on a smartphone. In several experiments the position of the hovering finger was used to selectively enlarge the UI, to provide visual location feedback on the keyboard, or to offer audio assistance. When testing with elderly users, the visual feedback was positively received; unfortunately, the comparatively high latency of the hover detection (about 250 ms) negated any gains in usability. This result was confirmed in tests with young users, who also did not benefit from the hover detection. Most usability gains for elderly users came from introducing a keyboard layout with larger keys that stayed at that size regardless of hover position. Visually impaired users liked the idea of context-sensitive magnification as well, but hover detection was not really usable for them due to its inherent lack of haptic feedback; acoustic feedback did not produce a better user experience for the same reason. Reliable use of hover detection was simply not possible without adequate vision. This research showed that assistive technologies on smartphones, like selective magnification of the user interface, can help users, but only when technical parameters are sufficient for the input process. Here, hover detection allowed us to implement visual, haptic and audio feedback based on the hover position of the finger as a proof of concept; unfortunately, the high latency only allowed us to show qualitative improvement, not quantitative. Further improvements in hover detection hardware may make this research relevant again.
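    The abstract does not describe the enlargement function; a minimal sketch of hover-driven selective magnification, with an assumed linear falloff and placeholder parameters, might be:

```python
import math

def magnify_keys(keys, hover_xy, radius=80.0, max_scale=1.6):
    # Scale each key by its closeness to the hovering finger: full
    # magnification directly under the finger, falling off linearly to
    # 1.0 at `radius` pixels. Keys are dicts with centre coordinates.
    scaled = []
    for k in keys:
        d = math.hypot(k["cx"] - hover_xy[0], k["cy"] - hover_xy[1])
        t = max(0.0, 1.0 - d / radius)   # 1 under the finger, 0 far away
        scaled.append({**k, "scale": 1.0 + (max_scale - 1.0) * t})
    return scaled
```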