
    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse designed to provide fine-grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation offered by touch surfaces is often insufficient to support full information visualization functionality. We present a unified design, combining many Rizzos that not only provide mouse capabilities but also act as zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction both with the 3D windowing environment and with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
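
    The core mouse-emulation idea can be sketched as follows. This is a hypothetical illustration, not the Rizzo/VisLink/prefuse API: a virtual-mouse widget converts drags on its body into pointer motion (scaled by its lens zoom for extra precision) and taps on its button regions into synthesized press/release events forwarded to an unmodified mouse-based visualization.

        # Hypothetical sketch of touch-to-mouse emulation for a virtual-mouse widget.
        # Names and event structure are illustrative only; they are not taken from
        # Rizzo, VisLink, or prefuse.
        from dataclasses import dataclass

        @dataclass
        class MouseEvent:
            kind: str   # "move", "press", or "release"
            x: float
            y: float
            button: str = "left"

        class VirtualMouse:
            def __init__(self, dispatch, cursor=(0.0, 0.0), gain=1.0):
                self.dispatch = dispatch   # callback into the mouse-based visualization
                self.cx, self.cy = cursor  # emulated cursor position
                self.gain = gain           # lens zoom factor for precise positioning

            def on_body_drag(self, dx, dy):
                # Dragging the widget body moves the emulated cursor; dividing by the
                # lens gain gives finer-than-finger precision when zoomed in.
                self.cx += dx / self.gain
                self.cy += dy / self.gain
                self.dispatch(MouseEvent("move", self.cx, self.cy))

            def on_button_tap(self, button="left"):
                # A tap on a button region becomes a press/release pair at the cursor.
                self.dispatch(MouseEvent("press", self.cx, self.cy, button))
                self.dispatch(MouseEvent("release", self.cx, self.cy, button))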

    A Glove for Tapping and Discrete 1D/2D Input

    This paper describes a glove with which users enter input by tapping fingertips with the thumb or by rubbing the thumb over the palmar surfaces of the middle and index fingers. The glove has been informally tested as the controller for two semi-autonomous robots in a 3D simulation environment. A preliminary evaluation of the glove's performance is presented.
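
    As a purely illustrative sketch (the paper does not specify its command assignments), tap and rub events from such a glove could be mapped to discrete commands and continuous 1D input for the simulated robots roughly like this:

        # Illustrative only: a possible mapping from glove events to robot commands.
        # The finger-to-command assignments are invented, not taken from the paper.
        TAP_COMMANDS = {
            ("index", "tip"):   "select_robot_1",
            ("middle", "tip"):  "select_robot_2",
            ("index", "base"):  "stop",
            ("middle", "base"): "resume",
        }

        def handle_event(event):
            """event is ("tap", finger, segment) or ("rub", finger, delta)."""
            if event[0] == "tap":
                _, finger, segment = event
                return TAP_COMMANDS.get((finger, segment), "noop")
            if event[0] == "rub":
                # Rubbing the thumb along a finger yields continuous 1D input,
                # here scaled into a velocity command for the selected robot.
                _, finger, delta = event
                axis = "forward" if finger == "index" else "turn"
                return (axis, delta * 0.1)
            return "noop"

        print(handle_event(("tap", "index", "tip")))   # select_robot_1
        print(handle_event(("rub", "middle", 5)))      # ('turn', 0.5)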

    EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking

    Currently, smart glasses support touch sensing only on a front-mounted touchpad; touches on the top, front, and bottom sides of a glasses-mounted touchpad have not yet been explored. We built a customized touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces, using capacitive touch sensing (MPR121 chips) with an electrode size of roughly 4.5 mm square, typical of modern touchscreens. The resulting hardware system consists of 48 separate touch sensors. We investigated the interaction technique in both sitting and walking conditions, using single-finger sequential tapping and pair-finger simultaneous tapping. Each side was divided into three equal target areas, yielding 36 combinations in total. Our quantitative results showed that, while walking, pair-finger simultaneous taps were faster and less error-prone than single-finger sequential taps, whereas, while sitting, single-finger sequential taps were slower but less error-prone than pair-finger simultaneous taps. Single-finger sequential taps were also slower but much less error-prone while sitting than while walking. Interestingly, pair-finger simultaneous taps performed similarly in both error rate and completion time across sitting and walking. The mental, physical, performance, and effort dimensions of workload showed no effect of tapping type or body pose. For temporal demand, mean time-pressure workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, but body pose did not affect temporal demand for either tapping type. For frustration, mean workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, and walking produced higher mean frustration than sitting. The subjective measure of overall workload showed no significant difference for either independent variable: body pose (sitting vs. walking) or tapping type (single-finger sequential vs. pair-finger simultaneous).
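
    The reported 36 combinations are consistent with unordered pairs of the nine touch zones (3 sides x 3 target areas, C(9,2) = 36); the short sketch below enumerates the target space under that assumption (the area names are assumed, not taken from the paper).

        # Sketch of the EdgeGlass target space, assuming the 36 combinations are the
        # unordered pairs of the 9 zones (3 sides x 3 target areas per side).
        from itertools import combinations

        SIDES = ("top", "front", "bottom")
        AREAS = ("near", "middle", "far")   # assumed labels for the three areas

        zones = [(side, area) for side in SIDES for area in AREAS]   # 9 zones
        pairs = list(combinations(zones, 2))                         # C(9, 2) = 36

        print(len(zones), len(pairs))   # 9 36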

    A Single-Handed Partial Zooming Technique for Touch-Screen Mobile Devices

    Despite its ubiquitous use, the pinch zooming technique is not effective for one-handed interaction. We propose ContextZoom, a novel technique for single-handed zooming on touch-screen mobile devices. It allows users to specify any place on the device screen as the zooming center, ensuring that the intended zooming target remains visible on the screen after zooming. ContextZoom supports zooming in and out on a portion of the viewport, and provides a quick switch between the partial and whole viewports. We conducted an empirical evaluation of ContextZoom through a controlled lab experiment comparing it with the single-handed zooming technique in Google Maps. Results show that ContextZoom outperforms the latter in task completion time and in the number of discrete actions taken. Participants also reported higher perceived effectiveness and overall satisfaction with ContextZoom than with the Google Maps technique, as well as a similar level of perceived ease of use.
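
    The key property of ContextZoom, keeping a user-specified zoom center fixed on screen, reduces to a standard anchored-scale transform. A minimal sketch of that transform (not the authors' implementation) is shown below.

        # Minimal sketch of zooming about an arbitrary anchor point so that the
        # content under the chosen zoom center stays put on screen. This is generic
        # anchored scaling, not the ContextZoom implementation itself.

        def zoom_about_point(offset, scale, anchor_screen, new_scale):
            """offset: content-space point shown at the screen origin.
            scale: current zoom factor (screen = (content - offset) * scale).
            anchor_screen: screen position chosen as the zoom center.
            Returns the new offset that keeps the anchored content point fixed."""
            ax, ay = anchor_screen
            ox, oy = offset
            # Content point currently under the anchor.
            cx = ox + ax / scale
            cy = oy + ay / scale
            # New offset that maps the same content point back to (ax, ay).
            return (cx - ax / new_scale, cy - ay / new_scale)

        # Example: zooming in 2x around screen point (100, 50).
        print(zoom_about_point((0.0, 0.0), 1.0, (100, 50), 2.0))   # (50.0, 25.0)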

    Evaluation of Physical Finger Input Properties for Precise Target Selection

    The multi-touch tabletop display provides a collaborative workspace for multiple users around a table. Users can perform direct and natural multi-touch interaction to select target elements with their bare fingers. However, the physical size of the fingertip varies from one person to another, which introduces the fat finger problem and results in imprecise selection of small target elements during direct multi-touch input. In this respect, we evaluate the physical finger input properties, i.e. contact area and shape, in the context of imprecise selection.
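
    As a hedged illustration of the properties under evaluation (not the paper's method), the contact patch reported by a touch sensor can be approximated as an ellipse, from which a contact area and a simple shape descriptor follow directly:

        # Sketch: derive contact area and a shape descriptor from the contact
        # ellipse that many touch stacks report per finger contact. Not the
        # evaluation procedure used in the paper.
        import math

        def contact_descriptors(major_mm, minor_mm):
            """major_mm, minor_mm: contact ellipse axis lengths in millimetres."""
            area = math.pi * (major_mm / 2.0) * (minor_mm / 2.0)   # ellipse area
            elongation = major_mm / minor_mm                       # 1.0 = circular
            return area, elongation

        # A fingertip contact of roughly 10 mm x 8 mm (illustrative values):
        print(contact_descriptors(10.0, 8.0))   # (62.83..., 1.25)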

    Design and Evaluation of 3D Positioning Techniques for Multi-touch Displays

    Multi-touch displays represent a promising technology for the display and manipulation of 3D data. To fully exploit their capabilities, appropriate interaction techniques must be designed. In this paper, we explore the design of free 3D positioning techniques for multi-touch displays that exploit the additional degrees of freedom provided by this technology. We present a first interaction technique that extends the standard four-viewport technique found in commercial CAD applications, and a second technique, the Z-technique, designed to allow free 3D positioning with a single view of the scene. The two techniques were evaluated in a controlled experiment. Results show no statistical difference in positioning time but a clear preference for the Z-technique.
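
    The single-view technique is commonly described as direct x/y control with the first finger plus depth control from the relative motion of a second, indirect finger. The sketch below follows that description; gains and axis choices are assumptions rather than the authors' code.

        # Hedged sketch of two-touch 3D positioning in the spirit of the Z-technique:
        # the first finger drags the object in the view plane, while relative motion
        # of a second finger pushes it along the view (depth) axis.

        class TwoTouchPositioner:
            def __init__(self, position, depth_gain=0.01):
                self.x, self.y, self.z = position
                self.depth_gain = depth_gain   # assumed gain, for illustration only

            def primary_drag(self, dx_screen, dy_screen, pixels_to_world):
                # First (direct) finger: translate in the view plane.
                self.x += dx_screen * pixels_to_world
                self.y += dy_screen * pixels_to_world

            def secondary_drag(self, dy_screen):
                # Second (indirect) finger: relative vertical motion maps to depth.
                self.z += dy_screen * self.depth_gain

        p = TwoTouchPositioner((0.0, 0.0, 5.0))
        p.primary_drag(40, -10, pixels_to_world=0.005)
        p.secondary_drag(120)
        print(p.x, p.y, p.z)   # approximately 0.2 -0.05 6.2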

    Digital fabrication of custom interactive objects with rich materials

    As ubiquitous computing becomes reality, people interact with an increasing number of computer interfaces embedded in physical objects. Today, interaction with those objects largely relies on integrated touchscreens. In contrast, humans are capable of rich interaction with physical objects and their materials through sensory feedback and dexterous manipulation skills. However, developing physical user interfaces that offer versatile interaction and leverage these capabilities is challenging. It requires novel technologies for prototyping interfaces with custom interactivity that support the rich materials of everyday objects. Moreover, such technologies need to be accessible to empower a wide audience of researchers, makers, and users. This thesis investigates digital fabrication as a key technology to address these challenges. It contributes four novel design and fabrication approaches for interactive objects with rich materials. The contributions enable easy, accessible, and versatile design and fabrication of interactive objects with custom stretchability; input and output on complex geometries and diverse materials; tactile output on 3D object geometries; and the capability to change their shape and material properties. Together, the contributions of this thesis advance the fields of digital fabrication, rapid prototyping, and ubiquitous computing towards the bigger goal of exploring interactive objects with rich materials as a new generation of physical interfaces.

    Press-n-Paste : Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Copy-and-paste operations are among the most popular features on computing devices such as desktop computers, smartphones and tablets. However, copy-and-paste is not sufficiently addressed on Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch- and pressure-sensitive surface, both multi-step solutions can capture the target text through indirect manipulation and subsequently enable copy-and-paste operations. Based on these solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with accuracy rates of 93.21% and 98.15% respectively, which is comparable to commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctively miniaturized interaction area of 12.65 mm × 14.48 mm. PnP not only demonstrates the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets and for input design on smart wearables.
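
    One plausible reading of Granularity Scrolling, sketched below with invented thresholds and units (the paper's actual mapping and parameters are not reproduced here), is that firmer presses on the thumb-size button select a coarser caret-movement unit, and scrolling then advances the caret by that unit:

        # Hypothetical sketch of pressure-to-granularity caret navigation.
        # Thresholds, units, and unit lengths are invented for illustration;
        # they are not the Press-n-Paste parameters.
        GRANULARITIES = ["character", "word", "sentence", "paragraph"]

        def granularity_from_pressure(pressure, max_pressure=1.0):
            """Map normalized pressure in [0, max_pressure] to a caret-movement unit."""
            level = min(int(pressure / max_pressure * len(GRANULARITIES)),
                        len(GRANULARITIES) - 1)
            return GRANULARITIES[level]

        def move_caret(caret_index, scroll_steps, unit_lengths, unit):
            """Advance the caret by scroll_steps units of the chosen granularity,
            using average unit lengths (in characters) for this simplified model."""
            return caret_index + scroll_steps * unit_lengths[unit]

        unit_lengths = {"character": 1, "word": 6, "sentence": 90, "paragraph": 400}
        unit = granularity_from_pressure(0.55)                  # -> "sentence"
        print(unit, move_caret(0, 2, unit_lengths, unit))       # sentence 180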

    Modeling Cumulative Arm Fatigue on Large Multi-touch Displays

    Large multi-touch displays have long been studied in the lab and are beginning to see widespread deployment in public spaces. Although they are technologically feasible, research has found that large multi-touch displays are not always used, and fatigue is commonly identified as a significant barrier. Fatigue, often called the 'gorilla arm' effect, prevents people from using large displays for extended periods of time. One solution to this problem is to design large-scale interfaces that minimize actual fatigue in practice. A first step towards building such an interface is to quantify fatigue, and more importantly, to quantify it easily. While methods have been developed to estimate arm fatigue in mid-air interaction, there remains little understanding of fatigue on touch-based interfaces. To address this gap, we propose that existing models for mid-air interaction may be effective for measuring fatigue on large multi-touch displays. We evaluated the accuracy of Jang et al.'s mid-air Cumulative Fatigue model for touch interaction tasks on a large display. We found that their model underestimates subjective fatigue for multi-touch interaction, but can provide accurate estimates of subjective fatigue after fine-tuning of model parameters. We discuss the implications of this finding and the need to further develop tools for evaluating fatigue on large multi-touch displays.
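
    For context, cumulative-fatigue estimators of this kind are typically built on a three-compartment muscle fatigue formulation, in which muscle capacity is split into rested, active, and fatigued fractions governed by fatigue and recovery rates. A minimal Euler-integration sketch of that generic formulation is below; the parameter values are illustrative and are not those of Jang et al.'s fitted model.

        # Minimal sketch of a generic three-compartment muscle fatigue model
        # (rested MR, active MA, fatigued MF, as fractions of maximum capacity).
        # Parameters are illustrative, not Jang et al.'s fitted values.

        def simulate(target_load, dt=0.1, steps=600, F=0.01, R=0.002, LD=10.0, LR=10.0):
            """target_load: required active fraction (0..1), e.g. holding the arm up.
            F: fatigue rate, R: recovery rate, LD/LR: activation/relaxation drives."""
            MR, MA, MF = 1.0, 0.0, 0.0
            for _ in range(steps):
                # Drive C moves capacity between rested and active to track the load,
                # limited by how much rested capacity is available.
                if MA < target_load:
                    C = min(LD * (target_load - MA), LD * MR)
                else:
                    C = LR * (target_load - MA)   # relax surplus active capacity
                dMR = -C + R * MF
                dMA = C - F * MA
                dMF = F * MA - R * MF
                MR, MA, MF = MR + dMR * dt, MA + dMA * dt, MF + dMF * dt
            return MR, MA, MF

        # After 60 s at 30% of maximum effort, MF approximates accumulated fatigue.
        print(simulate(0.3))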