
    Leveraging finger identification to integrate multi-touch command selection and parameter manipulation

    Identifying which fingers are touching a multi-touch surface provides a very large input space. We describe FingerCuts, an interaction technique inspired by desktop keyboard shortcuts that exploits this potential. FingerCuts enables integrated command selection and parameter manipulation, uses feed-forward and feedback to increase discoverability, is backward compatible with current touch input techniques, and is adaptable to different touch device form factors. We implemented three variations of FingerCuts, each tailored to a different device form factor: tabletop, tablet, and smartphone. Qualitative and quantitative studies conducted on the tabletop suggest that, with some practice, FingerCuts is expressive, easy to use, and increases the sense of continuous interaction flow, and that interaction with FingerCuts is as fast as, or faster than, using a graphical user interface. A theoretical analysis of FingerCuts using the Fingerstroke-Level Model (FLM) matches our quantitative study results, justifying our use of FLM to analyse and validate performance for the other device form factors.
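    Since the analysis relies on the Fingerstroke-Level Model, a worked sketch may help: FLM, like the Keystroke-Level Model it descends from, predicts task time by summing per-operator times. The operator set and timing constants below are illustrative assumptions, not values from the paper.

        # Minimal FLM-style sketch: predict task time by summing operator times.
        # Operator constants are illustrative placeholders, not from the paper.
        FLM_OPERATORS = {
            "M": 1.20,  # mental preparation
            "P": 0.43,  # point (move finger to target)
            "T": 0.24,  # tap
            "D": 0.59,  # drag
        }

        def predict_time(sequence):
            """Sum operator times for a task written as e.g. 'M P T D'."""
            return sum(FLM_OPERATORS[op] for op in sequence.split())

        # e.g. select a command with one finger, then drag to adjust a parameter:
        print(predict_time("M P T D"))  # -> 2.46 seconds under these assumptions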

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, asking participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in a second study that used the augmented touches for a screen-lock application.
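    As a hedged sketch of how such degrees of freedom might be derived: many touch stacks report each contact as an ellipse, from which shape and orientation classes can be thresholded. The class boundaries below are invented for illustration and are not the study's definitions.

        # Hypothetical classifier for an augmented touch. Contacts are assumed
        # to arrive as ellipses (major/minor axes in mm, angle in degrees);
        # all thresholds are illustrative, not taken from the study.
        def classify_touch(major_mm, minor_mm, angle_deg):
            # Shape: a round contact (fingertip) vs. an elongated one (flat finger).
            shape = "fingertip" if major_mm / minor_mm < 1.5 else "flat"
            # Orientation: quantize the ellipse angle into three classes.
            angle = angle_deg % 180
            if angle < 60:
                orientation = "left"
            elif angle < 120:
                orientation = "up"
            else:
                orientation = "right"
            return shape, orientation

        print(classify_touch(9.0, 8.0, 95.0))   # -> ('fingertip', 'up')
        print(classify_touch(16.0, 8.0, 30.0))  # -> ('flat', 'left')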

    SpeciFingers

    The inadequate use of finger properties has limited the input space of touch interaction. By leveraging the identity of contacting fingers, finger-specific interaction can expand the input vocabulary. However, accurate finger identification remains challenging: previous work has required either additional sensors or a reduced set of identifiable fingers to achieve acceptable accuracy. We introduce SpeciFingers, a novel approach to identifying fingers using raw capacitive data from touchscreens. We apply a neural network with an encoder-decoder architecture that captures the spatio-temporal features in capacitive image sequences. To assist users in recovering from misidentification, we propose a correction mechanism to replace the existing undo-redo process. We also present a design space of finger-specific interaction with example interaction techniques. In particular, we designed and implemented a use case of optimizing pointing performance on small targets. We evaluated our identification model and error-correction mechanism in our use case.
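    To make the architecture concrete, here is a minimal sketch of an encoder-decoder over capacitive image sequences, assuming PyTorch and invented layer sizes; the paper's actual network is not reproduced here, so this only illustrates the general shape of such a model.

        import torch
        import torch.nn as nn

        # Illustrative encoder-decoder over capacitive frames; sizes invented.
        # Input: a sequence of low-resolution capacitive images per touch.
        class FingerIdNet(nn.Module):
            def __init__(self, num_fingers=10):
                super().__init__()
                # Encoder: compress each capacitive frame into a feature vector.
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32*4*4 = 512
                )
                # Temporal stage: track how the contact evolves across frames.
                self.temporal = nn.GRU(512, 128, batch_first=True)
                # Decoder/head: per-sequence logits over finger classes.
                self.head = nn.Linear(128, num_fingers)

            def forward(self, frames):  # frames: (batch, time, 1, H, W)
                b, t = frames.shape[:2]
                feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
                _, h = self.temporal(feats)
                return self.head(h[-1])

        print(FingerIdNet()(torch.randn(2, 8, 1, 16, 16)).shape)  # (2, 10)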

    On the critical role of the sensorimotor loop on the design of interaction techniques and interactive devices

    People interact with their environment thanks to their perceptual and motor skills: this is how they both use the objects around them and perceive the world. Interactive systems are examples of such objects, so to design them we must understand how people perceive and manipulate them. For example, haptics relates both to the human sense of touch and to what I call the motor ability. I address a number of research questions related to the design and implementation of haptic, gestural, and touch interfaces, and present examples of contributions on these topics. More interestingly, perception, cognition, and action are not separate processes, but an integrated combination of them called the sensorimotor loop. Interactive systems follow the same overall scheme, with differences that make humans and machines complementary. The interaction phenomenon is a set of connections between human sensorimotor loops and the execution loops of interactive systems. It connects inputs with outputs, users with systems, and the physical world with cognition and computing, in what I call the Human-System loop. This model provides a complete overview of the interaction phenomenon and helps to identify the limiting factors of interaction that we can address to improve the design of interaction techniques and interactive devices.

    An Exploration of Multi-touch Interaction Techniques

    Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resulting change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor weakens. We introduce two new contributions to multi-touch computing: a gesture recognition system and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operating in what we call non-spatial parametric spaces. Such spaces do not have an obvious literal spatial representation (e.g., exposure, brightness, contrast, and saturation for image editing). The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transition between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr: a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged into a service for easy integration with existing applications.
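    As a sketch of the recognition approach described here - one hidden Markov model per gesture, classification by maximum likelihood - the following assumes the hmmlearn library; the features, model sizes, and gesture names are invented, and mGestr's actual implementation may differ.

        import numpy as np
        from hmmlearn import hmm

        # Train one Gaussian HMM per gesture class on (x, y) trajectories,
        # then classify a new trajectory by the highest log-likelihood.
        def train(training_data):
            models = {}
            for name, trajectories in training_data.items():
                X = np.vstack(trajectories)            # stack (T_i, 2) arrays
                lengths = [len(t) for t in trajectories]
                m = hmm.GaussianHMM(n_components=4, covariance_type="diag")
                m.fit(X, lengths)
                models[name] = m
            return models

        def recognize(models, trajectory):
            scores = {name: m.score(trajectory) for name, m in models.items()}
            return max(scores, key=scores.get)

        # Usage sketch (data invented): trajectories are (T, 2) numpy arrays.
        # models = train({"swipe": swipe_examples, "circle": circle_examples})
        # print(recognize(models, new_trajectory))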

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together - that is, phrasing - sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require menus and tool palettes - techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
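    The phrasing idea - the non-dominant hand holds a mode while the dominant hand acts - can be sketched as a small event router. The API and mode names below are invented for illustration and are not from the environment described above.

        # Hypothetical router for bimanual phrasing: a non-dominant-hand touch
        # holds a mode for as long as it stays down; dominant-hand pen strokes
        # are interpreted under that mode. All names are illustrative.
        class BimanualRouter:
            def __init__(self):
                self.mode = "ink"  # default when no mode-touch is held

            def on_touch_down(self, hand, region):
                if hand == "non-dominant":
                    self.mode = region  # e.g. "select", "layer", "erase"

            def on_touch_up(self, hand):
                if hand == "non-dominant":
                    self.mode = "ink"  # releasing the touch ends the phrase

            def on_pen_stroke(self, stroke):
                return (self.mode, stroke)  # interpret under current mode

        r = BimanualRouter()
        r.on_touch_down("non-dominant", "layer")
        print(r.on_pen_stroke("drag"))  # -> ('layer', 'drag')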

    Enabling Expressive Keyboard Interaction with Finger, Hand, and Hand Posture Identification

    The input space of conventional physical keyboards is largely limited by the number of keys. To enable more actions than simply entering the symbol represented by a key, standard keyboards use combinations of modifier keys such as command, alternate, or shift to re-purpose the standard text entry behaviour. To explore alternatives to conventional keyboard shortcuts and enable more expressive keyboard interaction, this thesis first presents Finger-Aware Shortcuts, which encode information from finger, hand, and hand posture identification as keyboard shortcuts. By detecting the hand and finger used to press a key, and an open or closed hand posture, a single key press can have multiple command mappings. A formative study revealed performance and preference patterns when using different fingers and postures to press a key. The results were used to develop a computer vision algorithm that identifies fingers and hands on a keyboard captured by a built-in laptop camera and a reflector. This algorithm was built into a background service to enable system-wide Finger-Aware Shortcut keys in any application. A controlled experiment used the service to compare the performance of Finger-Aware Shortcuts with existing methods; the results showed that Finger-Aware Shortcuts are comparable to a common class of shortcuts using multiple modifier keys. Several application demonstrations illustrate different use cases and mappings for Finger-Aware Shortcuts. To further explore how introducing finger awareness can foster the learning and use of keyboard shortcuts, an interview study was conducted with expert computer users to identify the likely causes hindering the adoption of keyboard shortcuts. Based on this, the concept of Finger-Aware Shortcuts is extended and two guided keyboard shortcut techniques are proposed: FingerArc and FingerChord. Both techniques provide dynamic visual guidance on the screen when users press and hold an alphabetic key semantically related to a set of commands. FingerArc differentiates these commands by examining the angle between the thumb and index finger; FingerChord differentiates them by allowing users to press different key areas with a second finger. The thesis contributes comprehensive evaluations of Finger-Aware Shortcuts and proof-of-concept demonstrations of FingerArc and FingerChord. Together, they contribute a novel interaction space that expands the conventional keyboard input space with greater expressivity.
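    A minimal sketch of the core mechanism: a key press dispatches on (key, finger, posture) rather than on the key alone. The bindings and the identifier callback below are assumptions for illustration, not the thesis's actual mappings.

        # Hypothetical dispatch table for finger-aware shortcuts: the same key
        # maps to different commands depending on the pressing finger and an
        # open or closed hand posture. All bindings are illustrative.
        SHORTCUTS = {
            ("c", "index", "closed"): "copy",
            ("c", "middle", "closed"): "copy-with-style",
            ("c", "index", "open"): "show-clipboard",
        }

        def on_key_press(key, finger, posture):
            """finger/posture would come from the camera-based identifier."""
            command = SHORTCUTS.get((key, finger, posture))
            return command if command else f"type '{key}'"  # plain text entry

        print(on_key_press("c", "middle", "closed"))  # -> copy-with-style
        print(on_key_press("c", "ring", "open"))      # -> type 'c'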

    Cruiser and PhoTable: Exploring Tabletop User Interface Software for Digital Photograph Sharing and Story Capture

    Digital photography has changed not only the nature of photography and the photographic process, but also the manner in which we share photographs and tell stories about them. Traditional methods, such as the family photo album or passing around piles of recently developed snapshots, are lost to us unless the digital photos are printed. Current, purely digital methods of sharing do not provide the same experience as printed photographs, nor do they provide effective face-to-face social interaction around photographs, as experienced during storytelling. Research has found that people are often dissatisfied with sharing photographs in digital form. The recent emergence of the tabletop interface - a viable multi-user, direct-touch, interactive, large horizontal display - provides hardware with the potential to improve collocated activities such as digital photograph sharing. However, while some software to communicate with various tabletop hardware technologies exists, the software aspects of tabletop user interfaces are still at an early stage. They require careful consideration in order to provide an effective, multi-user immersive interface that arbitrates the social interaction between users without the necessary computer-human interaction interfering with the social dialogue. This thesis presents PhoTable, a social interface allowing people to effectively share, and tell stories about, recently taken, unsorted digital photographs around an interactive tabletop. In addition, the computer-arbitrated digital interaction allows PhoTable to capture the stories told and associate them as audio metadata with the appropriate photographs. By leveraging the tabletop interface and providing highly usable, natural interaction, we can enable users to become immersed in their social interaction, telling stories about their photographs, while the computer interaction occurs as a side-effect of the social interaction. Correlating the computer interaction with the corresponding audio allows PhoTable to annotate an automatically created digital photo album with audible stories, which may then be archived. These stories remain useful for future sharing -- both collocated and remote (e.g. via the Internet) -- and also provide a personal memento, both of the event depicted in the photograph (e.g. as a reminder) and of the enjoyable photo-sharing experience at the tabletop. To provide the software necessary to realise an interface such as PhoTable, this thesis explores the development of Cruiser: an efficient, extensible and reusable software framework for developing tabletop applications. Cruiser contributes a set of programming libraries and the necessary application framework to facilitate the rapid and highly flexible development of new tabletop applications. It uses a plugin architecture that encourages code reuse, stability and easy experimentation, and leverages the dedicated graphics hardware and multi-core processors of modern consumer-level systems to provide a responsive and immersive interactive tabletop user interface that is agnostic to the tabletop hardware and operating platform, using efficient, native cross-platform code. Cruiser's flexibility has allowed a variety of novel interactive tabletop applications, in addition to PhoTable, to be explored by other researchers using the framework. To evaluate Cruiser and PhoTable, this thesis follows recommended practices for systems evaluation.
    The design rationale is framed within the above scenario and vision, which we explore further, and the resulting design is critically analysed based on user studies, heuristic evaluation, and a reflection on how it evolved over time. The effectiveness of Cruiser was evaluated in terms of its ability to realise PhoTable, its use by others to explore many new tabletop applications, and an analysis of performance and resource usage. The usability, learnability and effectiveness of PhoTable were assessed on three levels: careful usability evaluations of elements of the interface; informal observations of usability when Cruiser was available to the public in several exhibitions and demonstrations; and a final evaluation of PhoTable in use for storytelling, where storytelling had the side-effect of creating a digital photo album consisting of the photographs users interacted with on the table and the associated audio annotations that PhoTable automatically extracted from the interaction. We conclude that our approach to design has resulted in an effective framework for creating new tabletop interfaces. The parallel goal of exploring the potential of tabletop interaction as a new way to share digital photographs was realised in PhoTable, which supports the envisaged goal of an effective interface for telling stories about one's photos. As a serendipitous side-effect, PhoTable was effective in automatically capturing the stories about individual photographs for future reminiscence and sharing. This work provides foundations for future work on creating new ways to interact at a tabletop and on ways to capture personal stories around digital photographs for sharing and long-term preservation.
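    Cruiser's plugin architecture can be sketched minimally as follows; the registry API, method names, and the example plugin are invented for illustration, since the abstract does not detail Cruiser's actual interfaces (which are native cross-platform code, not Python).

        # Hypothetical plugin registry in the spirit of Cruiser's architecture:
        # applications are composed from independently loadable plugins that
        # take turns handling input events. All names are illustrative.
        class TabletopApp:
            def __init__(self):
                self.plugins = []

            def register(self, plugin):
                self.plugins.append(plugin)
                plugin.attach(self)

            def dispatch(self, event):
                # Offer the event to each plugin until one consumes it.
                return any(p.handle(event) for p in self.plugins)

        class PhotoPilePlugin:
            def attach(self, app):
                self.app = app

            def handle(self, event):
                # Consume drags so the photo follows the finger.
                return event == "drag-photo"

        app = TabletopApp()
        app.register(PhotoPilePlugin())
        print(app.dispatch("drag-photo"))  # -> True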

    Enhanced Multi-Touch Gestures for Complex Tasks

    Recent technological advances have resulted in a major shift from high-performance notebook and desktop computers - devices that rely on keyboard and mouse for input - towards smaller, personal devices like smartphones, tablets and smartwatches, which rely primarily on touch input. Users of these devices typically have a relatively high level of skill in using multi-touch gestures to interact with them, but the supported multi-touch gesture sets are often restricted to a small set of one- and two-finger gestures, such as tap, double tap, drag, flick, pinch and spread. This is not due to technical limitations, since modern multi-touch smartphones and tablets are capable of accepting at least ten simultaneous points of contact. Likewise, human movement models suggest that humans are capable of richer and more expressive forms of interaction that utilize multiple fingers. This suggests a gap between the technical capabilities of multi-touch devices, the physical capabilities of end-users, and the gesture sets that have been implemented for these devices. Our work explores ways to enrich multi-touch interaction on these devices by expanding these common gesture sets. Simple gestures are fine for simple use cases, but supporting a wide range of sophisticated behaviours - the types of interactions required by expert users - demands equally sophisticated capabilities from our devices. In this thesis, we refer to these more sophisticated, complex interactions as 'enhanced gestures' to distinguish them from common but simple gestures, and to suggest the types of expert scenarios we target in their design. We do not necessarily need to replace current, familiar gestures, but it makes sense to consider augmenting them as multi-touch becomes more prevalent and is applied to more sophisticated problems. This research explores issues of approachability and user acceptance around gesture sets. Using pinch-to-zoom as an example, we establish design guidelines for enhanced gestures, and systematically design, implement and evaluate two different types of expert gestures, illustrative of the type of functionality we might build into future systems.
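    Since pinch-to-zoom is the running example, here is a minimal sketch of the standard (non-enhanced) technique it builds on: the zoom factor is the ratio of the current inter-finger distance to the distance when the gesture began. The class and method names are invented.

        import math

        # Standard two-finger pinch: scale follows the ratio of the current
        # inter-finger distance to the distance at gesture start.
        def distance(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        class PinchRecognizer:
            def begin(self, p1, p2):
                self.start = distance(p1, p2)

            def update(self, p1, p2):
                return distance(p1, p2) / self.start  # >1 spread, <1 pinch

        r = PinchRecognizer()
        r.begin((100, 100), (200, 100))         # fingers 100 px apart
        print(r.update((80, 100), (230, 100)))  # -> 1.5 (zoom in)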