
    AQUA-G: a universal gesture recognition framework

    In this thesis, I describe a software architecture and implementation designed to ease the process of 1) developing gesture-enabled applications and 2) using multiple disparate interaction devices simultaneously to create gestures. Developing gesture-enabled applications from scratch can be a time-consuming process involving obtaining input from novel input devices, processing that input to recognize gestures, and connecting this information to the application. Previously, developers have turned to gesture recognition systems to assist them in developing these applications; however, existing systems are limited in flexibility and adaptability. I propose AQUA-G, a universal gesture recognition framework that uses a unified event architecture to communicate with a limitless variety of input devices. AQUA-G abstracts gesture recognition away from the application and allows developers to write custom gestures. Its features have been driven in part by previous architectures and are partially based on a needs assessment conducted with a sample of developers. This research contributes a scalable and reliable software system for gesture-enabled application development, making the development and prototyping of novel interaction styles more accessible to a larger development community.
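    The core idea of a unified event architecture can be sketched as a single normalized event type that any device driver emits onto a shared bus, so recognizers never contain device-specific code. The names below (`InputEvent`, `EventBus`) are invented for illustration and are not taken from AQUA-G itself.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class InputEvent:
    device_id: str      # which physical device produced the event
    x: float            # normalized [0, 1] coordinates
    y: float
    kind: str           # "down", "move", or "up"

class EventBus:
    """Routes normalized events from any device to gesture recognizers."""
    def __init__(self) -> None:
        self._subscribers: List[Callable[[InputEvent], None]] = []

    def subscribe(self, handler: Callable[[InputEvent], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: InputEvent) -> None:
        for handler in self._subscribers:
            handler(event)

# Two disparate devices feed the same recognizer through one interface.
received: List[InputEvent] = []
bus = EventBus()
bus.subscribe(received.append)
bus.publish(InputEvent("touch-table", 0.5, 0.5, "down"))
bus.publish(InputEvent("depth-camera", 0.6, 0.4, "move"))
print(len(received))  # both events arrive via the unified path
```

    Because recognizers subscribe to the bus rather than to devices, adding a new input device only requires writing an adapter that emits the shared event type.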

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: ‘What must be considered when designing multi-touch software for use on different shaped displays?’ The results of two structured literature surveys highlighted the lack of support for multi-touch software to utilise more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes in such a way that did not leave visual content items unfit for purpose. The majority of the results conformed to this hypothesis, despite minor deviations from the designs of the solutions being discovered in the implementation. This work highlights how developers, when producing multi-touch software intended for more than one display shape, must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions to resolve this issue which conform to the criteria outlined in this research. This research shows that it is possible for multi-touch software to be made display shape independent.

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
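    The finding that users can reliably produce two shapes and three orientations suggests quantizing the raw contact ellipse into a small discrete vocabulary. The sketch below is a hypothetical illustration of that idea; the thresholds and labels are assumptions, not values from the study.

```python
def classify_contact(major_axis_mm: float, minor_axis_mm: float,
                     angle_deg: float) -> tuple:
    """Quantize a raw finger contact into one of two shapes and
    three orientations.  Thresholds are invented for illustration."""
    # Shape: an elongated contact suggests a flattened finger pad,
    # a near-circular one suggests an upright fingertip.
    shape = "flat" if major_axis_mm / minor_axis_mm > 1.5 else "tip"
    # Orientation: fold the contact angle into [0, 180) and bin it.
    a = angle_deg % 180
    if a < 60:
        orientation = "left"
    elif a < 120:
        orientation = "up"
    else:
        orientation = "right"
    return shape, orientation

print(classify_contact(14.0, 7.0, 90.0))   # ('flat', 'up')
print(classify_contact(8.0, 7.5, 30.0))    # ('tip', 'left')
```

    Keeping the vocabulary this small is what makes the extra degrees of freedom usable in practice: each bin is wide enough to absorb the natural variability the study measured.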

    Expressy: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions

    Expressiveness, which we define as the extent to which rich and complex intent can be conveyed through action, is a vital aspect of many human interactions. For instance, paint on canvas is said to be an expressive medium because it affords the artist the ability to convey multifaceted emotional intent through intricate manipulations of a brush. To date, touch devices have failed to offer users a level of expressiveness in their interactions that rivals that experienced by the painter and those completing other skilled physical tasks. We investigate how data about hand movement – provided by a motion sensor similar to those found in many smart watches and fitness trackers – can be used to expand the expressiveness of touch interactions. We begin by introducing a conceptual model that formalizes a design space of possible expressive touch interactions. We then describe and evaluate Expressy, an approach that uses a wrist-worn inertial measurement unit to detect and classify qualities of touch interaction that extend beyond those offered by today’s typical sensing hardware. We conclude by describing a number of sample applications, which demonstrate the enhanced, expressive interaction capabilities made possible by Expressy.
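    One quality a wrist-worn IMU can expose that the touchscreen alone cannot is how forcefully a touch was made. The sketch below is not Expressy's actual classifier; it is a minimal illustration of the idea, assuming accelerometer samples in g captured around the moment of contact, with an invented threshold.

```python
import math

def touch_intensity(accel_samples) -> str:
    """Classify a touch from wrist-IMU samples around touch-down.

    accel_samples: list of (ax, ay, az) tuples in g.  A forceful tap
    produces a sharp acceleration spike well above the ~1 g baseline.
    """
    peak = max(math.sqrt(ax * ax + ay * ay + az * az)
               for ax, ay, az in accel_samples)
    return "forceful" if peak > 1.8 else "gentle"

# A resting wrist hovers near 1 g; a hard tap spikes well past it.
gentle = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.1)]
hard = [(0.0, 0.0, 1.0), (1.5, 0.5, 1.9)]
print(touch_intensity(gentle))  # gentle
print(touch_intensity(hard))    # forceful
```

    The same window of IMU data could be fed richer features (jerk, rotation rate) to distinguish further qualities, which is the general direction the design space in the paper formalizes.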

    Sensor Augmented Large Interactive Surfaces

    Large interactive surfaces enable effective multi-user collaboration, but a majority of current multi-touch systems are not truly multi-user. In this work we present a novel sensor-based approach for both user identification around a touch table and integration of unique gestures above the table. The work proposes criteria for a successful and robust user identification system. The Cricket-sensor-based user identification system is integrated with Sparsh-UI, an open-source gesture recognition system, to enable rapid multi-touch application development. Finally, we evaluate the Cricket-based algorithm against contemporary multi-user, multi-touch systems and describe the various interaction affordances provided by the Cricket-based user identification system.

    Leveraging finger identification to integrate multi-touch command selection and parameter manipulation

    Identifying which fingers are touching a multi-touch surface provides a very large input space. We describe FingerCuts, an interaction technique inspired by desktop keyboard shortcuts to exploit this potential. FingerCuts enables integrated command selection and parameter manipulation, it uses feed-forward and feedback to increase discoverability, it is backward compatible with current touch input techniques, and it is adaptable to different touch device form factors. We implemented three variations of FingerCuts, each tailored to a different device form factor: tabletop, tablet, and smartphone. Qualitative and quantitative studies conducted on the tabletop suggest that, with some practice, FingerCuts is expressive, easy to use, and increases a sense of continuous interaction flow, and that interaction with FingerCuts is as fast as, or faster than, using a graphical user interface. A theoretical analysis of FingerCuts using the Fingerstroke-Level Model (FLM) matches our quantitative study results, justifying our use of FLM to analyse and validate the performance for the other device form factors.
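    The keyboard-shortcut analogy can be sketched as a lookup from the *set* of identified fingers in contact to a command, leaving subsequent movement free for parameter manipulation. The finger-to-command bindings below are invented for illustration and are not FingerCuts' actual mappings.

```python
# Hypothetical bindings: which identified fingers are down selects the
# command, the way modifier-key combinations select keyboard shortcuts.
COMMANDS = {
    frozenset({"index"}): "select",
    frozenset({"index", "middle"}): "copy",
    frozenset({"thumb", "index"}): "zoom",
}

def command_for(touching_fingers) -> str:
    """Map the set of fingers currently in contact to a command."""
    return COMMANDS.get(frozenset(touching_fingers), "unmapped")

print(command_for(["index", "middle"]))  # copy
print(command_for(["ring"]))             # unmapped
```

    Using a `frozenset` makes the binding order-independent, mirroring how chorded shortcuts do not care which key went down first; once a command is selected, continued finger movement can drive its parameter.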

    Physical Interaction Concepts for Knowledge Work Practices

    The majority of workplaces in developed countries involve knowledge work. Accordingly, the IT industry and research community have made great efforts for many years to support knowledge workers -- and indeed, computer-based information workplaces have come of age. Nevertheless, knowledge work in the physical world still has quite a number of unique advantages, and the integration of physical and digital knowledge work leaves a lot to be desired. The present thesis aims at reducing these deficiencies; thereby, it leverages recent technology trends, in particular interactive tabletops and resizable hand-held displays. We start from the observation that knowledge workers develop highly efficient practices, skills, and dexterity in working with physical objects in the real world, whether content-unrelated (coffee mugs, stationery, etc.) or content-related (books, notepads, etc.). Among the latter, paper-based objects -- the notorious analog information bearers -- represent by far the most relevant (super-)category. We discern two kinds of practices: collective practices concern the arrangement of objects with respect to other objects and the desk, while specific practices operate on individual objects and usually alter them. The former are mainly employed for effective management of the physical desktop workspace -- e.g., everyday objects are frequently moved on tables to optimize the desk as a workplace -- or effective organization of paper-based documents on the desktop -- e.g., stacking, fanning out, sorting, etc. The latter concern the specific manipulation of physical objects related to the task at hand, i.e., knowledge work. Widely assimilated practices concern not only writing on, annotating, or spatially arranging paper documents but also sophisticated manipulations -- such as flipping, folding, bending, etc.
Compared to the wealth of such well-established practices in the real world, those for digital knowledge work are bound by the indirection imposed by mouse and keyboard input, where the mouse provided such a great advancement that researchers were seduced into calling its use "direct manipulation". In this light, the goal of this thesis can be rephrased as exploring novel interaction concepts for knowledge workers that i) exploit the flexible and direct manipulation potential of physical objects (as present in the real world) for more intuitive and expressive interaction with digital content, and ii) improve the integration of the physical and digital knowledge workplace. Thereby, two directions of research are pursued. Firstly, the thesis investigates the collective practices executed on the desks of knowledge workers, discerning content-related objects (more precisely, paper-based documents) and content-unrelated objects -- this part is coined as table-centric approaches and leverages the technology of interactive tabletops. Secondly, the thesis looks at specific practices executed on paper, obviously concentrating on knowledge-related tasks due to the specific role of paper -- this part is coined as paper-centric approaches and leverages the affordances of paper-like displays, more precisely of resizable, i.e. rollable and foldable, displays. The table-centric approach leads to the challenge of blending interactive tabletop technology with the established use of physical desktop workspaces. We first conduct an exploratory user study to investigate behavioral and usage patterns of interaction with both physical and digital documents on tabletop surfaces while performing tasks such as grouping and browsing. Based on the results of the study, we contribute two sets of interaction and visualization concepts -- coined as PaperTop and ObjecTop -- that concern specific paper-based practices and collective practices, respectively. 
Their efficiency and effectiveness are evaluated in a series of user studies. As mentioned, the paper-centric perspective leverages recent ultra-thin resizable display technology. We again contribute two sets of novel interaction concepts -- coined as FoldMe and Xpaaand -- that respond to the design spaces of dual-sided foldable and of rollout displays, respectively. In their design, we leverage the physical act of resizing not "just" for adjusting the screen real estate but also for interactively performing operations. Initial user studies show a great potential for interaction with digital contents, i.e. for knowledge work.

    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to give a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes - the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial-use problems and handedness asymmetries with some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    Multi-touch for General-purpose Computing: An Examination of Text Entry

    In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization – features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as everyday computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing? We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design, and its rationale.
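    Text entry benchmarks of this kind are conventionally reported in words per minute, where the text-entry literature defines one "word" as five characters, including spaces. A minimal sketch of that standard metric (the function name is ours, not the thesis's):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry WPM: one 'word' = 5 characters (incl. spaces)."""
    if seconds <= 0:
        raise ValueError("duration must be positive")
    return (len(transcribed) / 5.0) / (seconds / 60.0)

# e.g. 125 characters transcribed in 60 seconds -> 25 WPM
print(words_per_minute("x" * 125, 60.0))  # 25.0
```

    Reporting WPM this way makes the multi-touch benchmark directly comparable to the keyboard baselines it is measured against.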