4 research outputs found

    Ink-based Note Taking On Mobile Devices

    Although touchscreen mobile phones are widely used for recording informal text notes (e.g., grocery lists, reminders, and directions), the lack of efficient mechanisms for combining informal graphical content with text is a persistent challenge. In the first part of the thesis, we present InkAnchor, a digital ink editor that allows users to easily create ink-based notes by finger sketching on a mobile phone touchscreen. InkAnchor incorporates flexible anchoring, focus-plus-context input, content chunking, and lightweight editing mechanisms to support the capture of informal notes and annotations. We describe the design and evaluation of InkAnchor through a series of user studies, which showed that InkAnchor's integrated support is a significant improvement over current mobile note-taking applications across a range of note-taking tasks. The thesis also introduces FingerTip, a shift-targeting solution that facilitates detailed drawing. FingerTip resolves both the occlusion caused by the user's finger on the screen and the user's uncertainty about which pixel they are touching by shifting the point where inking occurs beyond the end of the finger. However, despite a positive first impression among prospective end users, FingerTip proved only passable for drawing non-text content. Combining the results of InkAnchor and FingerTip, this thesis demonstrates that a significant subset of mobile note-taking tasks can be supported using focus-plus-context input, and that tuning for hand-drawn text input has significant value in mobile smartphone note taking and sketch input.
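
    A minimal sketch of the shift-targeting idea behind FingerTip, assuming a fixed offset distance and a known finger direction (the Point type, the offset_px value, and the angle parameter are illustrative assumptions, not taken from the thesis): the inking point is displaced from the raw touch centroid so the stroke appears beyond the fingertip rather than underneath it.

    ```python
    from dataclasses import dataclass
    import math

    @dataclass
    class Point:
        x: float
        y: float

    def shifted_ink_point(touch: Point, finger_angle_deg: float,
                          offset_px: float = 40.0) -> Point:
        """Displace the inking point from the raw touch centroid along
        the finger's axis so the stroke stays visible beyond the
        fingertip instead of being occluded by it."""
        theta = math.radians(finger_angle_deg)
        return Point(touch.x + offset_px * math.cos(theta),
                     touch.y + offset_px * math.sin(theta))

    # A touch at (120, 300) with the finger pointing toward the lower
    # right (screen coordinates, y grows downward); the ink appears
    # about 40 px beyond the fingertip, up and to the left.
    print(shifted_ink_point(Point(120, 300), finger_angle_deg=225.0))
    ```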

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen-lock application.
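
    Since the study reports two reliably producible touch shapes and three orientations, a hedged sketch of how a system might quantize a sensed contact ellipse into such categories follows (the Contact fields, the aspect-ratio threshold, and the angle bins are illustrative assumptions, not the study's actual classifier):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Contact:
        major_px: float   # major axis of the fitted contact ellipse
        minor_px: float   # minor axis of the fitted contact ellipse
        angle_deg: float  # ellipse orientation in [0, 180)

    def classify(c: Contact) -> tuple[str, str]:
        # Shape: an elongated ellipse suggests the flattened finger pad,
        # a rounder one the fingertip. The 1.5 threshold is illustrative.
        shape = "pad" if c.major_px / c.minor_px > 1.5 else "tip"
        # Orientation: quantize the ellipse angle into three coarse bins.
        if c.angle_deg < 60:
            orientation = "tilted-left"
        elif c.angle_deg < 120:
            orientation = "upright"
        else:
            orientation = "tilted-right"
        return shape, orientation

    print(classify(Contact(major_px=18.0, minor_px=9.0, angle_deg=95.0)))
    # -> ('pad', 'upright')
    ```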

    Enhanced Multi-Touch Gestures for Complex Tasks

    Recent technological advances have resulted in a major shift, from high-performance notebook and desktop computers -- devices that rely on keyboard and mouse for input -- towards smaller, personal devices like smartphones, tablets and smartwatches which rely primarily on touch input. Users of these devices typically have a relatively high level of skill in using multi-touch gestures to interact with them, but the multi-touch gesture sets that are supported are often restricted to a small subset of one- and two-finger gestures, such as tap, double tap, drag, flick, pinch and spread. This is not due to technical limitations, since modern multi-touch smartphones and tablets are capable of accepting at least ten simultaneous points of contact. Likewise, human movement models suggest that humans are capable of richer and more expressive forms of interaction that utilize multiple fingers. This suggests a gap between the technical capabilities of multi-touch devices, the physical capabilities of end-users, and the gesture sets that have been implemented for these devices. Our work explores ways in which we can enrich multi-touch interaction on these devices by expanding these common gesture sets. Simple gestures are fine for simple use cases, but if we want to support a wide range of sophisticated behaviours -- the types of interactions required by expert users -- we need equally sophisticated capabilities from our devices. In this thesis, we refer to these more sophisticated, complex interactions as "enhanced gestures" to distinguish them from common but simple gestures, and to suggest the types of expert scenarios that we are targeting in their design. We do not necessarily need to replace current, familiar gestures, but it makes sense to consider augmenting them as multi-touch becomes more prevalent and is applied to more sophisticated problems. This research explores issues of approachability and user acceptance around gesture sets. Using pinch-to-zoom as an example, we establish design guidelines for enhanced gestures, and systematically design, implement and evaluate two different types of expert gestures, illustrative of the type of functionality that we might build into future systems.
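
    For reference, the pinch-to-zoom baseline that the thesis uses as its design example reduces to a ratio of finger separations; enhanced gestures layer richer behaviour on top of primitives like this. A minimal sketch under that assumption (the point representation and function name are illustrative, not from the thesis):

    ```python
    import math

    def pinch_scale(a_start, b_start, a_now, b_now):
        """Scale factor for a two-finger pinch/spread gesture: the ratio
        of the current finger separation to the separation at gesture
        start. Values > 1 mean spread (zoom in); < 1 mean pinch."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        return dist(a_now, b_now) / dist(a_start, b_start)

    # Fingers start 100 px apart and spread to 150 px -> 1.5x zoom.
    print(pinch_scale((0, 0), (100, 0), (-25, 0), (125, 0)))
    ```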

    Focus+Context sketching on a Pocket PC

    No full text