Text or image? Investigating the effects of instruction type on mid-air gesture making with novice older adults
Unlike traditional interaction methods, where the same command (e.g. a mouse click) is used for different purposes, mid-air gesture interaction often relies on different gesture commands for different functions, so novice users must first learn these commands in order to interact with the system successfully. We describe an empirical study with 25 novice older adults that investigated the effectiveness of three on-screen instruction types for demonstrating how to make mid-air gesture commands. We compared three interface design choices for providing instructions: descriptive (text-based), pictorial (static), and pictorial (animated). Results showed a significant advantage of pictorial instructions (static and animated) over text-based instructions for guiding novice older adults in making mid-air gestures with regard to accuracy, completion time, and user preference. Pictorial (animated) was the instruction type that led to the fastest gesture making with 100% accuracy, and it may be the most suitable choice to support age-friendly gesture learning.
Paper Augmented Digital Documents
Paper Augmented Digital Documents (PADD) are digital documents that
can be manipulated either on a computer screen or on paper. PADD, and the
infrastructure supporting them, can be seen as a bridge between the digital and
the paper worlds. As digital documents, PADD are easy to edit, distribute, and
archive; as paper documents, PADD are easy to navigate, annotate, and well
accepted in social settings. The chimeric nature of PADD makes them well suited
for many tasks such as proofreading, editing, and annotation of large-format
documents like blueprints. We present an architecture that supports the
seamless manipulation of PADD using today's technologies and report on the
lessons we learned while implementing the first PADD system.
Keywords: Paper Augmented Digital Document, paper-based user interface, digital
pen
UMIACS-TR-2003-4
An investigation of mid-air gesture interaction for older adults
Older adults (60+) face a natural and gradual decline in cognitive, sensory, and motor functions that is often the reason for the difficulties older users encounter when interacting with computers. For that reason, the investigation and design of age-inclusive input methods for computer interaction is much needed and relevant in the context of an ageing population. Advances in motion-sensing technologies and mid-air gesture interaction have reinvented how individuals can interact with computer interfaces, and this input modality is often deemed more "natural" and "intuitive" than purely traditional input devices such as mouse interaction. Although explored in gaming and entertainment, the suitability of mid-air gesture interaction for older users in particular remains little known. The purpose of this research is to investigate the potential of mid-air gesture interaction to facilitate computer use for older users, and to address the challenges that older adults may face when interacting with gestures in mid-air. This doctoral research is presented as a collection of papers that, together, develop the topic of ageing and computer interaction through mid-air gestures. The starting point for this research was to establish how older users differ from younger users and to focus on the challenges faced by older adults when using mid-air gesture interaction. Once these challenges were identified, this work aimed to explore a series of usability challenges and opportunities to further develop age-inclusive interfaces based on mid-air gesture interaction. Through a series of empirical studies, this research intends to provide recommendations for designing mid-air gesture interaction that better takes into consideration the needs and skills of the older population, and aims to contribute to the advance of age-friendly interfaces.
PAPIERCRAFT: A PAPER-BASED INTERFACE TO SUPPORT INTERACTION WITH DIGITAL DOCUMENTS
Many researchers interact extensively with documents using both computers and paper printouts, which offer complementary sets of affordances. Paper is comfortable to read from and write on, and it is flexible enough to be arranged in space; computers provide an efficient way to archive, transfer, search, and edit information. However, due to the gap between the two media, it is difficult to integrate them seamlessly so as to optimize the user's experience of document interaction.
Existing solutions either sacrifice paper's inherent flexibility or support very limited digital functionality on paper. In response, we have proposed PapierCraft, a novel paper-based interface that supports rich digital facilities on paper without sacrificing paper's flexibility. By employing emerging digital pen technology and multimodal pen-top feedback, PapierCraft allows people to use a digital pen to draw gesture marks on a printout, which are captured, interpreted, and applied to the corresponding digital copy. Conceptually, the pen and the paper form a paper-based computer, able to interact with other paper sheets and computing devices for operations like copy/paste, hyperlinking, and web searches. Furthermore, it retains the full range of paper's advantages through its lightweight, pen-and-paper-only interface. By combining the advantages of paper and digital media and by supporting a smooth transition between them, PapierCraft bridges the paper-computer gap.
The contributions of this dissertation span four areas. First, to accommodate the static nature of paper, we proposed a pen-gesture command system that does not rely on screen-rendered feedback, but rather on the self-explanatory pen ink left on the paper. Second, for more interactive tasks, such as searching for keywords on paper, we explored pen-top multimodal (e.g. auditory, visual, and tactile) feedback that enhances the command system without sacrificing paper's inherent flexibility. Third, we designed and implemented a multi-tier distributed infrastructure to map pen-paper interactions to digital operations and to unify document interaction on paper and on computers. Finally, we systematically evaluated PapierCraft through three lab experiments and two application deployments in the areas of field biology and e-learning. Our research has demonstrated the feasibility, usability, and potential applications of the paper-based interface, shedding light on the design of future interfaces for digital document interaction. More generally, our research also contributes to ubiquitous computing, mobile interfaces, and pen computing.