4 research outputs found

    Suspenseful Design: Engaging Emotionally with Complex Applications through Compelling Narratives

    Stories are fundamental to how we learn about and experience the world, but few software interfaces incorporate stories or use storytelling techniques. This thesis explores the possibility of applying principles of suspenseful storytelling to interaction design to create more engaging and emotionally compelling applications. Specifically, this work introduces a theoretical framework for designing suspenseful software applications; describes the process of constructing a suspenseful, story-based tutorial; and documents a controlled experiment in which this suspenseful tutorial was pitted against two more traditional tutorial designs. Participants who used the narrative-based tutorial reported greater feelings of hopeful suspense than those who worked through an unsuspenseful tutorial.

    3D Interaction: The Life Cycle of the Gesture, from Generation to Consumption (Interactions en 3D : cycle de vie du geste, de la génération à sa consommation)

    In the gesture recognition domain, human movements are tracked, recognized, and mapped to functional primitives to control a system or manipulate an object. Human-Computer Interaction researchers have particularly focused on tracking the gesture made after contact with the object to be manipulated (the apparent gesture). We show in this thesis that interacting with and manipulating a 3D object is a wider process that starts with vision; includes reaching, grasping, and manipulating; and ends with "event consumption" in target applications. We have collected and organized an HCI literature review drawing on several fields (vision, neuropsychology, grasping, and technical work). We have created a system that tracks user movements on the table, and also above the surface, from low-level 3D point-cloud input. We have specified multiple cases of gesture activity and used them in two applications. We have also proposed a novel way of making future applications adaptable to new forms of interaction, based on a software bus.
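    The abstract above names a software bus as the mechanism that keeps applications adaptable to new forms of interaction, but gives no code. The sketch below is only an illustration of that idea under assumptions not in the abstract: a minimal publish/subscribe bus (the names GestureBus, subscribe, and publish are hypothetical) through which gesture recognizers emit events and applications consume only the event types they understand.

        from collections import defaultdict
        from typing import Callable, Dict, List

        class GestureBus:
            """Hypothetical publish/subscribe bus decoupling gesture producers from consumers."""

            def __init__(self) -> None:
                self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

            def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
                # An application registers for the gesture event types it can consume.
                self._subscribers[event_type].append(handler)

            def publish(self, event_type: str, payload: dict) -> None:
                # A recognizer emits an event; every interested application receives it.
                for handler in self._subscribers[event_type]:
                    handler(payload)

        # Example: a recognizer tracking hands above the surface publishes a "grasp"
        # event, and an application that understands grasps reacts to it.
        bus = GestureBus()
        bus.subscribe("grasp", lambda e: print(f"grasped {e['target']} at {e['position']}"))
        bus.publish("grasp", {"target": "cube", "position": (0.1, 0.2, 0.3)})

    Under this kind of decoupling, a new recognizer (say, for mid-air pinches) can be added simply by publishing a new event type, leaving existing applications untouched.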

    Task-Centric User Interfaces

    Software applications for design and creation typically contain hundreds or thousands of commands, which collectively give users enormous expressive power. Unfortunately, rich feature sets also take a toll on usability. Current interfaces to feature-rich software address this dilemma by adopting menus, toolbars, and other hierarchical schemes to organize functionality—approaches that enable efficient navigation to specific commands and features, but do little to reveal how to perform unfamiliar tasks. We present an alternative task-centric user interface design that explicitly supports users in performing unfamiliar tasks. A task-centric interface is able to quickly adapt itself to the user’s intended goal, presenting relevant functionality and required procedures in task-specific customized interfaces. To achieve this, task-centric interfaces (1) represent tasks as first-class objects in the interface; (2) allow the user to declare their intended goal (or infer it from the user’s actions); (3) restructure the interface to provide step-by-step scaffolding for the current goal; and (4) provide additional knowledge and guidance within the application’s interface. Our inspiration for task-centric interfaces comes from a study we conducted, which revealed that a valid use case for feature-rich software is to perform short, targeted tasks that use a small fraction of the application’s full functionality. Task-centric interfaces provide explicit support for this use. We developed and tested our task-centric interface approach by creating AdaptableGIMP, a modified version of the GIMP image editor, and Workflows, an iteration on AdaptableGIMP’s design based on insights from a semi-structured interview study and a think-aloud study. Based on a two-session study of Workflows, we show that task-centric interfaces can successfully support a guided-and-constrained problem solving strategy for performing unfamiliar tasks, which enables faster task completion and reduced cognitive load as compared to current practices. We also provide evidence that task-centric interfaces can enable a higher-level form of application learning, in which the user associates tasks with relevant keywords, as opposed to low-level commands and procedures. This keyword learning has potential benefits for memorability, because the keywords themselves are descriptive of the task being learned, and scalability, because a few keywords can map to an arbitrarily complex set of commands and procedures. Finally, our findings suggest a range of different ways that the idea of task-centric interfaces could be further developed.
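    As a concrete illustration of point (1) above, tasks as first-class objects that are found through descriptive keywords, the following sketch shows one plausible data structure; the names Task, Step, and find_task are hypothetical and are not taken from AdaptableGIMP or Workflows.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Step:
            """One scaffolded step of a task: guidance text plus the commands it exposes."""
            instruction: str
            commands: List[str]

        @dataclass
        class Task:
            """A task as a first-class interface object, retrievable by descriptive keywords."""
            name: str
            keywords: List[str]
            steps: List[Step] = field(default_factory=list)

        def find_task(tasks: List[Task], goal: str) -> List[Task]:
            # Match the user's declared goal against each task's keywords.
            terms = set(goal.lower().split())
            return [t for t in tasks if terms & {k.lower() for k in t.keywords}]

        # A few descriptive keywords stand in for an arbitrarily long command sequence.
        red_eye = Task(
            name="Remove red eye",
            keywords=["red", "eye", "photo", "retouch"],
            steps=[
                Step("Select the affected eyes", ["ellipse-select"]),
                Step("Reduce the red channel in the selection", ["channel-mixer"]),
            ],
        )
        print([t.name for t in find_task([red_eye], "fix red eye in a photo")])

    The keyword-to-task mapping is what the abstract's memorability and scalability arguments rest on: the user remembers a handful of descriptive terms rather than the underlying command sequence.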

    Tutorial-based interfaces for cloud-enabled applications

    Figure 1: TAPPCLOUD turns static online tutorials into tutorial-based applications (tapps). Once a tapp has been created, a TAPPCLOUD bookmarklet appears on the source tutorial page (a). Clicking on the bookmarklet opens the TAPPCLOUD wiki that hosts all created tapps (b). Users can upload their own images (c) to a tapp to apply the target technique (d).
    Powerful image editing software like Adobe Photoshop and GIMP has complex interfaces that can be hard to master. To help users perform image editing tasks, we introduce tutorial-based applications (tapps) that retain the step-by-step structure and descriptive text of tutorials but can also automatically apply tutorial steps to new images. Thus, tapps can be used to batch process many images automatically, similar to traditional macros. Tapps also support interactive exploration of parameters, automatic variations, and direct manipulation (e.g., selection, brushing). Another key feature of tapps is that they execute on remote instances of Photoshop, which allows users to edit their images on any Web-enabled device. We demonstrate a working prototype system called TAPPCLOUD for creating, managing and using tapps. Initial user feedback indicates support for both the interactive features of tapps and their ability to automate image editing. We conclude with a discussion of approaches and challenges of pushing monolithic direct-manipulation GUIs to the cloud. ACM Classification: H5.2 [Information interfaces and presentation].
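    To make the tapp idea concrete, here is a rough sketch, under assumptions not stated in the abstract, of how tutorial steps that keep their descriptive text could also be applied automatically to a batch of images. TutorialStep, run_tapp, and the stubbed back end are hypothetical names; the real system executes steps on remote Photoshop instances rather than the local stub shown here.

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class TutorialStep:
            """A tutorial step that keeps its descriptive text but is also executable."""
            description: str
            operation: str            # editing operation named by the step
            params: Dict[str, float]  # exposed for interactive parameter exploration

        def run_tapp(steps: List[TutorialStep], images: List[str],
                     execute: Callable[[str, str, Dict[str, float]], str]) -> List[str]:
            # Apply every step to each image in turn, like a traditional macro.
            results = []
            for image in images:
                for step in steps:
                    image = execute(image, step.operation, step.params)
                results.append(image)
            return results

        # Stand-in back end; in the real system each call would run on a remote editor.
        def fake_backend(image: str, operation: str, params: Dict[str, float]) -> str:
            return f"{operation}({image})"

        steps = [
            TutorialStep("Boost the contrast", "adjust-contrast", {"amount": 1.2}),
            TutorialStep("Sharpen the result", "unsharp-mask", {"radius": 2.0}),
        ]
        print(run_tapp(steps, ["photo1.jpg", "photo2.jpg"], fake_backend))

    Swapping the stub for a remote execution client is what would let the same tapp run from any Web-enabled device, as the abstract describes.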