12 research outputs found

    SketchWizard: Wizard of Oz Prototyping of Pen-based User Interfaces

    SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz prototyping was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both studies showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.
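
    The shared-canvas design described above lends itself to a simple event-relay pattern: user strokes travel to the wizard, who answers with edits that stand in for recognizer output. Below is a minimal sketch of that pattern, using in-memory queues in place of real networking; the event schema and all names are hypothetical, not SketchWizard's actual protocol.

```python
# Hypothetical relay between the end-user canvas and the wizard canvas.
import queue
from dataclasses import dataclass, field

@dataclass
class CanvasEvent:
    source: str                       # "user" or "wizard"
    kind: str                         # "stroke" or "edit"
    points: list = field(default_factory=list)

def relay(outbox: queue.Queue, inbox: queue.Queue) -> None:
    """Forward pending events from one canvas to the other."""
    while not outbox.empty():
        inbox.put(outbox.get())

# Simulated session: the user draws a rough line, and the wizard replies with
# a cleaned-up shape, playing the role of a recognition engine.
user_out, wizard_in = queue.Queue(), queue.Queue()
user_out.put(CanvasEvent("user", "stroke", [(0, 0), (10, 9), (21, 20)]))
relay(user_out, wizard_in)
stroke = wizard_in.get()
reply = CanvasEvent("wizard", "edit", [(0, 0), (20, 20)])  # simulated recognition
print(f"wizard saw {len(stroke.points)} points and replied with an {reply.kind}")
```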

    Multimodal Interaction: Contributions to Simplifying Application Development

    Doctoral thesis in Informatics Engineering. The way we interact with the devices around us in everyday life is constantly changing, driven by emerging technologies and methods that provide better and more engaging ways to interact with applications. Nevertheless, integrating these technologies to enable their widespread use in current systems presents a notable challenge and requires considerable know-how from developers. While the recent literature has made some advances in supporting the design and development of multimodal interactive systems, several key aspects have yet to be addressed to enable their full potential. Among these, a relevant example is the difficulty of developing and integrating multiple interaction modalities. In this work, we propose, design and implement a framework enabling easier development of multimodal interaction. Our proposal fully decouples the interaction modalities from the application, allowing the separate development of each part. The proposed framework already includes a set of generic modalities and modules ready to be used in novel applications. Among these generic modalities, the speech modality deserved particular attention, given the increasing relevance of speech interaction, for example in scenarios such as AAL, and the complexity behind its development. Additionally, our proposal tackles support for managing multi-device applications and includes a method and corresponding module for fusing events. The development of the architecture and framework profited from a rich R&D context including several projects, application scenarios, and international partners. The framework successfully supported the design and development of a wide set of multimodal applications, a notable example being AALFred, the personal assistant of the PaeLife project. These applications, in turn, served the continuous improvement of the framework by supporting the iterative collection of novel requirements, demonstrating its versatility and potential.
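
    The decoupling and fusion ideas can be made concrete with a small sketch: modalities publish low-level events to a bus, a fusion module pairs events that arrive close together in time, and the application only sees fused semantic events. Everything below is illustrative (the bus, the 1.5-second window, the speech-plus-gesture rule), not this framework's actual API.

```python
# Hypothetical event bus and time-window fusion; not the thesis framework's API.
import time
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self.handlers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.handlers.append(handler)

    def publish(self, event: dict) -> None:
        for handler in self.handlers:
            handler(event)

class FusionModule:
    """Pairs a speech command with a gesture arriving within a short window."""
    WINDOW = 1.5  # seconds; an assumed fusion window

    def __init__(self, out: EventBus) -> None:
        self.out = out
        self.pending: list[dict] = []

    def on_event(self, event: dict) -> None:
        # Drop stale partial inputs, then try to complete a pair.
        self.pending = [e for e in self.pending if event["t"] - e["t"] < self.WINDOW]
        for e in self.pending:
            if {e["modality"], event["modality"]} == {"speech", "gesture"}:
                self.out.publish({"action": e.get("verb") or event.get("verb"),
                                  "target": e.get("target") or event.get("target")})
                self.pending.remove(e)
                return
        self.pending.append(event)

app_bus, modality_bus = EventBus(), EventBus()
modality_bus.subscribe(FusionModule(app_bus).on_event)
app_bus.subscribe(lambda e: print("application receives:", e))
now = time.time()
modality_bus.publish({"modality": "speech", "verb": "open", "t": now})
modality_bus.publish({"modality": "gesture", "target": "door", "t": now + 0.4})
```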

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filters data based on field of view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
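
    The beacon-based positioning mentioned above rests on standard trilateration: each range measurement defines a circle around a beacon, and subtracting the circle equations pairwise yields a linear system in the receiver's coordinates. Below is a worked 2D sketch with made-up beacon positions, not the thesis' actual system.

```python
# 2D trilateration from three beacons at known positions; illustrative only.
import math

def trilaterate(b1, b2, b3, r1, r2, r3):
    """Solve for (x, y) given beacon coordinates and measured ranges."""
    # Subtracting circle equations pairwise gives:
    #   ax*x + ay*y = c1  and  bx*x + by*y = c2.
    ax, ay = 2 * (b2[0] - b1[0]), 2 * (b2[1] - b1[1])
    bx, by = 2 * (b3[0] - b2[0]), 2 * (b3[1] - b2[1])
    c1 = r1**2 - r2**2 + b2[0]**2 - b1[0]**2 + b2[1]**2 - b1[1]**2
    c2 = r2**2 - r3**2 + b3[0]**2 - b2[0]**2 + b3[1]**2 - b2[1]**2
    det = ax * by - ay * bx  # zero when the beacons are collinear
    return ((c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det)

beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = (30.0, 40.0)
ranges = [math.dist(b, true_pos) for b in beacons]  # ideal, noise-free ranges
print(trilaterate(*beacons, *ranges))  # -> approximately (30.0, 40.0)
```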

    Model-Driven Development of Interactive Multimedia Applications

    The development of highly interactive multimedia applications is still a challenging and complex task. In addition to the application logic, multimedia applications typically provide a sophisticated user interface with integrated media objects. As a consequence, the development process involves different experts for software design, user interface design, and media design. There is still a lack of concepts for a systematic development process which integrates these aspects. This thesis provides a model-driven development approach addressing this problem. It introduces the Multimedia Modeling Language (MML), a visual modeling language supporting a design phase in multimedia application development. The language is oriented towards well-established software engineering concepts, like UML 2, and integrates concepts from the areas of multimedia development and model-based user interface development. MML allows the generation of code skeletons from the models. The core idea is to generate code skeletons which can be directly processed in multimedia authoring tools. In this way, the strengths of both are combined: authoring tools are used to perform the creative development tasks, while models are used to design the overall application structure and to enable a well-coordinated development process. This is demonstrated using the professional authoring tool Adobe Flash. MML is supported by modeling and code-generation tools which have been used to validate the approach over several years in various student projects and teaching courses. Additional prototypes have been developed to demonstrate, for example, the ability to generate code for different target platforms. Finally, it is discussed how models can contribute in general to a better integration of well-structured software development and creative visual design.
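
    The skeleton-generation step can be pictured with a toy transformer from a declarative model to class stubs that an authoring tool (or developer) would then fill in. The model schema below is invented for illustration and is not MML's actual notation.

```python
# Toy model-to-skeleton generator; the schema is hypothetical, not MML syntax.
model = {
    "application": "MediaPlayer",
    "scenes": [
        {"name": "MainScreen", "media": ["CoverImage", "TrackAudio"]},
        {"name": "Playlist", "media": ["TrackList"]},
    ],
}

def generate_skeleton(model: dict) -> str:
    """Emit class stubs for each scene with placeholders for media bindings."""
    lines = [f"# Generated skeleton for {model['application']}"]
    for scene in model["scenes"]:
        lines.append(f"class {scene['name']}:")
        for medium in scene["media"]:
            lines.append(f"    {medium.lower()} = None  # TODO: bind media asset")
        lines.append("    def on_enter(self): ...  # TODO: scene logic")
    return "\n".join(lines)

print(generate_skeleton(model))
```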

    USER INTERFACES FOR MOBILE DEVICES: TECHNIQUES AND CASE STUDIES

    The interactive capabilities of the portable devices that are nowadays increasingly available enable mobile computing in diverse contexts. However, in order to fully exploit the potential of such technologies and to let end users benefit from them, effective and usable techniques are still needed. In general, differences in capabilities, such as computational power and interaction resources, lead to a heterogeneity that is sometimes positively referred to as device diversity but also, negatively, as device fragmentation. When designing applications for mobile devices, besides general rules and principles of usability, developers cope with further constraints. Restricted capabilities, due to display size, input modality and computational power, imply important design and implementation choices in order to guarantee usability. In addition, when the application is likely to be used by subjects affected by some impairment, the system also has to comply with accessibility requirements. The aim of this dissertation is to propose and discuss examples of such techniques, aimed at supporting user interfaces on mobile devices, by tackling the design, development and evaluation of specific solutions for portable terminals, as well as for enabling interoperability across diverse devices (including desktops, handhelds and smartphones). Usefulness and usability aspects are central to the main research questions that drove the study. With respect to these questions, the three central chapters of the dissertation are respectively aimed at evaluating: hardware/software solutions for edutainment and accessibility in mobile museum guides, visualization strategies for mobile users visiting smart environments, and techniques for user interface migration across diverse devices in multi-user contexts. Motivations, design, implementation and evaluation of a number of solutions aimed at supporting several dimensions of user interfaces for mobile devices are discussed throughout the dissertation, and some findings are drawn. Each of the prototypes described in the following chapters was entirely developed within the research activities of the laboratory where the author performed his PhD. Most activities were related to tasks of international research projects, and the organization of this dissertation reflects their chronology.
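
    Of the techniques listed, user interface migration is the most protocol-like: the runtime state of the interface is captured on the source device and reapplied on the target, whose renderer adapts to its own capabilities. Here is a minimal sketch under an assumed state schema and adaptation rule, not the dissertation's actual design.

```python
# Hypothetical UI state migration; schema and adaptation rule are invented.
import json

def capture_state(widgets: dict) -> str:
    """Serialize interaction state (field values, navigation) portably."""
    return json.dumps(widgets)

def restore_state(blob: str, target: str) -> dict:
    """Reapply state on the target device, adapting to its capabilities."""
    state = json.loads(blob)
    if target == "smartphone":
        # Example adaptation: long text values collapse to summaries.
        for key, value in state.items():
            if isinstance(value, str) and len(value) > 40:
                state[key] = value[:37] + "..."
    return state

desktop_ui = {"search_box": "multimodal museum guides with accessibility support",
              "page": 3}
blob = capture_state(desktop_ui)
print(restore_state(blob, "smartphone"))
```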

    Capturing User Tests in a Multimodal, Multidevice Informal Prototyping Tool

    Interaction designers are increasingly faced with the challenge of creating interfaces that incorporate multiple input modalities, such as pen and speech, and span multiple devices. Few early-stage prototyping tools allow non-programmers to prototype these interfaces. Here we describe CrossWeaver, a tool for informally prototyping multimodal, multidevice user interfaces. This tool embodies the informal prototyping paradigm, leaving design representations in an informal, sketched form, and creates a working prototype from these sketches. CrossWeaver allows a user interface designer to sketch storyboard scenes on the computer, specifying simple multimodal command transitions between scenes. The tool also allows scenes to target different output devices. Prototypes can run across multiple standalone devices simultaneously, processing multimodal input from each one. Thus, a designer can visually create a multimodal prototype for a collaborative meeting or classroom application. CrossWeaver captures all of the user interaction when running a test of a prototype. This input log can quickly be viewed visually for the details of the users' multimodal interaction, or it can be replayed across all participating devices, giving the designer information to help him or her analyze and iterate on the interface design.
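
    The storyboard model described above reduces to a small state machine keyed by (scene, modality, command), with the captured input log doubling as a replay script. The sketch below illustrates that reduction; the scene names and commands are hypothetical, not CrossWeaver's data structures.

```python
# Hypothetical storyboard state machine with logging and replay.
transitions = {
    ("scene1", "speech", "next"): "scene2",
    ("scene1", "pen", "tap-arrow"): "scene2",
    ("scene2", "speech", "back"): "scene1",
}

def run_session(inputs, start="scene1"):
    """Drive the storyboard with (modality, command) inputs, logging each step."""
    scene, log = start, []
    for modality, command in inputs:
        log.append((scene, modality, command))
        scene = transitions.get((scene, modality, command), scene)
    return scene, log

final, log = run_session([("speech", "next"), ("speech", "back")])
print("ended in", final)
replayed, _ = run_session([(m, c) for _, m, c in log])  # replay the captured log
assert replayed == final
```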

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.

    Integration of Physiological Feedback into Learning Applications under Everyday Conditions

    This thesis investigates how conventional learning applications can be extended with information about a user's emotional arousal state. Keeping the learner at an optimal arousal level throughout the learning process has a positive effect on learning success and, in turn, on the learner's motivation. Since adaptation during learning can take place on both the user side and the system side, this thesis examines both aspects.
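
    The implied adaptation loop can be sketched briefly: a normalized physiological arousal estimate steers task difficulty toward a target band, raising difficulty when the learner is under-aroused and lowering it when over-aroused. The signal scale, target band, and difficulty range below are illustrative assumptions, not values from the thesis.

```python
# Hypothetical arousal-driven difficulty adaptation; thresholds are assumed.
OPTIMAL = (0.4, 0.7)  # assumed target arousal band on a normalized 0..1 scale

def adapt_difficulty(difficulty: int, arousal: float) -> int:
    """Raise difficulty when under-aroused (bored), lower it when
    over-aroused (stressed), keeping the level within 1..10."""
    if arousal < OPTIMAL[0]:
        difficulty += 1
    elif arousal > OPTIMAL[1]:
        difficulty -= 1
    return max(1, min(10, difficulty))

level = 5
for reading in [0.3, 0.35, 0.8, 0.5]:  # simulated sensor stream
    level = adapt_difficulty(level, reading)
    print(f"arousal={reading:.2f} -> difficulty={level}")
```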