    XPL the Extensible Presentation Language

    The last decade has witnessed growing interest in the development of web interfaces that enable both multiple ways of accessing content and, at the same time, use through multiple interaction modalities (point-and-click, content reading, voice commands, gestures, etc.). In this paper we describe a framework aimed at streamlining the design process of multi-channel, multimodal interfaces while enabling full reuse of software components. This framework is called the eXtensible Presentation architecture and Language (XPL), a presentation language based on the design-pattern paradigm that keeps the presentation layer separate from the underlying programming logic. The language supplies a methodology to expedite multimodal interface development and to reduce the effort of implementing interfaces for multiple access devices by reusing the same code. This paper describes a methodological approach based on Visual Design Patterns (ViDP) and Verbal Design Patterns (VeDP), offering examples of multimodal and multichannel interfaces created with the XPL Editor.
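    The core idea of the abstract — one presentation description, kept separate from the rendering logic and reused across access channels — can be illustrated with a small sketch. This is not actual XPL syntax; the form fields, renderers, and channel names below are invented for illustration only.

```python
# Illustrative sketch (NOT actual XPL syntax): a single channel-independent
# presentation description rendered for two different access channels,
# keeping the presentation layer separate from the rendering logic.

FORM = {  # abstract description of a login dialog (hypothetical example)
    "title": "Login",
    "fields": ["username", "password"],
    "action": "submit",
}

def render_html(spec):
    """Render the abstract spec as a point-and-click web form."""
    inputs = "".join(f'<input name="{f}">' for f in spec["fields"])
    return (f'<form><h1>{spec["title"]}</h1>'
            f'{inputs}<button>{spec["action"]}</button></form>')

def render_voice(spec):
    """Render the same spec as prompts for a voice-command channel."""
    prompts = [f'Please say your {f}.' for f in spec["fields"]]
    return [f'{spec["title"]}.'] + prompts + [f'Say "{spec["action"]}" when done.']

html_view = render_html(FORM)    # web channel
voice_view = render_voice(FORM)  # speech channel, same source description
```

    The design point mirrors the abstract's claim: only `FORM` is authored per interface, so adding a new access device means adding one renderer, not rewriting each interface.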

    New sociotechnical insights in interaction design

    New challenges are facing interaction design. On one hand, advances in technology – pervasive, ubiquitous, multimodal and adaptive computing – are changing the nature of interaction. On the other, web 2.0, massively multiplayer games and collaboration software extend the boundaries of HCI to deal with interaction in settings of remote communication and collaboration. The aim of this workshop is to provide a forum for HCI practitioners and researchers interested in knowledge from the social sciences to discuss how sociotechnical insights can be used to inform interaction design and, more generally, how social science methods and theories can help enrich the conceptual framework of systems development and participatory design. Position paper submissions are invited to address key aspects of current research and practical case studies.

    User-defined multimodal interaction to enhance children's number learning

    Children today are already exposed to new technology and have experienced excellent number-learning applications at an early age. Despite that, most children's application software either fails in its interaction design or is not child-friendly. Involving children in the design phase of any children's application is therefore essential, as adults and developers do not know children's needs and requirements. In other words, designing children's computer applications adapted to children's capabilities is an important part of today's software development methodology. The goal of this research is to propose a new interaction technique and to evaluate children's number-learning performance with it. The new interaction technique was designed through participatory design, in which children are involved in the design process. A VisionMath interface was implemented with the proposed user-defined multimodal interaction dialogues to evaluate children's learning ability and subjective satisfaction. An evaluation with 20 participants was conducted using usability testing methods. The results show a significant difference in number-learning performance between tactile interaction and multimodal interaction. This study reveals that the proposed user-defined multimodal interaction dialogue succeeded in providing a new interaction technique for children's number learning by offering an alternative input modality, and it potentially opens a rich field for future research.
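    A within-subject comparison like the one described (20 children, two interaction conditions) is commonly analysed with a paired t-test. The sketch below uses invented scores — the paper's actual data and test statistic are not given in this abstract — to show the mechanics of such a comparison.

```python
# Hedged sketch of a paired comparison between two interaction conditions.
# The scores below are HYPOTHETICAL, invented for illustration only; the
# abstract reports only that a significant difference was found.
import math
import statistics

# Hypothetical per-child number-learning scores (same 20 children, two conditions).
tactile    = [62, 58, 71, 65, 60, 55, 68, 63, 59, 66, 61, 57, 70, 64, 62, 58, 67, 60, 63, 65]
multimodal = [70, 66, 78, 72, 69, 63, 75, 71, 68, 74, 70, 65, 77, 73, 70, 67, 76, 69, 72, 74]

diffs = [m - t for m, t in zip(multimodal, tactile)]
n = len(diffs)
mean_d = statistics.mean(diffs)            # mean within-child improvement
sd_d = statistics.stdev(diffs)             # sample standard deviation of differences
t_stat = mean_d / (sd_d / math.sqrt(n))    # paired t statistic, df = n - 1
```

    With df = 19, a |t| above the critical value (about 2.09 at the 0.05 level, two-tailed) would indicate a significant difference between conditions.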

    The role of avatars in e-government interfaces

    This paper investigates the use of avatars to communicate live messages in e-government interfaces. A comparative study is presented that evaluates the contribution of multimodal metaphors (including avatars) to the usability of e-government interfaces and to user trust. The communication metaphors evaluated included text, earcons, recorded speech and avatars. The experimental platform involved two interface versions and a sample of 30 users. The results demonstrated that the use of multimodal metaphors in an e-government interface can significantly enhance usability and increase users' trust in the interface. A set of design guidelines for the use of multimodal metaphors in e-government interfaces was also produced.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
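    The three-part architecture described above — vision recognizer, scripting layer, navigation/selection application — can be sketched as a simple dispatch pipeline. The gesture names, commands, and audio cues below are assumptions for illustration; the abstract does not enumerate the actual gesture set.

```python
# Sketch of the recognizer -> scripting -> navigation/selection pipeline,
# with audio feedback in place of a GUI. All gesture and cue names are
# HYPOTHETICAL; they are not taken from the Ambient Gestures paper.

AUDIO_FEEDBACK = {"next": "beep-up", "previous": "beep-down", "select": "chime"}

def script_layer(gesture):
    """Scripting application: map a recognized gesture to a command."""
    commands = {"swipe_right": "next", "swipe_left": "previous", "fist": "select"}
    return commands.get(gesture)  # None for gestures outside the defined set

def handle(gesture):
    """Process one recognizer event; return (command, audio cue) or None."""
    command = script_layer(gesture)
    if command is None:
        return None  # ignore noise from the vision recognizer
    return command, AUDIO_FEEDBACK[command]
```

    Keeping the gesture-to-command mapping in a separate scripting layer, as the abstract describes, lets the gesture vocabulary change without touching either the recognizer or the navigation application.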