
    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increased attention in recent years, which has enabled important improvements in the technologies for recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example, the principles that would make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.

    Evaluating the development of wearable devices, personal data assistants and the use of other mobile devices in further and higher education institutions

    This report presents a technical evaluation and case studies of the use of wearable and mobile computing devices in further and higher education. The first section provides a technical evaluation of the current state of the art in wearable and mobile technologies and reviews several innovative wearable products that have been developed in recent years. The second section examines three scenarios for further and higher education where wearable and mobile devices are currently being used: (i) the delivery of lectures over mobile devices, (ii) the augmentation of the physical campus with a virtual and mobile component, and (iii) the use of PDAs and mobile devices in field studies. The first scenario explores the use of web lectures, including an evaluation of IBM's Web Lecture Services and 3Com's learning assistant. The second scenario explores models for a campus without walls, evaluating the Handsprings to Learning projects at East Carolina University and ActiveCampus at the University of California San Diego. The third scenario explores the use of wearable and mobile devices for field trips, examining the San Francisco Exploratorium's tool for capturing museum visits and the Cybertracker field computer. The third section of the report explores the uses and purposes of wearable and mobile devices in tertiary education, identifying key trends and issues to be considered when piloting the use of these devices in educational contexts.

    Staging Transformations for Multimodal Web Interaction Management

    Multimodal interfaces are becoming increasingly ubiquitous with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. In addition to improving access and delivery capabilities, such interfaces enable flexible and personalized dialogs with websites, much like a conversation between humans. In this paper, we present a software framework for multimodal web interaction management that supports mixed-initiative dialogs between users and websites. A mixed-initiative dialog is one where the user and the website take turns changing the flow of interaction. The framework supports the functional specification and realization of such dialogs using staging transformations -- a theory for representing and reasoning about dialogs based on partial input. It supports multiple interaction interfaces, and offers sessioning, caching, and coordination functions through the use of an interaction manager. Two case studies are presented to illustrate the promise of this approach.
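
    To make the mixed-initiative idea above concrete, the sketch below shows a toy slot-filling dialog that accepts partial input in any order (user initiative) and asks for whatever is still missing (system initiative). All names here (Dialog, respond, the flight-booking slots) are illustrative assumptions and are not the framework's actual API.

```python
# Minimal sketch of a mixed-initiative dialog driven by partial input.
# Names and slots are hypothetical; this is not the paper's framework.
from dataclasses import dataclass, field


@dataclass
class Dialog:
    """Tracks which slots of a web interaction have been filled so far."""
    required: tuple = ("origin", "destination", "date")
    slots: dict = field(default_factory=dict)

    def respond(self, partial_input: dict) -> str:
        # User initiative: accept any subset of slots, in any order.
        self.slots.update({k: v for k, v in partial_input.items() if k in self.required})
        missing = [s for s in self.required if s not in self.slots]
        if missing:
            # System initiative: the site takes the turn and asks for what is still missing.
            return f"Please provide your {missing[0]}."
        return f"Searching flights {self.slots['origin']} -> {self.slots['destination']} on {self.slots['date']}."


dialog = Dialog()
print(dialog.respond({"destination": "Boston"}))                # system asks for the origin
print(dialog.respond({"origin": "NYC", "date": "2024-05-01"}))  # dialog completes
```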

    Embedding Intelligence. Designerly reflections on AI-infused products

    Artificial intelligence is more or less covertly entering our lives and houses, embedded into products and services that are acquiring novel roles and agency over users. Products such as virtual assistants represent the first wave of materialization of artificial intelligence in the domestic realm and beyond. They are new interlocutors in an emerging, redefined relationship between humans and computers. They are agents, with miscommunicated or unclear properties, performing actions to reach human-set goals. They embed capabilities that industrial products never had. They can learn users' preferences and adapt their responses accordingly, but they are also powerful means to shape people's behavior and build new practices and habits. Nevertheless, the way these products are used does not fully exploit their potential, and they frequently entail poor user experiences, relegating their role to gadgets or toys. Furthermore, AI-infused products need vast amounts of personal data to work accurately, and the gathering and processing of this data are often obscure to end users. Likewise, how, whether, and when it is preferable to implement AI in products and services is still an open debate. This condition raises critical ethical issues about their usage and may dramatically impact users' trust and, ultimately, the quality of the user experience. The design discipline and the Human-Computer Interaction (HCI) field are just beginning to explore the wicked relationship between Design and AI, looking for a definition of its borders, which are still blurred and ever-changing. The book approaches this issue from a human-centered standpoint, proposing designerly reflections on AI-infused products. It addresses one main guiding question: what are the design implications of embedding intelligence into everyday objects?

    IMAGINE Final Report


    Internet of robotic things : converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT represents new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and to offer comprehensive coverage of future challenges, developments and applications.

    Talking to computers

    A popular belief amongst UX designers is that the more voice user interfaces (e.g. Alexa, Siri, Google Assistant) speak and behave like people, the more functional they will be. But conversational mimicry is not the only way a screenless computer can communicate information. The scope of sounds humans can interpret, manipulate, and make is broad. This project seeks to identify ways designers can mine this domain for interaction cues that promote a deeper understanding of digital content and the systems that deliver it.

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Technology is becoming pervasive and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. In order to validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, whilst the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
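
    As a rough illustration of the functional-gesture idea described above, the sketch below maps one abstract gesture to different device actions depending on context, so fewer gestures are needed and ambiguity is resolved at resolution time. The Context fields, gesture names and bindings are hypothetical and are not taken from the thesis.

```python
# Minimal sketch: one functional gesture is disambiguated by context
# instead of defining a separate gesture per device. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Context:
    """High-level context information: where the user is and what they attend to."""
    room: str
    focused_device: str
    user_activity: str  # e.g. inferred by an activity-recognition component


# A small taxonomy: each (functional gesture, focused device) pair resolves to an action.
BINDINGS = {
    ("swipe_up", "lamp"): "increase_brightness",
    ("swipe_up", "speaker"): "increase_volume",
    ("point", "tv"): "select_on_screen_item",
}


def resolve(gesture: str, ctx: Context) -> str:
    """Map a recognized functional gesture to a concrete action using context."""
    if ctx.user_activity == "sleeping":
        return "ignore"  # context can also suppress accidental activations
    return BINDINGS.get((gesture, ctx.focused_device), "unknown")


ctx = Context(room="living_room", focused_device="speaker", user_activity="idle")
print(resolve("swipe_up", ctx))  # -> increase_volume
```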