
    Exploring User Interface Improvements for Software Developers who are Blind

    Software developers who are blind and interact with the computer non-visually face unique challenges with information retrieval. We explore the use of speech and Braille combined with software to provide an improved interface that aids with challenges associated with information retrieval. We motivate our design on common tasks performed by students in a software development course using a Microprocessor without Interlocked Pipeline Stages (MIPS) architecture simulation tool. We test our interface via a single-subject longitudinal study, and we measure and show improvement in both the user's performance and the user experience.

    CUI@IUI: Theoretical and Methodological Challenges in Intelligent Conversational User Interface Interactions

    This workshop aims to bring together the Intelligent User Interface (IUI) and Conversational User Interface (CUI) communities to understand the theoretical and methodological challenges in designing, deploying and evaluating CUIs. CUIs have continued to prosper with the increased use and technological developments in both text-based chatbots and speech-based systems. However, challenges remain in creating established theoretical and methodological approaches for CUIs, and in how these can be used with recent engineering advances. These include assessing the impact of interface design on user behaviours and perceptions, developing design guidelines, understanding the role of personalisation, and issues of ethics and privacy. Our half-day multidisciplinary workshop brings together researchers and practitioners from the IUI and CUI communities in academia and industry. We aim to (1) identify and map out key focus areas and research challenges to address these critical theoretical and methodological gaps and (2) foster strong relationships between disciplines within and related to Artificial Intelligence (AI) and Human-Computer Interaction (HCI).

    HTML5 and the Learner of Spoken Languages

    Traditional corpora are not renowned for being user friendly. If learners are to derive maximum benefit from speech corpora, better interfaces are needed. This paper proposes such a role for HTML5. DIT's dynamic speech corpus, FLUENT, contains a limited series of informal dialogues between friends and acquaintances. The dialogues are characterised by their naturalness and audio quality, and are marked up using a schema which allows learners to retrieve features of spoken language, such as speaker intention, formulaicity and prosodic characteristics such as speed of delivery. The requirement to combine audio assets with synchronous text animation has in the past necessitated the use of browser 'plug-in' technologies, such as Adobe Flash. Plug-in-based systems suffer from major drawbacks: they are not installed by default on deployed browsers; more critically, they obscure the underlying speech corpus structure; and their proprietary UIs offer no standard way of dealing with accessibility or dynamic interface reconfiguration, e.g. moving from corpus playback to concordance views. This makes the design of a unified interface framework, with audio playback and synchronous text and speech, more difficult. Given the profusion of plug-in architectures and plug-in types, such an environment is unsustainable for building tools for speech corpus visualisation. To overcome these challenges, FLUENT drew heavily on the HTML5 specification, coupled with a user-centred design for L2 learners, to specify and develop scalable, reusable and accessible UIs for many devices. This paper describes the design of the corpus schema and its close integration with the UI model.
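    The abstract does not give FLUENT's actual schema or API, but the core synchronisation idea it describes (HTML5 audio driving text animation without plug-ins) can be sketched. The following is a minimal illustration, assuming a hypothetical cue list of timed transcript segments; `Cue`, `activeCue` and the sample data are illustrative names, not the paper's.

    ```typescript
    // A timed transcript segment; the corpus schema would attach richer
    // mark-up (speaker intention, formulaicity, prosody) to each cue.
    interface Cue {
      start: number; // seconds
      end: number;   // seconds
      text: string;
    }

    // Return the index of the cue active at playback time t, or -1 if none.
    function activeCue(cues: Cue[], t: number): number {
      return cues.findIndex(c => t >= c.start && t < c.end);
    }

    // In a browser, an HTML5 <audio> element drives the highlighting:
    //   audio.addEventListener("timeupdate", () => {
    //     const i = activeCue(cues, audio.currentTime);
    //     spans.forEach((s, j) => s.classList.toggle("active", j === i));
    //   });

    const cues: Cue[] = [
      { start: 0.0, end: 1.2, text: "you know" },
      { start: 1.2, end: 2.5, text: "it was great" },
    ];
    console.log(activeCue(cues, 1.3)); // index of "it was great"
    ```

    Because the timing logic is plain data plus a pure lookup, the same corpus structure stays visible to other views (e.g. a concordance), which is precisely what opaque plug-in players prevented.
    
    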

    Integrating user-centred design in the development of a silent speech interface based on permanent magnetic articulography

    Abstract: A new wearable silent speech interface (SSI) based on Permanent Magnetic Articulography (PMA) was developed with the involvement of end users in the design process. Hence, desirable features such as appearance, portability, ease of use and light weight were integrated into the prototype. The aim of this paper is to address the challenges faced and the design considerations addressed during development. Evaluations of both hardware and speech recognition performance are presented here. The new prototype shows performance comparable with its predecessor in terms of speech recognition accuracy (i.e. ~95% word accuracy and ~75% sequence accuracy), but significantly improved appearance, portability and hardware features in terms of miniaturization and cost.

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and sums up first-hand, cutting-edge results concerning multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. A survey of this system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cyber-sickness than device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based ones.
    FUI CALLISTO-SAR

    17 ways to say yes: Toward nuanced tone of voice in AAC and speech technology

    People with complex communication needs who use speech-generating devices have very little expressive control over their tone of voice. Despite its importance in human interaction, however, the issue of tone of voice remains all but absent from AAC research and development. In this paper, we describe three interdisciplinary projects, past, present and future: the critical design collection Six Speaking Chairs has provoked deeper discussion and inspired a social model of tone of voice; the speculative concept Speech Hedge illustrates challenges and opportunities in designing more expressive user interfaces; and the pilot project Tonetable could enable participatory research and seed a research network around tone of voice. We speculate that more radical interactions might expand the frontiers of AAC and disrupt speech technology as a whole.

    A toolkit of mechanism and context independent widgets

    Most human-computer interfaces are designed to run on a static platform (e.g. a workstation with a monitor) in a static environment (e.g. an office). However, with mobile devices becoming ubiquitous and capable of running applications similar to those found on static devices, it is no longer valid to design only static interfaces. This paper describes a user-interface architecture which allows interactors to be flexible about the way they are presented. This flexibility is defined by the different input and output mechanisms used. An interactor may use different mechanisms depending upon their suitability in the current context, user preference and the resources available for presentation using that mechanism.
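    The abstract names the idea but not its API; the pattern it describes (an interactor that selects a presentation mechanism from what the current context supports) can be sketched roughly as follows. All names here (`OutputMechanism`, `ButtonInteractor`, the context fields) are illustrative assumptions, not the toolkit's actual interfaces.

    ```typescript
    // The context an interactor finds itself in: which resources are available.
    interface Context {
      hasDisplay: boolean;
      hasAudio: boolean;
    }

    // A presentation mechanism: it knows whether it suits a context,
    // and how to present content through its modality.
    interface OutputMechanism {
      name: string;
      available(ctx: Context): boolean;
      present(text: string): void;
    }

    class ScreenOutput implements OutputMechanism {
      name = "screen";
      available(ctx: Context) { return ctx.hasDisplay; }
      present(text: string) { console.log(`[screen] ${text}`); }
    }

    class SpeechOutput implements OutputMechanism {
      name = "speech";
      available(ctx: Context) { return ctx.hasAudio; }
      present(text: string) { console.log(`[speech] ${text}`); }
    }

    // An interactor is mechanism-independent: it holds abstract content and,
    // at presentation time, picks the first mechanism suited to the context
    // (preference order is encoded by list order here).
    class ButtonInteractor {
      constructor(private label: string,
                  private mechanisms: OutputMechanism[]) {}

      render(ctx: Context): string {
        const mech = this.mechanisms.find(m => m.available(ctx));
        if (!mech) throw new Error("no suitable mechanism for this context");
        mech.present(this.label);
        return mech.name;
      }
    }

    const button = new ButtonInteractor("OK",
      [new ScreenOutput(), new SpeechOutput()]);
    // Eyes-free mobile context: no display, audio available.
    console.log(button.render({ hasDisplay: false, hasAudio: true })); // "speech"
    ```

    The same interactor renders on a workstation screen or through speech on the move, without its application code changing, which is the flexibility the architecture argues for.
    
    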