
    IMAGINE Final Report


    Talk your way round: a speech interface to a virtual museum

    Purpose: To explore the development of a speech interface to a Virtual World and to consider its relevance for disabled users. Method: The system was developed mainly from software that is available at minimal cost. How well the system functioned was assessed by counting the number of times a group of users with a range of voices had to repeat commands before they were successfully recognised. In an initial session, these users were asked to operate the system with no instruction, to gauge how easy this was. Results: Most of the spoken commands had to be repeated fewer than twice on average for successful recognition. For a set of ‘teleportation’ commands this figure was higher (2.4), but the cause was clear and could easily be rectified. The system was easy to use without instruction, and comments on it were generally positive. Conclusions: While the system has some limitations, a Virtual World with a reasonably reliable speech interface has been developed almost entirely from software available at minimal cost. Improvements and further testing are considered. Such a system would clearly improve access to Virtual Reality technologies for those without the skills or physical ability to use a standard keyboard and mouse. It is an example of both Assistive Technology and Universal Design.
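The evaluation metric described above (average number of attempts per command category before successful recognition) can be sketched as follows. This is an illustrative reconstruction, not code from the study; the categories and attempt counts in the example are invented.

```python
# Hypothetical sketch of the repeat-count metric: for each spoken command,
# record how many attempts the user needed before the recognizer accepted it,
# then average per command category.
from collections import defaultdict

def mean_repeats(attempt_log):
    """attempt_log: list of (category, attempts_until_recognized) pairs."""
    totals = defaultdict(lambda: [0, 0])  # category -> [sum of attempts, count]
    for category, attempts in attempt_log:
        totals[category][0] += attempts
        totals[category][1] += 1
    return {cat: s / n for cat, (s, n) in totals.items()}

# Invented data: 'teleport' commands needing more repeats, as in the findings.
log = [("move", 1), ("move", 2), ("teleport", 3), ("teleport", 2)]
print(mean_repeats(log))  # -> {'move': 1.5, 'teleport': 2.5}
```

A figure such as the reported 2.4 for teleportation commands would simply be the per-category value produced by a computation like this over the full session logs.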

    PRESENCE: A human-inspired architecture for speech-based human-machine interaction

    Recent years have seen steady improvements in the quality and performance of speech-based human-machine interaction, driven by a significant convergence in the methods and techniques employed. However, the quantity of training data required to improve state-of-the-art systems seems to be growing exponentially, and performance appears to be asymptotic to a level that may be inadequate for many real-world applications. This suggests that there may be a fundamental flaw in the underlying architecture of contemporary systems, as well as a failure to capitalize on the combinatorial properties of human spoken language. This paper addresses these issues and presents a novel architecture for speech-based human-machine interaction inspired by recent findings in the neurobiology of living systems. Called PRESENCE (“PREdictive SENsorimotor Control and Emulation”), the new architecture blurs the distinction between the core components of a traditional spoken language dialogue system and instead focuses on a recursive hierarchical feedback control structure. Cooperative and communicative behavior emerges as a by-product of an architecture founded on a model of interaction in which the system has in mind the needs and intentions of the user, and the user has in mind the needs and intentions of the system.
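The predictive sensorimotor control idea behind PRESENCE can be illustrated with a toy loop in which a layer predicts its next observation, compares the prediction with what actually arrives, and emits the resulting error. This is only a minimal sketch of the general predictive-control pattern, not the PRESENCE implementation; the predictor and signal values are invented for illustration.

```python
# Toy predictive sensorimotor loop: predict the next observation, measure the
# prediction error, and update internal state from what was actually observed.

def predictive_layer(predict, observations):
    """Yield (prediction, error) pairs for a stream of scalar observations."""
    state = None
    for obs in observations:
        prediction = predict(state)
        error = obs - prediction       # mismatch between expectation and input
        state = obs                    # update internal state from the input
        yield prediction, error

# Trivial predictor: expect the last observation to repeat (0.0 initially).
naive = lambda state: state if state is not None else 0.0

for pred, err in predictive_layer(naive, [1.0, 1.0, 2.0]):
    print(pred, err)
```

In a hierarchical version, each layer's error stream would feed the layer above, which is the kind of recursive feedback structure the abstract describes.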

    A study of the very high order natural user language (with AI capabilities) for the NASA space station common module

    The requirements are identified for a very high order natural language to be used by crew members on board the Space Station. The hardware facilities, databases, real-time processes, and software support are discussed. The operations and capabilities that will be required in both normal (routine) and abnormal (nonroutine) situations are evaluated. A structure and syntax for an interface (front-end) language to satisfy these requirements are recommended.