
    Acoustic Space Movement Planning in a Neural Model of Motor Equivalent Vowel Production

    Recent evidence suggests that speakers utilize an acoustic-like reference frame for the planning of speech movements. DIVA, a computational model of speech acquisition and motor equivalent speech production, has previously been shown to provide explanations for a wide range of speech production data using a constriction-based reference frame for movement planning. This paper extends the previous work by investigating an acoustic-like planning frame in the DIVA modeling framework. During a babbling phase, the model self-organizes targets in the planning space for each of ten vowels and learns a mapping from desired movement directions in this planning space into appropriate articulator velocities. Simulation results verify that after babbling the model is capable of producing easily recognizable vowel sounds using an acoustic planning space consisting of the formants F1 and F2. The model successfully reaches all vowel targets from any initial vocal tract configuration, even in the presence of constraints such as a blocked jaw.
    Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0499)
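    As a rough illustration of the planning scheme described above, the following is a minimal Python sketch, not DIVA's actual implementation: it assumes a learned linear (Jacobian-style) map from articulator parameters to the formants F1 and F2 and inverts it with a pseudoinverse, so that a desired movement direction in acoustic space yields articulator velocities. All names, dimensions, and values are illustrative assumptions.

```python
# Hypothetical sketch of acoustic-space movement planning (not DIVA code):
# steer the articulators toward a vowel target defined in (F1, F2) space.
import numpy as np

# Assumed learned map: how each of 7 articulator parameters shifts the
# formants (rows: F1, F2; columns: articulator parameters).
J = np.random.default_rng(0).normal(size=(2, 7))

def articulator_velocities(formants_now, formants_target, gain=0.1):
    """Map a desired direction in (F1, F2) space to articulator velocities
    via the pseudoinverse of the assumed acoustic Jacobian."""
    planning_direction = formants_target - formants_now     # desired change in Hz
    return gain * np.linalg.pinv(J) @ planning_direction    # one velocity per articulator

# One planning step toward a hypothetical /i/-like target.
current = np.array([350.0, 1900.0])   # F1, F2 in Hz
target = np.array([280.0, 2250.0])
print(articulator_velocities(current, target))   # 7-element velocity vector
```

    Because the mapping is resolved at the velocity level, a constraint such as a blocked jaw could in principle be handled by zeroing that articulator's velocity and letting the remaining articulators compensate, which is one way to picture the motor-equivalence behavior the abstract reports.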

    Text or image? Investigating the effects of instruction type on mid-air gesture making with novice older adults

    Unlike traditional interaction methods where the same command (e.g. mouse click) is used for different purposes, mid-air gesture interaction often makes use of different gesture commands for different functions, but novice users first need to learn these commands in order to interact with the system successfully. We describe an empirical study with 25 novice older adults that investigated the effectiveness of three “on screen” instruction types for demonstrating how to make mid-air gesture commands. We compared three interface design choices for providing instructions: descriptive (text-based), pictorial (static), and pictorial (animated). Results showed a significant advantage of pictorial instructions (static and animated) over text-based instructions for guiding novice older adults in making mid-air gestures with regard to accuracy, completion time and user preference. Pictorial (animated) was the instruction type leading to the fastest gesture making with 100% accuracy and may be the most suitable choice to support age-friendly gesture learning.

    Creating mobile gesture-based interaction design patterns for older adults: a study of tap and swipe gestures with Portuguese seniors

    Master's thesis. Multimédia. Faculdade de Engenharia, Universidade do Porto. 201

    Smart Avatars in JackMOO

    Creation of compelling 3-dimensional, multi-user virtual worlds for education and training applications requires a high degree of realism in the appearance, interaction, and behavior of avatars within the scene. Our goal is to develop and/or adapt existing 3-dimensional technologies to provide training scenarios across the Internet in a form as close as possible to the appearance and interaction expected of live situations with human participants. We have produced a prototype system, JackMOO, which combines Jack, a virtual human system, and LambdaMOO, a multi-user, network-accessible, programmable, interactive server. Jack provides the visual realization of avatars and other objects. LambdaMOO provides the web-accessible communication, programmability, and persistent object database. The combined JackMOO allows us to store the richer semantic information necessitated by the scope and range of human actions that an avatar must portray, and to express those actions in the form of imperative sentences. This paper describes JackMOO, its components, and a prototype application with five virtual human agents.
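    The "imperative sentences" mentioned above can be pictured with a small, hypothetical Python fragment; it is not JackMOO or LambdaMOO code, and the verbs, fields, and parsing rules are assumptions made only for illustration.

```python
# Hypothetical sketch: turn a simple imperative sentence into a structured
# avatar action record (illustrative only, not the JackMOO implementation).
from dataclasses import dataclass

@dataclass
class AvatarAction:
    agent: str   # which avatar performs the action
    verb: str    # the action itself
    target: str  # the object the action applies to

KNOWN_VERBS = {"walk", "grasp", "look", "sit"}

def parse_imperative(agent: str, sentence: str) -> AvatarAction:
    """Parse a 'verb [to/at] [the] object' imperative into an action record."""
    words = sentence.lower().strip(".").split()
    verb = words[0]
    if verb not in KNOWN_VERBS:
        raise ValueError(f"unknown verb: {verb}")
    # Drop filler words and treat what remains as the target object.
    target = " ".join(w for w in words[1:] if w not in {"to", "at", "the"})
    return AvatarAction(agent=agent, verb=verb, target=target)

print(parse_imperative("agent5", "Walk to the door."))
# AvatarAction(agent='agent5', verb='walk', target='door')
```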

    Virtual reality: Theoretical basis, practical applications

    Virtual reality (VR) is a powerful multimedia visualization technique offering a range of mechanisms by which many new experiences can be made available. This paper deals with the basic nature of VR, the technologies needed to create it, and its potential, especially for helping disabled people. It also offers an overview of some examples of existing VR systems.

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Technology is becoming pervasive and the current interfaces are not adequate for the interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. In order to validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, whilst the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, understood here as human activity; a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
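    The "functional gestures" idea can be sketched briefly: the same abstract gesture resolves to different concrete commands depending on the current context, so the gesture vocabulary stays small and ambiguity is reduced. The sketch below is a minimal Python illustration under assumed gesture, device, and command names; it is not the framework or prototype described in the thesis.

```python
# Hypothetical sketch of context-dependent resolution of functional gestures
# (illustrative only, not the thesis framework).
CONTEXT_BINDINGS = {
    # (functional gesture, focused device) -> concrete command
    ("increase", "lamp"): "raise_brightness",
    ("increase", "speaker"): "raise_volume",
    ("select", "lamp"): "toggle_power",
    ("select", "tv"): "open_menu",
}

def resolve(gesture: str, context: dict) -> str:
    """Resolve a functional gesture into a device command using context."""
    device = context.get("focused_device")
    return CONTEXT_BINDINGS.get((gesture, device), "ignore")

print(resolve("increase", {"focused_device": "speaker"}))  # raise_volume
print(resolve("increase", {"focused_device": "lamp"}))     # raise_brightness
```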

    Exploring parental behavior and child interactive engagement : a study on children with a significant cognitive and motor developmental delay

    Background and aims: Parenting factors are one of the most striking gaps in the current scientific literature on the development of young children with significant cognitive and motor disabilities. We aim to explore the characteristics of, and the association between, parental behavior and children's interactive engagement within this target group. Methods and procedures: Twenty-five parent-child dyads (with children aged 6-59 months) were video-taped during a 15-min unstructured play situation. Parents were also asked to complete the Parental Behavior Scale for toddlers. The video-taped observations were scored using the Child and Maternal Behavior Rating Scales. Outcomes and results: Low levels of parental discipline and child initiation were found. Parental responsivity was positively related to child attention and initiation. Conclusions and implications: Compared to children with no disabilities or other levels of disability, this target group exhibits large differences in the frequency levels and, to a lesser extent, in the concrete operationalization of parenting domains. Further, this study confirms the importance of sensitive responsivity as the primary variable in parenting research.

    Proceedings of the International Conference on Cooperative Multimodal Communication CMC/95, Eindhoven, May 24-26, 1995
