8,013 research outputs found

    A Virtual Conversational Agent for Teens with Autism: Experimental Results and Design Lessons

    We present the design of an online social skills development interface for teenagers with autism spectrum disorder (ASD). The interface is intended to enable private conversation practice anywhere, anytime using a web browser. Users converse informally with a virtual agent, receiving real-time feedback on nonverbal cues as well as summary feedback. The prototype was developed in consultation with an expert UX designer, two psychologists, and a pediatrician. Using data from 47 individuals, feedback and dialogue generation were automated with a hidden Markov model and a schema-driven dialogue manager capable of handling multi-topic conversations. We conducted a study with nine high-functioning ASD teenagers. Through a thematic analysis of post-experiment interviews, we identified several key design considerations, notably: 1) users should be fully briefed at the outset about the purpose and limitations of the system, to avoid unrealistic expectations; 2) the interface should incorporate positive acknowledgment of behavior change; 3) the realistic appearance and responsiveness of the virtual agent are important for engaging users; and 4) conversation personalization, for instance prompting laconic users for more input and reciprocal questions, would help the teenagers engage for longer and increase the system's utility.
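
    A minimal sketch of the kind of feedback pipeline this abstract describes: a Gaussian HMM fitted over per-frame nonverbal features, with the decoded hidden state mapped to a feedback cue. The feature set, state count, and feedback messages below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: HMM-based nonverbal feedback (assumed features and mapping).
import numpy as np
from hmmlearn import hmm

# Toy feature matrix: one row per audio/video frame.
# Assumed columns: [gaze_on_agent, speech_volume, head_movement]
X = np.random.rand(500, 3)

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X)

states = model.predict(X)  # decoded hidden state per frame
FEEDBACK = {0: "keep it up", 1: "try to look at me", 2: "speak a little louder"}
print(FEEDBACK[states[-1]])  # real-time cue for the most recent frame
```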

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed on the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material from 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
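
    The Automatic Relevance Determination mentioned above can be illustrated with an anisotropic kernel in which each social signal gets its own length scale; short learned length scales flag the most influential signals. The sketch below uses scikit-learn's Gaussian process regressor as a stand-in for the paper's fully Bayesian model, with assumed feature names and synthetic data.

```python
# Hedged sketch: GP regression with an ARD-style anisotropic RBF kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Assumed features: loudness, overlap ratio, pitch variance, turn rate
X = rng.random((200, 4))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)  # synthetic conflict score

kernel = RBF(length_scale=np.ones(4)) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Learned per-feature length scales: smaller means more relevant (ARD-like relevance).
print(gp.kernel_.k1.length_scale)

pred, std = gp.predict(X, return_std=True)
print(np.corrcoef(y, pred)[0, 1])  # correlation between actual and predicted values
```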

    Nonverbal Vocal Interface

    Nonverbal vocal interface, meaning the use of non-speech vocal sounds such as "oooh" and "ahhh" as input to a computer, provides an interesting and useful input modality for a graphical user interface. Nonverbal vocal interface is a novel improvement over speech-based solutions because voiced sounds can be smoothly modulated, so they are well suited to controlling continuous variables such as cursor position, whereas spoken commands are inherently discrete. A graphical user interface is an excellent environment for vocal input because instantaneous visual feedback is crucial to usability, enabling users to see the results of their vocalizations and learn the interface very quickly. Continuously voiced sounds may be easily and independently modulated in dimensions such as volume, pitch, and vowel. These dimensions may be used to augment a familiar input device such as the mouse, adding another degree of freedom to the interaction. For example, a mouse-based painting program may be improved by using vocal volume to control brush size while painting. Vocal input may alternatively be used without other input devices, for example to control the cursor in two dimensions. This offers an opportunity to improve access to computing for users unable to operate a mouse. In this thesis, the use of nonverbal vocal interface for graphical interaction is explored. The vocal dimensions of volume, pitch, and vowel are detected in real time using input from a simple USB microphone and used to drive parameters in several example graphical applications. The effectiveness of the interaction method is tested by measuring user performance with these example applications.
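
    A rough sketch of the real-time volume/pitch extraction loop described above, assuming the sounddevice package for microphone capture; the RMS and autocorrelation estimators and the volume-to-brush-size mapping are illustrative stand-ins, not the thesis implementation.

```python
# Hedged sketch: estimate vocal volume and pitch per frame, map volume to brush size.
import numpy as np
import sounddevice as sd

RATE = 16000
FRAME = 1024

def pitch_autocorr(frame, rate=RATE):
    """Crude pitch estimate via the autocorrelation peak (illustrative only)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame):]
    lag_min, lag_max = rate // 400, rate // 80  # search roughly 80-400 Hz
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return rate / lag

def callback(indata, frames, time, status):
    signal = indata[:, 0]
    volume = float(np.sqrt(np.mean(signal ** 2)))  # RMS loudness
    pitch = pitch_autocorr(signal)
    brush_size = 1 + 50 * volume  # assumed mapping: louder voice, bigger brush
    print(f"vol={volume:.3f}  pitch={pitch:6.1f} Hz  brush={brush_size:.1f}")

with sd.InputStream(channels=1, samplerate=RATE, blocksize=FRAME, callback=callback):
    sd.sleep(3000)  # listen for three seconds
```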

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.

    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot, which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A map from these features to the joints of the robot is presented. A user study has been conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are conveyed very well, calm is conveyed moderately well, and fear is not conveyed well. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means of conveying emotions as a lower-priority task.
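
    The null-space mechanism referred to above is the standard task-priority formulation: the emotion-driven joint velocity is projected through (I - J+J) so it cannot disturb the primary task. A generic sketch (not Pepper-specific) follows, with made-up dimensions and an assumed emotion motion.

```python
# Hedged sketch: task-priority resolution with a lower-priority "emotion" motion.
import numpy as np

def prioritized_joint_velocities(J, xdot_primary, qdot_emotion):
    """q_dot = J+ * xdot + (I - J+ J) * qdot_emotion"""
    J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J         # null-space projector of the primary task
    return J_pinv @ xdot_primary + null_proj @ qdot_emotion

# Toy example: a 3-DoF task (e.g. hand position) on a 7-DoF redundant arm.
J = np.random.rand(3, 7)
xdot = np.array([0.1, 0.0, 0.05])              # primary task velocity (waving, carrying, ...)
qdot_emotion = 0.2 * np.sin(np.arange(7))      # assumed emotion-driven joint motion
qdot = prioritized_joint_velocities(J, xdot, qdot_emotion)
print(np.allclose(J @ qdot, xdot, atol=1e-6))  # primary task is unaffected by the emotion motion
```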

    Voice data entry in air traffic control

    Several of the keyboard data languages were tabulated and analyzed. The key language chosen as a test vehicle was the one used by the nonradar or flight data controllers. This application was undertaken to minimize effort in a cost-efficient way and with minimal research and development.

    Non-Verbal Communication for a Virtual Reality Interface

    The steady growth of technology has made it possible to extend all forms of human-computer communication. Since the emergence of more sophisticated interaction devices, Human Computer Interaction (HCI) research has taken up the issue of Non-Verbal Communication (NVC). Nowadays, many applications such as interactive entertainment and virtual reality require more natural and intuitive interfaces. Human gestures constitute a large space of actions expressed by the body, face, and/or hands. Hand gestures are frequently used in people’s daily life, and thus offer an alternative and easy way to communicate with computers. This paper introduces a real-time hand gesture recognition and tracking system to identify different dynamic hand postures. In order to improve the user experience, a set of system functions has been implemented in a virtual world so that the user can interact through a data glove device.
    XIV Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
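
    As a toy illustration of mapping data-glove readings to hand postures and virtual-world actions, the sketch below uses a nearest-neighbour rule over flex-sensor templates; the sensor layout, posture names, and actions are assumptions rather than the paper's recognizer.

```python
# Hedged sketch: static posture classification from data-glove flex sensors.
import numpy as np

# Template postures: five finger-flex readings each (0 = open, 1 = closed); all assumed.
TEMPLATES = {
    "open_hand": np.array([0.0, 0.0, 0.0, 0.0, 0.0]),
    "fist":      np.array([1.0, 1.0, 1.0, 1.0, 1.0]),
    "point":     np.array([1.0, 0.0, 1.0, 1.0, 1.0]),  # index finger extended
}
ACTIONS = {"open_hand": "release object", "fist": "grab object", "point": "teleport"}

def classify(reading):
    # Pick the template closest to the current sensor reading (Euclidean distance).
    name = min(TEMPLATES, key=lambda k: np.linalg.norm(reading - TEMPLATES[k]))
    return name, ACTIONS[name]

print(classify(np.array([0.9, 0.1, 0.95, 0.9, 1.0])))  # -> ('point', 'teleport')
```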