A Virtual Conversational Agent for Teens with Autism: Experimental Results and Design Lessons
We present the design of an online social skills development interface for
teenagers with autism spectrum disorder (ASD). The interface is intended to
enable private conversation practice anywhere, anytime using a web-browser.
Users converse informally with a virtual agent, receiving both real-time
feedback on nonverbal cues and summary feedback. The prototype was developed in
consultation with an expert UX designer, two psychologists, and a pediatrician.
Using the data from 47 individuals, feedback and dialogue generation were
automated using a hidden Markov model and a schema-driven dialogue manager
capable of handling multi-topic conversations. We conducted a study with nine
high-functioning ASD teenagers. Through a thematic analysis of post-experiment
interviews, we identified several key design considerations, notably: 1) Users
should be fully briefed at the outset about the purpose and limitations of the
system, to avoid unrealistic expectations. 2) The interface should incorporate
positive acknowledgment of behavior change. 3) A virtual agent's realistic
appearance and responsiveness are important in engaging users. 4)
Conversation personalization, for instance prompting laconic users for more
input and reciprocal questions, would help the teenagers engage for longer
and would increase the system's utility.
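The abstract above mentions automating feedback with a hidden Markov model over nonverbal cues. As a rough illustration only, the sketch below decodes a sequence of observed cues into hidden engagement states with the Viterbi algorithm; the state names, cue vocabulary, and all probabilities are invented for this example and are not the authors' actual model or parameters.

```python
# Illustrative HMM decoding of nonverbal cues (hypothetical states/parameters,
# not the model described in the abstract).
import math

# Hidden engagement states and observable nonverbal cues (illustrative only)
STATES = ["engaged", "distracted"]
OBS = {"eye_contact": 0, "looking_away": 1, "nodding": 2}

# Hypothetical model parameters, stored as log-probabilities for stability
start = {"engaged": math.log(0.6), "distracted": math.log(0.4)}
trans = {
    "engaged":    {"engaged": math.log(0.8), "distracted": math.log(0.2)},
    "distracted": {"engaged": math.log(0.3), "distracted": math.log(0.7)},
}
emit = {
    "engaged":    [math.log(0.6), math.log(0.1), math.log(0.3)],
    "distracted": [math.log(0.2), math.log(0.7), math.log(0.1)],
}

def viterbi(cues):
    """Return the most likely engagement-state sequence for observed cues."""
    obs = [OBS[c] for c in cues]
    # Initialise with the start distribution and the first observation
    score = {s: start[s] + emit[s][obs[0]] for s in STATES}
    back = []
    for o in obs[1:]:
        prev, score = score, {}
        back.append({})
        for s in STATES:
            # Pick the predecessor state that maximises the path score
            best = max(prev, key=lambda p: prev[p] + trans[p][s])
            back[-1][s] = best
            score[s] = prev[best] + trans[best][s] + emit[s][o]
    # Trace the best path backwards through the stored pointers
    last = max(score, key=score.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

A real system would learn these parameters from annotated interaction data (the paper reports using data from 47 individuals) rather than hand-setting them.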
Continuous Interaction with a Virtual Human
Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking, modifying its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE'10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder
Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot, both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
Multimodal agents for cooperative interaction
Fall 2020. Includes bibliographical references. Embodied virtual agents offer the potential to interact with a computer in a more natural manner, similar to how we interact with other people. To reach this potential requires multimodal interaction, including both speech and gesture. This project builds on earlier work at Colorado State University and Brandeis University on just such a multimodal system, referred to as Diana. I designed and developed a new software architecture to directly address some of the difficulties of the earlier system, particularly with regard to asynchronous communication, e.g., interrupting the agent after it has begun to act. Various other enhancements were made to the agent systems, including the model itself, as well as speech recognition, speech synthesis, motor control, and gaze control. Further refactoring and new code were developed to achieve software engineering goals that are not outwardly visible, but no less important: decoupling, testability, improved networking, and independence from a particular agent model. This work, combined with the effort of others in the lab, has produced a "version 2" Diana system that is well positioned to serve the lab's research needs in the future. In addition, in order to pursue new research opportunities related to developmental and intervention science, a "Faelyn Fox" agent was developed. This is a different model, with a simplified cognitive architecture, and a system for defining an experimental protocol (for example, a toy-sorting task) based on Unity's visual state machine editor. This version too lays a solid foundation for future research.
Human Factor and Usability Testing of a Binocular Optical Coherence Tomography System
PURPOSE: To perform usability testing of a binocular optical coherence tomography (OCT) prototype to predict its function in a clinical setting, and to identify any potential user errors, especially in an elderly and visually impaired population. METHODS: Forty-five participants with chronic eye disease (mean age 62.7 years) and 15 healthy controls (mean age 53 years) underwent automated eye examination using the prototype. Examination included 'whole-eye' OCT, ocular motility, visual acuity measurement, perimetry, and pupillometry. Interviews were conducted to assess the subjective appeal and ease of use for this cohort of first-time users. RESULTS: All participants completed the full suite of tests. Eighty-one percent of the chronic eye disease group and 79% of healthy controls found the prototype easier to use than common technologies, such as smartphones. Overall, 86% described the device as appealing for use in a clinical setting. There was no statistically significant difference in the total time taken to complete the examination between participants with chronic eye disease (median 702 seconds) and healthy volunteers (median 637 seconds) (P = 0.81). CONCLUSION: On their first use, elderly and visually impaired users completed the automated examination without assistance. Binocular OCT has the potential to perform a comprehensive eye examination in an automated manner, and thus improve the efficiency and quality of eye care. TRANSLATIONAL RELEVANCE: A usable binocular OCT system has been developed that can be administered in an automated manner. We have identified areas that would benefit from further development to guide the translation of this technology into clinical practice.
Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.
Accessibility requirements for human-robot interaction for socially assistive robots
International Mention in the doctoral degree. Doctoral Program in Computer Science and Technology, Universidad Carlos III de Madrid. Committee president: María Ángeles Malfaz Vázquez. Secretary: Diego Martín de Andrés. Member: Mike Wal
The Internet of Things Will Thrive by 2025
This report is the latest research report in a sustained effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. This report is an analysis of opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the over 1,600 responses that were offered specifically about the question of where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts will share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative of the predictions survey respondents made when specifically asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.