Multimodal Dialogue Management for Multiparty Interaction with Infants
We present dialogue management routines for a system to engage in multiparty
agent-infant interaction. The ultimate purpose of this research is to help
infants learn a visual sign language by engaging them, through an artificial
agent, in naturalistic and socially contingent conversations during an
early-life critical period for language development (ages 6 to 12 months).
As a first step, we focus on creating and maintaining agent-infant engagement
that elicits appropriate and socially contingent responses from the baby. Our
system includes two agents, a physical robot and an animated virtual human. The
system's multimodal perception includes an eye-tracker (measures attention) and
a thermal infrared imaging camera (measures patterns of emotional arousal). A
dialogue policy is presented that selects individual actions and planned
multiparty sequences based on perceptual inputs about the baby's internal
changing states of emotional engagement. The present version of the system was
evaluated in interaction with 8 babies. All babies demonstrated spontaneous and
sustained engagement with the agents for several minutes, with patterns of
conversationally relevant and socially contingent behaviors. We further
performed a detailed case-study analysis with annotation of all agent and baby
behaviors. Results show that the baby's behaviors were generally relevant to
agent conversations and contained direct evidence for socially contingent
responses by the baby to specific linguistic samples produced by the avatar.
This work demonstrates the potential for language learning from agents in very
young babies and has especially broad implications regarding the use of
artificial agents with babies who have minimal language exposure in early life.
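The abstract above describes a dialogue policy that selects agent actions from perceptual inputs (eye-tracker attention, thermal-imaging arousal). A minimal sketch of such a policy is below; the class, action names, and thresholds are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a perception-driven dialogue policy. The action
# labels and arousal thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class Perception:
    gaze_on_agent: bool  # from the eye-tracker: is the infant attending?
    arousal: float       # from thermal imaging: 0.0 (calm) .. 1.0 (high)


def select_action(p: Perception) -> str:
    """Pick the next agent action from the infant's attention and arousal."""
    if not p.gaze_on_agent:
        return "attention_getter"    # e.g. the robot waves to regain attention
    if p.arousal > 0.8:
        return "soothing_pause"      # de-escalate if the infant is over-aroused
    if p.arousal < 0.2:
        return "multiparty_sequence" # hand off between robot and avatar
    return "contingent_sign"         # respond contingently to the engaged infant


print(select_action(Perception(gaze_on_agent=True, arousal=0.5)))  # contingent_sign
```

A real policy would also plan multi-step multiparty sequences; this sketch only shows the single-action selection step.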
Communicating with Humans and Robots: A Motion Tracking Data Glove for Enhanced Support of Deafblind
In this work, we discuss the design and development of a communication system for enhanced support of the deafblind. The system is based on an advanced motion-tracking Data Glove that allows for high-fidelity determination of finger postures and consequent identification of the basic Malossi alphabet signs. A natural, easy-to-master alphabet extension that supports single-hand signing without touch-surface sensing is described, and different scenarios for its use are discussed. The focus is on using the extended Malossi alphabet as a communication medium in a Data Glove-based interface for remote messaging and interactive control of mobile robots. This may be of particular interest to the deafblind community, where demand for remote communication and robotized support and services is rising. The designed Data Glove-based communication interface requires minimal adjustments to the Malossi alphabet and can be mastered after a short training period. The natural interaction style supported by the Data Glove and the popularity of the Malossi alphabet among the deafblind should greatly facilitate the wider adoption of the developed interface.
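One way a glove-based system can map finger postures to alphabet letters is nearest-neighbour matching against stored template postures. The sketch below illustrates that idea only; the flexion values and letter templates are made up, and a real system (including the one described above) would calibrate per user and handle the full extended alphabet.

```python
# Illustrative sketch: classify a measured glove posture as a letter by
# nearest-neighbour matching against template postures. The templates here
# are invented, not the actual Malossi-alphabet extension.
import math

# Each posture: flexion of (thumb, index, middle, ring, little),
# where 0.0 = fully open and 1.0 = fully closed.
TEMPLATES = {
    "A": (0.0, 1.0, 1.0, 1.0, 1.0),
    "B": (1.0, 0.0, 0.0, 0.0, 0.0),
    "C": (0.5, 0.5, 0.5, 0.5, 0.5),
}


def classify(posture):
    """Return the template letter closest (Euclidean) to the measured flexions."""
    return min(TEMPLATES, key=lambda letter: math.dist(posture, TEMPLATES[letter]))


print(classify((0.1, 0.9, 1.0, 0.9, 1.0)))  # A
```

In practice a confidence threshold on the nearest distance would be needed so that ambiguous postures are rejected rather than misread.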
The blossom method: development of a somatic psychotherapy model, its use in clinical and everyday settings: a heuristic, reflexive inquiry
The Public Works considered for this submission include The Blossom Method Model, a parenting book on this approach, and a therapeutic children’s book. The submission includes a detailed, heuristic and reflexive account of the life experiences, clinical and linguistic training, and influences which have contributed to these works, and considers the impact the works have made on the field of psychotherapy.
Originally, The Blossom Method was developed with a focus upon non-verbal communication between parent and infant, using an integrative, relational approach with a particular emphasis on visual, kinaesthetic, gestural, sensorimotor communication. The model’s key components and the theoretical framework that it provides can be considered for use in psychotherapy training and practice.
In this account the submission reflects upon the author’s formative years and the experience of being raised by a profoundly deaf, non-signing mother. It is recognised that parent-child communication and connectivity have been complex for the author, which influenced their decision to study linguistics and undertake immersion training as a sign language interpreter with a university, developing fluency in both BSL and English. The context statement explores the author’s leadership role in a charitable organisation; the various professional and personal challenges which led to psychotherapy training; the experience of infant loss; and motherhood, which provided an opportunity to experiment with non-verbal communication and promote connection with the author’s daughter, Blossom. The model has been developed through heuristic learning, reflexive study and anecdotal research undertaken with parents and their infants, and it brings together linguistic training and therapeutic experience. The concepts of the model have been disseminated internationally through a popular parenting book, which has led to further research, speaking engagements, article writing, course content writing, and an involvement in training and developing a practice with parents and their infants, both Deaf and hearing.
The submission provides the model explanation initially published in the book and discusses the theoretical influences which form the content for the Public Works.
During the course of writing this submission, a particular feature in relation to influence and impact emerged, as the author noted that recognition reach has been achieved through the careful use of social media platforms. This has resulted in the author reaching international audiences in India, Australia, South Korea and South America.
Although the model is perhaps not distinctly a ‘new’ approach to psychotherapy, the considerations and findings in relation to the ‘language of infants’ provide a platform for additional research in the field of infant somatic narratives. Furthermore, there is a distinctive synthesis of personal background, linguistic training, professional knowledge and expertise as a psychotherapist with both Deaf and non-deaf adults, children and infants.
Coordinating attention requires coordinated senses
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention, in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary to certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
Evaluating Temporal Patterns in Applied Infant Affect Recognition
Agents must monitor their partners' affective states continuously in order to
understand and engage in social interactions. However, methods for evaluating
affect recognition do not account for changes in classification performance
that may occur during occlusions or transitions between affective states. This
paper addresses temporal patterns in affect classification performance in the
context of an infant-robot interaction, where infants' affective states
contribute to their ability to participate in a therapeutic leg movement
activity. To support robustness to facial occlusions in video recordings, we
trained infant affect recognition classifiers using both facial and body
features. Next, we conducted an in-depth analysis of our best-performing models
to evaluate how performance changed over time as the models encountered missing
data and changing infant affect. During time windows when features were
extracted with high confidence, a unimodal model trained on facial features
achieved the same optimal performance as multimodal models trained on both
facial and body features. However, multimodal models outperformed unimodal
models when evaluated on the entire dataset. Additionally, model performance
was weakest when predicting an affective state transition and improved after
multiple predictions of the same affective state. These findings emphasize the
benefits of incorporating body features in continuous affect recognition for
infants. Our work highlights the importance of evaluating variability in model
performance both over time and in the presence of missing data when applying
affect recognition to social interactions.
Comment: 8 pages, 6 figures, 10th International Conference on Affective Computing and Intelligent Interaction (ACII 2022).
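The finding above, that performance is weakest when predicting an affective-state transition, suggests scoring a classifier separately on transition frames and on stable frames. A minimal sketch of that evaluation split is below; the data layout and labels are assumptions for illustration, not the paper's dataset.

```python
# Minimal sketch (assumed frame-level data layout) of evaluating affect
# classification accuracy separately at state transitions vs. stable runs.

def transition_mask(labels):
    """True at frames where the ground-truth affective state just changed."""
    return [i > 0 and labels[i] != labels[i - 1] for i in range(len(labels))]


def accuracy(preds, labels, mask):
    """Accuracy of preds vs. labels restricted to frames where mask is True."""
    hits = [p == y for p, y, m in zip(preds, labels, mask) if m]
    return sum(hits) / len(hits) if hits else float("nan")


labels = ["calm", "calm", "fussy", "fussy", "calm", "calm"]  # toy ground truth
preds  = ["calm", "calm", "calm",  "fussy", "fussy", "calm"]  # toy predictions

trans = transition_mask(labels)
stable = [not t for t in trans]
print(accuracy(preds, labels, trans))   # 0.0  (both transitions missed)
print(accuracy(preds, labels, stable))  # 1.0  (stable frames all correct)
```

The toy example mirrors the reported pattern: the model lags the ground truth by one frame, so it is wrong exactly at transitions and correct once a state persists.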
How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers
Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Science in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as capable of improving outcomes through learning, exercise, and plasticity (rather than being static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis for both recent and future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating
in a unifying discussion and a forward-looking conclusion.
Embodied Language Learning and Cognitive Bootstrapping: Methods and Design Principles
Co-development of action, conceptualization and social interaction mutually scaffold and support each other within a virtuous feedback cycle in the development of human language in children. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and in particular, language acquisition in robots. Thus, we include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple different approaches that complement each other in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, while human learning relevant to developmental robotics has also contributed useful results. Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods
- …