
    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot, which adopts a peer metaphor and serves the instructional role of a peer for the student.
With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear – EmotiGO – for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students’ engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
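As a rough illustration of the final phase's pipeline, the sketch below maps hand-crafted physiological indices to binary engagement labels with a supervised classifier. Everything here is an assumption for illustration only: the feature set, the random-forest model, and the synthetic data standing in for EmotiGO recordings are not the dissertation's actual indices, ratings, or algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_indices(eda, ppg, temp):
    """Toy physiological indices from one window of raw signals
    (a hypothetical feature set, not EmotiGO's actual indices)."""
    return np.array([
        eda.mean(), eda.std(),   # skin-conductance level and variability
        np.diff(ppg).std(),      # crude PPG variability proxy
        temp.mean(),             # mean skin temperature
    ])

# Synthetic signal windows standing in for EmotiGO recordings;
# label 1 = "high engagement" as a coder might rate it
labels = rng.integers(0, 2, 120)
X = np.stack([extract_indices(rng.normal(5 + y, 1.0, 320),
                              rng.normal(0, 1.0 + y, 320),
                              rng.normal(33 - 0.2 * y, 0.1, 320))
              for y in labels])

# Supervised mapping from physiological indices to engagement labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, labels, cv=5).mean()
```

On real data, `acc` would be compared against a chance baseline; the synthetic classes here are deliberately separable, so the cross-validated score is high.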

    Social Influences on Songbird Behavior: From Song Learning to Motion Coordination

    Social animals learn during development how to integrate successfully into their group. How do social interactions combine to maintain group cohesion? We first review how social environments can influence the development of vocal learners, such as songbirds and humans (Chapter 1). To bypass the complexity of natural social interactions and gain experimental control, we developed Virtual Social Environments, surrounding the bird with videos of manipulated playbacks. This way we were able to design sensory and social scenarios and test how social zebra finches adjust their behavior (Chapters 2 & 3). A serious challenge is that the color output of a video monitor does not match the color vision of zebra finches. To minimize chromatic distortion, we eliminated all of the colors from the videos, except in the beak and cheeks where we superimposed colors that match the sensitivity of zebra finch photoreceptors (Chapter 2). Birds strongly preferred to watch these manipulated ‘bird appropriate’ videos. We also designed Virtual Social Environments for assessing how observing movement patterns might affect behavior in real-time (Chapter 3). We found that presenting birds with manipulated movement patterns of virtual males promptly affects the mobility of birds watching the videos: birds move more when virtual males increase their movements, and they decrease their movements and ‘cuddle’ next to virtual males that stop moving. These results suggest that individuals adjust their activity levels to the statistical patterns of observed conspecific movements, which can explain zebra finch group cohesion. Finally, we studied the song development process in the absence of social input to determine how intrinsic biases and external stimuli shape song from undifferentiated syllables into well-defined categorical signals of adult song (Chapter 4). Do juveniles learn the statistics of early sub-song to guide vocal development? 
We trained juvenile zebra finches with playbacks of their own, highly variable, developing song and showed that these self-tutored birds developed distinct syllable types (categories) as fast as birds that were trained with a categorical, adult song template. Therefore, the statistical structure of early input seems to have no bearing on the development of phonetic categories. Overall, our results uncover social forces that influence individual behaviors, from motion coordination to vocal development, which have implications for how group structures and vocal culture are maintained.

    Ideology and dramatic technic of Juan de la Cueva

    Thesis (M.A.)--University of Kansas, Romance Language and Literature, 1925

    To transduce a zebra finch: interrogating behavioral mechanisms in a model system for speech.

    The ability to alter neuronal gene expression, either to affect levels of endogenous molecules or to express exogenous ones, is a powerful tool for linking brain and behavior. Scientists continue to finesse genetic manipulation in mice. Yet mice do not exhibit every behavior of interest. For example, Mus musculus do not readily imitate sounds, a trait known as vocal learning and a feature of speech. In contrast, thousands of bird species exhibit this ability. The circuits and underlying molecular mechanisms appear similar between disparate avian orders and are shared with humans. An advantage of studying vocal learning birds is that the neurons dedicated to this trait are nested within the surrounding brain regions, providing anatomical targets for relating brain and behavior. In songbirds, these nuclei are known as the song control system. Molecular function can be interrogated in non-traditional model organisms by exploiting the ability of viruses to insert genetic material into neurons to drive expression of experimenter-defined genes. To date, the use of viruses in the song control system has been limited. Here, we review prior successes and test additional viruses for their capacity to transduce basal ganglia song control neurons. These findings provide a roadmap for troubleshooting the use of viruses in animal champions of fascinating behaviors, nowhere better featured than at the 12th International Congress.

    Dynamic Expression of Cadherins Regulates Vocal Development in a Songbird

    BACKGROUND: Because songbirds, like humans, learn their vocalizations through imitation during the juvenile stage, they have often been used as model animals to study the mechanisms of human verbal learning. Numerous anatomical and physiological studies have suggested that songbirds have a neural network, called the 'song system', specialized for vocal learning and production in their brain. However, it remains unknown what molecular mechanisms regulate their vocal development. Type-II cadherins have been suggested to be involved in synapse formation and function. Previously, we found that type-II cadherin expression switches in the robust nucleus of the arcopallium (RA) from cadherin-7-positive to cadherin-6B-positive during the transition from the sensory to the sensorimotor learning stage in a songbird, the Bengalese finch. Furthermore, in vitro analysis using cultured rat hippocampal neurons revealed that cadherin-6B enhanced, and cadherin-7 suppressed, the frequency of miniature excitatory postsynaptic currents by regulating dendritic spine morphology. METHODOLOGY/PRINCIPAL FINDINGS: To explore the role of cadherins in vocal development, we performed an in vivo behavioral analysis of cadherin function using lentiviral vectors. Overexpression of cadherin-7 in the juvenile and adult stages resulted in severe defects in vocal production. In both cases, the harmonic sounds typical of adult Bengalese finch songs were particularly affected. CONCLUSIONS/SIGNIFICANCE: Our results suggest that cadherins control vocal production, particularly harmonic sounds, probably by modulating neuronal morphology in the RA nucleus. It appears that the switch in cadherin expression from the sensory to the sensorimotor learning stage enhances the ability to produce the varied vocalizations essential for sensorimotor learning in a trial-and-error manner.

    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments are conducted on a Baxter robot, a NAO robot, and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator’s head and arm are captured by a Kinect V2 sensor, and this information is then mapped into the robot's workspace via forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using a vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator’s arm movements. For this purpose, the armbands help to recognize and calculate the velocity of the operator’s arm motion. Additionally, a neural-network-based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation of the Baxter robot. Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR.
Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data are sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used to preprocess the data recorded on the robot platform, and GMM is employed to evaluate the DMP and generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character-recombination technology based on DMP segmentation, driven by verbal commands, has also been developed and incorporated into the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data, and DMP is chosen for modelling and overall movement control. The GMM is used to generate multiple patterns after the teaching process, and the GMR algorithm then reduces position errors in 3D space once a synthesized trajectory has been generated. The Baxter robot, remotely controlled over the user datagram protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by enabling the Baxter robot to perform a writing task; the robot has been taught to write a single character.
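The DMP modelling step described above can be sketched in miniature. Below is a minimal one-dimensional discrete DMP in the standard transformation-system form; the gains, basis count, and demonstration trajectory are arbitrary assumptions for illustration, not the settings used in this work. It learns a forcing term from a single demonstration and then integrates the system to reproduce the movement toward the demonstrated goal:

```python
import numpy as np

class DMP1D:
    """Minimal 1-D discrete Dynamic Movement Primitive (illustrative sketch)."""
    def __init__(self, n_basis=30, alpha=25.0, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, alpha / 4.0, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centres in phase s
        self.h = n_basis ** 1.5 / self.c                        # heuristic basis widths
        self.w = np.zeros(n_basis)

    def _psi(self, s):
        # Gaussian basis functions evaluated at phase s
        return np.exp(-self.h * (s - self.c) ** 2)

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstration y(t)."""
        self.y0, self.g = y[0], y[-1]
        dy = np.gradient(y, dt)
        ddy = np.gradient(dy, dt)
        s = np.exp(-self.alpha_s * np.linspace(0, 1, len(y)))   # canonical phase
        f_target = ddy - self.alpha * (self.beta * (self.g - y) - dy)
        xi = s * (self.g - self.y0)                              # forcing-term scaling
        psi = np.stack([self._psi(si) for si in s])              # (T, n_basis)
        num = (psi * (xi * f_target)[:, None]).sum(0)
        den = (psi * (xi ** 2)[:, None]).sum(0) + 1e-10
        self.w = num / den                                       # weighted regression per basis
        return self

    def rollout(self, T, dt):
        """Integrate the learned DMP to reproduce the movement."""
        y, dy, s, out = self.y0, 0.0, 1.0, []
        for _ in range(T):
            psi = self._psi(s)
            f = s * (self.g - self.y0) * (psi @ self.w) / (psi.sum() + 1e-10)
            ddy = self.alpha * (self.beta * (self.g - y) - dy) + f
            dy += ddy * dt
            y += dy * dt
            s += -self.alpha_s * s * dt                          # phase decay
            out.append(y)
        return np.array(out)

# Demonstration: a minimum-jerk-like reach from 0 to 1 over one second
t = np.linspace(0, 1, 200)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5
repro = DMP1D().fit(demo, t[1] - t[0]).rollout(len(t), t[1] - t[0])
err = abs(repro[-1] - demo[-1])   # end-point error vs. the demonstrated goal
```

Because the forcing term is scaled by the goal offset and decays with the phase variable, the same learned weights generalize to new start and goal positions, which is the property the teaching interface exploits when synthesizing trajectories.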

    Growth and splitting of neural sequences in songbird vocal development

    Neural sequences are a fundamental feature of brain dynamics underlying diverse behaviours, but the mechanisms by which they develop during learning remain unknown. Songbirds learn vocalizations composed of syllables; in adult birds, each syllable is produced by a different sequence of action potential bursts in the premotor cortical area HVC. Here we carried out recordings of large populations of HVC neurons in singing juvenile birds throughout learning to examine the emergence of neural sequences. Early in vocal development, HVC neurons begin producing rhythmic bursts, temporally locked to a prototype syllable. Different neurons are active at different latencies relative to syllable onset to form a continuous sequence. Through development, as new syllables emerge from the prototype syllable, initially highly overlapping burst sequences become increasingly distinct. We propose a mechanistic model in which multiple neural sequences can emerge from the growth and splitting of a common precursor sequence. National Institutes of Health (U.S.) (Grant R01DC009183); National Science Foundation (U.S.) (Grant DGE-114747)