
    Effects of congenital hearing loss and cochlear implantation on audiovisual speech perception in infants and children

    Purpose: Cochlear implantation has recently become available as an intervention strategy for young children with profound hearing impairment. In fact, infants as young as 6 months are now receiving cochlear implants (CIs), and even younger infants are being fitted with hearing aids (HAs). Because early audiovisual experience may be important for the normal development of speech perception, it is important to investigate the effects of a period of auditory deprivation and of amplification type on the multimodal perceptual processes of infants and children. The purpose of this study was to investigate audiovisual perception skills in normal-hearing (NH) infants and children and in deaf infants and children with CIs and HAs of similar chronological ages. Methods: We used an Intermodal Preferential Looking Paradigm to present the same woman's face articulating two words (judge and back) in temporal synchrony on two sides of a TV monitor, along with an auditory presentation of one of the words. Results: NH infants and children spontaneously matched auditory and visual information in spoken words; deaf infants and children with HAs did not integrate the audiovisual information; and deaf infants and children with CIs initially did not integrate the audiovisual information but gradually came to match the auditory and visual information in spoken words. Conclusions: These results suggest that a period of auditory deprivation affects multimodal perceptual processes, which may begin to develop normally after several months of auditory experience.
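    As a rough illustration of how preferential-looking trials of this kind are typically scored (a sketch under assumed coding conventions, not the study's actual analysis), one can compute the proportion of on-screen looking time directed at the audio-matching face; group means reliably above the 0.5 chance level indicate audiovisual matching:

```python
# Minimal sketch of scoring an intermodal preferential-looking trial.
# All names (Trial, gaze codes, matching_proportion) are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Trial:
    # Per-video-frame gaze codes: "match" = looking at the face whose
    # articulation matches the audio, "mismatch" = the other face,
    # "away" = not looking at either face.
    gaze_codes: List[str]

def matching_proportion(trial: Trial) -> float:
    """Proportion of on-screen looking time spent on the matching face."""
    match = trial.gaze_codes.count("match")
    mismatch = trial.gaze_codes.count("mismatch")
    on_screen = match + mismatch
    return match / on_screen if on_screen else float("nan")

trial = Trial(gaze_codes=["match"] * 70 + ["mismatch"] * 40 + ["away"] * 10)
print(f"matching proportion: {matching_proportion(trial):.2f}")  # 0.64 > 0.5
```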

    Modeling the Synchronization of Multimodal Perceptions as a Basis for the Emergence of Deterministic Behaviors.

    Living organisms have either innate or acquired mechanisms for reacting to percepts with an appropriate behavior, e.g., by escaping from the source of a perception detected as a threat or, conversely, by approaching a target perceived as potential food. In the case of artifacts, such capabilities must be built in through either wired connections or software. The problem addressed here is to define a neural basis for such behaviors so that they can be learned by bio-inspired artifacts. Toward this end, a thought experiment involving an autonomous vehicle is first simulated as a random search. The stochastic decision tree that drives this behavior is then transformed into a plastic neuronal circuit. This leads the vehicle to adopt a deterministic behavior by learning and applying a causality rule, just as a conscious human driver would do. From there, a principle is induced: synchronized multimodal perceptions, combined with the Hebbian principle of wiring together co-active neuronal cells, can ground such learning. This overall framework is implemented as a virtual machine, a concept widely used in software engineering. It is argued that such an interface, situated at a meso-scale level between abstracted micro-circuits representing synaptic plasticity, on the one hand, and the emergence of behaviors, on the other, allows for a strict delineation of successive levels of complexity. More specifically, isolating levels allows for simulating as-yet-unknown processes of cognition independently of their underlying neurological grounding.
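    The Hebbian wiring of synchronized percepts described here can be sketched in a few lines; the toy network, learning rate, and update rule below are illustrative assumptions, not the paper's virtual machine:

```python
import numpy as np

# Toy Hebbian association between two sensory modalities: units that fire
# together across synchronized percepts get wired together. Parameters and
# the one-hot encoding are made up for illustration.
rng = np.random.default_rng(0)
n_a, n_b = 4, 4            # units per modality
W = np.zeros((n_a, n_b))   # cross-modal synaptic weights
eta = 0.1                  # learning rate

for _ in range(200):
    percept = rng.integers(n_a)          # one event stimulates...
    a = np.zeros(n_a); a[percept] = 1.0  # ...unit `percept` in modality A
    b = np.zeros(n_b); b[percept] = 1.0  # ...and, in sync, in modality B
    W += eta * np.outer(a, b)            # Hebb: delta_w = eta * pre * post

# After learning, activity in modality A predicts the associated unit in
# modality B: behavior has shifted from random to rule-based.
print(np.argmax(W, axis=1))  # [0 1 2 3]
```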

    Interactive rhythms across species: The evolutionary biology of animal chorusing and turn-taking

    The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn‐taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account of cross‐species turn‐taking should consider three key points. First, animal turn‐taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn‐taking, such as intentionality, are still debated in animal behavior, lower-level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Treating turn‐taking as a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
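    One hedged way to make the chorusing-versus-turn-taking distinction concrete is to measure one animal's call onsets against the other's calling cycle; the phase measure and toy data below are purely illustrative, not a method from the article:

```python
# Place two animals' signals on a chorusing continuum: compute each
# caller-B onset's phase within caller A's cycle. Phases near 0 suggest
# synchronous chorusing; phases near 0.5 suggest alternation (turn-taking).
import numpy as np

def relative_phases(onsets_a, onsets_b):
    onsets_a = np.asarray(onsets_a)
    phases = []
    for t in onsets_b:
        i = np.searchsorted(onsets_a, t) - 1        # A-onset preceding t
        if 0 <= i < len(onsets_a) - 1:
            cycle = onsets_a[i + 1] - onsets_a[i]   # A's current cycle length
            phases.append((t - onsets_a[i]) / cycle)
    return np.array(phases)

a = [0.0, 1.0, 2.0, 3.0, 4.0]        # caller A: 1 s period
b = [0.5, 1.5, 2.5, 3.5]             # caller B: antiphase
print(relative_phases(a, b).mean())  # 0.5 -> alternation / turn-taking
```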

    Neural correlates of the processing of co-speech gestures

    In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, which are hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g., She touched the mouse) were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g., animal), or a gesture supporting the less frequent subordinate meaning (e.g., computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences than to sentences accompanied by a meaningless grooming movement. The main results are that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of the gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism of determining the goal of co-speech hand movements through an observation-execution matching process.
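    The contrast logic of the design (gesture-supported sentences versus the grooming baseline) can be sketched schematically. A real fMRI analysis fits a full GLM with hemodynamic modeling and corrects for multiple comparisons, so the voxel-wise paired t-test below, run on simulated per-subject betas, is only an illustrative stand-in:

```python
import numpy as np
from scipy import stats

# Simulated condition betas: rows = subjects, columns = voxels.
rng = np.random.default_rng(1)
n_subj, n_vox = 20, 1000
beta_gesture = rng.normal(1.5, 1.0, (n_subj, n_vox))   # gesture-supported
beta_grooming = rng.normal(0.0, 1.0, (n_subj, n_vox))  # grooming baseline

# Paired t-test per voxel: gesture vs. grooming within subjects.
t, p = stats.ttest_rel(beta_gesture, beta_grooming, axis=0)
sig = p < 0.001   # uncorrected threshold, purely for illustration
print(f"{sig.sum()} of {n_vox} voxels show gesture > grooming")
```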

    Neural Correlates of Social Behavior in Mushroom Body Extrinsic Neurons of the Honeybee Apis mellifera

    The social behavior of honeybees (Apis mellifera) has been extensively investigated, but little is known about its neuronal correlates. We developed a method that allowed us to record extracellularly from mushroom body extrinsic neurons (MB ENs) in a freely moving bee within a small but functioning mini colony of approximately 1,000 bees. This study aimed to correlate the neuronal activity of multimodal high-order MB ENs with social behavior in a close-to-natural setting. The behavior of all bees in the colony was video recorded. The behavior of the recorded animal was compared with that of its hive mates, and no significant differences were found. Changes in the spike rate appeared before, during, or after social interactions. The time window of the strongest effect on spike rate changes ranged from 1 s to 2 s before and after the interaction, depending on the individual animal and the recorded neuron. The highest spike rates occurred when the experimental animal was situated close to a hive mate. The variance of the spike rates was analyzed as a proxy for high-order multi-unit processing. Comparing randomly selected time windows with those in which the recorded animal performed social interactions showed a significantly increased spike-rate variance during social interactions. The experimental set-up employed for this study offers a powerful opportunity to correlate neuronal activity with the intrinsically motivated behavior of socially interacting animals. We conclude that the recorded MB ENs are potentially involved in initiating and controlling social interactions in honeybees.
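    The variance comparison described above lends itself to a simple permutation scheme: compare spike-rate variance in windows around social interactions with the same statistic in randomly placed windows. The sketch below uses simulated spike times and made-up window parameters, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(2)

def rate_variance(spike_times, window_starts, width=2.0):
    """Variance of firing rates (Hz) across fixed-width windows."""
    rates = [np.sum((spike_times >= s) & (spike_times < s + width)) / width
             for s in window_starts]
    return np.var(rates)

spikes = np.sort(rng.uniform(0, 600, 3000))   # 10 min of spike times (s)
social = np.array([50, 120, 300, 410, 520])   # interaction onsets (s)

# Null distribution: the same statistic over randomly placed windows.
obs = rate_variance(spikes, social)
null = [rate_variance(spikes, rng.uniform(0, 598, size=social.size))
        for _ in range(1000)]
p = np.mean(np.asarray(null) >= obs)
print(f"observed variance {obs:.2f}, permutation p = {p:.3f}")
```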

    Advanced Content and Interface Personalization through Conversational Behavior and Affective Embodied Conversational Agents

    Conversation is becoming one of the key interaction modes in human-machine interaction (HMI). As a result, conversational agents (CAs) have become an important tool in various everyday scenarios. From Apple and Microsoft to Amazon, Google, and Facebook, all have adopted their own variants of CAs. The CAs range from chatbots and 2D, cartoon-like implementations of talking heads to fully articulated embodied conversational agents performing interaction in various contexts. Recent studies in the field of face-to-face conversation show that the most natural way to implement interaction is through synchronized verbal and co-verbal signals (gestures and expressions). Namely, co-verbal behavior represents a major source of discourse cohesion. It regulates communicative relationships and may support or even replace its verbal counterparts. It effectively retains the semantics of the information and lends a certain degree of clarity to the discourse. In this chapter, we present a model for the generation and realization of more natural machine-generated output.
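    A minimal sketch of the kind of verbal/co-verbal synchronization discussed here is to align each gesture's meaning-bearing stroke with the onset of the word it illustrates; the schema and names below are hypothetical, not the chapter's actual behavior-realization model:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    onset: float     # seconds into the utterance
    duration: float

@dataclass
class Gesture:
    shape: str       # e.g., a handshape / movement label
    stroke_at: float # time the meaning-bearing stroke should peak

def align_gestures(words, gesture_plan):
    """gesture_plan maps a word index to a gesture shape label."""
    return [Gesture(shape, words[i].onset) for i, shape in gesture_plan.items()]

words = [Word("she", 0.0, 0.2), Word("touched", 0.25, 0.4),
         Word("the", 0.7, 0.1), Word("mouse", 0.85, 0.5)]
for g in align_gestures(words, {3: "iconic:computer-mouse"}):
    print(g)   # Gesture(shape='iconic:computer-mouse', stroke_at=0.85)
```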

    Being in-sync: A multimodal framework on the emotional and cognitive synchronization of collaborative learners

    Collaborative learners share an experience when focusing on a task together and coevally influence each other's emotions and motivations. Continuous emotional synchronization relates to how learners co-regulate their cognitive resources, especially regarding their joint attention and transactive discourse. “Being in-sync” then refers to multiple emotional and cognitive group states and processes, raising the question: to what extent and when is being in-sync beneficial, and when is it not? In this article, we propose a framework of multimodal learning analytics addressing the synchronization of collaborative learners across emotional and cognitive dimensions and different modalities. To exemplify this framework and approach the question of how emotions and cognitions intertwine in collaborative learning, we present contrasting cases of learners in a tabletop environment who have or have not been instructed to coordinate their gaze. Qualitative analysis of multimodal data incorporating eye-tracking and electrodermal sensors shows that the gaze instruction facilitated being emotionally, cognitively, and behaviorally “in-sync” during the peer collaboration. Identifying and analyzing moments of shared emotional shifts shows how learners establish shared understanding regarding both the learning task and the relationship among them when they are emotionally “in-sync.”
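    A common proxy for such physiological “being in-sync” is a sliding-window correlation between two learners' electrodermal (EDA) signals; the window parameters and simulated data below are illustrative only, not the article's analysis pipeline:

```python
import numpy as np

def windowed_sync(eda_a, eda_b, win=50, step=25):
    """Pearson correlation of two EDA streams in sliding windows."""
    scores = []
    for start in range(0, len(eda_a) - win + 1, step):
        a = eda_a[start:start + win]
        b = eda_b[start:start + win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

rng = np.random.default_rng(3)
shared = np.cumsum(rng.normal(size=500))           # shared arousal drift
eda_a = shared + rng.normal(scale=2.0, size=500)   # learner A
eda_b = shared + rng.normal(scale=2.0, size=500)   # learner B
sync = windowed_sync(eda_a, eda_b)
print(f"mean windowed synchrony: {sync.mean():.2f}")
```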