
    Event-driven visual attention for the humanoid robot iCub.

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend to.
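
    The abstract does not spell out the computational method, so the following is a hedged, minimal sketch of one common event-driven attention scheme: incoming sensor events are accumulated into a leaky saliency map and the focus of attention is the current peak. The resolution, decay constant and event format are assumptions made for illustration, not details taken from the paper.

        # Minimal sketch (not the authors' implementation): accumulate event-camera
        # events into a leaky saliency map and pick the peak as the focus of attention.
        import numpy as np

        H, W = 240, 304      # hypothetical sensor resolution
        TAU = 0.05           # assumed saliency decay time constant (s)

        def focus_of_attention(events, saliency, t_prev):
            """events: iterable of (t, x, y, polarity); returns (fx, fy) and updated state."""
            for t, x, y, _ in events:
                saliency *= np.exp(-(t - t_prev) / TAU)   # exponential leak since last event
                saliency[y, x] += 1.0                     # each event adds evidence
                t_prev = t
            fy, fx = np.unravel_index(np.argmax(saliency), saliency.shape)
            return (fx, fy), saliency, t_prev

        # usage: feed event packets as they arrive; no frame accumulation is needed
        saliency = np.zeros((H, W))
        packet = [(0.0010, 120, 80, 1), (0.0012, 121, 80, -1), (0.0020, 122, 81, 1)]
        (fx, fy), saliency, t_last = focus_of_attention(packet, saliency, t_prev=0.0)
        print(f"focus of attention at pixel ({fx}, {fy})")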

    A Developmental Organization for Robot Behavior

    This paper focuses on exploring how learning and development can be structured in synthetic (robot) systems. We present a developmental assembler for constructing reusable and temporally extended actions in sequence. The discussion adopts the traditions of dynamic pattern theory, in which behavior is an artifact of coupled dynamical systems with a number of controllable degrees of freedom. In our model, the events that delineate control decisions are derived from the pattern of (dis)equilibria on a working subset of sensorimotor policies. We show how this architecture can be used to accomplish sequential knowledge-gathering and representation tasks, and we provide examples of the kinds of developmental milestones that this approach has already produced in our lab.
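
    The key mechanism described above, control decisions delineated by (dis)equilibria of running sensorimotor controllers, can be illustrated with a toy sequencer: each controller runs until its state derivative falls below a threshold, and that equilibrium event triggers the switch to the next action. This is only a hedged sketch with invented dynamics ("reach", "grasp"), not the authors' assembler.

        # Illustrative sketch only: sequencing toy controllers on equilibrium events,
        # loosely in the spirit of the developmental assembler described above.
        def reach(x): return -2.0 * (x - 1.0)    # toy dynamics driving x toward 1.0
        def grasp(x): return -2.0 * (x - 0.2)    # toy dynamics driving x toward 0.2

        def run_sequence(controllers, x0, dt=0.01, eps=1e-3, max_steps=10_000):
            """Run each controller until |dx/dt| < eps (equilibrium), then switch."""
            x, trace = x0, []
            for name, f in controllers:
                for _ in range(max_steps):
                    dx = f(x)
                    x += dt * dx
                    if abs(dx) < eps:            # equilibrium event delineates the decision
                        trace.append((name, round(x, 2)))
                        break
            return trace

        print(run_sequence([("reach", reach), ("grasp", grasp)], x0=0.0))
        # approximately [('reach', 1.0), ('grasp', 0.2)]: each action runs to equilibrium first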

    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems, and by studying their support for general mechanisms of real-time consciousness. To analyse these architectures, the ASys Framework is employed: a conceptual framework based on an extension of the General Systems Theory (GST) for cognitive autonomous systems. General qualitative evaluation criteria for cognitive architectures are established based upon a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.

    Speech-brain synchronization: a possible cause for developmental dyslexia

    Dyslexia is a neurological learning disability characterized by difficulty in an individual's ability to read despite adequate intelligence and normal opportunities. The majority of dyslexic readers present phonological difficulties. The phonological difficulty most often associated with dyslexia is a deficit in phonological awareness, that is, the ability to hear and manipulate the sound structure of language. Some appealing theories of dyslexia attribute a causal role to atypical auditory oscillatory neural activity, suggesting it generates some of the phonological problems in dyslexia. These theories propose that the auditory cortical oscillations of dyslexic individuals entrain less accurately to the spectral properties of auditory stimuli at distinct frequency bands (delta, theta and gamma) that are important for speech processing. Nevertheless, there are diverging hypotheses concerning the specific bands that would be disrupted in dyslexia and the consequences of such difficulties for speech processing. The goal of the present PhD thesis was to characterize the neural oscillatory basis underlying phonological difficulties in developmental dyslexia. We evaluated whether phonological deficits in developmental dyslexia are associated with impaired auditory entrainment to a specific frequency band. To that aim, we measured auditory neural synchronization to linguistic and non-linguistic auditory signals at different frequencies corresponding to key phonological units of speech (prosodic, syllabic and phonemic information). We found that dyslexic readers presented atypical neural entrainment to the delta, theta and gamma frequency bands. Importantly, we showed that atypical entrainment to theta and gamma modulations in dyslexia could compromise perceptual computations during speech processing, while reduced delta entrainment in dyslexia could affect perceptual and attentional operations during speech processing. In addition, we characterized the links between the anatomy of the auditory cortex and its oscillatory responses, taking into account previous studies that have observed structural alterations in dyslexia. We observed that cortical pruning in auditory regions was linked to a stronger sensitivity to gamma oscillations in skilled readers, but to stronger theta-band sensitivity in dyslexic readers. Thus, we concluded that the left auditory regions might be specialized for processing phonological information at different time scales (phoneme vs. syllable) in skilled and dyslexic readers. Lastly, by assessing both children and adults on similar tasks, we provided the first evaluation of developmental modulations of typical and atypical auditory sampling (and their structural underpinnings). We found that atypical neural entrainment to delta, theta and gamma is present in dyslexia throughout the lifespan and is not modulated by reading experience.
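
    The thesis does not reproduce its analysis code here, but entrainment of this kind is commonly quantified as spectral coherence between the speech amplitude envelope and the recorded cortical signal, summarized within the delta, theta and gamma bands. The sketch below uses synthetic signals and assumed parameters (sampling rate, band edges) purely to illustrate that measure.

        # Hedged sketch of a standard entrainment measure: coherence between a
        # speech envelope and a cortical signal, averaged within frequency bands.
        import numpy as np
        from scipy.signal import coherence

        fs = 200.0                                 # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)               # 60 s of synthetic data

        # synthetic "speech envelope" with a dominant ~4 Hz (syllabic) modulation
        envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)
        # synthetic "cortical" signal that partially follows the envelope, plus noise
        cortical = 0.6 * envelope + np.random.randn(t.size)

        f, cxy = coherence(envelope, cortical, fs=fs, nperseg=int(4 * fs))

        bands = {"delta": (0.5, 4), "theta": (4, 8), "gamma": (25, 45)}
        for name, (lo, hi) in bands.items():
            mask = (f >= lo) & (f <= hi)
            print(f"{name:5s} coherence: {cxy[mask].mean():.3f}")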

    Temporal Coding of Speech in Human Auditory Cortex

    Human listeners can reliably recognize speech in complex listening environments. The underlying neural mechanisms, however, remain unclear and cannot yet be emulated by any artificial system. In this dissertation, we study how speech is represented in the human auditory cortex and how the neural representation contributes to reliable speech recognition. Cortical activity from normal-hearing human subjects is noninvasively recorded using magnetoencephalography during natural speech listening. It is first demonstrated that neural activity from auditory cortex is precisely synchronized to the slow temporal modulations of speech when the speech signal is presented in a quiet listening environment. How this neural representation is affected by acoustic interference is then investigated. Acoustic interference degrades speech perception via two mechanisms, informational masking and energetic masking, which are addressed respectively by using a competing speech stream and a stationary noise as the interfering sound. When two speech streams are presented simultaneously, cortical activity is predominantly synchronized to the speech stream the listener attends to, even if the unattended, competing speech stream is 8 dB more intense. When speech is presented together with spectrally matched stationary noise, cortical activity remains precisely synchronized to the temporal modulations of speech until the noise is 9 dB more intense. Critically, the accuracy of neural synchronization to speech predicts how well individual listeners can understand speech in noise. Further analysis reveals that two neural sources contribute to speech-synchronized cortical activity, one with a shorter response latency of about 50 ms and the other with a longer response latency of about 100 ms. The longer-latency component, but not the shorter-latency component, shows selectivity to the attended speech and invariance to background noise, indicating a transition in auditory cortex from encoding the acoustic scene to encoding the behaviorally important auditory object. Taken together, we have demonstrated that during natural speech comprehension, neural activity in the human auditory cortex is precisely synchronized to the slow temporal modulations of speech. This neural synchronization is robust to acoustic interference, whether speech or noise, and therefore provides a strong candidate for the neural basis of acoustic-background-invariant speech recognition.
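
    The two response latencies reported above (about 50 ms and about 100 ms) are the kind of quantity that can be read off a lagged correlation between the speech envelope and the cortical signal. The sketch below simulates a single delayed response and recovers its latency; the sampling rate, delay and noise level are invented for illustration, and this is not the dissertation's analysis pipeline.

        # Illustrative sketch: estimate response latency as the lag of peak
        # correlation between the speech envelope and the neural signal.
        import numpy as np

        fs = 200                                   # assumed sampling rate (Hz)
        rng = np.random.default_rng(0)
        envelope = rng.standard_normal(fs * 60)    # stand-in for a speech envelope

        delay = int(0.1 * fs)                      # simulate a 100 ms cortical latency
        neural = np.roll(envelope, delay) + 0.5 * rng.standard_normal(envelope.size)

        lags = np.arange(0, int(0.3 * fs) + 1)     # search lags up to 300 ms
        corr = []
        for lag in lags:
            corr.append(np.corrcoef(envelope[: envelope.size - lag], neural[lag:])[0, 1])

        print(f"estimated latency: {lags[int(np.argmax(corr))] / fs * 1000:.0f} ms")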

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
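
    The spoke manipulation in the control condition is purely geometric: each rectangle is displaced ±1 degree of visual angle along the line joining its centre to fixation. Below is a hedged sketch of that computation; the ring layout, eccentricity and random sign assignment are assumptions for illustration, not the exact display parameters of the experiment.

        # Hedged sketch of the radial displacement along imaginary spokes from fixation.
        import numpy as np

        rng = np.random.default_rng(1)

        def shift_along_spokes(xy_deg, shift_deg=1.0):
            """xy_deg: (n, 2) rectangle centres in degrees relative to fixation at (0, 0)."""
            radial = xy_deg / np.linalg.norm(xy_deg, axis=1, keepdims=True)  # unit spoke vectors
            signs = rng.choice([-1.0, 1.0], size=(len(xy_deg), 1))           # +/- 1 deg per item
            return xy_deg + signs * shift_deg * radial

        # eight rectangles on a ring 6 degrees from fixation (assumed layout)
        angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        positions = 6.0 * np.column_stack([np.cos(angles), np.sin(angles)])
        print(shift_along_spokes(positions).round(2))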