
    Multiplicative Auditory Spatial Receptive Fields Created by a Hierarchy of Population Codes

    A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus that satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
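
    To make the proposed two-stage architecture concrete, the Python sketch below multiplies ITD- and ILD-dependent signals within each frequency channel and then combines the channel outputs with a linear-threshold stage. The Gaussian ITD tuning, sigmoidal ILD tuning, channel weighting and threshold value are illustrative assumptions, not parameters taken from the paper.

        import numpy as np

        def itd_tuning(itd_us, best_itd_us, sigma_us=50.0):
            # Assumed Gaussian ITD tuning within one frequency channel.
            return np.exp(-0.5 * ((itd_us - best_itd_us) / sigma_us) ** 2)

        def ild_tuning(ild_db, best_ild_db, slope=1.0):
            # Assumed sigmoidal ILD tuning within one frequency channel.
            return 1.0 / (1.0 + np.exp(-slope * (ild_db - best_ild_db)))

        def icx_response(itds_us, ilds_db, best_itd_us, best_ild_db, threshold=0.5):
            # Within-channel multiplication of ITD and ILD signals ...
            per_channel = (itd_tuning(np.asarray(itds_us, float), best_itd_us)
                           * ild_tuning(np.asarray(ilds_db, float), best_ild_db))
            # ... followed by linear frequency integration and a threshold nonlinearity.
            return max(per_channel.sum() - threshold, 0.0)

        # Frequency-dependent cues from a single source, five frequency channels:
        print(icx_response([40, 45, 50, 48, 42], [3.0, 4.0, 5.0, 4.5, 3.5],
                           best_itd_us=45.0, best_ild_db=4.0))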

    A Model of Local Adaptation

    The visual system constantly adapts to different luminance levels when viewing natural scenes. The state of visual adaptation is the key parameter in many visual models. While the time course of such adaptation is well understood, little is known about the spatial pooling that drives the adaptation signal. In this work we propose a new empirical model of local adaptation that predicts how the adaptation signal is integrated in the retina. The model is based on psychophysical measurements on a high dynamic range (HDR) display. We employ a novel approach to model discovery, in which the experimental stimuli are optimized to find the most predictive model. The model can be used to predict the steady state of adaptation as well as conservative estimates of the visibility (detection) thresholds in complex images. We demonstrate the utility of the model in several applications, such as perceptual error bounds for physically based rendering, determining the backlight resolution for HDR displays, measuring the maximum visible dynamic range in natural scenes, simulation of afterimages, and gaze-dependent tone mapping.
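
    As a rough sketch of what spatial pooling of the adaptation signal might look like, the Python snippet below estimates a local adaptation luminance by Gaussian pooling of log-luminance around each pixel. The log-domain pooling and the kernel size are assumptions made for illustration; the paper derives the actual pooling model from its psychophysical measurements.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def local_adaptation_luminance(luminance_cd_m2, sigma_px=30.0):
            # Pool log-luminance over a Gaussian neighbourhood (assumed kernel size),
            # then map back to linear luminance to obtain the adaptation level.
            log_lum = np.log10(np.maximum(luminance_cd_m2, 1e-4))  # avoid log(0)
            pooled = gaussian_filter(log_lum, sigma=sigma_px)
            return 10.0 ** pooled

        # A dim background with a bright square, as on an HDR display:
        img = np.full((128, 128), 1.0)       # 1 cd/m^2 background
        img[48:80, 48:80] = 1000.0           # 1000 cd/m^2 patch
        adaptation = local_adaptation_luminance(img)
        print(adaptation.min(), adaptation.max())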

    Motor (but not auditory) attention affects syntactic choice

    Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker’s attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English-speaking participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue-location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker’s syntactic choices are modality-specific and limited to the visual and motor domains but not the auditory domain.

    Affective interactions between expressive characters

    When people meet in virtual worlds they are represented by computer-animated characters that lack a variety of expression and can seem stiff and robotic. By comparison, human bodies are highly expressive; a casual observation of a group of people will reveal a large diversity of behaviour, different postures, gestures and complex patterns of eye gaze. In order to make computer-mediated communication between people more like real face-to-face communication, it is necessary to add an affective dimension. This paper presents Demeanour, an affective semi-autonomous system for the generation of realistic body language in avatars. Users control their avatars, which in turn interact autonomously with other avatars to produce expressive behaviour. This allows people to have affectively rich interactions via their avatars.

    Constructing sonified haptic line graphs for the blind student: first steps

    Line graphs are an established information visualisation and analysis technique taught at various levels of difficulty according to standard Mathematics curricula. It has been argued that blind individuals cannot use line graphs as a visualisation and analytic tool because they currently exist primarily in the visual medium. The research described in this paper aims to make line graphs accessible to blind students through auditory and haptic media. We describe (1) our design space for representing line graphs, (2) the technology we use to develop our prototypes and (3) the insights from our preliminary work.
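
    By way of illustration only (the paper does not commit to this particular mapping), one common way to sonify a line graph is to map each data value to the pitch of a short tone, so that rising and falling segments are heard as rising and falling pitch. The Python sketch below computes such a frequency mapping; the frequency range and the log-frequency scale are assumed choices.

        import numpy as np

        def sonify_frequencies(y_values, f_min=220.0, f_max=880.0):
            # Map data values onto a log-frequency (pitch) scale, one tone per
            # data point; the frequency range is an assumed design choice.
            y = np.asarray(y_values, dtype=float)
            norm = (y - y.min()) / (y.max() - y.min() + 1e-12)
            return f_min * (f_max / f_min) ** norm

        # A simple rising-then-falling line graph:
        print(sonify_frequencies([0, 1, 2, 3, 2, 1, 0]))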

    Eye muscle proprioception is represented bilaterally in the sensorimotor cortex

    The cortical representation of eye position is still uncertain. In the monkey, a proprioceptive representation of the extraocular muscles (EOM) of an eye was recently found within the contralateral central sulcus. In humans, we have previously shown a change in the perceived position of the right eye after a virtual lesion with rTMS over the left somatosensory area. However, it is possible that the proprioceptive representation of the EOM extends to other brain sites, which were not examined in these previous studies. The aim of this fMRI study was to sample the whole brain to identify the proprioceptive representation for the left and the right eye separately. Data were acquired while passive eye movement was used to stimulate EOM proprioceptors in the absence of a motor command. We also controlled for the tactile stimulation of the eyelid by removing from the analysis voxels activated by eyelid touch alone. For either eye, the brain area commonly activated by passive and active eye movement was located bilaterally in the somatosensory area, extending into the motor and premotor cytoarchitectonic areas. We suggest this is where EOM proprioception is processed. The bilateral representation for either eye contrasts with the contralateral representation of hand proprioception. We suggest that the proprioceptive representation of the two eyes next to each other in either somatosensory cortex, extending into the premotor cortex, reflects the integrative nature of the eye position sense, which combines proprioceptive information across the two eyes with the efference copy of the oculomotor command.

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of central mechanisms underpinning tinnitus and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for classifying tinnitus. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution from the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.

    Gaze Behavior, Believability, Likability and the iCat

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research into whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information and avoiding distraction by restricting visual input. There are several types of eye and head movements that are necessary for realizing these functions. We designed and evaluated a gaze behaviour system for the iCat robot that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit movements and gaze shifts. We discuss how these models are integrated into the software environment of the iCat and can be used to create complex interaction scenarios. We report on some user tests and draw conclusions for future evaluation scenarios.
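
    As a hedged illustration of the kind of eye-head models involved (not the iCat implementation itself), the Python sketch below decomposes a horizontal gaze shift into eye and head components, with the eye limited by an assumed oculomotor range, and adds a simple unity-gain vestibulo-ocular reflex that counter-rotates the eye while the head moves.

        def decompose_gaze_shift(target_deg, eye_range_deg=35.0):
            # The eye covers what it can within its (assumed) oculomotor range;
            # the head rotates to cover the remainder of the gaze shift.
            eye = max(-eye_range_deg, min(eye_range_deg, target_deg))
            head = target_deg - eye
            return eye, head

        def vor_eye_velocity(head_velocity_deg_s, gain=1.0):
            # Vestibulo-ocular reflex: counter-rotate the eye to stabilise gaze
            # during head movement (unity gain is an idealisation).
            return -gain * head_velocity_deg_s

        print(decompose_gaze_shift(60.0))   # -> (35.0, 25.0)
        print(vor_eye_velocity(20.0))       # -> -20.0 deg/s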

    Influence of hand position on the near-effect in 3D attention

    Voluntary reorienting of attention in real-depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously this effect (initially referred to as the ‘near-effect’) was attributed to access of a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response hand, which was always near the viewer in previous studies investigating endogenous attentional shifts in real depth. In Experiment 1, the response hand was placed at either the near or far target depth in a depth cueing task. Placing the response hand at the far target depth abolished the near-effect, but failed to bias spatial attention to the far location. Experiment 2 showed that the response-hand effect was not modulated by the presence of an additional passive hand, whereas Experiment 3 confirmed that attentional prioritization of the passive hand was not masked by the influence of the responding hand on spatial attention in Experiment 2. The pattern of results is most consistent with the idea that response preparation can modulate spatial attention within a 3D viewer-centred spatial representation.