    Computationally efficient implementation of the edge model.

<p>(<b>A</b>) Spike probability function (SPF) corresponding to the maximum model. (<b>B</b>) Narrow distribution of IPDs. (<b>C</b>) Results of convolving the SPF in (A) with the distribution of IPDs in (B). (<b>D</b>) Spike probability function for the left IC under the edge model. (<b>E</b>) Spike probability function for the right IC under the edge model. (<b>F</b>) Narrow distribution of IPDs, with an open circle indicating the IPD at which activity is half-maximal. (<b>G</b>) Modeled population activities when the distribution of IPDs is narrow (F). Cross-hatching indicates activities of neurons with best IPDs beyond the ‘pi-limit’ (< -0.13 or > 0.13 cycles), which may rarely occur (e.g., [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.ref003" target="_blank">3</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.ref004" target="_blank">4</a>]) and therefore contribute little to spatial perception. (<b>H</b>) Relative contributions of population activities to spatial perception under the edge model. (<b>I</b>) Modeled activities obtained after converting IPD to ITD and averaging across frequency (0.2–1.5 kHz) to simplify presentation. (<b>J-Q</b>) Distributions of IPDs and modeled activities as described in (F-I), but with broader distributions of IPDs (J and N).</p>
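The operation in panel (C), convolving a spike probability function with a distribution of stimulus IPDs, can be sketched as follows. The Gaussian SPF and IPD distribution below are illustrative stand-ins, not the paper's actual functions:

```python
import numpy as np

# Best-IPD axis in cycles; the SPF and IPD distribution here are
# hypothetical stand-ins for the functions shown in the figure.
ipd = np.linspace(-0.5, 0.5, 201)

# Toy spike-probability function (SPF): a Gaussian bump, peaking where the
# stimulus IPD matches a neuron's best IPD.
spf = np.exp(-(ipd ** 2) / (2 * 0.1 ** 2))

# Narrow distribution of stimulus IPDs, normalized to unit area.
ipd_dist = np.exp(-(ipd ** 2) / (2 * 0.02 ** 2))
ipd_dist /= ipd_dist.sum()

# Modeled population activity (as in panel C): the SPF convolved with the
# IPD distribution; mode="same" keeps the original axis length.
activity = np.convolve(spf, ipd_dist, mode="same")

# Convolving with a narrow, unit-area distribution barely changes the peak.
print(round(activity.max(), 3))
```

Because the IPD distribution integrates to one, the convolution is a local weighted average of the SPF, so a narrow distribution leaves the activity profile nearly unchanged while a broad one flattens and widens it.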

    Demonstration of how envelope weights were obtained from band-filtered noise-pairs.

<p><b>(A)</b> Envelope derivatives measured from the envelope of the signal in the left ear. A horizontal dashed line shows the mean derivative (μ) whereas gray shading shows ±1 standard deviation (±σ). <b>(B)</b> Envelope of the signal in the left ear from which the derivatives were measured. Shading shows the weights that were attributed to the signal using <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.e001" target="_blank">Eq 1</a>. <b>(C)</b> Weights calculated using <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.e001" target="_blank">Eq 1</a> (solid line) for a continuous range of dE/dt values (abscissa). The dashed line shows normalized weights derived for the barn owl from the responses of space-map neurons to the derivatives of deeply amplitude-modulated noise bursts. <b>(D)</b> Spectra obtained from the envelopes of band-filtered noises (0.5–8 kHz; solid lines). A single dashed line, resembling the 500 Hz band, shows the average spectrum of the low-pass envelopes that were examined by Nelson and Takahashi [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.ref027" target="_blank">27</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.ref036" target="_blank">36</a>]. Note that weights used in the owl study (dashed line in C) are similar to those at 500 Hz (solid line in C) because their envelope spectra (D) are similar.</p>
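The weighting step can be sketched in code. Eq 1 itself is not reproduced on this page, so the logistic weighting of the z-scored envelope derivative below is a hypothetical stand-in for it; the sample rate, toy envelope, and `envelope_weights` helper are likewise assumptions, chosen only to show the shape of the computation (derivative, then μ and ±σ as in panel A, then a monotonic weighting as in panel C):

```python
import numpy as np

# Hypothetical stand-in for Eq 1, which is not reproduced on this page:
# weight each sample by a logistic function of the z-scored envelope
# derivative, so rapidly rising envelope segments (large dE/dt) get
# high weights and falling segments get low weights.
def envelope_weights(envelope, fs):
    dE = np.gradient(envelope) * fs      # dE/dt in units per second
    mu, sigma = dE.mean(), dE.std()      # the mean and ±1 s.d. of panel A
    z = (dE - mu) / sigma
    return 1.0 / (1.0 + np.exp(-z))      # logistic weighting (assumed form)

fs = 10_000                              # sample rate in Hz (arbitrary)
t = np.arange(0, 0.05, 1 / fs)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))   # toy 100 Hz envelope

w = envelope_weights(envelope, fs)

# Weights peak where the envelope rises fastest, not at the envelope peak.
print(round(w.max(), 2), round(w.min(), 2))
```

Any monotonic function of dE/dt would produce the same qualitative picture: samples on rising envelope flanks dominate the weighted cue distributions.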

    A Neural Model of Auditory Space Compatible with Human Perception under Simulated Echoic Conditions

<div><p>In a typical auditory scene, sounds from different sources and reflective surfaces summate in the ears, causing spatial cues to fluctuate. Prevailing hypotheses of how spatial locations may be encoded and represented across auditory neurons generally disregard these fluctuations and must therefore invoke additional mechanisms for detecting and representing them. Here, we consider a different hypothesis in which spatial perception corresponds to an intermediate or sub-maximal firing probability across spatially selective neurons within each hemisphere. The precedence or Haas effect presents an ideal opportunity for examining this hypothesis, since the temporal superposition of an acoustical reflection with sounds arriving directly from a source can cause otherwise stable cues to fluctuate. Our findings suggest that subjects’ experiences may simply reflect the spatial cues that momentarily arise under various acoustical conditions and how these cues are represented. We further suggest that auditory objects may acquire “edges” under conditions when interaural time differences are broadly distributed.</p></div>

    Demonstration of how ITD may be shifted toward the leading source (positively).

<p><b>(A)</b> Amplitude envelopes of identical, 500 Hz, signals in the left and right ears when the lead-lag delay was 2.5 ms. <b>(B)</b> Measurements of ILD at the times when the signals were sampled, colored according to weights that were attributed to them by the envelopes of the left and right signals. <b>(C)</b> Distributions of weighted (solid line) and non-weighted (dashed line) ILDs measured when the leading and lagging noise-pairs were superposed for 500 ms. <b>(D)</b> Measurements of ITD at the times when the signals were sampled, colored as in (B). <b>(E)</b> Distributions of weighted (solid line) and non-weighted (dashed line) ITDs measured when the leading and lagging noise-pairs were superposed for 500 ms.</p>
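The weighted and non-weighted distributions in panels (C) and (E) amount to weighted versus plain histograms of the sampled cues. The sketch below illustrates this; the ITD samples and per-sample weights are synthetic stand-ins for the measurements described in the caption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ITD samples (in microseconds) taken during the superposed
# segment, plus an envelope-derived weight for each sample; both are
# synthetic stand-ins for the measurements in the figure.
itd_samples = rng.normal(loc=100.0, scale=150.0, size=5000)
weights = rng.uniform(0.0, 1.0, size=5000)

bins = np.linspace(-500, 500, 41)

# Non-weighted distribution (dashed lines in panels C and E).
unweighted, _ = np.histogram(itd_samples, bins=bins, density=True)

# Envelope-weighted distribution (solid lines): each sample counts in
# proportion to its weight.
weighted, _ = np.histogram(itd_samples, bins=bins, weights=weights,
                           density=True)

# With density=True both histograms integrate to ~1 over the binned range.
bin_width = bins[1] - bins[0]
print(round(unweighted.sum() * bin_width, 2),
      round(weighted.sum() * bin_width, 2))
```

In the paper's setting, weights that favor rising envelope segments shift the solid-line distribution toward the ITD of the leading source relative to the dashed one.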

    Signal-to-noise ratios of half-maximal activity patterns.

<p>(<b>A-D</b>) Signal-to-noise ratios (SNR) when noise-pairs were (A) 200 ms, (B) 30 ms, (C) 10 ms, or (D) 30 ms but lacking alone segments. Distributions of ITD (black bars), for the 500 Hz band, when a single pair of independent noises was superposed for (<b>E</b>) 200 ms, (<b>F</b>) 30 ms, or (<b>G</b>) 10 ms. Red lines show how ITDs were distributed, on average, across 100 different noise-pairs.</p>

    Distributions of angles indicated by the subjects on the top (left distribution) and bottom (right distribution) arcs.

<p>Distributions were averaged across subjects and are plotted as if the lead always came from the right (+250 μs ITD) and the lag from the left (-250 μs ITD) [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0137900#pone.0137900.ref017" target="_blank">17</a>]. For each distribution, the frontal hemifield is represented on the abscissa with values ranging from -90 (left) to +90 (right). Solid lines and filled circles show the results for the identical noise-pairs. Dashed lines and open circles show the results for the independent noise-pairs. Arrows indicate the means of the individual distributions. <b>(A-D)</b> Distributions when there was no delay (0 ms) and the noise-pairs were 200 ms <b>(A-B)</b> or 30 ms <b>(C-D)</b>. <b>(E-H)</b> Distributions when the delay was 0.5 ms and the noise-pairs were 200 ms <b>(E-F)</b> or 30 ms <b>(G-H)</b>. <b>(I-L)</b> Distributions when the delay was 16 ms and the noise-pairs were 200 ms <b>(I-J)</b> or 30 ms <b>(K-L)</b>. <b>(M-N)</b> Distributions when there was no delay (0 ms) and the noise-pairs were 10 ms.</p>

    Contributions to fusion from the alone and superposed segments.

<p><b>(A-D)</b> Markers indicate the proportions of trials in which subjects indicated a second auditory image at a given delay (abscissa) when the noise-pairs were 200 ms <b>(A)</b>, 30 ms <b>(B)</b>, 10 ms <b>(C)</b>, or 30 ms with synchronized onsets and offsets <b>(D)</b>. Error bars indicate variation across subjects (±1 s.d.). Solid lines and filled circles show the results for the identical noise-pairs. Dashed lines and open circles show the results for the independent noise-pairs. <b>(E-H)</b> Markers indicate how the proportions differed for the independent noise-pairs relative to when there was no delay, and thus the contributions to fusion from the alone segments. <b>(I-L)</b> Markers indicate how the proportions differed for the independent and identical noise-pairs, and thus the contributions to fusion from the superposed segment.</p>

    Frequency-specific distributions of ITD when the lead-lag delay was 4 ms.

<p><b>(A-F)</b> Distributions of envelope-weighted ITDs attributed to the superposed and alone segments when identical noise-pairs were 200 or 10 ms (<b>A</b>-<b>B</b> and <b>C</b>-<b>D</b>, respectively) or when the noise-pairs were 30 ms and independent (<b>E</b>-<b>F</b>). Each distribution shows the average of 100 different noise-pairs and reflects cues observed only during the superposed segment <b>(A, C,</b> and <b>E)</b> or only during the alone segments <b>(B, D,</b> and <b>F)</b>. Note that the same gray-scale is applied to each stimulus and pair of plots. <b>(G-L)</b> Distributions of <i>unweighted</i> ITDs generated by the same noise-pairs and segments as in <b>(A-F)</b> (see labels above and to the left of the plots).</p>

    Measurement of spatial discrimination using the PDR.

<p>(A) A pupillometer, consisting of an IR detector and emitter (marked), is placed close to the cornea of the owl. The detector is placed about 6 mm from the eye, while the emitter is about 20 mm away. The owl is held immobile in a stereotaxic apparatus, allowing us to position the owl, repeatedly, in the same orientation <i>vis-à-vis</i> the pupillometer as well as the external sound sources. (B) Sound sources are placed on an array of two aluminum arms at right angles to each other, curved such that the center of curvature is a spot between the two ears of the owl. Speakers separated along the horizon of the owl were used to assess discrimination in azimuth (red), and speakers separated along the midline of the owl were used to assess discrimination in elevation (blue). The array is positioned such that the intersection of the arms is directly in front of the bird (azimuth 0°; elevation 0°). Each degree of angular displacement is marked on the arms of the array, and speakers can be moved to change angular separation. The subject is monitored during the experimental run using the IR camera indicated here. As far as possible, wiring is routed behind the speaker array.</p>

    Azimuthal and elevation tuning and discrimination functions for a single space-specific neuron.

<p>(a) The SRF of the cell is shown; lighter shades correspond to higher response rates. The red and blue lines represent the locations used to estimate the azimuthal and elevational response functions, respectively. (b) Response profiles in azimuth (red) and elevation (blue) show that tuning in azimuth is finer than tuning in elevation. (c) Discrimination functions for a 5° separation were computed using the data shown in (b), as per Eqn. 1. Response profiles for both azimuth and elevation are shown for reference. Note that maximal discrimination, especially as seen for azimuth, was achieved where the rate of change of the firing rate was maximal.</p>
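Since Eqn. 1 is not reproduced on this page, the sketch below uses a common d-prime-style formulation, the absolute rate difference across a 5° step divided by an assumed response variability, to illustrate why discrimination functions peak on the flanks of a tuning curve rather than at its peak. The tuning curves and noise level are hypothetical:

```python
import numpy as np

# Hypothetical tuning curves: mean firing rate versus location in 5-degree
# steps, with azimuthal tuning narrower than elevational tuning, as in
# panel (b) of the figure.
deg = np.arange(-40, 41, 5)                          # degrees
rate_az = 50 * np.exp(-(deg ** 2) / (2 * 10 ** 2))   # narrow azimuth tuning
rate_el = 50 * np.exp(-(deg ** 2) / (2 * 25 ** 2))   # broad elevation tuning

def discrimination(rates, noise_sd=5.0):
    """d'-like discriminability between responses at locations one
    5-degree step apart, assuming fixed response variability. This is a
    common formulation, not necessarily the paper's Eqn. 1."""
    return np.abs(np.diff(rates)) / noise_sd

d_az = discrimination(rate_az)
d_el = discrimination(rate_el)

# Discrimination peaks on the steep flanks of the tuning curve, not at its
# peak, and the narrower azimuth tuning yields better discrimination.
print(d_az.max() > d_el.max())
```

The flank-peaked shape follows directly from the formulation: the numerator is the local slope of the tuning curve times the step size, and a Gaussian tuning curve is steepest about one bandwidth away from its best location.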