17 research outputs found

    A Mechanism for Detecting Coincidence of Auditory and Visual Spatial Signals

    Abstract: Information about the world is captured by our separate senses and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time. We measured increment thresholds for discriminating spatial intervals defined by pairs of simultaneously presented targets, one visual flash and one sound, for various separations. We report a 'dipper function', in which thresholds follow a 'U-shaped' curve, initially decreasing with spatial interval and then increasing for larger separations. The presence of a dip in the audiovisual increment-discrimination function is evidence that the auditory and visual signals both input to a common mechanism encoding spatial separation, and a simple filter model with a sigmoidal transduction function simulated the results well. The function of an audiovisual spatial filter may be to detect coincidence, a fundamental cue guiding whether to integrate or segregate.
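
A dipper shape falls out of any transducer that accelerates at small inputs and compresses at large ones. Below is a minimal sketch, assuming a Legge-Foley-style power transducer with illustrative parameters; this is not the fitted model from the study, only the generic mechanism the abstract describes:

```python
# Sketch: increment thresholds from a sigmoidal (accelerating-then-
# compressive) transducer produce a dipper function. All parameter
# values are illustrative assumptions, not fitted values.

def response(s, p=2.4, q=2.0, z=10.0):
    """Transducer output: accelerating for small s, compressive for large s."""
    return s ** p / (s ** q + z)

def increment_threshold(base, criterion=1.0):
    """Smallest increment delta such that response(base + delta) exceeds
    response(base) by the criterion; found by bisection (response is
    monotonically increasing, so bisection is valid)."""
    lo, hi = 0.0, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if response(base + mid) - response(base) >= criterion:
            hi = mid
        else:
            lo = mid
    return hi

bases = [0, 1, 2, 4, 10, 20]
thresholds = {b: increment_threshold(b) for b in bases}
```

With these parameters the threshold at a small non-zero base falls below the detection threshold (the dip), then rises roughly log-linearly at large separations, matching the Weber-law limb described above.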

    Fludarabine, cytarabine, granulocyte colony-stimulating factor, and idarubicin with gemtuzumab ozogamicin improves event-free survival in younger patients with newly diagnosed AML and overall survival in patients with NPM1 and FLT3 mutations

    Purpose: To determine the optimal induction chemotherapy regimen for younger adults with newly diagnosed AML without known adverse risk cytogenetics. Patients and Methods: One thousand thirty-three patients were randomly assigned to intensified (fludarabine, cytarabine, granulocyte colony-stimulating factor, and idarubicin [FLAG-Ida]) or standard (daunorubicin and Ara-C [DA]) induction chemotherapy, with one or two doses of gemtuzumab ozogamicin (GO). The primary end point was overall survival (OS). Results: There was no difference in remission rate after two courses between FLAG-Ida + GO and DA + GO (complete remission [CR] + CR with incomplete hematologic recovery 93% v 91%) or in day 60 mortality (4.3% v 4.6%). There was no difference in OS (66% v 63%; P = .41); however, the risk of relapse was lower with FLAG-Ida + GO (24% v 41%; P < .001) and 3-year event-free survival was higher (57% v 45%; P < .001). In patients with an NPM1 mutation (30%), 3-year OS was significantly higher with FLAG-Ida + GO (82% v 64%; P = .005). NPM1 measurable residual disease (MRD) clearance was also greater, with 88% versus 77% becoming MRD-negative in peripheral blood after cycle 2 (P = .02). Three-year OS was also higher in patients with a FLT3 mutation (64% v 54%; P = .047). Fewer transplants were performed in patients receiving FLAG-Ida + GO (238 v 278; P = .02). There was no difference in outcome according to the number of GO doses, although NPM1 MRD clearance was higher with two doses in the DA arm. Patients with core binding factor AML treated with DA and one dose of GO had a 3-year OS of 96% with no survival benefit from FLAG-Ida + GO. Conclusion: Overall, FLAG-Ida + GO significantly reduced relapse without improving OS. However, exploratory analyses show that patients with NPM1 and FLT3 mutations had substantial improvements in OS. By contrast, in patients with core binding factor AML, outcomes were excellent with DA + GO with no FLAG-Ida benefit.

    The role of crossmodal correspondences in human vision

    As we interact with the world, the unified percept generated from our different senses is almost effortless and yet it is a substantial challenge for the brain. Multisensory research has shown that people typically pair particular features from one modality with those of another. These mappings have been termed crossmodal correspondences. In this thesis I examine three main empirical questions relating to crossmodal correspondences and their effects on human vision, using psychophysical methods. The first aim was to determine if crossmodal correspondences can reflect stimulus-specific mappings between featural dimensions of different modalities. This was investigated using a recently reported correspondence between auditory modulation rate and visual spatial frequency, and the correspondence between tactile and visual spatial frequency. The experiments in Chapters 4 and 5 used a series of visual search tasks to demonstrate that the mapping in these correspondences is broad and relative, rather than stimulus-specific. The second aim was to investigate whether correspondences can produce attentional capture. This question was also examined using a series of visual search tasks. The results clearly demonstrate that even when participants' top-down goals match any existing bottom-up capture, stimulus-specific effects do not occur. Furthermore, when the crossmodal matches are made irrelevant to the task, a matching sound or tactile stimulus has no effect on search performance. This indicates that crossmodal matching affects visual selection through top-down guidance, not bottom-up capture. The final aim was to determine whether correspondences can affect crossmodal binding. In Chapter 6, a well-known multisensory binding effect, 'temporal ventriloquism', was used to show that incongruent crossmodal pairings of auditory pitch and visual elevation impair crossmodal binding.
    However, when the auditory stimuli were made predictable the effect was eliminated, showing that the effect of crossmodal congruency is dependent on the saliency of the crossmodal mapping. In the last chapter, the results of the three experimental chapters are discussed together in the context of the role of crossmodal correspondences in unisensory and multisensory processing. Directions for future research into crossmodal correspondences are suggested, and the role of causality in multisensory processing is discussed in relation to crossmodal correspondences.

    Rapid Audiovisual Temporal Recalibration Generalises Across Spatial Location

    Recent exposure to asynchronous multisensory signals has been shown to shift perceived timing between the sensory modalities, a phenomenon known as 'temporal recalibration'. Recently, Van der Burg et al. (2013, J Neurosci, 33, pp. 14633-14637) reported results showing that recalibration to asynchronous audiovisual events can happen extremely rapidly. In an extended series of variously asynchronous trials, simultaneity judgements were analysed based on the modality order in the preceding trial and showed that shifts in the point of subjective synchrony occurred almost instantaneously, shifting from one trial to the next. Here we replicate the finding that shifts in perceived timing occur following exposure to a single, asynchronous audiovisual stimulus, and by manipulating the spatial location of the audiovisual events we demonstrate that recalibration occurs even when the adapting stimulus is presented in a different location. Timing shifts were also observed when the adapting audiovisual pair were defined only by temporal proximity, with the auditory component presented over headphones rather than being collocated with the visual stimulus. Combined with previous findings showing that timing shifts are independent of stimulus features such as colour and pitch, our finding that recalibration is not spatially specific provides strong evidence for a rapid recalibration process that is solely dependent on recent temporal information, regardless of feature or location. These rapid and automatic shifts in perceived synchrony may allow our sensory systems to flexibly adjust to the variation in timing of neural signals occurring as a result of delayed environmental transmission and differing neural latencies for processing vision and audition.

    Discrimination contours for moving sounds reveal duration and distance cues dominate auditory speed perception.

    Evidence that the auditory system contains specialised motion detectors is mixed. Many psychophysical studies confound speed cues with distance and duration cues and present sound sources that do not appear to move in external space. Here we use the 'discrimination contours' technique to probe the probabilistic combination of speed, distance and duration for stimuli moving in a horizontal arc around the listener in virtual auditory space. The technique produces a set of motion discrimination thresholds that define a contour in the distance-duration plane for different combinations of the three cues, based on a 3-interval oddity task. The orientation of the contour (typically elliptical in shape) reveals which cue or combination of cues dominates. If the auditory system contains specialised motion detectors, stimuli moving over different distances and durations but defining the same speed should be more difficult to discriminate. The resulting discrimination contours should therefore be oriented obliquely along iso-speed lines within the distance-duration plane. However, we found that over a wide range of speeds, distances and durations, the ellipses aligned with distance-duration axes and were stretched vertically, suggesting that listeners were most sensitive to duration. A second experiment showed that listeners were able to make speed judgements when distance and duration cues were degraded by noise, but that performance was worse. Our results therefore suggest that speed is not a primary cue to motion in the auditory system, but that listeners are able to use speed to make discrimination judgements when distance and duration cues are unreliable.
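
The geometry behind the oblique prediction can be made concrete. Stimuli that share a speed lie on a line through the origin of the distance-duration plane (distance = speed × duration), which becomes a unit-slope line in log-log coordinates. A short sketch with illustrative numbers (not the study's actual stimulus values):

```python
import math

# A pure speed detector cannot distinguish stimuli lying on an iso-speed
# line, so its discrimination contours should stretch along such lines.
speed = 50.0                                 # deg/s (illustrative)
durations = [0.2, 0.4, 0.8]                  # seconds
distances = [speed * t for t in durations]   # 10, 20, 40 deg

# Every pair defines the same speed ...
speeds = [d / t for d, t in zip(distances, durations)]

# ... and in log-log coordinates the points fall on a unit-slope line:
# log(distance) = log(speed) + log(duration).
slope = (math.log(distances[-1]) - math.log(distances[0])) / (
    math.log(durations[-1]) - math.log(durations[0]))
```

An ellipse aligned with the distance and duration axes instead, as found here, indicates the two cues are judged separately rather than combined into speed.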

    Evidence for a Mechanism Encoding Audiovisual Spatial Separation

    Auditory and visual spatial representations are produced by distinct processes, drawing on separate neural inputs and occurring in different regions of the brain. We tested for a bimodal spatial representation using a spatial increment discrimination task. Discrimination thresholds for synchronously presented but spatially separated audiovisual stimuli were measured for base separations ranging from 0° to 45°. In a dark anechoic chamber, the spatial interval was defined by azimuthal separation of a white-noise burst from a speaker on a movable robotic arm and a checkerboard patch 5° wide projected onto an acoustically transparent screen. When plotted as a function of base interval, spatial increment thresholds exhibited a J-shaped pattern. Thresholds initially declined, the minimum occurring at base separations approximately equal to the individual observer's detection threshold, and thereafter rose log-linearly according to Weber's law. This pattern of results, known as the 'dipper function', would be expected if the auditory and visual signals defining the spatial interval converged onto an early sensory filter encoding audiovisual space. This mechanism could be used to encode spatial separation of auditory and visual stimuli.

    Example psychometric function for a single observer.

    Performance in a 3-interval oddity task follows a Gaussian when error rate is plotted against the test's radial distance (r) along a given orientation θ_i and its complement θ_i+π. Gaussian functions were fit to the data using a maximum-likelihood procedure. Any radial test distance containing two or fewer trials was excluded from the fit (examples shown as open red symbols).
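
A maximum-likelihood Gaussian fit of this kind can be sketched as follows. The synthetic counts, the grid search, and the specific error-rate model (chance-level error of 2/3 at r = 0, falling off as a Gaussian) are illustrative assumptions, not the authors' actual fitting code:

```python
import math

CHANCE_ERR = 2.0 / 3.0  # 3-interval oddity: guessing yields 2/3 errors

def p_error(r, sigma):
    """Gaussian error-rate model: chance level at r = 0, falling toward 0."""
    return CHANCE_ERR * math.exp(-r * r / (2.0 * sigma * sigma))

def neg_log_likelihood(sigma, data):
    """Binomial negative log-likelihood; data = [(r, n_trials, n_errors)]."""
    nll = 0.0
    for r, n, k in data:
        p = min(max(p_error(r, sigma), 1e-9), 1 - 1e-9)
        nll -= k * math.log(p) + (n - k) * math.log(1 - p)
    return nll

# Synthetic counts generated from the model with sigma = 2.0 (illustrative).
true_sigma = 2.0
data = [(r, 60, round(60 * p_error(r, true_sigma)))
        for r in (0.5, 1, 2, 3, 4, 6)]

# Grid-search maximum-likelihood estimate of sigma.
grid = [0.05 * i for i in range(10, 200)]   # sigma from 0.5 to ~10
sigma_hat = min(grid, key=lambda s: neg_log_likelihood(s, data))
```

The fitted sigma then serves as the threshold estimate along that orientation of the contour.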

    Motion discrimination contours for a single naïve observer for the 9 standards investigated in Experiment 1.

    The results for each individual standard value follow the conventions defined in Figure 1 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0102864#pone-0102864-g001). Error bars for each threshold were obtained using a bootstrapping technique and correspond to 95% CIs. Ellipses were fit using a non-linear least-squares technique.
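
The percentile-bootstrap CIs mentioned above can be sketched generically. The per-block threshold values here are hypothetical, and the resampling logic shows the standard technique rather than the authors' exact pipeline:

```python
import random

def bootstrap_ci(samples, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)  # seeded for reproducibility
    estimates = []
    for _ in range(n_boot):
        # Resample with replacement, same size as the original data.
        resample = [rng.choice(samples) for _ in samples]
        estimates.append(statistic(resample))
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-block threshold estimates (arbitrary units).
thresholds = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7, 3.4, 3.0]
mean = sum(thresholds) / len(thresholds)
ci_low, ci_high = bootstrap_ci(thresholds, lambda xs: sum(xs) / len(xs))
```

The interval (ci_low, ci_high) is what would be drawn as the 95% error bar on a single threshold point.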

    Summary of best-fitting ellipses across the four listeners (L1-L4) studied in Experiment 1.

    Two observers (L1, L2) completed all 9 conditions; two others (L3, L4) completed the 3 conditions lying on the major negative diagonal. The horizontal grey lines have length ±1 Weber fraction. All ellipses are oriented parallel to the axes of the distance-duration plane: one-sample t-tests showed that mean ellipse orientations for the three standards on the major negative diagonal did not differ significantly from vertical (top-left: t(3) = 1.84, p > .10; middle: t(3) = 0.45, p > .50; bottom-right: t(3) = −0.81, p > .40). The results therefore provide no evidence that speed is used to discriminate test from standard; performance for all observers appears to be governed by separate estimates of distance and duration. The ellipses are stretched parallel to the Y axis, showing that duration discrimination was superior to distance discrimination.
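
A one-sample t-test of mean ellipse orientation against vertical (90°) has the following shape. The four orientation values are hypothetical stand-ins, not the listeners' actual data; the critical value is the standard two-tailed cutoff for α = .05 with df = 3:

```python
import math

def one_sample_t(values, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error
    return (mean - mu0) / se

# Hypothetical ellipse orientations (degrees) for four listeners.
orientations = [88.0, 91.5, 90.5, 89.0]
t = one_sample_t(orientations, mu0=90.0)  # test against vertical
T_CRIT = 3.182  # two-tailed critical value, alpha = .05, df = 3

significant = abs(t) > T_CRIT
```

With |t| below the critical value, the mean orientation is not significantly different from vertical, mirroring the pattern of results reported in the caption.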

    Results of Experiment 2, in which distance and duration noise were added to the standards to force discrimination based on speed.

    Each column corresponds to a different listener (L1-L4); each row is a different standard “mean”, corresponding to the standard values given along the major negative diagonal of Figures 3 and 4 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0102864#pone-0102864-g003; http://www.plosone.org/article/info:doi/10.1371/journal.pone.0102864#pone-0102864-g004). The results show that the auditory system is sensitive to speed: when distance and duration cues are made uninformative, listeners are able to discriminate stimuli based on speed alone.