    Auditory Cue Suppresses Visual Detection in Extreme-Periphery

    Several studies have found that cross-modal cueing can enhance performance on perceptual tasks; a visual stimulus, for example, can be detected better with an auditory cue than without one. Most studies, however, focused on targets within the foveal or peripheral visual field (e.g., 20°–50° eccentricity). Neurological and behavioral studies have shown that audition can complement visual perception in the periphery, but such cross-modal cueing in the extreme periphery has remained unexplored. In the present study, participants detected a dot that appeared randomly in either the left or right extreme periphery (from 60° to 90°, in 5° steps). In half of the trials, the dot was presented with a simultaneous beep as an auditory cue. The results counterintuitively indicated that the auditory cue significantly decreased visual detection in the extreme periphery. A further pilot study suggested that the auditory cue may have been relied on more heavily under widespread visual attention and produced false alarms, resulting in decreased sensitivity in the extreme periphery.

    Falling Pitch Imitating Doppler Shift Facilitates Detection of Visual Motion in The Extreme-Periphery

    Previous studies have demonstrated that concurrent auditory stimuli can bias visual motion perception more in the periphery than in the fovea (e.g., Takeshima & Gyoba, 2013), and that audition becomes crucial when the reliability of vision is reduced (e.g., Schmiedchen et al., 2012). We investigated whether audition affects the detection of extreme-peripheral visual motion approaching from behind, possibly one of the most salient situations, since visual ambiguity there is very high and detecting such motion can be ecologically critical for survival. In the experiment, a sequence of three 204 ms dots (255 ms SOA) was presented in the extreme periphery (set individually as the largest eccentricity yielding 75% detection); each dot was presented either at 3 adjacent locations 2° apart, so as to produce forward apparent motion, or at the same location. As the auditory stimulus, we employed a concurrent beep with falling pitch, which roughly imitated the Doppler pitch shift of a passing object. A concurrent beep with rising pitch served as one control, and a no-sound condition as another. The results showed that the concurrent falling-pitch beep increased the hit rate for motion detection relative to the no-sound and rising-pitch conditions. The underlying mechanism is discussed with a signal detection analysis.
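    The signal detection analyses referred to in these abstracts rest on a standard computation: sensitivity d' = Z(hit rate) − Z(false-alarm rate), which separates true detectability from response bias. The sketch below is illustrative only, not the authors' actual analysis, and the hit and false-alarm rates are made-up values chosen to show how a cue can raise hits yet lower sensitivity by inflating false alarms.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response criterion c; negative values indicate a liberal bias."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: the cue raises hits (0.80 -> 0.90) but raises false
# alarms much more (0.20 -> 0.45), so d' actually drops while the
# criterion shifts liberally -- the pattern sketched in the first abstract.
print(d_prime(0.80, 0.20), criterion(0.80, 0.20))  # no cue
print(d_prime(0.90, 0.45), criterion(0.90, 0.45))  # with auditory cue
```

    With these illustrative numbers, d' falls from about 1.68 to about 1.41 despite the higher hit rate, which is how false alarms can mask an apparent cueing benefit.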

    The trade-off between speed and complexity

    Economically organized hierarchies in WordNet and the Oxford English Dictionary

    Good definitions consist of words that are more basic than the defined word. There are, however, many ways of satisfying this desideratum. At one extreme, a small set of atomic words could be used to define all other words; i.e., there would be just two hierarchical levels. Alternatively, there could be very many hierarchical levels, where a small set of atomic words defines a larger set of words, which in turn defines the next hierarchically higher set of words, and so on up to a top level of very specific, complex words. Importantly, some possible organizations are more economical than others in the amount of space required to record all the definitions. Here I ask, how economical are dictionaries? I present a simple model for an optimal set of definitions, predicting on the order of 7 hierarchical levels. I test the model via measurements from WordNet and the Oxford English Dictionary, and find that the organization of each possesses the signature features expected of an economical dictionary. Key index words: vocabulary, hierarchy, optimality, wordnet, definition, dictionary, number of levels.

    The Optimal Human Ventral Stream from Estimates of the Complexity of Visual Objects

    The part of the primate visual cortex responsible for the recognition of objects (a region called the ventral stream) is parcelled into about a dozen areas organized somewhat hierarchically. Why are there approximately this many hierarchical levels? Here I put forth a generic information-processing hierarchical model and show how the total number of neurons required depends on the number of hierarchical levels and on the complexity of the visual objects that must be recognized. Because the recognition of written words appears to occur in a similar part of inferotemporal cortex as that of other visual objects, the complexity of written words may be similar to that of other visual objects for humans; for this reason, I measure the complexity of written words and use it as an approximate estimate of the complexity of visual objects more generally. I then show that the information-processing hierarchy accommodating visual objects of that complexity possesses the minimum number of neurons when the number of hierarchical levels is approximately 15.
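    The abstract does not spell out the model, but the flavor of such optimal-hierarchy arguments can be illustrated with a generic toy cost: if a composite object of complexity C is assembled over L levels, each level combining about C^(1/L) parts from the level below, the total unit count L·C^(1/L) is minimized at L = ln C. The complexity value below is hypothetical, chosen only to show how a ~15-level optimum could arise; the paper's actual cost function may differ.

```python
import math

def total_units(levels: int, complexity: float) -> float:
    """Generic hierarchy cost: `levels` stages, each combining about
    complexity**(1/levels) parts from the stage below."""
    return levels * complexity ** (1.0 / levels)

def optimal_levels(complexity: float, max_levels: int = 50) -> int:
    """Brute-force search for the number of levels minimizing the cost."""
    return min(range(1, max_levels + 1),
               key=lambda L: total_units(L, complexity))

# Hypothetical complexity: the analytic optimum of this toy cost is
# L = ln(C), so a complexity of about e**15 (~3.3 million) yields an
# optimum of roughly 15 levels.
print(optimal_levels(math.exp(15)))  # -> 15
```

    The same trade-off, with a smaller complexity, would put the optimum near 7 levels, the order of magnitude the dictionary abstract above reports.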