
    Visual onset expands subjective time

    We report a distortion of subjective time perception in which the duration of a first interval is perceived to be longer than that of a succeeding interval of the same duration. The amount of time expansion depends on the type of onset that defines the first interval. When a stimulus appears abruptly, its duration is perceived to be longer than when it appears following a stationary array. The difference in processing time between stimulus onset and motion onset, measured as reaction times, agrees with the difference in time expansion. Our results suggest that initial transient responses to a visual onset serve as a temporal marker for time estimation, and that a systematic change in the processing time for onsets affects perceived time.
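
    To make the latency account concrete, the sketch below works through the arithmetic the abstract implies: if the transient response that marks interval onset arrives sooner for an abrupt onset than for a motion onset, the first interval is marked as correspondingly longer. The latency values are invented for illustration and are not the paper's measurements.

```python
# Hypothetical illustration of the temporal-marker account described above.
# All latencies are made-up values, not data from the paper.

def perceived_duration(physical_ms, onset_latency_ms, offset_latency_ms):
    """Duration bounded by the neural markers for interval onset and offset."""
    onset_marker = onset_latency_ms                    # when the start marker registers
    offset_marker = physical_ms + offset_latency_ms    # when the end marker registers
    return offset_marker - onset_marker

physical = 500.0  # both intervals last 500 ms physically
abrupt_onset = perceived_duration(physical, onset_latency_ms=60, offset_latency_ms=80)
motion_onset = perceived_duration(physical, onset_latency_ms=90, offset_latency_ms=80)

# A faster-processed abrupt onset places the start marker earlier,
# so the first interval is marked as longer (time expansion).
print(abrupt_onset, motion_onset)  # 520.0 vs 490.0
```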

    Evidence against global attention filters selective for absolute bar-orientation in human vision

    The finding that an item of type A pops out from an array of distractors of type B is typically taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering out bars of the opposite orientation in a centroid task.
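
    A minimal sketch of the centroid logic the abstract assumes: if a global attention filter could pass target items and block distractors, a weighted centroid over the display would recover the target centroid, whereas a leaky filter pulls the estimate toward the distractors. The positions and filter weights below are hypothetical.

```python
# Sketch of the centroid task's logic: an attention filter assigns each item a
# weight, and the reported centroid is the weight-averaged position.
# Item positions and weights here are invented for illustration.
import numpy as np

def weighted_centroid(positions, weights):
    """Centroid of items, each weighted by how strongly the filter passes it."""
    w = np.asarray(weights, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

# Horizontal bars (targets) and vertical bars (distractors) at (x, y) positions.
targets = [(1.0, 2.0), (3.0, 4.0), (5.0, 1.0)]
distractors = [(9.0, 9.0), (8.0, 7.0)]
positions = targets + distractors

ideal_filter = [1, 1, 1, 0, 0]      # passes targets, blocks distractors
leaky_filter = [1, 1, 1, 0.8, 0.8]  # distractors leak in and bias the estimate

print(weighted_centroid(positions, ideal_filter))  # true target centroid
print(weighted_centroid(positions, leaky_filter))  # pulled toward the distractors
```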

    A novel approach to data collection for difficult structures: data management for large numbers of crystals with the BLEND software

    The present article describes how to use the computer program BLEND to help assemble complete datasets for the solution of macromolecular structures, starting from partial or complete datasets collected from multiple crystals. The program is demonstrated on more than two hundred X-ray diffraction datasets obtained from 50 crystals of a complex formed between the SRF transcription factor, its cognate DNA, and a peptide from the SRF cofactor MRTF-A. This structure is currently being solved. While full details of the structure are not yet available, the repeated application of BLEND to data from this structure, as they have become available, has made it possible to produce electron density maps clear enough to visualise the potential location of MRTF sequences.
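
    As a rough illustration of the grouping step in BLEND's analysis stage, which clusters datasets by unit-cell similarity before attempting to merge them, the sketch below applies hierarchical clustering to synthetic cell parameters. It is not BLEND's own code or command-line interface, only the general idea.

```python
# Illustration of clustering partial datasets by unit-cell similarity so that
# only compatible crystals are merged together. Cell parameters are synthetic;
# this is not BLEND's implementation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# One row per partial dataset: a, b, c cell edges in angstroms
# (angles omitted to keep the sketch simple).
cells = np.array([
    [78.1, 78.3, 37.0],
    [78.0, 78.2, 37.1],
    [79.5, 79.6, 38.2],   # a slightly non-isomorphous crystal
    [78.2, 78.1, 37.0],
])

# Hierarchically cluster on the distance between cell-edge vectors.
tree = linkage(pdist(cells), method="ward")

# Cut the tree so datasets whose cells differ too much land in separate groups.
groups = fcluster(tree, t=1.0, criterion="distance")
print(groups)  # e.g. [1 1 2 1]: datasets 1, 2 and 4 could be merged together
```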

    Binding - a proposed experiment and a model

    The binding problem is regarded as one of today's key questions about brain function. Several solutions have been proposed, yet the issue is still controversial. The goal of this article is twofold. Firstly, we propose a new experimental paradigm requiring feature binding, the "delayed binding response task". Secondly, we propose a binding mechanism employing fast reversible synaptic plasticity to express the binding between concepts. We discuss the experimental predictions of our model for the delayed binding response task.
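
    The proposed mechanism can be caricatured in a few lines: a binding weight between two concept units grows rapidly while the units are co-active and relaxes back to baseline once co-activation ends. The update rule and constants below are assumptions for illustration, not the authors' model equations.

```python
# Toy sketch of binding via fast, reversible synaptic plasticity: a weight
# between two concept units rises quickly while the units are co-active and
# decays once co-activation stops. Rule and constants are assumed, not the
# authors' model.

def simulate_fast_weight(coactivity, rate=0.5, decay=0.2):
    """Return the binding weight over time given a 0/1 co-activity trace."""
    w, trace = 0.0, []
    for co in coactivity:
        w += rate * co * (1.0 - w)   # rapid Hebbian-like growth while co-active
        w -= decay * w * (1 - co)    # reversible: relax back once co-activation ends
        trace.append(round(w, 3))
    return trace

# Two concept units (say, "red" and "square") are co-active for 5 steps,
# then inactive for 5 steps: the binding forms quickly and then fades.
print(simulate_fast_weight([1] * 5 + [0] * 5))
```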

    Visual saliency and semantic incongruency influence eye movements when inspecting pictures

    Models of low-level saliency predict that, when we first look at a photograph, our first few eye movements should be made towards visually conspicuous objects. Two experiments investigated this prediction by recording eye fixations while viewers inspected pictures of room interiors that contained objects with known saliency characteristics. Highly salient objects did attract fixations earlier than less conspicuous objects, but only in a task requiring general encoding of the whole picture. When participants were required to detect the presence of a small target, the visual saliency of nontarget objects did not influence fixations. These results support modifications of the model that take the cognitive override of saliency into account by allowing task demands to reduce the saliency weights of task-irrelevant objects. The pictures sometimes contained incongruent objects that were taken from other rooms. These objects were used to test the hypothesis that previous reports of the early fixation of incongruent objects have been inconsistent because the effect depends upon the visual conspicuity of the incongruent object. There was an effect of incongruency in both experiments, with earlier fixation of objects that violated the gist of the scene, but the effect was apparent only for inconspicuous objects, which argues against the hypothesis.
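
    The modification favoured here, letting task demands reduce the saliency weights of task-irrelevant objects, can be sketched as a simple re-weighting of per-object saliency before fixation priority is computed. The object names and values below are invented.

```python
# Sketch of task-weighted saliency: fixation priority is driven by raw saliency
# under free viewing, but salient task-irrelevant objects are down-weighted
# during target search. Objects and numbers are invented for illustration.

raw_saliency = {"bright vase": 0.9, "patterned rug": 0.7, "small target": 0.3}

def fixation_priority(saliency, task_relevance, relevance_gain=0.0):
    """Down-weight the saliency of objects that are irrelevant to the current task."""
    return {
        obj: s * (1.0 - relevance_gain * (1.0 - task_relevance.get(obj, 0.0)))
        for obj, s in saliency.items()
    }

# General encoding of the whole picture: no down-weighting, saliency wins.
encoding = fixation_priority(raw_saliency, {obj: 1.0 for obj in raw_saliency})

# Target search: only the target is relevant, so salient non-targets lose priority.
search = fixation_priority(raw_saliency, {"small target": 1.0}, relevance_gain=0.9)

print(max(encoding, key=encoding.get))  # bright vase attracts early fixation
print(max(search, key=search.get))      # small target wins despite low raw saliency
```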

    Haptic pop-out of movable stimuli

    When, in visual and haptic search, a target is easily found among distractors, this is called a pop-out effect. The target feature is then believed to be salient, and the search is performed in parallel. We investigated this effect with movable stimuli in a haptic search task. The task was to find a movable ball among anchored distractors, or vice versa. Results show that reaction times were independent of the number of distractors when the movable ball was the target, but increased with the number of items when the anchored ball was the target. Analysis of hand movements revealed a parallel search strategy, shorter movement paths, a higher average movement speed, and a narrower direction distribution with the movable target, compared with a more detailed search for the anchored target. Taken together, these results show that a movable object pops out among anchored objects, indicating that movability is a salient object feature. Vibratory signals produced by the movable ball are a plausible explanation for the sensation responsible for this pop-out of movability.
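
    The set-size pattern reported, flat reaction-time functions with a movable target and rising functions with an anchored target, is the classic signature of parallel versus serial search, which a linear set-size model makes explicit. The slopes and intercept below are invented, not the study's estimates.

```python
# Minimal sketch of the standard set-size model behind the pop-out claim:
# RT = intercept + slope * number_of_items, with a near-zero slope for
# parallel (pop-out) search and a clearly positive slope for serial search.
# The intercept and slope values are invented.

def predicted_rt(n_items, intercept_ms, slope_ms_per_item):
    return intercept_ms + slope_ms_per_item * n_items

for n in (3, 6, 9, 12):
    pop_out = predicted_rt(n, intercept_ms=900, slope_ms_per_item=2)    # movable target
    serial = predicted_rt(n, intercept_ms=900, slope_ms_per_item=250)   # anchored target
    print(f"{n:2d} items: pop-out {pop_out:5.0f} ms, serial search {serial:5.0f} ms")
```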

    Negative emotional stimuli reduce contextual cueing but not response times in inefficient search

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by asking, first, whether negative emotional stimuli narrow the focus of attention and thereby reduce the learning of display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced when displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than after they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and on the ability to use scene context to facilitate search.