The brightness clustering transform and locally contrasting keypoints
In recent years a new wave of feature descriptors has been presented to the computer vision community, ORB, BRISK and FREAK amongst others. These new descriptors allow reduced time and memory consumption in the processing and storage stages of tasks such as image matching or visual odometry, enabling real-time applications. The problem is now the lack of fast interest point detectors with good repeatability to use with these new descriptors. We present a new blob-detector which can be implemented in real time and is faster than most of the currently used feature-detectors. The detection is achieved with an innovative non-deterministic low-level operator called the Brightness Clustering Transform (BCT). The BCT can be thought of as a coarse-to-fine search through scale spaces for the true derivative of the image; it also mimics trans-saccadic perception of human vision. We call the new algorithm the Locally Contrasting Keypoints detector, or LOCKY. Showing good repeatability and robustness to the image transformations included in the Oxford dataset, LOCKY is amongst the fastest affine-covariant feature detectors.
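The abstract describes the BCT only at a high level, as a randomised coarse-to-fine search that clusters brightness. The toy sketch below is our own illustration of that idea, not the authors' implementation: the function names, window sizes, vote counts, and the quadrant-descent rule are all assumptions. Each random trial starts from a large window and repeatedly descends into the brightest quadrant, casting a vote at the final location; vote peaks mark candidate bright blobs.

```python
import numpy as np

def coarse_to_fine_vote(img, rng, min_size=2):
    """One randomised coarse-to-fine descent (illustrative, not the
    published BCT): start from a random large window and repeatedly
    recurse into the brightest quadrant until the window is tiny."""
    h, w = img.shape
    # Random power-of-two window side between 16 and the image size
    # (an assumption; assumes the image is at least 16 px on a side).
    size = 2 ** rng.integers(4, int(np.log2(min(h, w))) + 1)
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    while size > min_size:
        half = size // 2
        # A real-time version would use integral-image sums here;
        # plain sums keep the toy example short.
        quads = [(y, x), (y, x + half), (y + half, x), (y + half, x + half)]
        sums = [img[qy:qy + half, qx:qx + half].sum() for qy, qx in quads]
        y, x = quads[int(np.argmax(sums))]
        size = half
    return y, x  # location that receives one brightness vote

def bct_votes(img, n_votes=1000, seed=0):
    """Accumulate votes from many random descents; peaks in the
    returned map indicate candidate bright blobs."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(img.shape, dtype=np.int32)
    for _ in range(n_votes):
        y, x = coarse_to_fine_vote(img, rng)
        acc[y, x] += 1
    return acc
```

On a synthetic image containing a single bright square, the vote map peaks inside the square, since every window that overlaps it descends towards the brightest quadrant. Dark blobs could be found the same way by voting on the inverted image.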
Recognition of the stimulus suffix
Recall of the final items in a spoken list is hindered by the presentation of a to-be-ignored item. The magnitude of this interference (the stimulus suffix effect) is reduced if the suffix is perceptually distinct from the other list items. Several experiments examine this effect of perceptual distinctiveness. The experiments involve later recognition of stimulus suffixes from lists presented for serial recall. Suffixes which differ from the list items tend to be recognized at least as well as list-similar suffixes. This supports the view that reduction of the suffix effect can be traced to decreased interitem interference in memory rather than to attentional selection.
Influence of hand position on the near-effect in 3D attention
Voluntary reorienting of attention in real depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously this effect (initially referred to as the "near-effect") was attributed to access of a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response-hand, always near the viewer in previous studies investigating endogenous attentional shifts in real depth. In Experiment 1, the response-hand was placed at either the near or far target depth in a depth cueing task. Placing the response-hand at the far target depth abolished the near-effect, but failed to bias spatial attention to the far location. Experiment 2 showed that the response-hand effect was not modulated by the presence of an additional passive hand, whereas Experiment 3 confirmed that attentional prioritization of the passive hand was not masked by the influence of the responding hand on spatial attention in Experiment 2. The pattern of results is most consistent with the idea that response preparation can modulate spatial attention within a 3D viewer-centred spatial representation.
Top-down control is not lost in the attentional blink: evidence from intact endogenous cuing.
The attentional blink (AB) refers to the finding that performance on the second of two targets (T1 and T2) is impaired when the targets are presented at a target onset asynchrony (TOA) of less than 500 ms. One account of the AB assumes that the processing load of T1 leads to a loss of top-down control over stimulus selection. The present study tested this account by examining whether an endogenous spatial cue that indicates the location of a following T2 can facilitate T2 report even when the cue and T2 occur within the time window of the AB. Results from three experiments showed that endogenous cuing had a significant effect on T2 report, both during and outside of the AB; this cuing effect was modulated by both the cue-target onset asynchrony and by cue validity, while it was invariant to the AB. These results suggest that top-down control over target selection is not lost during the AB. © 2007 Springer-Verlag
Visual similarity in masking and priming: The critical role of task relevance
Cognitive scientists use rapid image sequences to study both the emergence of
conscious perception (visual masking) and the unconscious processes involved in
response preparation (masked priming). The present study asked two questions:
(1) Does image similarity influence masking and priming in the same way? (2) Are
similarity effects in both tasks governed by the extent of feature overlap in
the images or only by task-relevant features? Participants in Experiment 1
classified human faces using a single dimension even though the faces varied in
three dimensions (emotion, race, sex). Abstract geometric shapes and colors were
tested in the same way in Experiment 2. Results showed that similarity
reduced the visibility of the target in the masking task
and increased response speed in the priming task, pointing to a
double-dissociation between the two tasks. Results also showed that only
task-relevant (not objective) similarity influenced masking and priming,
implying that both tasks are influenced from the beginning by intentions of the
participant. These findings are interpreted within the framework of a reentrant
theory of visual perception. They imply that intentions can influence object
formation prior to the separation of vision for perception and vision for
action.