Insensitivity of visual short-term memory to irrelevant visual information
Several authors have hypothesised that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996a). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.
Integrated cross-domain object storage in working memory: Evidence from a verbal-spatial memory task
Working-memory theories often include domain-specific verbal and visual stores (e.g., the phonological and visuospatial buffers of Baddeley, 1986), and some also posit more general stores thought to be capable of holding verbal or visuospatial materials (Baddeley, 2000; Cowan, 2005). However, it is currently unclear which type of store is primarily responsible for maintaining objects that include components from multiple domains. In these studies, a spatial array of letters was followed by a single probe identical to an item in the array or differing systematically in spatial location, letter identity, or their combination. Concurrent verbal rehearsal suppression impaired memory in each of these trial types in a task that required participants to remember verbal-spatial binding, but did not impair memory for spatial locations if the task did not require verbal-spatial binding for a correct response. Thus, spatial information might be stored differently when it must be bound to verbal information. This suggests that a cross-domain store such as the episodic buffer of Baddeley (2000) or the focus of attention of Cowan (2001) might be used for integrated object storage, rather than the maintenance of associations between features stored in separate domain-specific buffers.
Associating object names with descriptions of shape that distinguish possible from impossible objects.
Five experiments examine the proposal that object names are closely linked to representations of global, 3D shape by comparing memory for simple line drawings of structurally possible and impossible novel objects. Objects were rendered impossible through local edge violations to global coherence (cf. Schacter, Cooper, & Delaney, 1990), and supplementary observations confirmed that the sets of possible and impossible objects were matched for their distinctiveness. Employing a test of explicit recognition memory, Experiment 1 confirmed that the possible and impossible objects were equally memorable. Experiments 2–4 demonstrated that adults learn names (single-syllable non-words presented as count nouns, e.g., “This is a dax”) for possible objects more easily than for impossible objects, and an item-based analysis showed that this effect was unrelated to either the memorability or the distinctiveness of the individual objects. Experiment 3 indicated that the effects of object possibility on name learning were long term (spanning at least 2 months), implying that the cognitive processes being revealed can support the learning of object names in everyday life. Experiment 5 demonstrated that hearing someone else name an object at presentation improves recognition memory for possible objects, but not for impossible objects. Taken together, the results indicate that object names are closely linked to the descriptions of global, 3D shape that can be derived for structurally possible objects but not for structurally impossible objects. In addition, the results challenge the view that object decision and explicit recognition necessarily draw on separate memory systems, with only the former being supported by these descriptions of global object shape. It seems that recognition also can be supported by these descriptions, provided the original encoding conditions encourage their derivation. Hearing an object named at encoding appears to be just such a condition. These observations are discussed in relation to the effects of naming in other visual tasks, and to the role of visual attention in object identification.
Memory for pitch in congenital amusia: Beyond a fine-grained pitch discrimination problem
Congenital amusia is a disorder that affects the perception and production of music. While amusia has been associated with deficits in pitch discrimination, several reports suggest that memory deficits also play a role. The present study investigated short-term memory span for pitch-based and verbal information in 14 individuals with amusia and matched controls. Analogous adaptive-tracking procedures were used to generate tone and digit spans using stimuli that exceeded psychophysically measured pitch perception thresholds. Individuals with amusia had significantly smaller tone spans, whereas their digit spans were a similar size to those of controls. An automated operation span task was used to determine working memory capacity. Working memory deficits were seen in only a small subgroup of individuals with amusia. These findings support the existence of a pitch-specific component within short-term memory and suggest that congenital amusia is more than a disorder of fine-grained pitch discrimination.
Using non-speech sounds to provide navigation cues
This article describes 3 experiments that investigate the possibility of using structured non-speech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as “does the lower quality of sound over the telephone lower recall rates,” “can users remember earcons over a period of time,” and “what effect does training type have on recall?” An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
Further evidence that not all executive functions are equal
The current study presents a comparison of 2 structural equation models describing the relationship between the executive functions of updating and inhibiting. Although it has been argued that working memory capacity is defined by one’s ability to control the focus of attention, the findings of the current study support a view of the executive control of attention that reflects updating and inhibiting as not entirely dependent on the same resources.
What is the best strategy for retaining gestures in working memory?
This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: "no interference", gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the "no interference" condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.
Demographic quantification of carbon and nitrogen dynamics associated with root turnover in white clover
ACKNOWLEDGEMENTS This work formed part of Gavin Scott’s PhD at the University of Aberdeen, funded by the Scottish Executive Rural Affairs Department (now the Scottish Government's Rural and Environment Science and Analytical Services Division). We thank Prof Ian Bingham and two anonymous reviewers for their helpful comments.
Multiscale Discriminant Saliency for Visual Attention
Bottom-up saliency, an early stage of human visual attention, can be considered as a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the two class distributions. The estimated discrepancy between the two feature classes depends strongly on the scale levels considered; therefore, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a hidden Markov tree (HMT). From the wavelet coefficients and hidden Markov tree parameters, quad-tree-like label structures are constructed and used in maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. A saliency value for each dyadic square at each scale level is then computed from the discriminant power principle and the MAP estimate. Finally, saliency across multiple scales is integrated into the final saliency map by an information maximization rule. Both standard quantitative tools such as NSS, LCC, and AUC and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on the Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS and to point out its disadvantages for further research directions.
Comment: 16 pages, ICCSA 2013 - BIOCA session
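The core idea of the abstract — scoring a region's saliency by the mutual information between wavelet detail features and a center/surround class split — can be illustrated with a minimal sketch. This is not the paper's method: it uses only Haar detail magnitudes, a fixed rectangular center window, and a simple maximum over scales in place of the HMT, MAP estimation, and information-maximization machinery, all of which are omitted here.

```python
import numpy as np

def haar_level(img):
    # One level of a 2D Haar transform on an even-sized image:
    # returns the low-pass approximation and a combined detail magnitude.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    detail = (np.abs(a + b - c - d) + np.abs(a - b + c - d)
              + np.abs(a - b - c + d)) / 4.0
    return ll, detail

def mutual_info(feature, labels, bins=8):
    # Mutual information (bits) between quantized feature values
    # and binary center/surround labels, via the plug-in estimator.
    edges = np.linspace(feature.min(), feature.max() + 1e-9, bins)
    q = np.digitize(feature, edges)
    mi = 0.0
    for v in np.unique(q):
        for c in (0, 1):
            p_joint = np.mean((q == v) & (labels == c))
            if p_joint > 0:
                p_v = np.mean(q == v)
                p_c = np.mean(labels == c)
                mi += p_joint * np.log2(p_joint / (p_v * p_c))
    return mi

def discriminant_saliency(img, levels=2):
    # Discriminant power per scale: MI between Haar detail magnitudes
    # and a fixed center-window vs. surround labeling. Taking the max
    # over scales is a crude stand-in for the paper's cross-scale
    # information-maximization rule.
    scores = []
    cur = img.astype(float)
    for _ in range(levels):
        cur, detail = haar_level(cur)
        h, w = detail.shape
        labels = np.zeros((h, w), dtype=int)
        labels[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1
        scores.append(mutual_info(detail.ravel(), labels.ravel()))
    return max(scores)
```

A textured center patch on a flat background yields strictly positive discriminant power, while a uniform image yields zero, matching the intuition that saliency tracks how well features separate center from surround.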