77 research outputs found

    The Bytown Gunners: The History of Ottawa’s Artillery, 1855-2015 (Book Review) by Kenneth W. Reynolds

    Review of The Bytown Gunners: The History of Ottawa’s Artillery, 1855-2015 by Kenneth W. Reynolds

    Longitudinal Task-Related Functional Connectivity Changes Predict Reading Development

    Longitudinal studies suggest developmentally dependent changes in lexical processing during reading development, implying a change in inter-regional functional connectivity over this period. The current study used functional magnetic resonance imaging (fMRI) to explore developmental changes in functional connectivity across multiple runs of a rhyming judgment task in young readers (8–14 years) over an average 2.5-year span. Changes in functional segregation are correlated with and predict changes in the skill with which typically developing children learn to apply the alphabetic principle, as measured by pseudoword decoding. This indicates that a developmental shift in the proportion of specialized functional clusters is associated with changes in reading skill, and suggests that reading development depends on changes in particular neural pathways; specifically, decreases in transitivity are indicative of greater network integration. This work provides evidence that characteristics of these pathways, quantified using graph-theoretic metrics, can be used to predict individual differences in reading development.
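
    The abstract does not specify how its graph-theoretic metrics were computed, but as a rough sketch of the kind of measure it refers to, the snippet below thresholds a hypothetical region-by-region functional connectivity matrix into a binary graph and reports its transitivity with networkx. The matrix, region count, and threshold are illustrative assumptions, not values from the study.

```python
# Illustrative sketch only: compute graph transitivity from a hypothetical
# region-by-region functional connectivity matrix. The matrix, threshold,
# and region count are assumptions, not values from the study.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical correlation matrix for 90 brain regions (symmetric, unit diagonal).
n_regions = 90
a = rng.uniform(-0.2, 0.8, size=(n_regions, n_regions))
fc = (a + a.T) / 2
np.fill_diagonal(fc, 1.0)

# Binarise at an arbitrary threshold to obtain an undirected graph.
threshold = 0.3
adjacency = (fc > threshold).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)

# Transitivity = 3 * triangles / connected triples; the abstract reads
# decreases in this metric as a sign of greater network integration.
print("transitivity:", round(nx.transitivity(G), 3))
```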

    Assessing Vividness of Mental Imagery: The Plymouth Sensory Imagery Questionnaire

    Mental imagery may occur in any sensory modality, although visual imagery has been most studied. A sensitive measure of the vividness of imagery across a range of modalities is needed: the shorter version of Betts' QMI (Sheehan, 1967) uses outdated items and has an unreliable factor structure. We report the development and initial validation of the Plymouth Sensory Imagery Questionnaire (Psi-Q), comprising items for each of the following modalities: Vision, Sound, Smell, Taste, Touch, Bodily Sensation and Emotional Feeling. An exploratory factor analysis on a 35-item form indicated that these modalities formed separate factors, rather than a single imagery factor, and this was replicated by confirmatory factor analysis. The Psi-Q was validated against the Spontaneous Use of Imagery Scale (Reisberg, Pearson & Kosslyn, 2003) and Marks' (1995) Vividness of Visual Imagery Questionnaire-2 (VVIQ-2). A short 21-item form comprising the best three items from each of the seven factors correlated with the total score and subscales of the full form, and with the VVIQ-2. Inspection of the data shows that while visual and sound imagery are most often rated as vivid, individuals who rate one modality as strong and another as weak are not uncommon. Findings are interpreted within a working memory framework and point to the need for further research to identify the specific cognitive processes underlying the vividness of imagery across sensory modalities.
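
    As an illustration of the exploratory factor analysis step described above (not the published Psi-Q analysis), the sketch below fits a seven-factor model to simulated 35-item vividness ratings with scikit-learn and inspects the item loadings; the simulated data and their block structure are assumptions made purely for the demonstration.

```python
# Minimal sketch of an exploratory factor analysis on simulated vividness
# ratings; the data, item counts, and factor count are illustrative
# assumptions, not the published Psi-Q analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_modalities, items_per_modality = 300, 7, 5
n_items = n_modalities * items_per_modality   # 35-item form

# Simulate ratings: each block of 5 items is driven by one latent modality.
latent = rng.normal(size=(n_respondents, n_modalities))
loadings = np.kron(np.eye(n_modalities), np.ones((1, items_per_modality)))
ratings = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

# Seven-factor model with varimax rotation; items should load mainly on
# their own modality factor, mirroring the multi-factor (rather than
# single-factor) structure reported above.
fa = FactorAnalysis(n_components=n_modalities, rotation="varimax")
fa.fit(ratings)
print(np.round(fa.components_.T[:items_per_modality], 2))  # loadings for the first modality's items
```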

    Modelling concept prototype competencies using a developmental memory model

    The use of concepts is fundamental to human-level cognition, but a number of open questions remain about the structures supporting this competence. Specifically, it has been shown that humans use concept prototypes, a flexible means of representing concepts that can be used both for categorisation and for similarity judgements. In the context of autonomous robotic agents, the processes by which such concept functionality could be acquired would be particularly useful, enabling flexible knowledge representation and application. This paper seeks to explore this issue of autonomous concept acquisition. By applying, within a developmental framework, a set of structural and operational principles that support a wide range of cognitive competencies, the intention is to embed the development of concepts explicitly into a wider framework of cognitive processing. Comparison with a benchmark concept modelling system shows that the proposed approach can account for a number of features, namely concept-based classification and its extension to prototype-like functionality.
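
    As a minimal illustration of the prototype functionality the paper targets, rather than its developmental memory model, the sketch below builds category prototypes as mean feature vectors and uses them both for classification and for graded similarity judgements; the features and category labels are invented for the example.

```python
# Illustrative sketch of prototype-based concept use: category prototypes as
# mean feature vectors, serving both classification and graded similarity
# judgements. Features and categories are made up for the demonstration.
import numpy as np

def build_prototypes(features, labels):
    """Prototype for each category = mean of its exemplars' feature vectors."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x, prototypes):
    """Assign x to the category with the nearest (Euclidean) prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

def similarity(x, prototype):
    """Graded similarity judgement: inverse of distance to the prototype."""
    return 1.0 / (1.0 + np.linalg.norm(x - prototype))

rng = np.random.default_rng(2)
features = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
labels = np.array(["bird"] * 20 + ["fish"] * 20)

protos = build_prototypes(features, labels)
novel = rng.normal(0, 1, 10)
print(classify(novel, protos), round(similarity(novel, protos["bird"]), 3))
```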

    Semantic Memory

    How is it that we know what a dog and a tree are, or, for that matter, what knowledge is? Our semantic memory consists of knowledge about the world, including concepts, facts and beliefs. This knowledge is essential for recognizing entities and objects, and for making inferences and predictions about the world. In essence, our semantic knowledge determines how we understand and interact with the world around us. In this chapter, we examine semantic memory from cognitive, sensorimotor, cognitive neuroscientific, and computational perspectives. We consider the cognitive and neural processes (and biases) that allow people to learn and represent concepts, and discuss how and where in the brain sensory and motor information may be integrated to allow for the perception of a coherent “concept”. We suggest that our understanding of semantic memory can be enriched by considering how semantic knowledge develops across the lifespan within individuals.

    Sensory attenuation is modulated by the contrasting effects of predictability and control.

    Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
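
    For readers unfamiliar with the ERP measure involved, the sketch below computes mean N1 amplitude in a fixed post-stimulus window from simulated epoched EEG and compares a self-generated with an externally generated condition. The sampling rate, window, amplitudes, and data are assumptions for illustration only, not the authors' recordings or analysis pipeline.

```python
# Rough sketch of the ERP measure at issue: mean amplitude in an N1 window
# for two conditions of simulated epoched EEG. Sampling rate, window, and
# data are assumptions, not the authors' recordings or pipeline.
import numpy as np

rng = np.random.default_rng(3)
sfreq = 500                                      # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1 / sfreq)          # epoch from -100 to +400 ms
n1_window = (times >= 0.08) & (times <= 0.12)    # 80-120 ms window (assumed)

def simulate_epochs(n1_amplitude, n_trials=60):
    """Epochs with an N1-like negative deflection around 100 ms plus noise."""
    n1 = n1_amplitude * np.exp(-((times - 0.1) ** 2) / (2 * 0.015 ** 2))
    return n1 + rng.normal(0, 2.0, size=(n_trials, times.size))

self_tones = simulate_epochs(n1_amplitude=-3.0)        # attenuated N1 (assumed)
external_tones = simulate_epochs(n1_amplitude=-5.0)

# Mean N1 amplitude per condition: average over trials, then over the window.
for name, epochs in [("self-generated", self_tones),
                     ("externally generated", external_tones)]:
    print(name, round(float(epochs.mean(axis=0)[n1_window].mean()), 2), "µV")
```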

    A meta-analytic review of multisensory imagery identifies the neural correlates of modality-specific and modality-general imagery

    The relationship between imagery and mental representations induced through perception has been the subject of philosophical discussion since antiquity and of vigorous scientific debate in the last century. The relatively recent advent of functional neuroimaging has allowed neuroscientists to look for brain-based evidence for or against the argument that perceptual processes underlie mental imagery. Recent investigations of imagery in many new domains and the parallel development of new meta-analytic techniques now afford us a clearer picture of the relationship between the neural processes underlying imagery and perception, and indeed between imagery and other cognitive processes. This meta-analysis surveyed 65 studies investigating modality-specific imagery in auditory, tactile, motor, gustatory, olfactory, and three visual sub-domains: form, color and motion. Activation likelihood estimate (ALE) analyses of the activation foci reported within and across sensorimotor modalities were conducted. The results indicate that modality-specific imagery activations generally overlap with, but are not confined to, corresponding somatosensory processing and motor execution areas, and suggest that there is a core network of brain regions recruited during imagery, regardless of task. These findings have important implications for investigations of imagery and theories of cognitive processes, such as perceptually-based representational systems.
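
    For readers unfamiliar with the ALE method, the sketch below shows its core computation in a simplified one-dimensional form: each reported focus is modelled as a Gaussian modelled-activation (MA) map, and the ALE score at each voxel is the union of those maps. The grid, foci, and smoothing width are illustrative assumptions; real ALE analyses work in 3-D standard space with empirically derived kernels and formal statistical thresholding.

```python
# Simplified 1-D illustration of the activation likelihood estimate (ALE)
# computation: each focus becomes a Gaussian "modelled activation" (MA) map,
# and the ALE score is their voxel-wise union. Coordinates, smoothing width,
# and the 1-D grid are illustrative assumptions (real ALE works in 3-D).
import numpy as np

voxels = np.arange(0, 100)          # a 1-D stand-in for brain voxels
foci = [22, 25, 70]                 # hypothetical reported activation foci
sigma = 4.0                         # assumed smoothing width

# Modelled activation map per focus: a Gaussian centred on the focus.
ma_maps = [np.exp(-((voxels - f) ** 2) / (2 * sigma ** 2)) for f in foci]

# ALE score = probability that at least one focus "activates" each voxel:
# 1 - product over foci of (1 - MA_i).
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)

print("peak ALE voxel:", int(voxels[np.argmax(ale)]), "score:", round(float(ale.max()), 3))
```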

    Cross-modal integration in the brain is related to phonological awareness only in typical readers, not in those with reading difficulty

    Fluent reading requires successfully mapping between visual orthographic and auditory phonological representations and is thus an intrinsically cross-modal process, though reading difficulty has often been characterized as a phonological deficit. However, recent evidence suggests that orthographic information influences phonological processing in typically developing (TD) readers, but that this effect may be blunted in those with reading difficulty (RD), suggesting that the core deficit underlying reading difficulties may be a failure to integrate orthographic and phonological information. Twenty-six (13 TD and 13 RD) children between 8 and 13 years of age participated in a functional magnetic resonance imaging (fMRI) experiment designed to assess the role of phonemic awareness in cross-modal processing. Participants completed a rhyme judgment task for word pairs presented unimodally (auditory only) and cross-modally (auditory followed by visual). For typically developing children, correlations between elision and neural activation were found for the cross-modal but not the unimodal task, whereas in children with RD, no correlation was found. The results suggest that elision taps both phonemic awareness and cross-modal integration in typically developing readers, and that these processes are decoupled in children with reading difficulty.

    Operation and Management of Aging Gas Distribution Systems


    An Attractor Model of Lexical Conceptual Processing: Simulating Semantic Priming

    An attractor network was trained to compute a mapping from word form to semantic representations that were based on subject-generated features. The model was driven largely by higher-order semantic structure. The network simulated two recent experiments that employed items included in its training set (McRae and Boisvert, 1998). In Simulation 1, short stimulus onset asynchrony priming was demonstrated for semantically similar items. Simulation 2 reproduced subtle effects obtained by varying degree of similarity. Two predictions from the model were then tested on human subjects. In Simulation 3 and Experiment 1, the items from Simulation 1 were reversed, and both the network and subjects showed minimally different priming effects in the two directions. In Experiment 2, consistent with attractor networks but contrary to a key aspect of hierarchical spreading activation accounts, priming was determined by featural similarity rather than shared superordinate category. It is concluded that semantic-similarity priming is due to featural overlap that is a natural consequence of distributed representations of word meaning.
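
    The settling-time intuition behind this account can be caricatured in a few lines: the sketch below (a toy relaxation model, not the paper's trained attractor network) lets a semantic state settle toward a target word's feature pattern and counts the steps required, which is smaller when the starting state (the prime's pattern) shares more features with the target. The patterns, degree of overlap, and dynamics are invented for illustration.

```python
# Toy sketch of the priming mechanism described above, not the paper's model:
# a semantic state relaxes toward a target feature pattern, and settling is
# faster when the starting state (the prime) shares more features with it.
import numpy as np

rng = np.random.default_rng(4)
n_features = 100

def settle_steps(target, prime, tau=0.2, criterion=0.95, max_steps=200):
    """Relax the semantic state from the prime's pattern toward the target;
    return the number of steps needed to reach criterion cosine similarity."""
    state = prime.astype(float)
    for step in range(max_steps):
        state = state + tau * (target - state)          # leaky settling
        cos = state @ target / (np.linalg.norm(state) * np.linalg.norm(target))
        if cos >= criterion:
            return step
    return max_steps

target = rng.choice([-1.0, 1.0], n_features)
related_prime = target.copy()
related_prime[rng.choice(n_features, 15, replace=False)] *= -1   # 85% feature overlap
unrelated_prime = rng.choice([-1.0, 1.0], n_features)

# Fewer settling steps from the related prime = facilitated (primed) response.
print("related prime:  ", settle_steps(target, related_prime))
print("unrelated prime:", settle_steps(target, unrelated_prime))
```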