28,380 research outputs found

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. 
The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. Supported by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
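The accumulation scheme the abstract describes, a gist-based spatial hypothesis refined by object evidence across saccades, can be caricatured as simple Bayesian updating. The sketch below is purely illustrative: the number of locations, the prior, and the likelihood values are invented and are not the ARTSCENE Search equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 5 candidate locations; the target (e.g. a sink) is at location 3.
n_locations, target = 5, 3

# "First glance" spatial prior from learned scene gist (hypothetical values):
# the familiar layout makes location 3 the most plausible target position.
prior = np.array([0.10, 0.15, 0.15, 0.45, 0.15])

belief = prior.copy()
for fixation in range(4):
    # Each saccade yields noisy object evidence: higher likelihood where the
    # fixated object resembles the target.
    likelihood = np.full(n_locations, 0.2)
    likelihood[target] = 0.6 + 0.1 * rng.random()
    belief = belief * likelihood          # accumulate evidence over time
    belief /= belief.sum()                # renormalize to a probability

print(belief.argmax())
```

After a few fixations the belief concentrates on the cued location, mimicking how the spatial hypothesis is incrementally sharpened as the scene is scanned.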

    Reading aloud boosts connectivity through the putamen

    Functional neuroimaging and lesion studies have frequently reported thalamic and putamen activation during reading and speech production. However, it is currently unknown how activity in these structures interacts with that in other reading and speech production areas. This study investigates how reading aloud modulates the neuronal interactions between visual recognition and articulatory areas when both the putamen and thalamus are explicitly included. Using dynamic causal modeling in skilled readers who were reading regularly spelled English words, we compared 27 possible pathways that might connect the ventral anterior occipito-temporal sulcus (aOT) to articulatory areas in the precentral cortex (PrC). We focused on whether the neuronal interactions within these pathways were increased by reading relative to picture naming and other visual and articulatory control conditions. The results provide strong evidence that reading boosts the aOT–PrC pathway via the putamen but not the thalamus. However, the putamen pathway was not exclusive, because there was also evidence for another reading pathway that involved neither the putamen nor the thalamus. We conclude that the putamen plays a special role in reading, but this is likely to vary with individual reading preferences and strategies.
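The kind of Bayesian model comparison that underlies a statement like "strong evidence for the putamen pathway" can be sketched as a softmax over log model evidences. Everything below is hypothetical: the model names, log-evidence values, and the flat-prior fixed-effects scheme are illustrative assumptions, not the study's actual numbers or SPM's implementation.

```python
import numpy as np

# Hypothetical log-evidence values for three candidate pathway models,
# e.g. aOT -> putamen -> PrC, aOT -> thalamus -> PrC, and a direct route.
log_evidence = {
    "via_putamen": -1180.0,
    "via_thalamus": -1192.0,
    "direct": -1184.0,
}

# Fixed-effects comparison: with a flat prior over models, posterior model
# probabilities are a softmax of the log evidences.
names = list(log_evidence)
le = np.array([log_evidence[n] for n in names])
p = np.exp(le - le.max())          # subtract max for numerical stability
p /= p.sum()

best = names[int(p.argmax())]
for name, prob in zip(names, p):
    print(f"{name}: {prob:.3f}")
```

A log-evidence difference greater than about 3 (a Bayes factor above ~20) is conventionally read as strong evidence for the winning model.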

    Computational modelling of neural mechanisms underlying natural speech perception

    Humans are highly skilled at the analysis of complex auditory scenes. In particular, the human auditory system is characterized by incredible robustness to noise and can nearly effortlessly isolate the voice of a specific talker from even the busiest of mixtures. However, the neural mechanisms underlying these remarkable properties remain poorly understood. This is mainly due to the inherent complexity of speech signals and the multi-stage, intricate processing performed in the human auditory system. Understanding the neural mechanisms underlying speech perception is of interest for clinical practice, brain-computer interfacing, and automatic speech processing systems. In this thesis, we developed computational models characterizing neural speech processing across different stages of the human auditory pathways. In particular, we studied the active role of slow cortical oscillations in speech-in-noise comprehension through a spiking neural network model for encoding spoken sentences. The neural dynamics of the model during noisy speech encoding reflected speech comprehension of young, normal-hearing adults. The proposed theoretical model was validated by predicting the effects of non-invasive brain stimulation on speech comprehension in an experimental study involving a cohort of volunteers. Moreover, we developed a modelling framework for detecting the early, high-frequency neural response to uninterrupted speech in non-invasive neural recordings. We applied the method to investigate top-down modulation of this response by the listener's selective attention and by linguistic properties of different words from a spoken narrative. We found that in both cases the detected responses, of predominantly subcortical origin, were significantly modulated, which supports the functional role of feedback between higher and lower stages of the auditory pathways in speech perception. 
The proposed computational models shed light on some of the poorly understood neural mechanisms underlying speech perception. The developed methods can be readily employed in future studies involving a range of experimental paradigms beyond those considered in this thesis.
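Detecting a weak, short-latency response to continuous speech can be illustrated with a toy cross-correlation detector tested against a permutation null. This is a sketch only: the sampling rate, latency, amplitudes, white-noise feature, and circular-shift null are illustrative assumptions, not the thesis's actual detection framework.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                       # Hz, toy sampling rate
t = np.arange(0, 10, 1 / fs)

# Hypothetical stimulus feature, standing in for the high-frequency
# envelope of continuous speech.
feature = rng.standard_normal(t.size)

# Simulated recording: the feature appears delayed and buried in noise,
# mimicking a short-latency, predominantly subcortical response.
delay = int(0.009 * fs)         # 9 ms latency
recording = 0.1 * np.roll(feature, delay) + rng.standard_normal(t.size)

# Cross-correlate at candidate lags; the response latency is the peak lag.
lags = range(0, 30)
xcorr = [np.dot(np.roll(feature, l), recording) / t.size for l in lags]
peak_lag = int(np.argmax(np.abs(xcorr)))

# Permutation null: circularly shifted features destroy the true alignment.
null = [abs(np.dot(np.roll(feature, int(rng.integers(1000, 9000))), recording))
        / t.size for _ in range(200)]
detected = abs(xcorr[peak_lag]) > np.percentile(null, 99)
print(peak_lag, detected)
```

The same logic scales to real recordings, where the feature would be a speech-derived regressor and the null would control for multiple comparisons across lags.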

    Hemispheric specialization in selective attention and short-term memory: a fine-coarse model of left- and right-ear disadvantages.

    Serial short-term memory is impaired by irrelevant sound, particularly when the sound changes acoustically. This acoustic effect is larger when the sound is presented to the left ear than to the right (a left-ear disadvantage). Serial memory appears relatively insensitive to distraction from the semantic properties of a background sound. In contrast, short-term free recall of semantic-category exemplars is impaired by the semantic properties of background speech and is relatively insensitive to the sound's acoustic properties. This semantic effect is larger when the sound is presented to the right ear than to the left (a right-ear disadvantage). In this paper, we outline a speculative neurocognitive fine-coarse model of these hemispheric differences in relation to short-term memory and selective attention, and explicate empirical directions in which this model can be critically evaluated.

    Music Therapy Techniques for Memory Stabilization in Diverse Dementias

    Music contains certain unmistakable healing properties pertaining specifically to the matured body and soul affected by various types of dementia. Music therapy aids in memory retention or slows the loss of mental function resulting from Alzheimer's disease, Dementia with Lewy bodies, and Senile Dementia. Music can help subjects access lost memories through interaction with a music therapist. Certain music therapy techniques have been shown to yield additional physical, communicative, and psychological benefits. The progression of Alzheimer's disease, Dementia with Lewy bodies, and Senile Dementia may be further delayed by music therapy when paired with pharmaceutical interventions such as previously established memory-enhancing medications.

    Experimental investigation of automatic processes in visual perception

    The electronic version of this thesis does not contain the publications. The research presented and discussed in the thesis is an experimental exploration of processes in visual perception, all of which display a considerable amount of automaticity. These processes are targeted from different angles using different experimental paradigms and stimuli, and by measuring both behavioural and brain responses. In the first three empirical studies, the focus is on motion detection, regarded as one of the most basic processes shaped by evolution. Study I investigated how the motion of an object is processed in the presence of background motion. Although it has long been believed that motion is always computed relative to other objects or to motion in the background, our results found no support for this relative-motion principle. This finding speaks in favour of a simple and automatic process of detecting motion, one that registers displacements on the retina and is largely insensitive to the surrounding context. Study II shows that the visual system automatically processes motion information that lies outside our attentional focus: even while we are occupied with an attention-demanding task, the brain registers events in the background. Study III addressed what happens when multiple stimulus qualities (motion and colour) are present and varied, which is the everyday reality of our visual input. We showed that an object's motion facilitated the detection of colour changes on the same object, suggesting that the processing of motion and colour is not entirely isolated; these results also indicate that motion information is hard to ignore and that its processing is initiated rather automatically. The fourth empirical study focusses on another kind of visual input that is processed in a largely automatic way and carries high survival value: emotional facial expressions. In Study IV, participants detected emotional facial expressions faster and more easily than neutral ones, with a tendency towards more automatic attention to angry faces. In addition, we investigated the emergence of visual mismatch negativity (vMMN), which reflects the brain's ability to automatically detect deviations from its internal model of the surrounding environment; Studies II and IV both provide evidence on the emergence of vMMN under different conditions and paradigms, and propose several methodological improvements for registering this automatic change-detection mechanism. Study V is the first comprehensive review and meta-analysis of vMMN studies in psychiatric and neurological disorders, and is an important contribution to the vMMN research field.

    Solving the Mind-Body Problem through Two Distinct Concepts: Internal-Mental Existence and Internal Mental Reality

    In a previously published paper in this journal, we initiated a discussion about new perspectives on the organization and functioning of the mind, as a premise for addressing the mind-body problem. In this article, we continue by focusing the discussion on two distinct but interrelated concepts: internal-mental existence/entity and internal-mental reality. These two psycho-physiological subunits of the mind interact with each other in the form of an internal-mental interaction; neither makes sense if isolated or studied separately from the other. In other words, the mind (as a dynamic psycho-physiological construction) makes no sense in the absence of this internal-mental interaction between internal-mental existence and internal-mental reality. In the case of the mind-body problem, the tendency until now has been to assign extremely complex functions of the mind (abstract ideas, consciousness, colors) to simplistic physiological/neuronal structures. We hope that this paper opens a new perspective on the complex, interrelated neuronal structures that construct the mind through their interaction, a process that is both physiological (the transmission of neural impulses) and psychological (the transmission of information), and that requires time (an immaterial component) to occur.

    Sparse visual models for biologically inspired sensorimotor control

    Given the importance of using resources efficiently in the competition for survival, it is reasonable to think that natural evolution has discovered efficient cortical coding strategies for representing natural visual information. Sparse representations have intrinsic advantages in terms of fault tolerance and low power consumption, and are therefore attractive for robot sensorimotor control and decision-making. Inspired by the mammalian brain and its ventral visual pathway, we present in this paper a hierarchical sparse coding network architecture that extracts visual features for use in sensorimotor control. Testing with natural images demonstrates that this sparse coding facilitates processing and learning in subsequent layers. Previous studies have shown how the responses of complex cells could be sparsely represented by a higher-order neural layer. Here we extend sparse coding to each network layer, showing that detailed modeling of earlier stages in the visual pathway enhances the characteristics of the receptive fields developed in subsequent stages. The resulting network is more dynamic, with richer and more biologically plausible input and output representations.
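As a minimal illustration of sparse coding of visual input (a generic sketch, not the paper's hierarchical network), the classic ISTA iteration recovers a sparse code for a signal given a fixed dictionary. The dictionary and "patch" below are synthetic stand-ins for learned features and image data.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """Shrink coefficients toward zero; this is the source of sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA: minimize 0.5*||x - D@a||^2 + lam*||a||_1 over the code a."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a

# Hypothetical over-complete dictionary (standing in for learned Gabor-like
# features) and a toy "patch" built from two atoms plus a little noise.
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = 1.5 * D[:, 3] - 1.0 * D[:, 70] + 0.01 * rng.standard_normal(64)

a = sparse_code(D, x)
print(np.count_nonzero(a), "active units out of", a.size)
```

Only a handful of the 128 units end up active, which is the fault-tolerance and low-power argument in miniature: most units are silent for any given input.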