
    Spatio-temporal wavelet regularization for parallel MRI reconstruction: application to functional MRI

    Parallel MRI is a fast imaging technique that enables the acquisition of highly resolved images in space and/or time. The performance of parallel imaging strongly depends on the reconstruction algorithm, which can proceed either in the original k-space (GRAPPA, SMASH) or in the image domain (SENSE-like methods). To improve the performance of the widely used SENSE algorithm, 2D or slice-specific regularization in the wavelet domain has been investigated in depth. In this paper, we extend this approach using 3D wavelet representations in order to handle all slices together and address reconstruction artifacts that propagate across adjacent slices. The gain induced by this extension (3D Unconstrained Wavelet-Regularized SENSE: 3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal acquisition is considered. Another important extension accounts for the temporal correlations that exist between successive scans in functional MRI (fMRI). In addition to the case of 2D+t acquisition schemes addressed by other methods such as kt-FOCUSS, our approach allows us to deal with the 3D+t acquisition schemes that are widely used in neuroimaging. The resulting 3D-UWR-SENSE and 4D-UWR-SENSE reconstruction schemes are fully unsupervised in the sense that all regularization parameters are estimated in the maximum likelihood sense on a reference scan. The gain induced by these extensions is illustrated on both anatomical and functional image reconstruction, and also measured in terms of statistical sensitivity for the 4D-UWR-SENSE approach during a fast event-related fMRI protocol. Our 4D-UWR-SENSE algorithm outperforms the SENSE reconstruction at the subject and group levels (15 subjects) for different contrasts of interest (e.g., motor or computation tasks) and using different parallel acceleration factors (R=2 and R=4) on 2×2×3 mm³ EPI images. (Comment: arXiv admin note: substantial text overlap with arXiv:1103.353)
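
    For readers who want the flavor of this kind of approach, the sketch below shows a generic wavelet-regularized least-squares reconstruction solved with ISTA (a gradient step on the data-fidelity term followed by soft-thresholding of the wavelet coefficients). It is an illustrative toy, not the authors' 3D/4D-UWR-SENSE implementation: the encoding operators E and Eh, the real-valued image assumption, and all parameter values are placeholders.

        import numpy as np
        import pywt  # PyWavelets

        def ista_wavelet_recon(y, E, Eh, shape, lam=0.01, step=1.0,
                               n_iter=50, wavelet='db4', level=3):
            """Approximately solve min_x ||y - E(x)||^2 + lam*||W x||_1.

            E/Eh are caller-supplied forward/adjoint encoding operators
            (coil sensitivities plus undersampled Fourier); x is treated
            as real here, although MRI data are complex in practice.
            """
            x = np.zeros(shape)
            for _ in range(n_iter):
                # gradient step on the quadratic data-fidelity term
                x = x - step * Eh(E(x) - y)
                # proximal step: soft-threshold the wavelet coefficients
                coeffs = pywt.wavedecn(x, wavelet, level=level)
                arr, slices = pywt.coeffs_to_array(coeffs)
                arr = np.sign(arr) * np.maximum(np.abs(arr) - step * lam, 0.0)
                coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedecn')
                x = pywt.waverecn(coeffs, wavelet)[tuple(slice(s) for s in shape)]
            return x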

    Spatio-Temporal Brain Dynamic Differences in Fluid Intelligence

    Human fluid intelligence is closely linked to the sequential solving of complex problems. It has been associated with a distributed cognitive control or multiple-demand (MD) network, comprising regions of lateral frontal, insular, dorsomedial frontal, and parietal cortex. Previous neuroimaging research suggests that the MD network may orchestrate the allocation of attentional resources to individual parts of a complex task: in a complex target detection task with multiple independent rules, applied one at a time, participants with lower fluid intelligence showed a reduced response to rule-critical events across the MD network. This was particularly the case with increasing task complexity (i.e., larger sets of rules), and was accompanied by impaired performance. Here, we examined the early spatiotemporal neural dynamics of this process in electroencephalography (EEG) source analyses using a similar task paradigm. Levels of fluid intelligence specifically predicted early neural responses in a left inferior parietal MD region around 200-300 ms post-stimulus onset. Evoked source amplitudes in left parietal cortex within this early time window also correlated with behavioural performance measures. As in previous research, we observed impaired performance with increasing numbers of task rules in participants with lower fluid intelligence. This links fluid intelligence to a process of attentional focus on those parts of a task that are most critical for the current behaviour. Within the MD system, our time-resolved measures suggest that the left parietal cortex specifically influences early processes of attentional focus on task-critical features. This provides novel evidence on the neurocognitive correlates of fluid intelligence, suggesting that individual differences are critically linked to an early process of attentional focus on task-relevant information, which is supported by left parietal MD regions.
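
    As a toy illustration of the kind of time-window analysis reported here (the variable names, ROI, and window bounds are hypothetical stand-ins, not the authors' source-analysis pipeline):

        import numpy as np
        from scipy import stats

        def window_correlation(source_amp, times, fluid_iq, t0=0.2, t1=0.3):
            """Correlate mean evoked source amplitude in a 200-300 ms window
            with fluid-intelligence scores across subjects.

            source_amp: (n_subjects, n_times) ROI source waveforms
            times:      (n_times,) in seconds; fluid_iq: (n_subjects,)
            """
            mask = (times >= t0) & (times <= t1)
            mean_amp = source_amp[:, mask].mean(axis=1)
            return stats.pearsonr(mean_amp, fluid_iq)  # (r, p-value)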

    Unraveling the spatiotemporal brain dynamics during a simulated reach-to-eat task

    The reach-to-eat task involves a sequence of action components including looking, reaching, grasping, and feeding. While cortical representations of individual action components have been mapped in human functional magnetic resonance imaging (fMRI) studies, little is known about the continuous spatiotemporal dynamics among these representations during the reach-to-eat task. In a periodic event-related fMRI experiment, subjects were scanned while they reached toward a food image, grasped the virtual food, and brought it to their mouth within each 16-s cycle. Fourier-based analysis of the fMRI time series revealed periodic signals and noise distributed across the brain. Independent component analysis was used to remove periodic or aperiodic motion artifacts. Time-frequency analysis was used to characterize the temporal properties of the periodic signals in each voxel. Circular statistics were then used to estimate the mean phase angles of the periodic signals and to select voxels based on the distribution of phase angles. By sorting mean phase angles across regions, we were able to show the real-time spatiotemporal brain dynamics as continuous traveling waves over the cortical surface. The activation sequence consisted approximately of the following stages: (1) stimulus-related activations in occipital and temporal cortices; (2) movement-planning-related activations in dorsal premotor and superior parietal cortices; (3) reaching-related activations in primary sensorimotor cortex and the supplementary motor area; (4) grasping-related activations in the postcentral gyrus and sulcus; and (5) feeding-related activations in orofacial areas. These results suggest that phase-encoded design and analysis can be used to unravel sequential activations among brain regions during a simulated reach-to-eat task.
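
    A minimal sketch of the phase-encoded logic described above, assuming a 16-s task cycle and a hypothetical repetition time; this illustrates the general technique (FFT phase at the task frequency plus a circular mean), not the authors' analysis code:

        import numpy as np

        def task_phase(ts, tr=2.0, cycle_s=16.0):
            """Phase of one voxel's time series at the task frequency."""
            freqs = np.fft.rfftfreq(len(ts), d=tr)
            k = np.argmin(np.abs(freqs - 1.0 / cycle_s))  # task-frequency bin
            return np.angle(np.fft.rfft(ts - ts.mean())[k])

        def circular_mean(phases):
            """Circular mean phase, used to order regions along the cycle."""
            return np.angle(np.mean(np.exp(1j * np.asarray(phases))))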

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner's (1957) seminal discoveries in amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally remained fairly separate: human neuroimaging, electrophysiological, behavioural, and animal lesion studies, investigating both the developing and the adult brain.

    An fMRI-investigation on the neural correlates of tool use in young and elderly adults


    Positive emotion broadens attention focus through decreased position-specific spatial encoding in early visual cortex: evidence from ERPs

    Recent evidence suggests that attentional selection processes can be modulated not only by stimulus-specific attributes or top-down expectations, but also by the participant's current mood state. In this study, we tested the prediction that the induction of positive mood can dynamically influence attention allocation and, in turn, modulate early stimulus sensory processing in primary visual cortex (V1). High-density visual event-related potentials (ERPs) were recorded while participants performed a demanding task at fixation and were presented with task-irrelevant peripheral visual textures whose position was systematically varied in the upper visual field (close, medium, or far relative to fixation). Either a neutral or a positive mood was reliably induced and maintained throughout the experimental session. The ERP results showed that the earliest retinotopic component following stimulus onset (C1) varied strongly in topography as a function of the position of the peripheral distractor, in agreement with a near-far spatial gradient. However, this effect was altered for participants in a positive relative to a neutral mood. By contrast, positive mood did not modulate attention allocation to the central (task-relevant) stimuli, as reflected by the P300 component. A control behavioral experiment confirmed that positive emotion selectively impaired attention allocation to the peripheral distractors. These results suggest a mood-dependent tuning of position-specific encoding in V1 rapidly after stimulus onset. We discuss these results in relation to the dominant broaden-and-build theory.
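
    As a toy sketch of how a position-dependent C1 topography might be extracted from epoched EEG (the 60-100 ms window and all names are illustrative assumptions, not the authors' pipeline):

        import numpy as np

        def c1_topographies(epochs, times, position, t0=0.06, t1=0.10):
            """Mean C1-window amplitude per distractor position.

            epochs:   (n_trials, n_channels, n_times) baseline-corrected data
            times:    (n_times,) in seconds
            position: (n_trials,) labels, e.g. 'close'/'medium'/'far'
            """
            mask = (times >= t0) & (times <= t1)
            amp = epochs[..., mask].mean(axis=-1)  # (n_trials, n_channels)
            return {p: amp[position == p].mean(axis=0)  # scalp topography
                    for p in np.unique(position)}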

    EEG source-space synchrostate transitions and Markov modeling in the math-gifted brain during a long-chain reasoning task

    To reveal the transition dynamics of global neuronal networks in math-gifted adolescents during long-chain reasoning, this study explores momentary phase-synchronized patterns, that is, electroencephalogram (EEG) synchrostates, of intracerebral sources sustained in successive 50-ms time windows during a reasoning task and a non-task idle process. Through agglomerative hierarchical clustering of functional connectivity graphs and nested iterative cosine-similarity tests, the study identifies seven general and one reasoning-specific prototypical functional connectivity patterns across all synchrostates. Markov modeling is performed on the time-sequential synchrostates of each trial to characterize the interstate transitions. The analysis reveals that the default mode network, central executive network (CEN), dorsal attention network, cingulo-opercular network, left/right ventral frontoparietal networks, and ventral visual network recur aperiodically over the non-task and reasoning processes, exhibiting highly predictable, mutually reachable transitions. Compared to non-gifted subjects, math-gifted adolescents show higher fractional occupancy and mean duration of the CEN and of a reasoning-triggered transient right frontotemporal network (rFTN) over the time course of the reasoning process. Statistical modeling of the Markov chains reveals more self-loops in the CEN and rFTN of the math-gifted brain, suggesting robust state durability in temporally maintaining these topological structures. In addition, math-gifted subjects show higher probabilities of switching from the other types of synchrostates to the CEN and rFTN, reflecting a more adaptive reconfiguration of connectivity patterns in the large-scale cortical network for focused, task-related information processing. This may underlie superior executive function in sustaining goal-directed persistence and in implementing imagination and creative thinking during long-chain reasoning.
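
    The Markov descriptors used here (transition matrix, fractional occupancy, mean state duration, with self-loop probabilities on the diagonal) can be computed from a window-by-window synchrostate label sequence roughly as follows; this is a generic sketch under that assumption, not the authors' code:

        import numpy as np

        def markov_stats(states, n_states):
            """Transition matrix, fractional occupancy, and mean dwell time
            (in 50-ms windows) from a sequence of synchrostate labels."""
            states = np.asarray(states)
            T = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                T[a, b] += 1
            T /= np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-stochastic
            occupancy = np.bincount(states, minlength=n_states) / len(states)
            # mean dwell time: average run length of each label
            runs = [[] for _ in range(n_states)]
            cur, length = states[0], 1
            for s in states[1:]:
                if s == cur:
                    length += 1
                else:
                    runs[cur].append(length)
                    cur, length = s, 1
            runs[cur].append(length)
            mean_dur = np.array([np.mean(r) if r else 0.0 for r in runs])
            return T, occupancy, mean_dur  # T's diagonal = self-loops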

    Computational Study of Multisensory Gaze-Shift Planning

    In response to the appearance of multimodal events in the environment, we often make a gaze-shift in order to focus attention and gather more information. Planning such a gaze-shift involves three stages: (1) determining the spatial location of the gaze-shift, (2) deciding when to initiate it, and (3) working out a coordinated eye-head movement to execute it. A large number of experimental investigations have examined the nature of multisensory and oculomotor information processing at each of these three levels separately. In this thesis, we approach the problem as a single executive program and propose computational models for all three stages in a unified framework. The first, spatial problem is viewed as inferring the cause of cross-modal stimuli: whether or not they originate from a common source (chapter 2). We propose an evidence-accumulation decision-making framework and introduce a spatiotemporal similarity measure as the criterion for deciding whether to integrate the multimodal information. The experimentally observed variability in reports of sameness is replicated as a function of the spatial and temporal patterns of target presentation. To solve the second, temporal problem, a model is built upon the first decision-making structure (chapter 3). We introduce an accumulative measure of confidence in the chosen causal structure as the criterion for action initiation, and propose that the gaze-shift is implemented when this confidence measure reaches a threshold. The experimentally observed variability in reaction time is simulated as a function of the spatiotemporal and reliability features of the cross-modal stimuli. The third, motor problem is considered to be solved downstream of the first two networks (chapter 4). We propose a kinematic strategy that coordinates eye-in-head and head-on-shoulder movements, in both the spatial and temporal dimensions, in order to shift the line of sight toward the inferred position of the goal. The variability in the contributions of eye and head movements to the gaze-shift is modeled as a function of the retinal error and the initial orientations of the eyes and head. The three models should be viewed as parts of a single executive program that integrates perceptual and motor processing across time and space.
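
    A drift-diffusion-style toy version of the first two stages (accumulate a spatiotemporal-similarity signal toward a causal decision, then trigger the gaze-shift when confidence reaches a bound) might look like the following; the similarity input, noise level, and bound are hypothetical, not the thesis's fitted model:

        import numpy as np

        def accumulate_to_bound(similarity, bound=5.0, noise=0.5, seed=0):
            """Accumulate evidence until a confidence bound is crossed.

            similarity: per-time-step evidence, > 0 favoring a common
            source and < 0 favoring separate sources. Returns the causal
            decision and the bound-crossing step (a reaction-time proxy).
            """
            rng = np.random.default_rng(seed)
            evidence = 0.0
            for t, s in enumerate(similarity):
                evidence += s + noise * rng.standard_normal()
                if abs(evidence) >= bound:
                    return ('common' if evidence > 0 else 'separate'), t
            return ('common' if evidence > 0 else 'separate'), len(similarity)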