Network approaches to understanding the functional effects of focal brain lesions
Complex network models of functional connectivity have emerged as a paradigm shift in brain mapping over the past decade. Despite significant attention within the neuroimaging and cognitive neuroscience communities, these approaches have hitherto not been extensively explored in neurosurgery. The aim of this thesis is to investigate how the field of connectomics can contribute to understanding the effects of focal brain lesions and to functional brain mapping in neurosurgery.
The datasets for this thesis include a clinical population with focal brain tumours and a cohort focused on healthy adolescent brain development. Multiple network analyses of increasing complexity are performed based upon resting-state functional MRI.
In patients with focal brain tumours, the full complement of resting-state networks was apparent, while the data also suggested putative patterns of network plasticity. Connectome analysis identified potential signatures of node robustness and connections at risk that could be used to plan surgery for individual patients. Focal lesions induced the formation of new hubs while downregulating previously established hubs. Overall, these data are consistent with a dynamic rather than a static response to the presence of focal lesions.
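The hub reorganization described here can be illustrated with a toy computation. The following sketch is purely illustrative: the network, the node names, and the use of degree centrality as the hub measure are assumptions for demonstration, not the analysis pipeline of the thesis.

```python
def degree_centrality(adj):
    """Fraction of possible neighbors each node actually has."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def lesion(adj, removed):
    """Delete a set of nodes (a simulated focal lesion) from an adjacency dict."""
    return {v: nbrs - removed for v, nbrs in adj.items() if v not in removed}

# Hypothetical toy network in which node "h" is the dominant hub.
edges = [("h", x) for x in "abcde"] + [("a", "b"), ("b", "c"), ("c", "d")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

before = degree_centrality(adj)
after = degree_centrality(lesion(adj, {"h"}))
# After the simulated lesion, a previously peripheral node ranks as the top hub.
print(max(before, key=before.get), max(after, key=after.get))
```

The same before/after comparison generalizes to richer centrality measures; only the ranking step changes.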
Adolescent brain development demonstrated discrete dynamics, with distinct gender-specific effects and age-gender interactions. Network architecture also became more robust, particularly to random removal of nodes and edges. Overall, these data provide evidence for the early vulnerability, rather than enhanced plasticity, of brain networks.
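Robustness to random removal of nodes, as tested here, is commonly quantified by the size of the largest connected component that survives random node failures. A minimal percolation-style sketch on toy graphs; the graphs, removal fraction, and trial count are illustrative assumptions, not the thesis's actual procedure:

```python
import random
from collections import deque

def largest_component(adj):
    """Size of the largest connected component of an adjacency dict."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def random_failure_robustness(adj, fraction=0.3, trials=50, seed=0):
    """Mean relative size of the giant component after deleting a random
    fraction of nodes -- a simple percolation-style robustness index."""
    rng = random.Random(seed)
    nodes = list(adj)
    n_remove = int(fraction * len(nodes))
    total = 0.0
    for _ in range(trials):
        removed = set(rng.sample(nodes, n_remove))
        survivors = {v: adj[v] - removed for v in adj if v not in removed}
        total += largest_component(survivors) / len(nodes)
    return total / trials

# Toy comparison: a plain ring fragments easily under random failures, while
# the same ring with cross-cutting shortcut edges keeps a larger connected core.
n = 60
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
with_shortcuts = {i: ring[i] | {(i + n // 2) % n} for i in range(n)}
print(random_failure_robustness(ring) < random_failure_robustness(with_shortcuts))
```

The same index can be computed for targeted (hub-first) removal by replacing the random sample with a centrality-ordered list.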
In summary, this thesis presents a combined analysis of pathological and healthy-development datasets focused on understanding the functional effects of focal brain lesions at a network level. The coda serves as an introduction to a forthcoming study, known as Connectomics and Electrical Stimulation for Augmenting Resection (CAESAR), which is an evolution of the results and methods herein. MGH is funded by the Wellcome Trust Neuroscience in Psychiatry Network, with additional support from the National Institute for Health Research Cambridge Biomedical Research Centre.
Geodesic Active Fields: A Geometric Framework for Image Registration
Image registration is the task of mapping homologous points in a pair of images: one looks for an underlying deformation field that matches one image to a target image. The spectrum of applications of image registration is extremely large, ranging from biomedical imaging and computer vision to remote sensing, geographic information systems, and even consumer electronics. Mathematically, image registration is an ill-posed inverse problem, meaning that an exact solution might not exist or might not be unique. To render the problem tractable, it is usual to write the problem as an energy minimization and to introduce additional regularity constraints on the unknown data. In image registration, one often minimizes an image mismatch energy and adds an additive penalty on the deformation field regularity as a smoothness prior. Here, we focus on the registration of the human cerebral cortex. Precise cortical registration is required, for example, in statistical group studies in functional MR imaging, or in the analysis of brain connectivity. In particular, we work with spherical inflations of the extracted hemispherical surface and associated features, such as cortical mean curvature. Spatial mapping between cortical surfaces can then be achieved by registering the respective spherical feature maps. Despite the simplified spherical geometry, inter-subject registration remains a challenging task, mainly due to the complexity and inter-subject variability of the involved brain structures. In this thesis, we therefore present a registration scheme that takes the peculiarities of the spherical feature maps into particular consideration. First, we realize that we need an appropriate hierarchical representation, so as to coarsely align based on the important structures with greater inter-subject stability, before taking smaller and more variable details into account.
Based on arguments from brain morphogenesis, we propose an anisotropic scale-space of mean-curvature maps, built around the Beltrami framework. Second, inspired by concepts from vision-related elements of psycho-physical Gestalt theory, we hypothesize that anisotropic Beltrami regularization better suits the requirements of image registration regularization than traditional Gaussian filtering. Different objects in an image should be allowed to move separately, and regularization should be limited to within the individual Gestalts. We render the regularization feature-preserving by limiting diffusion across edges in the deformation field, in clear contrast to indifferent linear smoothing. We do so by embedding the deformation field as a manifold in a higher-dimensional space and minimizing the associated Beltrami energy, which represents the hyperarea of this embedded manifold as a measure of deformation field regularity. Further, instead of simply adding this regularity penalty to the image mismatch in lieu of the standard penalty, we propose to incorporate the local image mismatch as a weighting function into the Beltrami energy. The image registration problem is thus reformulated as a weighted minimal surface problem. This approach has several appealing aspects, including (1) invariance to re-parametrization and the ability to work with images defined on non-flat, Riemannian domains (e.g., curved surfaces, scale-spaces), and (2) intrinsic modulation of the local regularization strength as a function of the local image mismatch and/or noise level. On a side note, we show that the proposed scheme can easily keep up with recent trends in image registration towards diffeomorphic and inverse-consistent deformation models. The proposed registration scheme, called Geodesic Active Fields (GAF), is non-linear and non-convex. We therefore propose an efficient optimization scheme based on splitting.
Data mismatch and deformation field regularity are optimized over two different deformation fields, which are constrained to be equal. The constraint is addressed using an augmented Lagrangian scheme, and the resulting optimization problem is solved efficiently by alternating minimization of simpler sub-problems. In particular, we show that the proposed method can easily compete with state-of-the-art registration methods, such as Demons. Finally, we provide an implementation of the fast GAF method on the sphere, so as to register the triangulated cortical feature maps. We build an automatic parcellation algorithm for the human cerebral cortex, which combines the delineations available on a set of atlas brains in a Bayesian approach, so as to automatically delineate the corresponding regions on a subject brain given its feature map. In a leave-one-out cross-validation study on 39 brain surfaces with 35 manually delineated gyral regions, we show that pairwise subject-atlas registration with the proposed spherical registration scheme significantly improves the individual alignment of cortical labels between subject and atlas brains and, consequently, that the estimated automatic parcellations after label fusion are of better quality.
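The splitting described here is a standard constrained reformulation; in generic notation (not taken verbatim from the thesis), it reads:

```latex
\min_{u,\,v}\; E_{\mathrm{data}}(u) + E_{\mathrm{reg}}(v)
\quad\text{subject to}\quad u = v,
\qquad
\mathcal{L}_{\mu}(u, v, \lambda)
  = E_{\mathrm{data}}(u) + E_{\mathrm{reg}}(v)
  + \langle \lambda,\, u - v \rangle
  + \frac{\mu}{2}\,\lVert u - v \rVert^{2}.
```

The augmented Lagrangian $\mathcal{L}_{\mu}$ is alternately minimized in $u$ (a data-fitting sub-problem) and in $v$ (a regularization sub-problem), followed by the dual ascent step $\lambda \leftarrow \lambda + \mu\,(u - v)$.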
Dynamical systems applied to consciousness and brain rhythms in a neural network
This thesis applies the great advances of modern dynamical systems theory (DST) to consciousness. Consciousness, or subjective experience, is approached here in two different ways: through the global dynamics of the human brain, and through integrated information theory (IIT), currently one of the most prestigious theories of consciousness. Before that, a study of a numerical simulation of a network of individual neurons justifies the use of the Lotka-Volterra model for neuron assemblies in both applications. All these proposals are developed following this scheme:
• First, summarizing the structure, methods and goal of the thesis.
• Second, introducing a general background in neuroscience and the global
dynamics of the human brain to better understand those applications.
• Third, conducting a study of a numerically simulated network of neurons. This network, which displays brain rhythms, can be employed, among other objectives, to justify the use of the Lotka-Volterra model in the subsequent applications.
• Fourth, summarizing concepts from the mathematical DST such as
the global attractor and its informational structure, in addition to its
particularization to a Lotka-Volterra system.
• Fifth, introducing the new mathematical concepts of model transform and instantaneous parameters, which allow the application of simple mathematical models such as Lotka-Volterra to complex empirical systems such as the human brain.
• Sixth, using the model transform, and specifically the Lotka-Volterra
transform, to calculate global attractors and informational structures
in global dynamics of the human brain.
• Seventh, reviewing the IIT developed by G. Tononi, arguably the most prestigious theory of consciousness.
• Eighth, using informational structures to develop a continuous version of IIT.
• Ninth, establishing some final conclusions and commenting on new open questions arising from this work.
These nine points correspond to the nine chapters of this thesis.
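As an illustration of the kind of Lotka-Volterra dynamics invoked in this scheme, the following sketch integrates a generalized competitive Lotka-Volterra system with forward-Euler steps. The parameters are hypothetical, chosen only to show convergence to a coexistence equilibrium, and are not taken from the thesis.

```python
def lotka_volterra_step(x, b, A, dt=0.01):
    """One forward-Euler step of the generalized Lotka-Volterra system
    dx_i/dt = x_i * (b_i - sum_j A[i][j] * x_j), clipped at zero."""
    n = len(x)
    return [
        max(0.0, x[i] + dt * x[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n))))
        for i in range(n)
    ]

# Two competing populations (illustrative parameters):
b = [1.0, 1.0]                 # intrinsic growth rates
A = [[1.0, 0.5], [0.5, 1.0]]   # self-limitation and mutual competition
x = [0.1, 0.2]
for _ in range(5000):
    x = lotka_volterra_step(x, b, A)
# Both components converge to the coexistence equilibrium solving A x = b,
# here x = 2/3 for each population.
print([round(v, 3) for v in x])  # -> [0.667, 0.667]
```

Because the competition coefficients (0.5) are smaller than the self-limitation terms (1.0), the interior equilibrium is stable and both populations coexist.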
Recommended from our members
Brain network mechanisms in learning behavior
The study of learning has been a central focus of psychology and neuroscience since their inception. Cognitive neuroscience’s traditional approach to understanding learning has been to decompose it into discrete cognitive processes with separable and localized underlying neural systems. While this focus on modular cognitive functions for individual brain areas has led to considerable progress, there is increasing evidence that much of learning behavior relies on overlapping cognitive and neural systems, which may be harder to disentangle than previously envisioned. This is not surprising, as the processes underlying learning must involve widespread integration of information from sensory, affective, and motor sources. The standard tools of cognitive neuroscience limit our ability to describe processes that rely on widespread coordination of brain activity. To understand learning, it will be necessary to characterize dynamic co-activation at the circuit level.
In this dissertation, I present three studies that seek to describe the roles of distributed brain networks in learning. I begin by giving an overview of our current understanding of multiple forms of learning, describing the neural and computational mechanisms thought to underlie incremental feedback-based learning and flexible episodic memory. I will focus in particular on the difficulties in separating these processes at the cognitive level and in localizing them to individual regions at the neural level. I will then describe recent findings that have begun to characterize the brain’s large-scale network structure, emphasizing the potential roles that distributed networks could play in understanding learning and cognition more generally. I will end the introduction by reviewing current attempts to characterize the dynamics of large-scale brain networks, which will be essential for providing a mechanistic link to learning behavior.
Chapter 2 is a study demonstrating that intrinsic connectivity between the hippocampus and the ventromedial prefrontal cortex, as well as between these regions and distributed brain networks, is related to individual differences in the transfer of learning on a sensory preconditioning task. The hippocampus and ventromedial prefrontal cortex have both been shown to be involved in this type of learning, and this study represents an early attempt to link connectivity between individual regions and broader networks to learning processes.
Chapter 3 is a study that takes advantage of recent developments in mathematical modeling of temporal networks to demonstrate a relationship between large-scale network dynamics and reinforcement learning within individuals. This study shows that the flexibility of network connectivity in the striatum is related to learning performance over time, as well as to individual differences in parameters estimated from computational models of reinforcement learning. Notably, connectivity between the striatum and visual as well as orbitofrontal regions increased over the course of the task, which is consistent with an integrative role for the region in learning value-based associations. Network flexibility in a distinct set of regions is associated with episodic memory for object images presented during the learning task.
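Computational models of reinforcement learning of the kind fitted in such studies typically rest on a delta-rule value update. A minimal sketch follows; the 80% reward probability and the learning rate are illustrative assumptions, not values from the study.

```python
import random

def delta_rule_update(value, reward, alpha):
    """Delta-rule value update used in simple reinforcement-learning models:
    V <- V + alpha * (reward - V), where alpha is the learning rate."""
    return value + alpha * (reward - value)

# Simulate learning the value of a stimulus rewarded 80% of the time.
rng = random.Random(0)
v, alpha = 0.0, 0.1
history = []
for _ in range(200):
    reward = 1.0 if rng.random() < 0.8 else 0.0
    v = delta_rule_update(v, reward, alpha)
    history.append(v)
print(round(v, 2))  # the estimate hovers near the true reward probability 0.8
```

Fitting such a model to behavior means choosing alpha (and usually a choice-temperature parameter) to maximize the likelihood of the observed choices; the fitted parameters are then related to neural measures such as network flexibility.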
Chapter 4 examines the role of dopamine, a neurotransmitter strongly linked to value updating in reinforcement learning, in the dynamic network changes occurring during learning. Patients with Parkinson’s disease, who experience a loss of dopaminergic neurons in the substantia nigra, performed a reversal-learning task while undergoing functional magnetic resonance imaging. Patients were scanned on and off of a dopamine precursor medication (levodopa) in a within-subject design in order to examine the impact of dopamine on brain network dynamics during learning. The reversal provided an experimental manipulation of dynamic connectivity, and patients on medication showed greater modulation of striatal-cortical connectivity. Similar results were found in a number of regions receiving midbrain projections, including the prefrontal cortex and medial temporal lobe. This study indicates that dopamine inputs from the midbrain modulate large-scale network dynamics during learning, providing a direct link between reinforcement learning theories of value updating and network neuroscience accounts of dynamic connectivity.
Together, these results indicate that large-scale networks play a critical role in multiple forms of learning behavior. Each highlights the potential importance of understanding dynamic routing and integration of information across large-scale circuits for our conception of learning and other cognitive processes. Understanding the when, where, and how of this information flow in the brain may provide an alternative or complement to traditional theories of distinct learning systems. These studies also illustrate challenges in integrating this perspective with established theories in cognitive neuroscience. Chapter 5 will situate the studies in a broader discussion of how brain activity relates to cognition in general, while pointing out current roadblocks and potential ways forward for a cognitive network neuroscience of learning.
Cortical Dynamics of Language
The human capability for fluent speech profoundly shapes interpersonal communication and, by extension, self-expression. Language is lost in millions of people each year due to trauma, stroke, neurodegeneration, and neoplasms, with devastating impact on social interaction and quality of life. The following investigations were designed to elucidate the neurobiological foundations of speech production, building towards a universal cognitive model of language in the brain. Understanding the dynamical mechanisms supporting cortical network behavior will significantly advance our understanding of how both focal and disconnection injuries yield neurological deficits, informing the development of therapeutic approaches.
Networked Data Analytics: Network Comparison And Applied Graph Signal Processing
Networked data structures have become big, ubiquitous, and pervasive. As our day-to-day activities become more incorporated with and influenced by the digital world, we rely more on our intuition to provide us with a high-level idea and subconscious understanding of the data we encounter. This thesis aims at translating the qualitative intuitions we have about networked data into quantitative and formal tools by designing rigorous yet reasonable algorithms. In a nutshell, this thesis constructs models to compare and cluster networked data, to simplify a complicated networked structure, and to formalize the notion of smoothness and variation for domain-specific signals on a network. This thesis consists of two interrelated thrusts, which explore both the scenario where networks have intrinsic value and are themselves the object of study, and the scenario where the interest is in signals defined on top of the networks, so that we leverage the information in the network to analyze the signals. Our results suggest that the intuition we have in analyzing huge data sets can be transformed into rigorous algorithms, and the intuition often results in superior performance, new observations, better complexity, and/or bridges between two commonly implemented methods. Even though they differ in the principles they investigate, both thrusts are built on what we see as a contemporary shift in data analytics: from building an algorithm and then understanding it, to having an intuition and then building an algorithm around it.
We show that, in order to formalize the intuitive idea of measuring the difference between a pair of networks of arbitrary sizes, we can design two algorithms based on the intuition of finding mappings between the node sets or of mapping one network into a subset of another network. Such methods also lead to a clustering algorithm to categorize networked data structures. In addition, we can define the notion of frequencies of a given network by ordering features in the network according to how important they are to the overall information conveyed by the network. The proposed algorithms succeed in comparing the collaboration histories of researchers, clustering research communities via their publication patterns, categorizing moving objects from uncertain measurements, and separating networks constructed from different processes.
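A much simpler stand-in for such network comparison (not the mapping-based algorithms of the thesis) already conveys the core idea: normalize a structural summary such as the degree distribution so that networks of different sizes become comparable, then measure the distance between summaries. Everything below is an illustrative assumption.

```python
def degree_histogram(adj, bins=10):
    """Normalized histogram of degrees (each degree divided by n - 1),
    making networks of different sizes comparable."""
    n = len(adj)
    degrees = [len(nbrs) / max(1, n - 1) for nbrs in adj.values()]
    hist = [0.0] * bins
    for d in degrees:
        hist[min(bins - 1, int(d * bins))] += 1.0 / n
    return hist

def network_distance(adj_a, adj_b, bins=10):
    """L1 distance between degree histograms -- a crude surrogate for the
    mapping-based comparison described in the abstract."""
    ha, hb = degree_histogram(adj_a, bins), degree_histogram(adj_b, bins)
    return sum(abs(a - b) for a, b in zip(ha, hb))

# A star is structurally farther from a cycle than two cycles of different
# sizes are from each other.
star = {i: {0} for i in range(1, 8)}
star[0] = set(range(1, 8))
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
cycle9 = {i: {(i - 1) % 9, (i + 1) % 9} for i in range(9)}
print(network_distance(star, cycle6, bins=4) > network_distance(cycle9, cycle6, bins=4))
```

Summary-based distances like this are cheap but lossy; the thesis's node-set mappings retain far more structure, at higher computational cost.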
In the context of data analytics on top of networks, we design domain-specific tools by leveraging recent advances in graph signal processing, which formalizes the intuitive notion of smoothness and variation of signals defined on top of networked structures and generalizes conventional Fourier analysis to the graph domain. Specifically, we show how these tools can be used to better classify cancer subtypes by considering genetic profiles as signals on top of gene-to-gene interaction networks, to gain new insights into individual differences in learning new tasks and switching attention by considering brain activities as signals on top of brain connectivity networks, and to demonstrate that common methods in rating prediction are special graph filters, building on this observation to design novel recommendation system algorithms.
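The graph-signal-processing notions of smoothness and frequency invoked here are conventionally defined through the graph Laplacian: the Laplacian quadratic form measures a signal's total variation over the graph, and the Laplacian eigenvectors play the role of Fourier modes. A minimal sketch (the path graph and the signals are illustrative; NumPy is assumed to be available):

```python
import numpy as np

def graph_fourier_basis(adjacency):
    """Eigendecomposition of the combinatorial Laplacian L = D - A.
    Eigenvectors with small eigenvalues vary smoothly over the graph."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending eigenvalues
    return eigvals, eigvecs

def graph_total_variation(adjacency, signal):
    """Quadratic form s^T L s = sum over edges of (s_i - s_j)^2:
    small for smooth signals, large for signals that flip across edges."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    s = np.asarray(signal, dtype=float)
    return float(s @ L @ s)

# Path graph on 4 nodes: a monotone signal is smoother than an alternating one.
path = [[0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0]]
smooth = [0.0, 1.0, 2.0, 3.0]
rough = [1.0, -1.0, 1.0, -1.0]
# The smooth signal has total variation 3, the alternating one 12.
print(graph_total_variation(path, smooth), graph_total_variation(path, rough))
```

Projecting a signal onto the eigenvector basis gives its graph Fourier transform, and attenuating high-eigenvalue components is the graph analogue of low-pass filtering, the building block of the graph filters mentioned above.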
Changes in psychological and biological signals after completing an adaptive training program requiring working memory related cognitive processes
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Facultad de Psicología, Departamento de Psicología Biológica y de la Salud. Date of defense: 11-12-201