
    Encoding cortical dynamics in sparse features.

    Distributed cortical solutions of magnetoencephalography (MEG) and electroencephalography (EEG) exhibit complex spatial and temporal dynamics. The extraction of patterns of interest and dynamic features from these cortical signals has so far relied on the expertise of investigators. There is a definite need in both clinical and neuroscience research for a method that will extract critical features from high-dimensional neuroimaging data in an automatic fashion. We have previously demonstrated the use of optical flow techniques for evaluating the kinematic properties of the motion field projected onto non-flat manifolds such as the cortical surface. We have further extended this framework to automatically detect features in the optical flow vector field by using the Helmholtz-Hodge decomposition (HHD), modified and extended to 2-Riemannian manifolds. Here, we applied these mathematical models to simulated data and to MEG data recorded from a healthy individual during a somatosensory experiment and from a pediatric epilepsy patient during sleep. We tested whether our technique can automatically extract salient dynamical features of cortical activity. Simulation results indicated that HHD can precisely reproduce the simulated cortical dynamics, encode them in sparse features, and represent the propagation of brain activity between distinct cortical areas. Using HHD, we decoded the somatosensory N20 component into two HHD features and represented the dynamics of brain activity as a traveling source between two primary somatosensory regions. In the epilepsy patient, we displayed the propagation of epileptic activity around the margins of a brain lesion. Our findings indicate that HHD measures computed from cortical dynamics can: (i) quantitatively assess cortical dynamics in both the healthy and the diseased brain in terms of sparse features and the propagation of brain activity between distinct cortical areas, and (ii) facilitate a reproducible, automated analysis of experimental and clinical MEG/EEG source imaging data.
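
    For intuition, the following is a minimal numerical sketch of a Helmholtz-Hodge decomposition, written for a flat, periodic 2D grid with NumPy; the function name hhd_2d and the spectral discretization are illustrative assumptions, and the study's actual method operates on flow fields defined on the folded cortical surface and accounts for its Riemannian geometry.

    # Minimal sketch: Helmholtz-Hodge decomposition of a 2D flow field on a
    # flat, periodic grid via FFT-based Poisson solves. Illustrative only.
    import numpy as np

    def hhd_2d(u, v, dx=1.0, dy=1.0):
        """Split (u, v) into a curl-free part, a divergence-free part, and a
        constant harmonic remainder, assuming periodic boundaries."""
        ny, nx = u.shape
        kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)
        KX, KY = np.meshgrid(kx, ky)

        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        div_hat = KX * u_hat + KY * v_hat   # Fourier-space divergence
        lap = KX**2 + KY**2                 # Fourier symbol of the Laplacian
        lap[0, 0] = 1.0                     # avoid division by zero at k = 0

        U_hat = div_hat / lap               # solve  Laplacian(U) = div(u, v)
        U_hat[0, 0] = 0.0

        # Curl-free (source/sink) component: the gradient of the potential U.
        cf_u = np.real(np.fft.ifft2(KX * U_hat))
        cf_v = np.real(np.fft.ifft2(KY * U_hat))

        # Harmonic remainder on a periodic box is the mean flow.
        h_u, h_v = u.mean(), v.mean()

        # Divergence-free (rotational) component is the remainder.
        df_u = u - cf_u - h_u
        df_v = v - cf_v - h_v
        return (cf_u, cf_v), (df_u, df_v), (h_u, h_v)

    The curl-free component is where sources and sinks of cortical flow appear, which is what HHD encodes as sparse features; the divergence-free component captures rotational motion.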

    Computing with functions in the ball

    A collection of algorithms in object-oriented MATLAB is described for numerically computing with smooth functions defined on the unit ball in the Chebfun software. Functions are numerically and adaptively resolved to essentially machine precision by using a three-dimensional analogue of the double Fourier sphere method to form "ballfun" objects. Operations such as function evaluation, differentiation, integration, fast rotation by an Euler angle, and a Helmholtz solver are designed. Our algorithms are particularly efficient for vector calculus operations, and we describe how to compute the poloidal-toroidal and Helmholtz–Hodge decompositions of a vector field defined on the ball.
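
    In their standard textbook forms (the exact conventions used by ballfun may differ), the two decompositions read, for a vector field v on the ball and a divergence-free field w,

        \mathbf{v} = \nabla f + \nabla \times \boldsymbol{\psi} + \mathbf{h},
        \qquad
        \mathbf{w} = \nabla \times (T\,\mathbf{r}) + \nabla \times \nabla \times (P\,\mathbf{r}),

    where the Helmholtz–Hodge decomposition splits v into a curl-free, a divergence-free, and a harmonic part, and the poloidal-toroidal decomposition expresses w through scalar toroidal and poloidal potentials T and P, with r the radial vector field.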

    Characterization of Interictal Epileptiform Discharges with Time-Resolved Cortical Current Maps Using the Helmholtz–Hodge Decomposition

    Source estimates performed using a single equivalent current dipole (ECD) model for interictal epileptiform discharges (IEDs) which appear unifocal have proven highly accurate in neocortical epilepsies, falling within millimeters of that demonstrated by electrocorticography. Despite this success, the single ECD solution is limited, best describing sources which are temporally stable. Adapted from the field of optics, optical flow analysis of distributed source models of MEG or EEG data has been proposed as a means to estimate the current motion field of cortical activity, or "cortical flow." The motion field so defined can be used to identify dynamic features of interest such as patterns of directional flow, current sources, and sinks. The Helmholtz–Hodge Decomposition (HHD) is a technique frequently applied in fluid dynamics to separate a flow pattern into three components: (1) a non-rotational scalar potential U describing sinks and sources, (2) a non-diverging scalar potential A accounting for vortices, and (3) a harmonic vector field H. As IEDs seem likely to represent periods of highly correlated directional flow of cortical currents, the U component of the HHD suggests itself as a way to characterize spikes in terms of current sources and sinks. In a series of patients with refractory epilepsy who were studied with magnetoencephalography as part of their evaluation for possible resective surgery, spike localization with ECD was compared to HHD applied to an optical flow analysis of the same spike. Reasonable anatomic correlation between the two techniques was seen in the majority of patients, suggesting that this method may offer an additional means of characterizing epileptic discharges.
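
    Written out for a flow field F on a flat two-dimensional domain (a simplified form; on the cortical surface the differential operators are taken with respect to the surface metric), the three components described above combine as

        \mathbf{F} = \nabla U + \nabla^{\perp} A + \mathbf{H},
        \qquad
        \nabla^{\perp} A = \left( -\frac{\partial A}{\partial y},\ \frac{\partial A}{\partial x} \right),

    where the gradient of U is curl-free and captures current sources and sinks, the rotated gradient of A is divergence-free and captures vortices, and H is both curl- and divergence-free.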

    Doctor of Philosophy

    With modern computational resources rapidly advancing towards exascale, large-scale simulations useful for understanding natural and man-made phenomena are becoming increasingly accessible. As a result, the size and complexity of data representing such phenomena are also increasing, making the role of data analysis in propelling science even more integral. This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single "correct" reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, with potentially serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of "correctness" of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more-general reference frames and more-sophisticated domain discretizations.
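
    As a very simple instance of this reduction (purely illustrative; the reference frames developed in this work are local and far more general than the global mean-flow frame used here, and the function name is hypothetical), an unsteady field can be turned into frame-relative steady snapshots as follows:

    # Minimal sketch: treat an unsteady vector field as a sequence of steady
    # snapshots, each viewed in its own reference frame. The frame here is
    # simply the instantaneous spatial mean flow (a global Galilean frame).
    import numpy as np

    def to_frame_relative_snapshots(field):
        """field: array of shape (T, ny, nx, 2), one 2D vector field per step.
        Returns the per-step frame velocities and frame-relative snapshots."""
        frames = field.mean(axis=(1, 2))              # (T, 2) mean flow per step
        snapshots = field - frames[:, None, None, :]  # subtract the frame velocity
        return frames, snapshots

    Each snapshot can then be analyzed with steady-field techniques (for example, critical-point extraction) one time step at a time.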

    Doctor of Philosophy

    The statistical study of anatomy is one of the primary focuses of medical image analysis. It is well established that the appropriate mathematical settings for such analyses are Riemannian manifolds and Lie group actions. Statistically defined atlases, in which a mean anatomical image is computed from a collection of static three-dimensional (3D) scans, have become commonplace. Within the past few decades, these efforts, which constitute the field of computational anatomy, have seen great success in enabling quantitative analysis. However, most of the analysis within computational anatomy has focused on collections of static images in population studies. The recent emergence of large-scale longitudinal imaging studies and four-dimensional (4D) imaging technology presents new opportunities for studying dynamic anatomical processes such as motion, growth, and degeneration. In order to make use of these new data, it is imperative that computational anatomy be extended with methods for the statistical analysis of longitudinal and dynamic medical imaging. In this dissertation, the deformable template framework is used for the development of 4D statistical shape analysis, with applications in motion analysis for individualized medicine and the study of growth and disease progression. A new method for estimating organ motion directly from raw imaging data is introduced and tested extensively. Polynomial regression, the staple of curve regression in Euclidean spaces, is extended to the setting of Riemannian manifolds. This polynomial regression framework enables rigorous statistical analysis of longitudinal imaging data. Finally, a new diffeomorphic model of irrotational shape change is presented. This new model offers striking practical advantages over standard diffeomorphic methods, while the study of this new space promises to illuminate aspects of the structure of the diffeomorphism group.
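
    In the usual formulation of this kind of manifold regression (a sketch of the general setup rather than the dissertation's exact notation), an order-k Riemannian polynomial is a curve whose k-th covariant derivative of the velocity vanishes, and regression fits such a curve to observations (t_i, y_i) by minimizing summed squared geodesic distances:

        \nabla_{\dot\gamma}^{\,k}\,\dot\gamma(t) = 0,
        \qquad
        \hat\gamma = \arg\min_{\gamma} \sum_{i=1}^{N} d\big(\gamma(t_i),\, y_i\big)^{2},

    with k = 1 recovering geodesic regression, the manifold analogue of fitting a straight line in Euclidean space.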

    Perfusion Imaging via Advection-Diffusion

    The goal of perfusion imaging (PI) is to quantify blood flow through the brain parenchyma by serial imaging (Demeestere et al. (2020)). Widely used perfusion measurement techniques include injecting an intravascular tracer (e.g., in computed tomography (CT) perfusion, Dynamic Susceptibility Contrast-enhanced (DSC) and Dynamic Contrast-Enhanced (DCE) magnetic resonance (MR) perfusion) (Fieselmann et al. (2011)), using magnetically labeled arterial blood water protons as an endogenous tracer (arterial spin labeling (ASL)) (Petcharunpaisan et al. (2010)), or using positron emission tomography (PET) (Grüner et al. (2011)). The resulting quantitative measures aid clinical diagnosis and decision-making, for example, in assessing acute strokes and brain tumors. These measures also help to facilitate individualized treatment of stroke patients based on brain tissue status (Demeestere et al. (2020)). Despite its benefits, the widespread use of PI still faces many challenges. First, current perfusion analysis approaches mostly depend on the arterial input function (AIF) (Mouridsen et al. (2006)), yet the selection procedure for the AIF is not unified and is only a coarse approximation of the actual input tracer (Mouridsen et al. (2006); Schmainda et al. (2019a,b)). Second, these approaches are performed on individual voxels, thereby disregarding the spatial dependencies of tracer dynamics. This thesis therefore aims to model tracer transport with a variable-coefficient advection-diffusion PDE (partial differential equation) system, from both optimization-based and learning-based perspectives, to better understand the relations between the spatiotemporal transport of tracer and strokes, while avoiding the need to approximate the AIF. To help with identifiability, this thesis builds an advection-diffusion brain perfusion simulator that allows pre-training of the learning-based models under the supervision of ground-truth velocity and diffusion tensor fields. Instead of directly learning these velocity and diffusion tensor fields, the developed models use representations that assure incompressible flow and symmetric positive semi-definite diffusion fields, and demonstrate the additional benefits of these representations in improving estimation accuracy. Further, this thesis presents approaches for stroke lesion detection and segmentation based on the fitted advection-diffusion model and its velocity and diffusion measures.
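
    The two representations mentioned above can be sketched minimally for a flat 2D grid with NumPy (the function names, the stream-function construction, and the Cholesky-style factorization are illustrative assumptions; the thesis works with learned 3D fields):

    # Minimal sketch of fields that satisfy the two constraints by construction:
    # a divergence-free velocity (curl of a scalar stream function) and a
    # symmetric positive semi-definite diffusion tensor (outer product of a
    # lower-triangular factor with itself).
    import numpy as np

    def velocity_from_stream(psi, dx=1.0, dy=1.0):
        """v = (dpsi/dy, -dpsi/dx) is divergence-free up to discretization."""
        dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
        return dpsi_dy, -dpsi_dx

    def diffusion_from_factors(L):
        """L: (..., 2, 2) lower-triangular factors; D = L @ L^T is symmetric PSD."""
        return np.einsum('...ij,...kj->...ik', L, L)

    Parameterizing the unknowns through psi and L lets an optimizer or a network search freely while every candidate velocity field remains incompressible and every candidate diffusion tensor remains valid.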