21 research outputs found

    PDE-Constrained Optimization: Preconditioners and Diffuse Domain Methods

    This thesis is mainly concerned with the efficient numerical solution of optimization problems subject to linear PDE constraints, with particular focus on robust preconditioners and diffuse domain methods. Associated with such constrained optimization problems are the well-known first-order Karush-Kuhn-Tucker (KKT) conditions. For certain minimization problems, the functions satisfying the KKT conditions are also optimal solutions of the original optimization problem, implying that we can solve the KKT system to obtain the optimum; this is the so-called "all-at-once" approach. We propose and analyze preconditioners for the different KKT systems derived in this thesis.
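The all-at-once idea above can be illustrated numerically. For a symmetric KKT saddle-point matrix, an ideal block-diagonal preconditioner built from the exact Schur complement clusters the spectrum onto three values (the classical Murphy-Golub-Wathen result), which is why preconditioned Krylov methods such as MINRES then converge in very few iterations. The blocks below are random stand-ins, not the thesis's actual discretized operators; this is a minimal numpy sketch of the phenomenon.

```python
import numpy as np

# H plays the role of the SPD (1,1) block, B of the PDE-constraint block.
# Both are random stand-ins, NOT the thesis's discretized operators.
rng = np.random.default_rng(0)
n, m = 12, 5

M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)                  # SPD (1,1) block
B = rng.standard_normal((m, n))              # full-rank constraint block

K = np.block([[H, B.T],
              [B, np.zeros((m, m))]])        # symmetric KKT saddle-point matrix
S = B @ np.linalg.solve(H, B.T)              # exact Schur complement
P = np.block([[H, np.zeros((n, m))],
              [np.zeros((m, n)), S]])        # ideal block-diagonal preconditioner

# The preconditioned matrix has exactly three eigenvalues:
# 1, (1 + sqrt(5))/2 and (1 - sqrt(5))/2, so a Krylov method such as
# MINRES converges in at most three iterations.
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
```

In practice the exact Schur complement is too expensive and is replaced by a cheap, spectrally equivalent approximation; the robustness of that approximation with respect to discretization and problem parameters is typically what such preconditioner analyses target.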

    Contributions To The Methodology Of Electrocardiographic Imaging (ECGI) And Application Of ECGI To Study Mechanisms Of Atrial Arrhythmia, Post Myocardial Infarction Electrophysiological Substrate, And Ventricular Tachycardia In Patients

    ABSTRACT OF THE DISSERTATION Contributions to the Methodology of Electrocardiographic Imaging (ECGI) and Application of ECGI to Study Mechanisms of Atrial Arrhythmia, Post Myocardial Infarction Electrophysiological Substrate, and Ventricular Tachycardia in Patients by Yong Wang Doctor of Philosophy in Biomedical Engineering Washington University in St. Louis, 2009 Professor Yoram Rudy, Chair Electrocardiographic Imaging (ECGI) is a noninvasive imaging modality for cardiac electrophysiology and arrhythmia. ECGI reconstructs epicardial potentials, electrograms, and isochrones from body-surface electrocardiograms combined with heart-torso geometry from computed tomography (CT). The application of a new meshless method, the Method of Fundamental Solutions (MFS), is introduced to ECGI with the following major advantages: 1. elimination of meshing and manual mesh optimization, thereby enhancing automation and speeding the ECGI procedure; 2. elimination of mesh-induced artifacts; 3. simpler implementation. These properties of MFS enhance the practical application of ECGI as a clinical diagnostic tool. The current ECGI mode of operation is offline, with generation of epicardial potential maps delayed relative to data acquisition. A real-time ECGI procedure is proposed, by which the epicardial potentials can be reconstructed while the body-surface potential data are acquired (< 1 ms/frame) during a clinical procedure. This development enables real-time monitoring, diagnosis, and interactive guidance of intervention for arrhythmia therapy. ECGI is applied to map noninvasively the electrophysiological substrate in eight post-MI patients during sinus rhythm (SR). Contrast-enhanced MRI (ceMRI) is conducted to determine anatomical scar. ECGI-imaged regions of electrical scar corresponded closely in location, extent, and morphology to the anatomical scars. In three patients, late diastolic potentials are imaged in the scar epicardial border zone during SR. Scar-related ventricular tachycardia (VT) in two patients is imaged, showing the VT activation sequence in relation to the abnormal electrophysiological substrate. ECGI imaging of the substrate in a beat-by-beat fashion could potentially help in noninvasive risk stratification for post-MI arrhythmias and facilitate substrate-based catheter ablation of these arrhythmias. ECGI is applied to eleven consecutive patients referred for a VT catheter ablation procedure. ECGI is performed either before (8 patients) or during (3 patients) the ablation procedure. Blinded ECGI and invasive electrophysiology (EP) study results are compared. Over a wide range of VT types and locations, ECGI results are consistent with EP data regarding localization of the arrhythmia origin (including myocardial depth) and mechanism (focal, reentrant, fascicular). ECGI also provides mechanistic electrophysiological insights, relating arrhythmia patterns to the myocardial substrate. The study shows that ECGI has unique potential clinical advantages, especially for hemodynamically intolerant VT or VT that is difficult to induce. Because it provides local cardiac information, ECGI may aid in a better understanding of mechanisms of ventricular arrhythmia. Further prospective trials of ECGI with clinical endpoints are warranted. Many mechanisms for the initiation and perpetuation of atrial fibrillation (AF) have been demonstrated over the last several decades. The tools to study these mechanisms in humans have limitations, the most common being the invasiveness of a mapping procedure. In this paper, we present simultaneous noninvasive biatrial epicardial activation sequences of AF in humans, obtained using the Electrocardiographic Imaging (ECGI) system and analyzed in terms of mechanisms and complexity of activation patterns. We performed ECGI in 36 patients with a diagnosis of AF. To determine ECGI atrial accuracy, atrial pacing from different sites was performed in six patients (37 pacing events), and ECGI was compared to registered CARTO images. Then, ECGI was performed on all 36 patients during AF, and ECGI epicardial maps were analyzed for mechanisms and complexity. ECGI noninvasively imaged the low-amplitude signals of AF in a wide range of patients (97% procedural success). The spatial accuracy in determining initiation sites, as simulated by atrial pacing, was ~6 mm. ECGI imaged many activation patterns of AF, most commonly multiple wavelets (92%), with pulmonary vein (69%) and non-pulmonary vein (62%) trigger sites. Rotor activity was seen rarely (15%). AF complexity increased with longer clinical history of AF, though the degree of complexity of nonparoxysmal AF varied and overlapped. ECGI offers a way to identify unique epicardial activation patterns of AF in a patient-specific manner. The results are consistent with contemporary animal models of AF mechanisms and highlight the coexistence of a variety of mechanisms among patients.
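The meshless idea behind the MFS mentioned above can be sketched in a toy 2D setting: the potential is written as a combination of fundamental solutions of Laplace's equation whose fictitious source points lie outside the domain, and the coefficients are fit to boundary data by least squares, so no mesh is ever built. The disk geometry and boundary data below are illustrative stand-ins, not the actual heart-torso model.

```python
import numpy as np

def phi(p, q):
    """Fundamental solution of the 2D Laplacian, -log|p - q| / (2*pi)."""
    return -np.log(np.linalg.norm(p - q, axis=-1)) / (2 * np.pi)

n_bnd, n_src = 100, 50
theta_b = np.linspace(0, 2 * np.pi, n_bnd, endpoint=False)
theta_s = np.linspace(0, 2 * np.pi, n_src, endpoint=False)
bnd = np.stack([np.cos(theta_b), np.sin(theta_b)], axis=1)        # unit-circle boundary
src = 1.8 * np.stack([np.cos(theta_s), np.sin(theta_s)], axis=1)  # exterior sources

# Dirichlet data from the known harmonic function u(x, y) = x, so the
# reconstruction can be checked at an interior point.
g = bnd[:, 0]

# Collocation: fit source strengths by least squares (meshless, no FEM).
A = phi(bnd[:, None, :], src[None, :, :])
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate at an interior point; the exact value of u = x at (0.3, 0.2) is 0.3.
p = np.array([0.3, 0.2])
u_p = phi(p[None, :], src).dot(coef)
```

The same structure, with the torso surface as the boundary and sources placed around the epicardium, is what removes the meshing step from the ECGI pipeline.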

    The Application of Computer Techniques to ECG Interpretation

    This book presents some of the latest available information on automated ECG analysis, written by many of the leading researchers in the field. It contains a historical introduction, an outline of the latest international standards for signal processing and communications, and then an exciting variety of studies on electrophysiological modelling, ECG imaging, artificial intelligence applied to resting and ambulatory ECGs, body surface mapping, big data in ECG-based prediction, enhanced reliability of patient monitoring, and atrial abnormalities on the ECG. It provides an extremely valuable contribution to the field.

    On Learning and Generalization to Solve Inverse Problem of Electrophysiological Imaging

    In this dissertation, we are interested in solving a linear inverse problem: inverse electrophysiological (EP) imaging, where our objective is to computationally reconstruct personalized cardiac electrical signals based on body-surface electrocardiogram (ECG) signals. EP imaging has shown promise in the diagnosis and treatment planning of cardiac dysfunctions such as atrial flutter, atrial fibrillation, ischemia, infarction, and ventricular arrhythmia. Towards this goal, we frame it as a problem of learning a function from the domain of measurements to signals. Depending upon the assumptions, we present two classes of solutions: 1) Bayesian inference in a probabilistic graphical model, and 2) learning from samples using deep networks. In both approaches, we emphasize learning the inverse function with good generalization ability, which becomes the main theme of the dissertation. In the Bayesian framework, we argue that this translates to appropriately integrating different sources of knowledge into a common probabilistic graphical model and using it for patient-specific signal estimation through Bayesian inference. In the learning-from-samples setting, this translates to designing a deep network with good generalization ability, where good generalization refers to the ability to reconstruct inverse EP signals in a distribution of interest (which could very well be outside the sample distribution used during training). By drawing on ideas from different areas such as functional analysis (e.g. Fenchel duality), variational inference (e.g. variational Bayes) and deep generative modeling (e.g. the variational autoencoder), we show how different prior knowledge can be incorporated in a principled manner into a probabilistic graphical model framework to obtain a good inverse solution with generalization ability. Similarly, to improve the generalization of deep networks learning from samples, we use ideas from information theory (e.g. the information bottleneck), learning theory (e.g. analytical learning theory), adversarial training, complexity theory, and functional analysis (e.g. RKHS). We test our algorithms on synthetic data and on real data from patients who had undergone catheter ablation, and show that our approach yields significant improvement over existing methods. Towards the end of the dissertation, we investigate general questions on the generalization and stabilization of adversarial training of deep networks and try to understand the role of smoothness and function-space complexity in answering those questions. We conclude by identifying limitations of the proposed methods, areas of further improvement, and open questions that are specific to inverse electrophysiological imaging as well as to the broader, encompassing theory of learning and generalization.
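The Bayesian route described above can be sketched for a linear Gaussian special case: with measurements y = Hx + noise and a Gaussian smoothness prior on the source signal x, the posterior mean is available in closed form and serves as the reconstruction. The forward operator, the prior, and all sizes below are illustrative stand-ins, not the dissertation's actual graphical models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_meas = 30, 20
sigma = 0.05                                     # measurement noise std

H = rng.standard_normal((n_meas, n_src))         # stand-in forward (lead-field) operator
x_true = np.sin(np.linspace(0, np.pi, n_src))    # smooth "cardiac" source signal
y = H @ x_true + sigma * rng.standard_normal(n_meas)

# Smoothness prior via a first-difference operator: one example of the
# structural knowledge a graphical-model framework would integrate.
D = np.eye(n_src) - np.eye(n_src, k=1)
prior_prec = D.T @ D + 1e-3 * np.eye(n_src)

# Conjugate Gaussian algebra: posterior precision and posterior mean.
post_prec = H.T @ H / sigma**2 + prior_prec
x_map = np.linalg.solve(post_prec, H.T @ y / sigma**2)
```

The underdetermined case (fewer measurements than sources, as here) is exactly where the prior matters: it selects the completion of the solution in the nullspace of H.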

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
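The evolutionary half of such a hybrid can be sketched as plain Differential Evolution (DE/rand/1/bin); the interval branch-and-bound component that certifies optimality is omitted here. The control parameters F and CR and the multimodal Rastrigin test function are standard illustrative choices, not taken from the paper.

```python
import numpy as np

def rastrigin(x):
    """Highly multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def differential_evolution(f, bounds, pop_size=40, F=0.7, CR=0.9,
                           generations=400, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

In the hybrid, the best DE individual supplies an upper bound that lets the branch-and-bound prune boxes, while the interval arithmetic supplies the proof of optimality that DE alone lacks.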

    Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology

    The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activities of the heart, the identification of deviations from the normal pattern, and the design of treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique which uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. Firstly, we compare the performance of two different ablation technologies, based on contact-force sensing or remote magnetic control, using signal-based criteria as surrogates for lesion assessment, and we investigate the role of ablation parameters in lesion formation using late gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesize that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids. However, the complex spatio-temporal propagation patterns characterizing AF spur the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce a volume of data that needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they predominantly ignore the time dimension of data. To address this shortcoming, in the second part of this dissertation we put forth a time-vertex signal processing framework, as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, time-vertex signal processing facilitates the analysis of graph-structured data which also evolve in time. We motivate our framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and we present a novel approach to significantly improve the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. The experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits in various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
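The core construction of time-vertex signal processing can be sketched as a joint Fourier transform: a graph Fourier transform (eigenvectors of the graph Laplacian) across the vertex dimension combined with a DFT across the time dimension. A signal that is one graph mode modulated by one temporal frequency then concentrates onto a few joint-frequency coefficients. The ring graph and the signal below are illustrative, not intracardiac data.

```python
import numpy as np

n, T = 8, 16

A = np.zeros((n, n))                    # adjacency of a ring graph
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)              # graph frequencies and Fourier basis

# Time-vertex signal: one graph mode modulated by one temporal frequency,
# stored with one row per vertex and one column per time step.
t = np.arange(T)
X = np.outer(U[:, 1], np.cos(2 * np.pi * t / T))

# Joint Fourier transform: GFT along vertices, unitary DFT along time.
X_hat = np.fft.fft(U.T @ X, axis=1) / np.sqrt(T)
# The energy concentrates at graph frequency 1 and temporal
# frequencies +1 and -1 (columns 1 and T-1).
```

Joint filtering and time-vertex dictionaries operate on exactly these joint coefficients, which is what lets such frameworks treat time series living on a graph as a single object.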

    Reconstruction of Cortical Activity from MEG Data Using Brain Networks and Transmission Delays Estimated from dMRI

    White matter fibers transfer information between brain regions with delays that are observable with magnetoencephalography and electroencephalography (M/EEG) due to their millisecond temporal resolution. We can represent the brain as a graph where nodes are the cortical sources or areas and edges are the physical connections between them: either local (between adjacent vertices on the cortical mesh) or non-local (long-range white matter fibers). Long-range anatomical connections can be obtained with diffusion MRI (dMRI) tractography, which yields a set of streamlines representing white matter fiber bundles. Given the streamlines' lengths and the information conduction speed, transmission delays can be estimated for each connection. dMRI can thus give an insight into the interaction delays of the macroscopic brain network. Localizing and recovering electrical activity of the brain from M/EEG measurements is known as the M/EEG inverse problem. Generally, there are more unknowns (brain sources) than sensors, so the solution is non-unique and the problem ill-posed. To obtain a unique solution, prior constraints on the characteristics of source distributions are needed. Traditional linear inverse methods deploy different constraints which can favour solutions with minimum norm, impose smoothness constraints in space and/or time along the cortical surface, etc. Yet, structural connectivity is rarely considered and transmission delays are almost always neglected. The first contribution of this thesis consists of a multimodal preprocessing pipeline used to integrate structural MRI, dMRI and MEG data into a common framework, and of a simulation procedure for source-level brain activity that was used as a synthetic dataset to validate the proposed reconstruction approaches. In the second contribution, we propose a new framework to solve the M/EEG inverse problem, called the Connectivity-Informed M/EEG Inverse Problem (CIMIP), where prior transmission delays supported by dMRI are included to enforce temporal smoothness between the time courses of connected sources. This is done by incorporating into the regularization a Laplacian operator that acts on a time-dependent connectivity graph. Nonetheless, some limitations of the CIMIP approach arose, mainly due to the nature of the Laplacian, which acts on the whole graph, favours smooth solutions across all connections, for all delays, and is agnostic to directionality. In this thesis, we aimed to investigate patterns of brain activity during visuomotor tasks, during which only a few regions typically become significantly activated, as shown by previous studies. This led to our third contribution, an extension of the CIMIP approach that addresses the aforementioned limitations, named CIMIP_OML ("Optimal Masked Laplacian"). We restricted the full source-space network (the whole cortical mesh) to a network of regions of interest and tried to find how information is transferred between its nodes. To describe the interactions between nodes in a directed graph, we used the concept of network motifs. We propose an algorithm that (1) searches for an optimal network motif, i.e. an optimal pattern of interaction between different regions, and (2) reconstructs source activity given the found motif. Promising results are shown for both simulated and real MEG data for a visuomotor task, compared with three different state-of-the-art reconstruction methods. To conclude, we tackled the difficult problem of exploiting delays supported by dMRI for the reconstruction of brain activity, while also considering directionality in the information transfer, and provided new insights into the complex patterns of brain activity.
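The distinctive CIMIP ingredient, a Laplacian on a time-dependent connectivity graph with dMRI-derived delays, can be sketched on a toy network: source i at time t is connected to source j at time t - d, so the quadratic penalty vanishes exactly when connected time courses are delayed copies of each other. The two-source network and the delay value below are hypothetical, not estimated from tractography.

```python
import numpy as np

n_src, T = 2, 8
edges = [(0, 1, 2)]                     # (i, j, delay d in samples), hypothetical

N = n_src * T
idx = lambda i, t: i * T + t            # flatten (source, time) -> graph node

W = np.zeros((N, N))
for i, j, d in edges:
    for t in range(d, T):
        W[idx(i, t), idx(j, t - d)] = 1.0   # connect x_i(t) with x_j(t - d)
W = np.maximum(W, W.T)                  # undirected weights
Lap = np.diag(W.sum(axis=1)) - W        # Laplacian on the space-time graph

# x^T Lap x sums (x_i(t) - x_j(t - d))^2 over the delayed edges, so it
# vanishes exactly when connected time courses are delayed copies.
x = np.zeros((n_src, T))
x[1] = np.sin(0.5 * np.arange(T))
x[0, 2:] = x[1, :-2]                    # source 0 = source 1 delayed by 2
penalty_aligned = x.ravel() @ Lap @ x.ravel()

y = x.copy()
y[0] = x[1]                             # undelayed copy: gets penalized
penalty_misaligned = y.ravel() @ Lap @ y.ravel()
```

Used as a regularizer in the inverse problem, this term favours reconstructions whose connected time courses respect the dMRI transmission delays; masking it to a motif over regions of interest is the CIMIP_OML refinement.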

    Predictive decoding of neural data

    In the last five decades the number of techniques available for non-invasive functional imaging has increased dramatically. Researchers today can choose from a variety of imaging modalities that include EEG, MEG, PET, SPECT, MRI, and fMRI. This doctoral dissertation offers a methodology for the reliable analysis of neural data at different levels of investigation. By using statistical learning algorithms, the proposed approach allows single-trial analysis of various neural data by decoding them into variables of interest. Unbiased testing of the decoder on new samples of the data provides an assessment of the reliability of its decoding performance. Through subsequent analysis of the constructed decoder's sensitivity it is possible to identify the neural signal components relevant to the task of interest. The proposed methodology accounts for covariance and causality structures present in the signal, a feature that makes it more powerful than the conventional univariate methods which currently dominate the neuroscience field. Chapter 2 describes the generic approach toward the analysis of neural data using statistical learning algorithms. Chapter 3 presents an analysis of results from four neural data modalities: extracellular recordings, EEG, MEG, and fMRI. These examples demonstrate the ability of the approach to reveal neural data components which cannot be uncovered with conventional methods. Chapter 4 extends the methodology to the joint analysis of multiple neural data modalities: EEG and fMRI. The reliable mapping of data from one modality into the other provides a better understanding of the underlying neural processes. By allowing spatial-temporal exploration of neural signals under loose modeling assumptions, it removes potential bias in the analysis of neural data due to otherwise possible forward-model misspecification. The proposed methodology has been formalized into a free and open-source Python framework for statistical-learning-based data analysis. This framework, PyMVPA, is described in Chapter 5.
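The decoding workflow described above can be sketched in a few lines of numpy (PyMVPA packages this machinery with many more decoders and measures): train a linear decoder on multivariate samples, estimate generalization with cross-validation on held-out folds, then inspect the decoder's sensitivities to locate the informative signal components. The synthetic "neural" data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_class, n_feat = 40, 10

# Two experimental conditions; only features 0 and 1 carry class information.
signal = np.zeros(n_feat)
signal[:2] = 1.5
X = np.vstack([rng.standard_normal((n_per_class, n_feat)) - signal,
               rng.standard_normal((n_per_class, n_feat)) + signal])
y = np.repeat([-1.0, 1.0], n_per_class)
order = rng.permutation(len(y))
X, y = X[order], y[order]

accs, weights = [], []
for fold in range(4):                               # 4-fold cross-validation
    test = np.arange(len(y)) % 4 == fold
    w = np.linalg.lstsq(X[~test], y[~test], rcond=None)[0]  # linear decoder
    accs.append(np.mean(np.sign(X[test] @ w) == y[test]))   # held-out accuracy
    weights.append(w)

mean_acc = float(np.mean(accs))                     # generalization estimate
sensitivity = np.abs(np.mean(weights, axis=0))      # feature relevance map
```

Because the decoder is multivariate, the sensitivity map reflects the covariance structure of the data rather than treating each feature in isolation, which is the advantage over univariate testing noted above.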