
    Optimization of magnetic flux density for fast MREIT conductivity imaging using multi-echo interleaved partial Fourier acquisitions

    BACKGROUND: Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive method for visualizing the internal conductivity and/or current density of an electrically conductive object using externally injected currents. The current injected through a pair of surface electrodes induces an additional magnetic flux density distribution inside the imaging object. To measure this magnetic flux density signal in MREIT, the phase-difference approach in an interleaved encoding scheme cancels out the systematic artifacts accumulated in the phase signals and also reduces the random noise effect by doubling the measured magnetic flux density signal. For practical in vivo applications of MREIT, it is essential to reduce the scan duration while maintaining spatial resolution and sufficient contrast. In this paper, we optimize the measured magnetic flux density by using a fast gradient multi-echo MR pulse sequence. To recover one component of the magnetic flux density, Bz, we use coupled partial Fourier acquisitions in the interleaved sense. METHODS: To validate the proposed algorithm, we performed numerical simulations using a two-dimensional finite-element model. For a real experiment, we designed a phantom filled with a calibrated saline solution and placed a rubber balloon inside the phantom. The rubber balloon was inflated by injecting the same saline solution during MREIT imaging. We used a multi-echo fast low-angle shot (FLASH) MR pulse sequence for the MRI scan, which allows the measurement time to be reduced without a substantial loss in image quality. RESULTS: Under the assumption of an a priori phase artifact map from a reference scan, we rigorously investigated the convergence ratio of the proposed method, which is closely related to the number of measured phase-encode sets and the frequency range of the background field inhomogeneity. In the phantom experiment with a partial Fourier acquisition, the total scan time to measure the Bz data with a 128×128 spatial matrix was less than 6 seconds, whereas 10.24 seconds were required to fill the complete k-space region. CONCLUSION: Numerical simulation and experimental results demonstrate that the proposed method reduces the scan time and provides recovered Bz data comparable to those obtained by measuring complete k-space data.
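    For orientation, the sketch below applies the standard MREIT phase-difference relation Bz = Δφ / (2γTc) to a pair of complex images acquired with opposite current polarities; the function name, arguments, and the simple NumPy pipeline are illustrative assumptions and do not reproduce the paper's multi-echo, interleaved partial Fourier reconstruction.

    ```python
    import numpy as np

    GAMMA = 2.675e8  # proton gyromagnetic ratio [rad/(s*T)]

    def bz_from_phase_difference(img_pos, img_neg, t_current):
        """Estimate Bz from two complex MR images acquired with +I and -I
        current injections, using the standard MREIT phase-difference
        relation Bz = delta_phi / (2 * gamma * Tc).

        img_pos, img_neg : complex ndarrays for the two current polarities
        t_current        : total current injection time Tc in seconds
        """
        # Multiplying by the conjugate yields the wrapped phase difference
        # without subtracting two separately wrapped phase maps.
        delta_phi = np.angle(img_pos * np.conj(img_neg))
        # The +I/-I pair doubles the phase signal, hence the factor 2.
        return delta_phi / (2.0 * GAMMA * t_current)
    ```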

    Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology

    The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activity of the heart, to identify deviations from the normal pattern, and to design treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician to enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as an increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique that uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the complex and heterogeneous nature of AF remains a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. First, we compare the performance of two ablation technologies, based on contact-force sensing and on remote magnetic control, using signal-based criteria as surrogates for lesion assessment; we also investigate the role of ablation parameters in lesion formation using late gadolinium-enhanced magnetic resonance imaging. Second, we hypothesize that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. By comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids.

    However, the complex spatio-temporal propagation patterns characterizing AF spur the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce a volume of data that needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, although state-of-the-art graph-based methods have been successful for many tasks, so far they have predominantly ignored the time dimension of the data. To address this shortcoming, in the second part of this dissertation we put forth a Time-Vertex Signal Processing Framework as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, the framework facilitates the analysis of graph-structured data that also evolve in time. We motivate the framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and present a novel approach that significantly improves the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. Experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits to various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
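    As a minimal sketch of the kind of joint transform such a time-vertex framework builds on, the snippet below combines the graph Fourier transform along the vertex dimension with the DFT along time; the function and variable names are assumptions for illustration and this is not the dissertation's implementation.

    ```python
    import numpy as np

    def joint_time_vertex_fourier(X, L):
        """Joint (time-vertex) Fourier transform of a time-varying graph signal.

        X : (N, T) array, one time series of length T per graph vertex
        L : (N, N) symmetric combinatorial graph Laplacian

        The transform applies the graph Fourier transform (projection onto
        the Laplacian eigenvectors) along the vertex dimension and the
        ordinary DFT along the time dimension.
        """
        _, U = np.linalg.eigh(L)              # eigenvectors = graph Fourier basis
        X_vertex = U.T @ X                    # GFT along vertices
        return np.fft.fft(X_vertex, axis=1) / np.sqrt(X.shape[1])  # unitary DFT along time
    ```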

    Improved Wavelet Threshold for Image De-noising

    With the development of communication and network technology, as well as the rising popularity of digital electronic products, images have become an important carrier of information. However, images are vulnerable to noise during acquisition, transmission, and storage, which degrades image quality. Noise reduction is therefore necessary to obtain higher-quality images. Owing to its multi-resolution analysis, decorrelation, low entropy, and flexible choice of bases, the wavelet transform has become a powerful tool in the field of image de-noising. The wavelet transform has developed rapidly in applied mathematics, and de-noising methods based on it have achieved good results, but shortcomings remain. Traditional threshold functions have deficiencies in image de-noising: a hard threshold function is discontinuous, whereas a soft threshold function introduces a constant deviation. To address these shortcomings, this paper proposes a method for removing image noise. First, the noisy image is decomposed to obtain its wavelet coefficients. Second, the improved threshold function is applied to the high-frequency wavelet coefficients. Finally, the de-noised image is reconstructed from the estimated coefficients. Experimental results show that the proposed method outperforms traditional hard-threshold and soft-threshold de-noising in terms of both objective metrics and subjective visual quality.
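    To make the hard/soft comparison above concrete, the sketch below implements the two classical threshold functions, plus a non-negative garrote shown only as one common compromise between them; the paper's improved threshold function is not specified in the abstract and is not reproduced here.

    ```python
    import numpy as np

    def hard_threshold(w, t):
        """Hard thresholding: keep coefficients with |w| > t, zero the rest.
        Discontinuous at |w| = t."""
        return np.where(np.abs(w) > t, w, 0.0)

    def soft_threshold(w, t):
        """Soft thresholding: shrink surviving coefficients toward zero by t,
        which introduces a constant bias for large coefficients."""
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def garrote_threshold(w, t):
        """Non-negative garrote: a common compromise between hard and soft
        thresholding (continuous, with less bias for large coefficients).
        Shown for illustration only; it is not the paper's improved function."""
        mag = np.maximum(np.abs(w), 1e-12)    # avoid division by zero
        return w * np.maximum(1.0 - (t / mag) ** 2, 0.0)
    ```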

    Mathematical methods for magnetic resonance based electric properties tomography

    Magnetic resonance-based electric properties tomography (MREPT) is a recent quantitative imaging technique that could provide useful additional information to the results of magnetic resonance imaging (MRI) examinations. More precisely, MREPT is a collective name for the techniques that process the radiofrequency (RF) magnetic field B1 generated and measured by an MRI scanner in order to map the electric properties inside the human body. The range of uses of MREPT in clinical oncology, patient-specific treatment planning, and MRI safety motivates the increasing scientific interest in its development. The main advantage of MREPT with respect to other techniques for electric properties imaging is the knowledge of the input field inside the examined body, which makes high resolution achievable. On the other hand, MREPT techniques rely only on the incomplete information about the RF magnetic field that MRI scanners can measure, typically limited to the transmit sensitivity B1+. In this thesis, the state of the art is described in detail by analysing the MREPT literature, which began only a few years ago but is already rich in content. Weighing the advantages and drawbacks of each technique proposed for MREPT, the implementation based on the contrast source inversion method is selected as the most promising approach for MRI safety applications and is denoted csiEPT. Motivated by this observation, a substantial part of the thesis is devoted to a thorough study of csiEPT. Specifically, a generalised framework based on a functional viewpoint is proposed for its implementation, making it possible to adapt csiEPT to various physical situations. In particular, an original formulation, developed to take into account the effects of the conductive shield always employed in RF coils, shows how accurate modelling of the measurement system leads to more precise estimates of the electric properties. In addition, a preliminary study of the uncertainty assessment of csiEPT, an essential requirement for making the method reliable for in vivo applications, is performed. Uncertainty propagation through csiEPT is studied using the Monte Carlo method, as prescribed by Supplement 1 to the GUM (Guide to the Expression of Uncertainty in Measurement). The robustness of the method when measurements are performed with multi-channel TEM coils for parallel transmission confirms the suitability of csiEPT for MRI safety applications.
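    Since the abstract leans on Monte Carlo uncertainty propagation as prescribed by GUM Supplement 1, the sketch below shows that generic pattern applied to a toy measurement model; the model, names, and distributions are placeholders, not the csiEPT inversion itself.

    ```python
    import numpy as np

    def monte_carlo_uncertainty(model, means, stds, n_trials=10_000, seed=0):
        """Generic Monte Carlo uncertainty propagation in the spirit of GUM
        Supplement 1: sample the inputs from assumed (here Gaussian)
        distributions, push them through the measurement model, and report
        the mean and standard uncertainty of the output."""
        rng = np.random.default_rng(seed)
        samples = rng.normal(means, stds, size=(n_trials, len(means)))
        outputs = np.array([model(s) for s in samples])
        return outputs.mean(), outputs.std(ddof=1)

    # Toy measurement model (a placeholder, not the csiEPT inversion).
    if __name__ == "__main__":
        toy_model = lambda x: x[0] * x[1] ** 2
        print(monte_carlo_uncertainty(toy_model, [1.0, 2.0], [0.05, 0.10]))
    ```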

    Contributions en optimisation topologique : extension de la méthode adjointe et applications au traitement d'images

    Nowadays, topology optimization has been extensively studied in structural optimization, a major concern in the design of mechanical systems for industry, and in inverse problems such as the detection of defects and inclusions. This work focuses on the topological derivative approach and proposes a more flexible generalization of this method, making it possible to address new applications. In a first part, we study classical image processing problems (restoration, inpainting) and give a common framework for these problems. We focus on anisotropic diffusion and consider a new problem: super-resolution. Our approach appears to perform well in comparison with other methods. The topological derivative method has some drawbacks: it is limited to simple problems, we do not know how to fill holes, ... In a second part, a new method aiming to overcome these difficulties is presented. This approach, named the numerical vault, is an extension of the adjoint method. This new tool allows us to consider new fields of application and to carry out new theoretical investigations in the area of topological derivatives.
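    As background for the anisotropic diffusion mentioned above, a minimal Perona-Malik diffusion sketch is given below; it illustrates classical anisotropic smoothing only and is not the thesis's topological derivative or numerical vault formulation.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
        """Classical Perona-Malik anisotropic diffusion with the exponential
        edge-stopping function g(s) = exp(-(s/kappa)^2); periodic boundaries
        via np.roll, which is adequate for illustration.

        img : 2-D float array (e.g. intensities in [0, 1])
        """
        u = np.asarray(img, dtype=float).copy()
        for _ in range(n_iter):
            # Differences toward the four neighbours.
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping weights: small across strong edges, ~1 in flat regions.
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
        return u
    ```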

    Level Set Methods for MRE Image Processing and Analysis

    Ph.D. thesis (Doctor of Philosophy)