
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Advanced acquisition and reconstruction techniques in magnetic resonance imaging

    International Mention in the doctoral degree. Magnetic Resonance Imaging (MRI) is a biomedical imaging modality with outstanding features such as excellent soft tissue contrast and very high spatial resolution. Despite these great properties, MRI suffers from some drawbacks, such as low sensitivity and long acquisition times. This thesis focuses on providing solutions for the second of these drawbacks through the use of compressed sensing methodologies. Compressed sensing is a novel technique that enables the reduction of acquisition times and can also improve spatiotemporal resolution and image quality. Compressed sensing surpasses the traditional limits of Nyquist sampling theory by enabling the reconstruction of images from an incomplete set of acquired samples, provided that 1) the images to reconstruct have a sparse representation in a certain domain, 2) the undersampling applied is random, and 3) specific non-linear reconstruction algorithms are used. Cardiovascular MRI has to overcome many limitations derived from the respiratory and cardiac cycles, and has very strict requirements in terms of spatiotemporal resolution. Hence, any improvement in terms of reducing acquisition times or increasing image quality by means of compressed sensing will be highly beneficial. This thesis aims to investigate the benefits that compressed sensing may provide in two cardiovascular MR applications: the acquisition of small-animal cardiac cine images and the visualization of human coronary atherosclerotic plaques. Cardiac cine in small animals is a widely used approach to assess cardiovascular function. In this work we proposed a new compressed sensing methodology to reduce acquisition times in self-gated cardiac cine sequences. This methodology was developed as a modification of the Split Bregman reconstruction algorithm to include the minimization of Total Variation across both spatial and temporal dimensions. 
We simulated compressed sensing acquisitions by retrospectively undersampling complete acquisitions. The accuracy of the results was evaluated with functional measurements in both healthy animals and animals with myocardial infarction. The method reached acceleration rates of 10-14 for healthy animals and around 10 for infarcted animals. We verified these theoretically feasible acceleration factors in practice with the implementation of a real compressed sensing acquisition in a 7 T small-animal MR scanner. We demonstrated that acceleration factors around 10 are achievable in practice, close to those obtained in the previous simulations. However, we found some small differences in image quality between simulated and real undersampled compressed sensing reconstructions at high acceleration rates; this might be explained by differences in their sensitivity to motion contamination during acquisition. The second cardiovascular application explored in this thesis is the visualization of atherosclerotic plaques in coronary arteries in humans. Nowadays, in vivo visualization and classification of plaques by MRI is not yet technically feasible. Acceleration techniques such as compressed sensing may greatly contribute to the feasibility of the application in vivo. However, it is advisable to carry out a systematic study of the basic technical requirements for coronary plaque visualization prior to designing specific acquisition techniques. In simulation studies, we assessed the spatial resolution, SNR, and motion limits required for the proper visualization of coronary plaques, and we proposed a new hybrid acquisition scheme that reduces sensitivity to motion. In order to evaluate the benefits that acceleration techniques might provide, we evaluated different parallel imaging algorithms and we also implemented a compressed sensing methodology that incorporates information from the coil sensitivity profile of the phased-array coil used. 
We found that, with the coil setup analyzed, acceleration benefits were greatly limited by the small size of the FOV of interest. Thus, dedicated phased arrays need to be designed to enhance the benefits that acceleration techniques may provide for coronary artery plaque imaging in vivo. Official Doctoral Program in Multimedia and Communications. President: Elfar Adalsteinsson. Secretary: Juan Miguel Parra Robles. Member: Pedro Ramos Cabre
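    The compressed sensing principle stated above — sparse representation, random undersampling, and non-linear reconstruction — can be illustrated with a minimal, self-contained numpy sketch. This is not the thesis's Split Bregman / Total Variation method: for brevity it uses a phantom that is sparse directly in the pixel domain and recovers it with plain iterative soft-thresholding (ISTA); all sizes, sampling fractions, and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 image, sparse in the pixel domain (20 nonzero pixels).
n = 64
x_true = np.zeros((n, n))
idx = rng.choice(n * n, size=20, replace=False)
x_true.flat[idx] = rng.uniform(1.0, 2.0, size=20)

# Retrospective undersampling: keep ~25% of k-space samples at random.
mask = rng.random((n, n)) < 0.25
y = mask * np.fft.fft2(x_true, norm="ortho")   # measured k-space samples

def A(x):    # forward operator: orthonormal FFT, then undersample
    return mask * np.fft.fft2(x, norm="ortho")

def At(k):   # adjoint: zero-fill missing samples, then inverse FFT
    return np.real(np.fft.ifft2(mask * k, norm="ortho"))

# ISTA: gradient step on the data term, then soft-threshold (sparsity).
x = np.zeros((n, n))
lam = 0.05
for _ in range(200):
    x = x + At(y - A(x))                               # step size 1 is safe: ||A|| <= 1
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # soft-thresholding

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    Because the Fourier and pixel bases are maximally incoherent, far fewer samples than the Nyquist count suffice here; the same three ingredients carry over to the MRI setting, where the sparse domain is a transform of the image rather than the image itself.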

    Recommended Implementation of Quantitative Susceptibility Mapping for Clinical Research in the Brain: A Consensus of the ISMRM Electro-Magnetic Tissue Properties Study Group

    This article provides recommendations for implementing quantitative susceptibility mapping (QSM) for clinical brain research. It is a consensus of the ISMRM Electro-Magnetic Tissue Properties Study Group. While QSM technical development continues to advance rapidly, current QSM methods have been demonstrated to be repeatable and reproducible for generating quantitative tissue magnetic susceptibility maps in the brain. However, the many QSM approaches available give rise to a need in the neuroimaging community for guidelines on implementation. This article describes relevant considerations and provides specific implementation recommendations for all steps in QSM data acquisition, processing, analysis, and presentation in scientific publications. We recommend that data be acquired using a monopolar 3D multi-echo GRE sequence, and that phase images be saved and exported in DICOM format and unwrapped using an exact unwrapping approach. Multi-echo images should be combined before background removal, and a brain mask created using a brain extraction tool with the incorporation of phase-quality-based masking. Background fields should be removed within the brain mask using a technique based on SHARP or PDF, and the optimization approach to dipole inversion should be employed with a sparsity-based regularization. Susceptibility values should be measured relative to a specified reference, including the common reference region of the whole brain as a region of interest in the analysis, and QSM results should be reported with, as a minimum, the acquisition and processing specifications listed in the last section of the article. These recommendations should facilitate clinical QSM research and lead to increased harmonization in data acquisition, analysis, and reporting
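    The dipole inversion step at the heart of the QSM pipeline can be sketched with a toy simulation. The consensus above recommends a sparsity-regularized optimization; for brevity, this sketch instead uses truncated k-space division (TKD), a much simpler baseline inversion, applied to the field of a simulated susceptibility cube. The grid size, susceptibility value, and truncation threshold are all illustrative.

```python
import numpy as np

def dipole_kernel(shape, voxel=(1.0, 1.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 in k-space (B0 along z)."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0                       # convention: zero at the k-space origin
    return D

# Forward model: local field = IFFT( D * FFT(chi) ).
shape = (32, 32, 32)
chi = np.zeros(shape)
chi[12:20, 12:20, 12:20] = 0.1             # susceptibility cube (ppm, illustrative)
D = dipole_kernel(shape)
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

# TKD: invert only where |D| exceeds a threshold; zero the ill-posed cone region.
t = 0.2
Dsafe = np.where(np.abs(D) > t, D, 1.0)    # avoid division by tiny values
Dinv = np.where(np.abs(D) > t, 1.0 / Dsafe, 0.0)
chi_tkd = np.real(np.fft.ifftn(Dinv * np.fft.fftn(field)))

corr = np.corrcoef(chi.ravel(), chi_tkd.ravel())[0, 1]
```

    The zeroed cone (where the kernel vanishes near the magic angle) is exactly what makes the inversion ill-posed, and is why the consensus recommends regularized optimization rather than direct division.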

    Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

    The large number of practical applications involving digital images has motivated a significant interest towards restoration solutions that improve the visual quality of the data under the presence of various acquisition and compression artifacts. Digital images are the result of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Different applications focus on the part of the electromagnetic spectrum not visible by the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of technology, raw data is invariably affected by a variety of inherent and external disturbing factors, such as the stochastic nature of the measurement processes or challenging sensing conditions, which may cause, e.g., noise, blur, geometrical distortion and color aberration. In this thesis we introduce two filtering frameworks for video and volumetric data restoration based on the BM3D grouping and collaborative filtering paradigm. In its general form, the BM3D paradigm leverages the correlation present within a nonlocal group composed of mutually similar basic filtering elements, e.g., patches, to attain an enhanced sparse representation of the group in a suitable transform domain where the energy of the meaningful part of the signal can be thus separated from that of the noise through coefficient shrinkage. We argue that the success of this approach largely depends on the form of the used basic filtering elements, which in turn define the subsequent spectral representation of the nonlocal group. 
Thus, the main contribution of this thesis consists in tailoring specific basic filtering elements to the inherent characteristics of the processed data at hand. Specifically, we embed the local spatial correlation present in volumetric data through 3-D cubes, and the local spatial and temporal correlation present in videos through 3-D spatiotemporal volumes, i.e., sequences of 2-D blocks following a motion trajectory. The foundational aspect of this work is the analysis of the particular spectral representation of these elements. Specifically, our frameworks stack mutually similar 3-D patches along an additional fourth dimension, thus forming a 4-D data structure. By doing so, an effective group spectral description can be formed, as the phenomena acting along different dimensions in the data can be precisely localized along different spectral hyperplanes, and thus different filtering shrinkage strategies can be applied to different spectral coefficients to achieve the desired filtering results. This constitutes a decisive difference with the shrinkage traditionally employed in BM3D algorithms, where different hyperplanes of the group spectrum are shrunk subject to the same degradation model. Different image processing problems rely on different observation models and typically require specific algorithms to filter the corrupted data. As a further contribution of this thesis, we show that our high-dimensional filtering model allows us to target heterogeneous noise models, e.g., characterized by spatial and temporal correlation, signal-dependent distributions, spatially varying statistics, and non-white power spectral densities, without essential modifications to the algorithm structure. As a result, we develop state-of-the-art methods for a variety of fundamental image processing problems, such as denoising, deblocking, enhancement, deflickering, and reconstruction, which also find practical applications in consumer, medical, and thermal imaging
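    The grouping-and-collaborative-filtering idea can be miniaturized in a few lines: stack mutually similar noisy blocks along an extra dimension, apply a separable transform over the whole stack, and shrink the small coefficients, which are dominated by noise. This toy uses an FFT in place of the transforms used by actual BM3D-family methods, and the noise level and threshold constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "nonlocal group": 16 noisy copies of one clean 8x8 block.
block = np.outer(np.hanning(8), np.hanning(8))   # clean prototype block
sigma = 0.3                                      # noise standard deviation
group = np.stack([block + sigma * rng.standard_normal((8, 8)) for _ in range(16)])

# Collaborative filtering: separable 3-D transform over the group
# (2-D per block + 1-D across the stacking dimension), hard-threshold,
# inverse transform.
spec = np.fft.fftn(group)
thr = 2.7 * sigma * np.sqrt(group.size)          # ~2.7x the coefficient noise std
spec[np.abs(spec) < thr] = 0.0                   # shrinkage kills noise coefficients
den = np.real(np.fft.ifftn(spec))

# Aggregation: average the filtered estimates of the group members.
est = den.mean(axis=0)

err_single = np.linalg.norm(group[0] - block)    # error of one noisy observation
err_est = np.linalg.norm(est - block)            # error after collaborative filtering
```

    Because the stacked blocks are mutually similar, the signal concentrates in a few large transform coefficients (notably the plane at zero frequency along the stacking axis), while the noise spreads evenly, which is precisely the correlation that the shrinkage exploits.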

    Joint Reconstruction for Multi-Modality Imaging with Common Structure

    Imaging is a powerful tool used in many disciplines such as engineering, physics, biology, and medicine, to name a few. Recent years have seen a trend of combining imaging modalities into multi-modality imaging tools in which the different modalities acquire complementary information. For example, in medical imaging, positron emission tomography (PET) and magnetic resonance imaging (MRI) are combined to image the structure and function of the human body. Another example is spectral imaging, where each channel provides information about a different wavelength, e.g., information about red, green, and blue (RGB). Most imaging modalities do not acquire images directly but measure a quantity from which we can reconstruct an image. These inverse problems require a priori information in order to give meaningful solutions. Assumptions are often on the smoothness of the solution, but other information is sometimes available, too. Many multi-modality images show a strong inter-channel correlation as they are acquired from the same anatomy in medical imaging or the same scenery in spectral imaging. However, images from different modalities are usually reconstructed separately. In this thesis we aim to exploit this correlation using the data from all modalities present in the acquisition in a joint reconstruction process, with the assumption that similar structures in all channels are more likely. We propose a framework for joint reconstruction where modalities are coupled by additional information about the solution we seek. A family of priors -- called parallel level sets -- allows us to incorporate structural a priori knowledge into the reconstruction. We analyse the parallel level set priors in several aspects including their convexity and the diffusive flow generated by their variation. Several numerical examples in RGB colour imaging and in PET-MRI illustrate the gain of joint reconstruction and in particular of the parallel level set priors
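    One way to encode "similar structures in all channels are more likely" is to penalize the gradients of the two channels for pointing in different directions. The sketch below is a simplified gradient-alignment penalty in the spirit of parallel level sets, not the thesis's exact functional: the 2-D cross product of the channel gradients vanishes exactly where the level sets are parallel, so summing its (smoothed) absolute value rewards shared edge geometry regardless of contrast.

```python
import numpy as np

def alignment_penalty(u, v, eps=1e-6):
    """Sum of |grad(u) x grad(v)| over the image: zero when gradients align."""
    uy, ux = np.gradient(u)                    # np.gradient returns (d/drow, d/dcol)
    vy, vx = np.gradient(v)
    cross = ux * vy - uy * vx                  # 2-D cross product of the gradients
    return np.sum(np.sqrt(cross ** 2 + eps))   # smoothed absolute value

# Two channels with the same edge geometry vs. a differently oriented edge.
xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
u = (xx > 0.5).astype(float)                   # vertical edge
v_aligned = 2.0 * u                            # same structure, different contrast
v_rotated = (yy > 0.5).astype(float)           # horizontal edge

p_aligned = alignment_penalty(u, v_aligned)
p_rotated = alignment_penalty(u, v_rotated)
```

    In a joint reconstruction, a term of this kind is added to the per-modality data-fidelity terms, so that edges present in one channel encourage, but do not force, co-located edges in the other.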

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal
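    As background for how such rules are computed in practice, here is a minimal Golub-Welsch sketch for the ordinary n-point Gauss-Legendre rule. The averaged and Gauss-Kronrod constructions discussed in the abstract build on the same three-term recurrence machinery (modified Jacobi matrices), so this is an illustration of the common toolkit, not the paper's method.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: nodes and weights from the Jacobi matrix eigendecomposition."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)       # Legendre recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)    # symmetric tridiagonal Jacobi matrix
    x, V = np.linalg.eigh(J)                    # nodes = eigenvalues
    w = 2.0 * V[0, :] ** 2                      # weights from first eigenvector row
    return x, w

x, w = gauss_legendre(5)
approx = np.sum(w * np.cos(x))                  # integral of cos over [-1, 1]
exact = 2.0 * np.sin(1.0)
```

    A 5-point Gauss rule is exact for polynomials up to degree 9, so for a smooth integrand like cos the error is already near machine precision; estimating that error cheaply is exactly what averaged rules are for when a Gauss-Kronrod extension does not exist.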

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described

    Foundations, Inference, and Deconvolution in Image Restoration

    Image restoration is a critical preprocessing step in computer vision, producing images with reduced noise, blur, and pixel defects. This enables precise higher-level reasoning as to the scene content in later stages of the vision pipeline (e.g., object segmentation, detection, recognition, and tracking). Restoration techniques have found extensive usage in a broad range of applications from industry, medicine, astronomy, biology, and photography. The recovery of high-quality results requires models of the image degradation process, giving rise to a class of often heavily underconstrained inverse problems. A further challenge specific to the problem of blur removal is noise amplification, which may cause strong distortion by ringing artifacts. This dissertation presents new insights and problem solving procedures for three areas of image restoration, namely (1) model foundations, (2) Bayesian inference for high-order Markov random fields (MRFs), and (3) blind image deblurring (deconvolution). As basic research on model foundations, we contribute to reconciling the perceived differences between probabilistic MRFs on the one hand, and deterministic variational models on the other. To do so, we restrict the variational functional to locally supported finite elements (FE) and integrate over the domain. This yields a sum of terms depending locally on FE basis coefficients, and by identifying the latter with pixels, the terms resolve to MRF potential functions. In contrast with previous literature, we place special emphasis on robust regularizers used commonly in contemporary computer vision. Moreover, we draw samples from the derived models to further demonstrate the probabilistic connection. Another focal issue is a class of high-order Field of Experts MRFs which are learned generatively from natural image data and yield the best quantitative results under Bayesian estimation. This involves minimizing an integral expression, which has no closed form solution in general. 
However, the MRF class under study has Gaussian mixture potentials, permitting expansion by indicator variables as a technical measure. As an approximate inference method, we study Gibbs sampling in the context of non-blind deblurring and obtain excellent results, yet at the cost of high computational effort. In response, we turn to the mean field algorithm and show that it scales quadratically in the clique size for a standard restoration setting with a linear degradation model. An empirical study of mean field over several restoration scenarios confirms advantageous properties with regard to both image quality and computational runtime. This dissertation further examines the problem of blind deconvolution, beginning with localized blur from fast moving objects in the scene, or from camera defocus. Forgoing dedicated hardware or user labels, we rely only on the image as input and introduce a latent variable model to explain the non-uniform blur. The inference procedure estimates freely varying kernels and we demonstrate its generality by extensive experiments. We further present a discriminative method for blind removal of camera shake. In particular, we interleave discriminative non-blind deconvolution steps with kernel estimation and leverage the error cancellation effects of the Regression Tree Field model to attain a deblurring process with tightly linked sequential stages
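    The noise-amplification problem that motivates regularized deblurring can be seen in a one-dimensional toy. This is a plain Wiener-filter sketch, unrelated to the dissertation's MRF and Regression Tree Field machinery; the blur width, noise level, and regularization constant are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Blur a 1-D signal with a known kernel (circular convolution) and add noise.
n = 256
x = np.zeros(n)
x[60:120] = 1.0                                       # piecewise-constant signal
k = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)      # truncated Gaussian kernel
k /= k.sum()
K = np.fft.fft(np.pad(k, (0, n - k.size)))            # kernel spectrum
y = np.real(np.fft.ifft(K * np.fft.fft(x))) + 0.01 * rng.standard_normal(n)

# Naive inverse filtering divides by near-zero spectral values of K,
# amplifying noise; Wiener regularization damps exactly those frequencies.
Y = np.fft.fft(y)
naive = np.real(np.fft.ifft(Y / K))
reg = 1e-3                                            # noise-to-signal constant
wiener = np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + reg)))

err_naive = np.linalg.norm(naive - x)
err_wiener = np.linalg.norm(wiener - x)
```

    The regularizer trades a small bias (slightly softened edges, mild ringing) for suppression of the explosive noise amplification at frequencies where the blur kernel's spectrum is nearly zero; MRF priors play the analogous stabilizing role in the Bayesian formulations studied above.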