
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Fast diffusion MRI based on sparse acquisition and reconstruction for long-term population imaging

    Diffusion weighted magnetic resonance imaging (dMRI) is a unique MRI modality for probing diffusive molecular transport in biological tissue. Due to its noninvasiveness and its ability to investigate the living human brain at submillimeter scale, dMRI is frequently performed in clinical and biomedical research to study the brain’s complex microstructural architecture. Over the last decades, large prospective cohort studies have been set up with the aim of gaining new insights into the development and progression of brain diseases across the life span and of discovering biomarkers for disease prediction and, potentially, prevention. To allow for diverse brain imaging using different MRI modalities, stringent scan time limits are typically imposed in population imaging. Nevertheless, population studies aim to apply advanced, and thereby time-consuming, dMRI protocols that deliver high-quality data with great potential for future analysis. To allow for time-efficient but also versatile diffusion imaging, this thesis contributes to the investigation of accelerating diffusion spectrum imaging (DSI), an advanced dMRI technique that acquires imaging data with high intra-voxel resolution of tissue microstructure. Combining state-of-the-art parallel imaging and the theory of compressed sensing (CS) enables the acceleration of both spatial encoding and diffusion encoding in dMRI. In this way, the otherwise long acquisition times of DSI can be reduced significantly. In this thesis, first, suitable q-space sampling strategies and basis functions are explored that fulfill the requirements of CS theory for accurate sparse DSI reconstruction. Novel 3D q-space sample distributions are investigated for CS-DSI. Moreover, conventional CS-DSI based on the discrete Fourier transform is compared for the first time to CS-DSI based on the continuous SHORE (simple harmonic oscillator based reconstruction and estimation) basis functions.
Based on these findings, a CS-DSI protocol is proposed for application in a prospective cohort study, the Rhineland Study. A pilot study was designed and conducted to evaluate the CS-DSI protocol in comparison with state-of-the-art 3-shell dMRI and with dedicated protocols for diffusion tensor imaging (DTI) and for the combined hindered and restricted model of diffusion (CHARMED). Population imaging requires processing techniques with preferably low computational cost, to process and analyze the acquired data within a reasonable time frame. Therefore, a pipeline for automated processing of CS-DSI acquisitions was implemented, including both in-house developed and existing state-of-the-art processing tools. The last contribution of this thesis is a novel method for automatic detection and imputation of signal dropout due to fast bulk motion during the diffusion encoding in dMRI. Subject motion is a common source of artifacts, especially in clinical or population studies with children, the elderly, or patients. The related artifacts degrade image quality and adversely affect data analysis. It is thus highly desirable to detect and then exclude or, where possible, impute defective measurements prior to dMRI analysis. Our proposed method applies dMRI signal modeling in the SHORE basis and determines outliers based on the weighted model residuals. Signal imputation then reconstructs corrupted, and therefore discarded, measurements from the sparse set of inliers. This approach allows for fast and robust correction of imaging artifacts in dMRI, which is essential to estimate accurate and precise model parameters that reflect the diffusive transport of water molecules and the underlying microstructural environment in brain tissue.
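The detection-and-imputation idea in the abstract above (fit a signal model, flag measurements with large model residuals, then re-estimate the discarded values from the inliers) can be sketched in a few lines. The sketch below is a toy illustration, not the thesis's method: it substitutes a simple polynomial basis for the SHORE basis and plain robust z-scores for the weighted residual scheme, with made-up signal values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "signal": a smooth underlying model plus mild noise.
x = np.linspace(-1.0, 1.0, 40)
clean = 1.0 + 0.5 * x - 0.8 * x**2
y = clean + rng.normal(scale=0.02, size=x.size)

# Simulate motion-induced signal dropout on a few measurements.
dropped = [5, 17, 30]
y[dropped] -= 0.5

# Design matrix for a quadratic model (stand-in for the SHORE basis).
A = np.vander(x, N=3, increasing=True)

def detect_and_impute(A, y, z_thresh=3.5):
    """Flag outliers via robust z-scores of model residuals, then
    impute them from a model refit on the inliers only."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    med = np.median(resid)
    mad = np.median(np.abs(resid - med))
    z = 0.6745 * (resid - med) / mad          # robust z-score
    inliers = np.abs(z) < z_thresh
    coef_in, *_ = np.linalg.lstsq(A[inliers], y[inliers], rcond=None)
    y_fixed = y.copy()
    y_fixed[~inliers] = (A @ coef_in)[~inliers]  # impute from inlier fit
    return y_fixed, ~inliers

y_fixed, flagged = detect_and_impute(A, y)
```

The key design point mirrored from the abstract is that imputation reuses the same basis as detection: once the corrupted samples are excluded, the inlier fit predicts plausible replacement values.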

    Improving the image quality in compressed sensing MRI by the exploitation of data properties


    Variable Splitting as a Key to Efficient Image Reconstruction

    The problem of reconstruction of digital images from their degraded measurements has always been a problem of central importance in numerous applications of imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena which are usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover its close approximation, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which would be both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting, as a tool for simplifying complex reconstruction problems through their replacement by a sequence of simpler and therefore easily solvable ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches which are currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. 
As specific applications of practical importance, we consider the problem of reconstructing diffusion MRI signals from sub-critically sampled, incomplete data, as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation as well as to derive a unifying approach to denoising of imaging data under a variety of different noise scenarios.
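The splitting idea above (replace one hard problem by a sequence of simpler, easily solvable ones) is easiest to see on the classic l1-regularized least-squares problem. Below is a minimal ADMM-style variable splitting sketch, offered as a generic illustration rather than the BTS or FCS formulations of the thesis; the auxiliary variable z duplicates x, so the quadratic subproblem and the l1 subproblem (plain soft-thresholding) each admit a closed-form solution. Problem sizes and data are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Variable splitting: min 0.5||Ax-b||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        # x-update: ridge-type quadratic subproblem (closed form).
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: l1 subproblem (soft-thresholding).
        z = soft_threshold(x + u, lam / rho)
        # Dual (scaled multiplier) update.
        u = u + x - z
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100); x_true[[7, 42, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = lasso_admm(A, b, lam=0.1)
```

Neither subproblem is hard on its own; the splitting is what makes the composite objective tractable, which is the general principle the abstract describes.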

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques have highly structured forms in the way they acquire data, they provide us with an opportunity to optimise the imaging techniques holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
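A concrete building block behind reconstruction from undersampled data, in deep MRI reconstruction cascades and classical CS methods alike, is a data-consistency step: whatever an intermediate reconstruction looks like, the sampled k-space locations are overwritten with the actual measurements. A minimal numpy sketch of that one operation, purely illustrative and not the thesis's architecture:

```python
import numpy as np

def data_consistency(x_rec, k_meas, mask):
    """Keep the current reconstruction everywhere, but force the
    sampled k-space locations back to the measured values."""
    k_rec = np.fft.fft2(x_rec)
    k_rec[mask] = k_meas[mask]
    return np.fft.ifft2(k_rec)

rng = np.random.default_rng(2)
img = rng.normal(size=(32, 32))        # ground-truth image (toy data)
k_full = np.fft.fft2(img)
mask = rng.random((32, 32)) < 0.3      # ~30% of k-space sampled
x0 = np.zeros((32, 32))                # crude initial reconstruction
x1 = data_consistency(x0, k_full, mask)
```

In a learned cascade this step is interleaved with network blocks, guaranteeing that however the network hallucinates missing frequencies, the output never contradicts the acquired data.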

    Compressive sensing for signal ensembles

    Compressive sensing (CS) is a new approach to simultaneous sensing and compression that enables a potentially large reduction in the sampling and computation costs for acquisition of signals having a sparse or compressible representation in some basis. The CS literature has focused almost exclusively on problems involving single signals in one or two dimensions. However, many important applications involve distributed networks or arrays of sensors. In other applications, the signal is inherently multidimensional and sensed progressively along a subset of its dimensions; examples include hyperspectral imaging and video acquisition. Initial work proposed joint sparsity models for signal ensembles that exploit both intra- and inter-signal correlation structures. Joint sparsity models enable a reduction in the total number of compressive measurements required by CS through the use of specially tailored recovery algorithms. This thesis reviews several different models for sparsity and compressibility of signal ensembles and multidimensional signals and proposes practical CS measurement schemes for these settings. For joint sparsity models, we evaluate the minimum number of measurements required under a recovery algorithm with combinatorial complexity. We also propose a framework for CS that uses a union-of-subspaces signal model. This framework leverages the structure present in certain sparse signals and can exploit both intra- and inter-signal correlations in signal ensembles. We formulate signal recovery algorithms that employ these new models to enable a reduction in the number of measurements required. Additionally, we propose the use of Kronecker product matrices as sparsity or compressibility bases for signal ensembles and multidimensional signals to jointly model all types of correlation present in the signal when each type of correlation can be expressed using sparsity. 
We compare the performance of standard global measurement ensembles, which act on all of the signal samples; partitioned measurements, which act on a partition of the signal, with a given measurement depending only on a piece of the signal; and Kronecker product measurements, which can be implemented in distributed measurement settings. The Kronecker product formulation in the sparsity and measurement settings enables the derivation of analytical bounds for transform coding compression of signal ensembles and multidimensional signals. We also provide new theoretical results for the performance of CS recovery when Kronecker product matrices are used, which in turn motivate new design criteria for distributed CS measurement schemes.
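Part of the computational appeal of Kronecker product measurement matrices comes from the identity (A ⊗ B) vec(X) = vec(B X Aᵀ): the global measurement can be applied one signal dimension at a time, without ever forming the (potentially enormous) Kronecker matrix. A small numerical check of the identity, with arbitrary example sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 3))   # measurement along one signal dimension
B = rng.normal(size=(5, 6))   # measurement along the other dimension
X = rng.normal(size=(6, 3))   # 2-D signal (e.g. one member of an ensemble)

vec = lambda M: M.ravel(order="F")   # column-major vectorisation

y_kron = np.kron(A, B) @ vec(X)      # explicit (memory-hungry) operator
y_fast = vec(B @ X @ A.T)            # same measurement, no A kron B formed
```

For a d-dimensional signal the same factorisation applies per mode, which is what makes Kronecker measurements practical in distributed settings where each sensor only applies its own factor.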

    Compressive sensing based image processing and energy-efficient hardware implementation with application to MRI and JPG 2000

    In the present age of technology, the buzzwords are low-power, energy-efficient and compact systems. This directly concerns the data processing and hardware techniques employed at the core of these devices. One of the most power-hungry and space-consuming schemes is image/video processing, due to its high quality requirements. In current design methodologies, a point has nearly been reached at which physical and physiological effects limit the ability to simply encode data faster. These limits have led to research into methods of reducing the amount of acquired data without degrading image quality or increasing energy consumption. Compressive sensing (CS) has emerged as an efficient signal compression and recovery technique, which can be used to efficiently reduce data acquisition and processing. It exploits the sparsity of a signal in a transform domain to perform sampling and stable recovery. This is an alternative paradigm to conventional data processing and is robust in nature. Unlike conventional methods, CS provides an information-capturing paradigm with both sampling and compression. It permits signals to be sampled below the Nyquist rate while still allowing optimal reconstruction of the signal. The required measurements are far fewer than those of conventional methods, and the process is non-adaptive, making the sampling process faster and universal. In this thesis, CS methods are applied to magnetic resonance imaging (MRI) and JPEG 2000, which are popular techniques in clinical imaging and image compression, respectively. Over the years, MRI has improved dramatically in both imaging quality and speed. This has further revolutionized the field of diagnostic medicine. However, imaging speed, which is essential to many MRI applications, still remains a major challenge. The specific challenge addressed in this work is the use of non-Fourier based, complex measurement-based data acquisition.
This method provides the possibility of reconstructing high-quality MRI data from minimal measurements, due to the high incoherence between the two chosen matrices. Similarly, JPEG 2000, though providing high compression, can be further improved upon by using compressive sampling; in addition, the image quality is also improved. Moreover, an optimized JPEG 2000 architecture reduces the overall processing and yields faster computation when combined with CS. Considering these requirements, this thesis is presented in two parts. In the first part: (1) complex Hadamard matrix (CHM) based 2D and 3D MRI data acquisition with recovery using a greedy algorithm is proposed. The CHM measurement matrix is shown to satisfy the necessary condition for CS, known as the restricted isometry property (RIP). The sparse recovery is done using compressive sampling matching pursuit (CoSaMP); (2) an optimized matrix and a modified CoSaMP are presented, which enhance the MRI performance compared with conventional sampling; (3) an energy-efficient, cost-efficient hardware design based on a field programmable gate array (FPGA) is proposed, to provide a platform for low-cost MRI processing hardware. At every stage, the design is shown to be superior to other commonly used MRI-CS methods and comparable with conventional MRI sampling. In the second part, CS techniques are applied to image processing and combined with the JPEG 2000 coder. While CS can reduce the encoding time, the effect on the overall JPEG 2000 encoder is not very significant due to some complex JPEG 2000 algorithms. One bottleneck is JPEG 2000 arithmetic encoding (AE), which is based entirely on bit-level operations. In this work, this problem is tackled by proposing a two-symbol AE with an efficient FPGA-based hardware design. Furthermore, this design is energy-efficient, fast and of lower complexity compared to conventional JPEG 2000 encoding.
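CoSaMP, the greedy recovery algorithm named above, alternates three steps: identify candidate atoms by correlating the residual with the columns of the measurement matrix, solve a least-squares problem on the merged support, and prune back to k terms. The sketch below is a generic textbook version with illustrative problem sizes and a plain Gaussian matrix, not the CHM-based variant of the thesis:

```python
import numpy as np

def cosamp(A, y, k, iters=30, tol=1e-10):
    """Compressive sampling matching pursuit for y = A x, x k-sparse."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x
        if np.linalg.norm(r) < tol:
            break
        # Identify: 2k columns most correlated with the residual,
        # merged with the current support estimate.
        omega = np.argsort(np.abs(A.T @ r))[-2 * k:]
        T = np.union1d(omega, np.flatnonzero(x)).astype(int)
        # Estimate by least squares on the merged support, prune to k.
        sol, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        b = np.zeros(n)
        b[T] = sol
        keep = np.argsort(np.abs(b))[-k:]
        x = np.zeros(n)
        x[keep] = b[keep]
    return x

rng = np.random.default_rng(4)
m, n, k = 50, 100, 3
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(n)
x_true[[12, 33, 71]] = [1.5, -2.0, 1.0]
y = A @ x_true                            # noiseless measurements
x_hat = cosamp(A, y, k)
```

When the matrix satisfies the RIP, as the abstract notes for the CHM construction, this iteration comes with uniform recovery guarantees; in the noiseless toy setting above it recovers the sparse vector exactly.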

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes the progress in hardware implementations of DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low-complexity implementation of CQF wavelets and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
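For readers new to the topic, the core of a single DWT level is just a pair of analysis filters followed by downsampling, with perfect reconstruction on synthesis. A minimal orthonormal Haar example (the simplest DWT, shown here as a generic illustration, not tied to any particular chapter of the book):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: scaled sums give the approximation
    band, scaled differences the detail band."""
    s2 = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s2
    detail = (x[0::2] - x[1::2]) / s2
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfect reconstruction from the two bands."""
    s2 = np.sqrt(2.0)
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / s2
    x[1::2] = (approx - detail) / s2
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)         # approximation and detail coefficients
x_rec = haar_idwt(a, d)    # reconstructs x exactly
```

Recursing the transform on the approximation band yields the octave-scale multiresolution decomposition the book refers to; the lifting and CQF constructions it covers generalize this filter pair.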

    Advanced sparse optimization algorithms for interferometric imaging inverse problems in astronomy

    In the quest to produce images of the sky at unprecedented resolution with high sensitivity, a new generation of astronomical interferometers has been designed. To meet the sensing capabilities of these instruments, techniques aiming to recover the sought images from incompletely sampled Fourier domain measurements need to be reinvented. This goes hand-in-hand with the necessity to calibrate the unknown effects that modulate the measurements, which adversely affect the image quality, limiting its dynamic range. The contribution of this thesis consists in the development of advanced optimization techniques tailored to address these issues, ranging from radio interferometry (RI) to optical interferometry (OI). In the context of RI, we propose a novel convex optimization approach for full polarization imaging relying on sparsity-promoting regularizations. Unlike standard RI imaging algorithms, our method jointly solves for the Stokes images by enforcing the polarization constraint, which imposes a physical dependency between the images. These priors are shown to enhance the imaging quality in various numerical studies. The proposed imaging approach also benefits from its scalability to handle the huge amounts of data expected from the new instruments. When it comes to dealing with the critical and challenging issue of calibrating direction-dependent effects, we further propose a non-convex optimization technique that unifies the calibration and imaging steps in a global framework, in which we adapt the earlier developed imaging method for the imaging step. In contrast to existing RI calibration modalities, our method benefits from well-established convergence guarantees, even in the non-convex setting considered in this work, and its efficiency is demonstrated through several numerical experiments.
Last but not least, inspired by the performance of these methodologies and drawing ideas from them, we aim to solve the image recovery problem in OI, which poses its own set of challenges, primarily due to the partial loss of phase information. To this end, we propose a sparsity-regularized non-convex optimization algorithm that is equipped with convergence guarantees and is adaptable to both monochromatic and hyperspectral OI imaging. We validate it with simulation results.
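The simplest convex relative of the imaging problems described above is sparse recovery from incomplete Fourier measurements, solvable by a forward-backward (ISTA) iteration. The sketch below is a 1-D toy with made-up data (a few point sources observed at a random subset of frequencies), not the polarization-constrained or non-convex algorithms of the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 64

# Sparse "sky" with a few point sources.
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [2.0, 1.0, 1.5]

# Incomplete Fourier sampling: keep a random subset of frequencies.
idx = rng.choice(n, size=m, replace=False)
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)   # unitary DFT matrix
A = F[idx]
y = A @ x_true

def ista(A, y, lam=0.01, iters=500):
    """Forward-backward iteration for min 0.5||Ax-y||^2 + lam*||x||_1,
    with x kept real (image domain) and complex measurements."""
    step = 1.0 / np.linalg.norm(A.conj().T @ A, 2)   # 1/Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = (A.conj().T @ (A @ x - y)).real          # gradient step
        v = x - step * g
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox
    return x

x_hat = ista(A, y)
```

The thesis's algorithms replace this basic proximal-gradient scheme with scalable, constraint-aware and (for calibration and OI) non-convex variants, but the measure-in-Fourier, regularize-in-image structure is the same.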