
    CT Image Reconstruction by Spatial-Radon Domain Data-Driven Tight Frame Regularization

    This paper proposes a spatial-Radon domain CT image reconstruction model based on data-driven tight frames (SRD-DDTF). The proposed SRD-DDTF model combines the idea of the joint image and Radon domain inpainting model of \cite{Dong2013X} with that of the data-driven tight frames for image denoising of \cite{cai2014data}. It differs from existing models in that both the CT image and its corresponding high-quality projection image are reconstructed simultaneously, using sparsity priors given by tight frames that are adaptively learned from the data to provide optimal sparse approximations. An alternating minimization algorithm is designed to solve the proposed model, which is nonsmooth and nonconvex, and a convergence analysis of the algorithm is provided. Numerical experiments show that the SRD-DDTF model is superior to the model of \cite{Dong2013X}, especially in recovering subtle structures in the images.
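    A schematic way to write such a joint spatial-Radon domain objective is sketched below; the notation and weighting are illustrative only, not the exact formulation of the paper.

```latex
% Schematic joint objective: u is the CT image, f the full (high-quality) projection image,
% Y the measured projection data on the sampled set \Lambda, P the discrete Radon/projection
% operator, and W_1, W_2 data-driven tight frames learned jointly with (u, f).
\[
\min_{u,\,f,\,W_1,\,W_2}\;
\frac{1}{2}\,\| P u - f \|_2^2
+ \frac{\lambda}{2}\,\| \mathcal{R}_{\Lambda}(f - Y) \|_2^2
+ \| \mu_1 \odot W_1 u \|_1
+ \| \mu_2 \odot W_2 f \|_1
\quad \text{subject to } W_i^{\top} W_i = I .
\]
```

    An alternating scheme then cycles through the u-, f-, and frame updates, with the frame and coefficient subproblems typically reducing to SVD-based and thresholding steps, respectively.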

    A Review on Deep Learning in Medical Image Reconstruction

    Medical imaging is crucial in modern clinics to guide the diagnosis and treatment of diseases. Medical image reconstruction is one of the most fundamental and important components of medical imaging, whose major objective is to acquire high-quality medical images for clinical use at minimal cost and risk to the patients. Mathematical models in medical image reconstruction or, more generally, image restoration in computer vision, have been playing a prominent role. Earlier mathematical models were mostly designed from human knowledge or hypotheses about the image to be reconstructed; we shall call these handcrafted models. Later, handcrafted-plus-data-driven modeling started to emerge, which still mostly relies on human designs while part of the model is learned from the observed data. More recently, as more data and computational resources have become available, deep-learning-based models (or deep models) have pushed data-driven modeling to the extreme, where the models are mostly based on learning with minimal human design. Both handcrafted and data-driven modeling have their own advantages and disadvantages. One of the major research trends in medical imaging is to combine handcrafted modeling with deep modeling so that we can enjoy the benefits of both approaches. The major part of this article provides a conceptual review of some recent works on deep modeling from the unrolling-dynamics viewpoint. This viewpoint stimulates new designs of neural network architectures with inspiration from optimization algorithms and numerical differential equations. Given the popularity of deep modeling, there are still vast remaining challenges in the field, as well as opportunities, which we shall discuss at the end of this article. Comment: 31 pages, 6 figures. Survey paper.
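    To make the unrolling viewpoint concrete, the sketch below unrolls a fixed number of ISTA (proximal gradient) iterations for a generic sparse linear inverse problem. It is a minimal illustration, not any specific model from the review; in a learned unrolled network the step sizes, thresholds, or entire proximal maps would become trainable components.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal map of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_ista(y, A, n_layers=200, tau=0.01):
    """Recover x from y ~ A x by unrolling ISTA for a fixed number of 'layers'.

    In an unrolled deep model, each layer's step size, threshold, or the whole
    thresholding step would be replaced by a learned module; here they are fixed.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * tau)
    return x

# Tiny synthetic example: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[rng.choice(128, size=5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = unrolled_ista(y, A)
```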

    Applied microlocal analysis of deep neural networks for inverse problems

    Deep neural networks have recently shown state-of-the-art performance in different imaging tasks. As an example, EfficientNet is today the best image classifier on the ImageNet challenge. They are also very powerful for image reconstruction; for example, deep learning currently yields the best methods for CT reconstruction. Most imaging problems, such as CT reconstruction, are ill-posed inverse problems, which hence require regularization techniques typically based on a-priori information. Moreover, owing to the human visual system, singularities such as edge-like features are the governing structures of images. This leads to the question of how to incorporate such information into a solver of an inverse problem in imaging, and how deep neural networks act on singularities. The main research theme of this thesis is to introduce theoretically founded approaches that use deep neural networks in combination with model-based methods to solve inverse problems from imaging science. We do this by heavily exploiting the singularity structure of images as a-priori information. We then develop a comprehensive analysis of how neural networks act on singularities, using predominantly methods from microlocal analysis. To analyze the interaction of deep neural networks with singularities, we introduce a novel technique to compute the propagation of wavefront sets through convolutional residual neural networks (conv-ResNets). This is achieved in a two-fold manner: we first study the continuous case, where the neural network is defined on an infinite-dimensional continuous space. This problem is tackled by using the structure of these networks as a sequential application of continuous convolutional operators and ReLU non-linearities, and by applying microlocal analysis techniques to track the propagation of the wavefront set through the layers. This leads to the so-called \emph{microcanonical relation} that describes the propagation of the wavefront set under the action of such a neural network. Secondly, to study real-world discrete problems, we digitize the necessary microlocal analysis methods via the digital shearlet transform. The key idea is that the shearlet transform provides optimally sparse representations of Fourier integral operators; hence such a discretization decays rapidly, allowing a finite approximation. Fourier integral operators play an important role in microlocal analysis, since it is well known that they preserve the singularities of functions and, in addition, have a closed-form microcanonical relation. Based on the newly developed theoretical analysis, we also introduce a method that uses digital shearlet coefficients to compute the digital wavefront set of images by a convolutional neural network. Our approach is then used for a similar analysis of the microlocal behavior of the learned primal-dual architecture, which is formed by a sequence of conv-ResNet blocks. This architecture has shown state-of-the-art performance in inverse problem regularization, in particular computed tomography reconstruction related to the Radon transform. Since the Radon operator is a Fourier integral operator, our microlocal techniques can be applied; therefore, we can study with high precision the propagation of singularities through this architecture. To empirically analyze our theoretical approach, we focus on the reconstruction of X-ray tomographic data.
We approach this problem by using a task-adapted reconstruction framework, in which we combine the task of reconstruction with the task of computing the wavefront set of the original image as a-priori information. Our numerical results show superior performance with respect to current state-of-the-art tomographic reconstruction methods; hence we anticipate our work to also be a significant contribution to the biomedical imaging community.
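    As background for the microlocal statements above, the schematic relation below (a standard fact about the Radon transform, stated here without the precise covector components and not taken from the thesis itself) describes where a singularity of an image shows up in its sinogram; it is this kind of mapping that the thesis tracks through conv-ResNet layers and the learned primal-dual architecture.

```latex
% For the 2D Radon transform Rf(\theta, s) = \int_{x \cdot \theta = s} f \, d\ell:
% a wavefront-set element of f at base point x_0 with direction \xi_0 is visible in Rf
% at the line through x_0 whose unit normal is \xi_0 / |\xi_0|.
\[
(x_0, \xi_0) \in \mathrm{WF}(f)
\;\longmapsto\;
(\theta_0, s_0) \in \operatorname{sing\,supp}(Rf),
\qquad
\theta_0 = \frac{\xi_0}{|\xi_0|}, \quad s_0 = x_0 \cdot \theta_0 .
\]
```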

    Learned Interferometric Imaging for the SPIDER Instrument

    Full text link
    The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is an optical interferometric imaging device that aims to offer an alternative to today's large space telescope designs, with reduced size, weight and power consumption. This is achieved through interferometric imaging. State-of-the-art methods for reconstructing images from interferometric measurements adopt proximal optimization techniques, which are computationally expensive and require handcrafted priors. In this work we present two data-driven approaches for reconstructing images from measurements made by the SPIDER instrument. These approaches use deep learning to learn prior information from training data, increasing the reconstruction quality and reducing the computation time required to recover images by orders of magnitude. Reconstruction time is reduced to ${\sim}10$ milliseconds, opening up the possibility of real-time imaging with SPIDER for the first time. Furthermore, we show that these methods can also be applied in domains where training data are scarce, such as astronomical imaging, by leveraging transfer learning from domains where plenty of training data are available. Comment: 21 pages, 14 figures.
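    The sketch below sets up a toy version of the problem as a minimal illustration: the measurements are modeled, very schematically, as noisy samples of the image's Fourier transform on a sparse frequency set (the actual SPIDER forward model is more involved), and the adjoint produces the "dirty" image that either a proximal solver or a learned network would then refine. All names and parameters here are illustrative.

```python
import numpy as np

def forward(x, mask):
    """Schematic interferometric measurement: sample the image's 2D Fourier transform
    on the spatial frequencies selected by `mask` (the real SPIDER model differs)."""
    return np.fft.fft2(x)[mask]

def adjoint(vis, mask, shape):
    """Adjoint of `forward`: place the visibilities back on the Fourier grid and invert,
    producing the 'dirty' image that a learned reconstruction network would refine."""
    grid = np.zeros(shape, dtype=complex)
    grid[mask] = vis
    return np.real(np.fft.ifft2(grid))

# Tiny synthetic example with roughly 10% random frequency coverage.
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64))
x_true[28:36, 28:36] = 1.0                                   # toy compact source
mask = rng.random((64, 64)) < 0.1                            # sampled spatial frequencies
noise = 0.01 * (rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum()))
vis = forward(x_true, mask) + noise                          # noisy visibilities
dirty = adjoint(vis, mask, x_true.shape)                     # input to a learned reconstruction
```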