Deep MR Brain Image Super-Resolution Using Spatio-Structural Priors
High resolution Magnetic Resonance (MR) images are desired for accurate
diagnostics. In practice, image resolution is restricted by factors like
hardware and processing constraints. Recently, deep learning methods have been
shown to produce compelling state-of-the-art results for image
enhancement/super-resolution. Paying particular attention to the desired high-resolution MR image structure, we propose a new regularized network that exploits image priors, namely a low-rank structure and a sharpness prior, to enhance deep MR image super-resolution (SR). Our contributions are twofold: we incorporate these priors in an analytically tractable fashion, and we develop a novel prior-guided network architecture that accomplishes the super-resolution task. This is particularly challenging for the low-rank prior, since the rank is not a differentiable function of the image matrix (and
hence the network parameters), an issue we address by pursuing differentiable
approximations of the rank. Sharpness is emphasized via the variance of the Laplacian, which we show can be implemented by a fixed feedback layer at the
output of the network. As a key extension, we modify the fixed feedback
(Laplacian) layer by learning a new set of training data driven filters that
are optimized for enhanced sharpness. Experiments performed on publicly
available MR brain image databases and comparisons against existing
state-of-the-art methods show that the proposed prior guided network offers
significant practical gains in terms of improved SNR/image quality measures.
Because our priors are on output images, the proposed method is versatile and
can be combined with a wide variety of existing network architectures to
further enhance their performance.
Comment: Accepted to IEEE Transactions on Image Processing
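As a rough illustration of the two priors (not the authors' implementation), the low-rank prior can be relaxed to the nuclear norm, a standard differentiable surrogate for rank, and the sharpness prior is the variance of a Laplacian-filtered image; the example images below are made up for demonstration:

```python
import numpy as np

def nuclear_norm(img):
    # Sum of singular values: a differentiable surrogate for rank,
    # since rank itself is not differentiable in the image matrix.
    return np.linalg.svd(img, compute_uv=False).sum()

def laplacian_variance(img):
    # Variance of the 3x3 Laplacian response, a common sharpness measure;
    # the filter is applied on the interior via explicit shifts.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

rng = np.random.default_rng(0)
smooth = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                  np.sin(np.linspace(0, np.pi, 64)))  # rank-1 and blurry
noisy = rng.standard_normal((64, 64))                 # full-rank, high-frequency

assert nuclear_norm(smooth) < nuclear_norm(noisy)
assert laplacian_variance(smooth) < laplacian_variance(noisy)
```

In a training loss, the nuclear norm and the negative Laplacian variance would enter as regularizers on the network's output image.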
Decomposition Ascribed Synergistic Learning for Unified Image Restoration
Learning to restore multiple image degradations within a single model is
quite beneficial for real-world applications. Nevertheless, existing works typically treat each degradation independently, leaving the relationships among degradations underexploited for synergistic learning. To
this end, we revisit the diverse degradations through the lens of singular
value decomposition, with the observation that the decomposed singular vectors
and singular values naturally carry different types of degradation information, dividing the various restoration tasks into two groups, i.e., singular-vector dominated and singular-value dominated. This analysis offers a more unified perspective on the diverse degradations than
previous task-level independent learning. Dedicated optimization of the degraded singular vectors and singular values inherently exploits the latent relationships among diverse restoration tasks, motivating the proposed Decomposition Ascribed Synergistic Learning (DASL). Specifically, DASL comprises two effective operators, the Singular VEctor Operator (SVEO) and the Singular VAlue Operator (SVAO), which favor the decomposed optimization and can be lightly integrated into existing convolutional image restoration backbones. Moreover, a congruous decomposition loss is devised as an auxiliary objective. Extensive
experiments on five blended image restoration tasks, including image deraining, dehazing, denoising, deblurring, and low-light enhancement, demonstrate the effectiveness of our method.
Comment: 13 pages
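The decomposition view can be illustrated with a toy numpy example (a hypothetical demonstration, not the paper's SVEO/SVAO operators): a global dimming only rescales the singular values, while a row permutation only alters the singular vectors, so the two factors indeed carry different degradation information:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.standard_normal((32, 32))
s_clean = np.linalg.svd(clean, compute_uv=False)

# Singular value dominated: uniform dimming (low-light style) keeps the
# singular vectors and only rescales the singular values, so restoring
# the clean values alone recovers the image.
dim = 0.4 * clean
U, s, Vt = np.linalg.svd(dim)
restored = U @ np.diag(s_clean) @ Vt  # clean values + degraded vectors
assert np.allclose(restored, clean)

# Singular vector dominated: flipping the row order is an orthogonal
# transform, so the singular values are untouched while the vectors change.
flipped = clean[::-1]
s_flipped = np.linalg.svd(flipped, compute_uv=False)
assert np.allclose(s_flipped, s_clean)
```

Real degradations mix both effects, which is why the paper optimizes the two factors jointly rather than in isolation.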
Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one deep learning approach. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.
Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods for the reconstruction of the coded light fields are developed and evaluated in detail.
First, a complete reconstruction of the spectral light field is developed based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Owing to the reduced number of parameters to be learned, this approach enables larger effective atom sizes.
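A minimal 2D analogue of this compressed-sensing reconstruction can be sketched as follows (a sketch under simplifying assumptions, not the thesis pipeline: a fixed orthonormal DCT basis stands in for the learned 5D tensor dictionary, and a random binary mask stands in for the spectral coding). ISTA alternates a gradient step on the masked measurements with soft thresholding in the DCT domain:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, so C @ C.T == I.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

n = 16
C = dct_matrix(n)
rng = np.random.default_rng(2)

coeffs = np.zeros((n, n))
coeffs[:3, :3] = rng.standard_normal((3, 3))   # signal sparse in the DCT domain
x_true = C.T @ coeffs @ C                      # inverse 2D DCT
mask = rng.random((n, n)) < 0.3                # random coding mask (~30% kept)
y = mask * x_true                              # coded measurement

x = np.zeros_like(y)
lam = 0.01                                     # sparsity weight
for _ in range(300):
    x = x + mask * (y - mask * x)              # gradient step (step size 1)
    c = C @ x @ C.T                            # forward 2D DCT
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    x = C.T @ c @ C

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
assert rel_err < 0.5
```

The tensor-factorized dictionary in the thesis replaces the fixed basis C here, which is what reduces the number of learned parameters and permits larger effective atoms.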
Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods.
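The exact weighting rule is not reproduced here, but the idea of gating auxiliary losses by gradient similarity can be sketched as follows (a hypothetical variant for illustration): each auxiliary gradient is scaled by its clipped cosine similarity to the main-task gradient, so conflicting auxiliary tasks are suppressed:

```python
import numpy as np

def weighted_aux_gradient(g_main, g_aux):
    # Scale the auxiliary gradient by its normalized similarity to the
    # main-task gradient (a hypothetical variant, not the thesis's exact
    # rule); negatively aligned gradients are zeroed out.
    sim = g_main @ g_aux / (np.linalg.norm(g_main) * np.linalg.norm(g_aux) + 1e-12)
    return max(sim, 0.0) * g_aux

g_main = np.array([1.0, 0.0])                  # main-task gradient
g_helpful = np.array([0.8, 0.1])               # roughly aligned: mostly kept
g_harmful = np.array([-1.0, 0.0])              # opposed: suppressed entirely

update = g_main + weighted_aux_gradient(g_main, g_helpful) \
                + weighted_aux_gradient(g_main, g_harmful)

assert np.allclose(weighted_aux_gradient(g_main, g_harmful), 0.0)
assert update[0] > 1.0
```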
To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a ray tracer. This dataset, containing about 100k spectral light fields with corresponding disparity, is split into a training, validation, and test set. To further assess reconstruction quality, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real-world spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail.
Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated: random, regular, and end-to-end optimized coding masks, the latter generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, the angular resolution, or depth.
Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, particularly when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
Robust deep learning for computational imaging through random optics
Light scattering is a pervasive phenomenon that poses outstanding challenges in both coherent and incoherent imaging systems. Coherent light scattered from a complex medium exhibits a seemingly random speckle pattern that scrambles the useful information of the object. To date, there is no simple solution for inverting such complex scattering. Advancing the solution of inverse scattering problems could provide important insights into applications across many areas, such as deep tissue imaging, non-line-of-sight imaging, and imaging in degraded environments. On the other hand, in incoherent systems, the randomness of the scattering medium can be exploited to build lightweight, compact, and low-cost lensless imaging systems that are applicable in miniaturized biomedical and scientific imaging. The imaging capabilities of such computational imaging systems, however, are largely limited by ill-posed or ill-conditioned inverse problems, which typically cause imaging artifacts and degrade the image resolution. Therefore, mitigating this issue by developing modern algorithms is essential for pushing the limits of such lensless computational imaging systems.
In this thesis, I focus on the problem of imaging through random optics and present two novel deep-learning (DL) based methodologies to overcome the challenges in coherent and incoherent systems: 1) the lack of a simple solution to the inverse scattering problem and of robustness to scattering variations; and 2) the ill-posed inverse problem in diffuser-based lensless imaging.
In the first part, I demonstrate the novel use of a deep neural network (DNN) to solve the inverse scattering problem in a coherent imaging system. I propose a 'one-to-all' deep learning technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. I show for the first time, to the best of my knowledge, that the trained DNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers with the same macroscopic parameters. I then push the limit of robustness against a broader class of perturbations, including scatterer change, displacements, and system defocus up to 10X the depth of field.
In the second part, I consider the utility of random light scattering to build a diffuser-based computational lensless imaging system and present a generally applicable novel DL framework to achieve fast and noise-robust color image reconstruction. I develop a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Experimentally, I demonstrate fundus image reconstruction over a large field of view (FOV) and robustness to refractive error using a constant point spread function. Next, I present a physics-simulator-trained, adaptive DL framework to achieve fast and noise-robust color imaging. The physics simulator incorporates optical system modeling, the simulation of mixed Poisson-Gaussian noise, and color filter array induced artifacts in color sensors. The learning framework includes an adaptive multi-channel L2-regularized inversion module and a channel-attention enhancement network module. Both simulations and experiments show consistently better reconstruction accuracy and robustness to various noise levels under different lighting conditions compared with traditional L2-regularized reconstructions.
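The L2-regularized inversion at the core of such a module can be illustrated with a minimal single-channel sketch (an illustrative toy, not the thesis code): Tikhonov-regularized deconvolution of a diffuser measurement in the Fourier domain, with a made-up random-dot PSF:

```python
import numpy as np

def l2_deconvolve(meas, psf, reg=1e-2):
    # Tikhonov (L2-regularized) inversion in the Fourier domain:
    # X_hat = conj(H) / (|H|^2 + reg) * Y, applied per frequency.
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(G * np.fft.fft2(meas)))

rng = np.random.default_rng(3)
obj = np.zeros((64, 64))
obj[20:30, 25:35] = 1.0                        # simple test object

psf = np.zeros((64, 64))                       # random-dot "diffuser" PSF
psf[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0
psf /= psf.sum()

# Forward model: circular convolution of the object with the PSF.
meas = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
rec = l2_deconvolve(meas, psf)

# The regularized inverse recovers the object far better than the raw
# measurement resembles it.
assert np.linalg.norm(rec - obj) < np.linalg.norm(meas - obj)
```

The thesis's module is adaptive and multi-channel; here the regularizer `reg` is a single fixed scalar for illustration.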
Overall, this thesis investigated two major classes of problems in imaging through random optics. In the first part, my work explored a novel DL-based approach to solving the inverse scattering problem and paved the way toward a scalable and robust deep learning approach to imaging through scattering media. In the second part, my work developed a broadly applicable adaptive learning-based framework for ill-conditioned image reconstruction and a physics-based simulation model for computational color imaging.