Coded aperture imaging
This thesis studies the coded aperture camera, a conventional camera whose aperture is modified with a mask, which enables the recovery of both a depth map and an all-in-focus image from a single 2D input image.
Key contributions of this work are the modeling of the statistics of natural
images and the design of efficient blur identification methods in a Bayesian
framework. Two cases are distinguished: 1) when the aperture can be decomposed into a small set of identical holes, and 2) when the aperture has a more general configuration. In the first case, the formulation of the problem incorporates priors on the statistical variation of the texture to avoid ambiguities in the solution. This makes it possible to bypass the recovery of the sharp image and concentrate only on estimating depth. In the second case, the
depth reconstruction is addressed via convolutions with a bank of linear
filters. Key advantages over competing methods are the higher numerical
stability and the ability to deal with large blur. The all-in-focus image can
then be recovered by using a deconvolution step with the estimated depth
map. Furthermore, for the purpose of depth estimation alone, the proposed
algorithm does not require information about the mask in use. The
comparison with existing algorithms in the literature shows that the proposed
methods achieve state-of-the-art performance. This solution is also
extended for the first time to images affected by both defocus and motion
blur and, finally, to video sequences with moving and deformable objects.
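The deconvolution step mentioned above can be illustrated with a standard Wiener filter, the classical frequency-domain approach to non-blind deblurring. This is a generic sketch under toy assumptions (a 3x3 box PSF, circular convolution, an assumed SNR), not the thesis's actual algorithm:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    # Wiener filter in the Fourier domain: conj(H) / (|H|^2 + 1/SNR).
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * B))

# Toy example: blur a random texture with a 3x3 box PSF (circular
# convolution), then restore it assuming a high SNR.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, snr=1e8)
```

In a depth-dependent setting, a filter of this kind would be applied per region with the locally estimated PSF rather than with a single global one.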
Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image. This fact has
led to the expansion of restoration techniques in different paths according to
each sensor type. This review paper brings together the advances of image
restoration techniques with particular focuses on synthetic aperture radar and
hyperspectral images as the most active sub-fields of image restoration in the
remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for readers at different levels (students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform encouraging interested students and researchers in the field to further explore restoration techniques and accelerate progress in the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox. (Comment: this paper is under review in GRS.)
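As a minimal illustration of the degradation/restoration setting described above (not one of the toolbox's methods), the sketch below simulates SAR-style multiplicative speckle and restores it with classical multilook averaging; the image size, backscatter level, and number of looks are toy assumptions:

```python
import numpy as np

# Toy degradation model: a SAR-like intensity image corrupted by
# multiplicative exponential speckle (fully developed, single look).
rng = np.random.default_rng(1)
truth = np.full((64, 64), 5.0)               # true backscatter intensity
looks = [truth * rng.exponential(1.0, truth.shape) for _ in range(16)]
speckled = looks[0]                          # single-look observation
multilook = np.mean(looks, axis=0)           # classical 16-look average

def rmse(a, b):
    # Root-mean-square error between two images.
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Averaging L independent looks reduces speckle variance roughly by 1/L.
error_single = rmse(speckled, truth)
error_multi = rmse(multilook, truth)
```

Modern restoration methods go far beyond this baseline, but the example makes concrete why each sensor's noise model (multiplicative here, additive elsewhere) drives its own family of techniques.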
Quantitative Phase Imaging with a Metalens
Quantitative phase imaging (QPI) recovers the exact wavefront of light from
the intensity measured by a camera. Topographical maps of translucent
microscopic bodies can be extracted from these quantified phase shifts. We
demonstrate quantitative phase imaging at the tip of an optical fiber endoscope
with a chromatic silicon nitride metalens. Our method leverages spectral
multiplexing to recover phase from multiple defocus planes in a single capture.
The half-millimeter-wide metalens shows phase imaging capability with a 280 field of view and 0.1λ sensitivity in experiments with an endoscopic fiber bundle. Since the spectral functionality is encoded directly in the imaging lens, no additional filters are needed. Key limitations in the scaling of a phase imaging system, such as multiple acquisitions, interferometric alignment, or mechanical scanning, are completely mitigated in the proposed scheme.
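Phase recovery from multiple defocus planes is commonly grounded in the transport-of-intensity equation (TIE). The sketch below is a generic FFT-based TIE solver under a uniform-intensity assumption, not the paper's metalens pipeline; the wavelength, defocus distance, and grid size are illustrative:

```python
import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, i0=1.0):
    # Transport-of-intensity equation, uniform in-focus intensity i0:
    #   (2*pi/wavelength) * dI/dz = -i0 * laplacian(phi)
    # Invert the Laplacian in the Fourier domain (DC phase is unrecoverable).
    didz = (i_plus - i_minus) / (2.0 * dz)
    n = i_minus.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero at DC
    lap_inv = np.fft.fft2(didz) / (-k2)
    lap_inv[0, 0] = 0.0
    return -(2.0 * np.pi / (wavelength * i0)) * np.real(np.fft.ifft2(lap_inv))

# Self-consistency demo: synthesise defocused intensities from a known
# phase using the same discrete operators, then recover that phase.
n = 64
u = np.arange(n)
x, y = np.meshgrid(u, u, indexing="ij")
phi_true = (0.3 * np.cos(2 * np.pi * 3 * x / n)
            + 0.2 * np.sin(2 * np.pi * 2 * y / n))
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
lap = np.real(np.fft.ifft2(-(kx**2 + ky**2) * np.fft.fft2(phi_true)))
wavelength, i0, dz = 0.5, 1.0, 0.1
didz = -(i0 * wavelength / (2 * np.pi)) * lap
recovered = tie_phase(i0 - dz * didz, i0 + dz * didz, dz, wavelength, i0)
```

The metalens approach replaces the two mechanical defocus captures with spectrally multiplexed planes, but the underlying inversion problem is of this form.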
Modeling and applications of the focus cue in conventional digital cameras
The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts, such as the near and far limits of the depth-of-field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
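The depth-estimation application rests on the fact that sharpness measures peak at the in-focus setting. The sketch below is a generic depth-from-focus baseline (variance of the Laplacian over a synthetic focal stack), not this thesis's focus-profile model; the blur radii and random texture are toy assumptions:

```python
import numpy as np

def focus_measure(img):
    # Variance of the discrete Laplacian: sharper images have stronger
    # high-frequency content, hence a larger response.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(np.var(lap))

def box_blur(img, r):
    # Separable box blur of radius r (r = 0 leaves the image unchanged).
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-r, r + 1):
            acc += np.roll(out, s, axis)
        out = acc / (2 * r + 1)
    return out

# Synthetic focal stack: the same texture under decreasing defocus blur.
rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
stack = [box_blur(sharp, r) for r in (2, 1, 0)]   # blurriest -> sharpest
scores = [focus_measure(f) for f in stack]
best = int(np.argmax(scores))                     # index of sharpest slice
```

Evaluating such a measure per pixel rather than globally, and mapping the best focus setting to distance via a lens model, turns this into a dense depth estimate.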
Variational image fusion
The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach these fusion tasks within a variational framework. First, we present iterative schemes that are well suited for such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and well parallelisable. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: First, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes. Second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
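The complete rank transform mentioned above describes each pixel's neighbourhood by the ranks of all intensities in the window, which is what makes it invariant to monotonically increasing illumination changes. The sketch below is a simple reference implementation under assumed parameters (a 3x3 window, a gamma curve as the illumination change), not the thesis's optimised optic-flow pipeline:

```python
import numpy as np

def complete_rank_transform(img, radius=1):
    # Describe each pixel's neighbourhood by the rank of every intensity
    # in the window (rank = number of strictly smaller neighbours).
    h, w = img.shape
    size = 2 * radius + 1
    desc = np.zeros((h, w, size * size), dtype=np.int64)
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size].ravel()
            desc[i, j] = (win[:, None] > win[None, :]).sum(axis=1)
    return desc

# Ranks are unchanged by any monotonically increasing remapping of the
# intensities, e.g. a gamma curve, which models an illumination change.
rng = np.random.default_rng(3)
img = rng.random((8, 8))
d_orig = complete_rank_transform(img)
d_gamma = complete_rank_transform(img ** 0.5)
```

Matching these rank descriptors instead of raw brightness values is what lets the optic flow stay robust when exposure varies between frames.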
Foundations, Inference, and Deconvolution in Image Restoration
Image restoration is a critical preprocessing step in computer vision,
producing images with reduced noise, blur, and pixel defects.
This enables precise higher-level reasoning about the scene content in
later stages of the vision pipeline (e.g., object segmentation,
detection, recognition, and tracking).
Restoration techniques have found extensive usage in a broad range of
applications from industry, medicine, astronomy, biology, and
photography.
The recovery of high-grade results requires models of the image
degradation process, giving rise to a class of often heavily
underconstrained, inverse problems.
A further challenge specific to the problem of blur removal is noise
amplification, which may cause strong distortion by ringing artifacts.
This dissertation presents new insights and problem solving procedures
for three areas of image restoration, namely (1) model
foundations, (2) Bayesian inference for high-order Markov
random fields (MRFs), and (3) blind image deblurring
(deconvolution).
As basic research on model foundations, we contribute to reconciling
the perceived differences between probabilistic MRFs on the one hand,
and deterministic variational models on the other.
To do so, we restrict the variational functional to locally supported finite
elements (FE) and integrate over the domain.
This yields a sum of terms depending locally on FE basis coefficients,
and by identifying the latter with pixels, the terms resolve to MRF
potential functions.
In contrast with previous literature, we place special emphasis on robust
regularizers used commonly in contemporary computer vision.
Moreover, we draw samples from the derived models to further
demonstrate the probabilistic connection.
Another focal issue is a class of high-order Field of Experts MRFs
which are learned generatively from natural image data and yield
best quantitative results under Bayesian estimation.
This involves minimizing an integral expression, which has no closed
form solution in general.
However, the MRF class under study has Gaussian mixture potentials,
permitting expansion by indicator variables as a technical measure.
As an approximate inference method, we study Gibbs sampling in the
context of non-blind deblurring and obtain excellent results, yet
at the cost of high computing effort.
In response, we turn to the mean field algorithm and show
that it scales quadratically in the clique size for a standard
restoration setting with linear degradation model.
An empirical study of mean field over several restoration scenarios
confirms advantageous properties with regard to both image quality and
computational runtime.
This dissertation further examines the problem of blind deconvolution,
beginning with localized blur from fast moving objects in the
scene, or from camera defocus.
Forgoing dedicated hardware or user labels, we rely only on the image
as input and introduce a latent variable model to explain the
non-uniform blur.
The inference procedure estimates freely varying kernels and we
demonstrate its generality by extensive experiments.
We further present a discriminative method for blind removal of camera
shake.
In particular, we interleave discriminative non-blind deconvolution
steps with kernel estimation and leverage the error cancellation
effects of the Regression Tree Field model to attain a deblurring
process with tightly linked sequential stages.
Optical System Identification for Passive Electro-Optical Imaging
A statistical inverse-problem approach is presented for jointly estimating camera blur from aliased data of a known calibration target. Specifically, a parametric maximum likelihood (ML) PSF estimate is derived for characterizing a camera's optical imperfections through the use of a calibration target in an otherwise loosely controlled environment. The unknown parameters are jointly estimated from data described by a physical forward-imaging model, and this inverse-problem approach allows one to accommodate all of the available sources of information jointly. These sources include knowledge of the forward imaging process, the types and sources of statistical uncertainty, available prior information, and the data itself. The forward model describes a broad class of imaging systems based on a parameterization with a direct mapping between its parameters and physical imaging phenomena. The imaging perspective, ambient light levels, target reflectance, detector gain and offset, quantum efficiency, and read-noise levels are all treated as nuisance parameters. The Cramér-Rao Bound (CRB) is derived under this joint model, and simulations demonstrate that the proposed estimator achieves near-optimal MSE performance. Finally, the proposed method is applied to experimental data to validate the fidelity of the forward models as well as to establish the utility of the resulting ML estimates for both system identification and subsequent image restoration.
PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153395/1/jwleblan_1.pd
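Under i.i.d. Gaussian noise, the ML estimate of PSF parameters from a known calibration target reduces to least squares between the observation and the blurred target. The sketch below shows that reduction for a single-parameter Gaussian PSF with a random binary target and a coarse grid search, all illustrative assumptions; the paper's forward model is far richer (aliasing, gain and offset, read noise, perspective):

```python
import numpy as np

def gaussian_psf(sigma, size=15):
    # Normalised isotropic Gaussian PSF on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def blur(img, psf):
    # Circular convolution via FFT (adequate for this toy model; the
    # same shift affects model and observation, so it cancels out).
    return np.real(np.fft.ifft2(np.fft.fft2(img)
                                * np.fft.fft2(psf, s=img.shape)))

def ml_sigma(observed, target, sigmas):
    # Grid-search ML estimate of the PSF width: under i.i.d. Gaussian
    # noise, ML is least squares against the blurred known target.
    costs = [np.sum((observed - blur(target, gaussian_psf(s))) ** 2)
             for s in sigmas]
    return float(sigmas[int(np.argmin(costs))])

# Simulated calibration capture with a known binary target.
rng = np.random.default_rng(4)
target = (rng.random((64, 64)) > 0.5).astype(float)
observed = blur(target, gaussian_psf(1.5)) + 0.01 * rng.normal(size=(64, 64))
est = ml_sigma(observed, target, np.arange(0.5, 3.01, 0.25))
```

The nuisance parameters in the paper (gain, offset, reflectance, and so on) would enter this cost as additional unknowns jointly optimised with the PSF parameters.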
BIOMEDICAL IMAGE RESOLUTION IMPROVEMENTS BY COMBINED USE OF FOCAL MODULATION, PUPIL ENGINEERING, AND SPARSITY PRIORS.
Ph.D. (Doctor of Philosophy)