Automatic Estimation of Modulation Transfer Functions
The modulation transfer function (MTF) is widely used to characterise the
performance of optical systems. Measuring it is costly and it is thus rarely
available for a given lens specimen. Instead, MTFs based on simulations or, at
best, MTFs measured on other specimens of the same lens are used. Fortunately,
images recorded through an optical system contain ample information about its
MTF, though it is confounded with the statistics of the images. This work
presents a method to estimate the MTF of camera lens systems directly from
photographs, without the need for expensive equipment. We use a custom grid
display to accurately measure the point response of lenses to acquire ground
truth training data. We then use the same lenses to record natural images and
employ a data-driven supervised learning approach using a convolutional neural
network to estimate the MTF on small image patches, aggregating the information
into MTF charts over the entire field of view. The method generalises to unseen
lenses and can be applied to single photographs, with performance improving when
multiple photographs are available.
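As a minimal sketch of the quantity being estimated (this illustrates the textbook definition of the MTF, not the paper's learned estimator): the MTF is the normalized magnitude of the Fourier transform of the point-spread function. The Gaussian PSF below is an assumed example.

```python
import numpy as np

def mtf_from_psf(psf: np.ndarray) -> np.ndarray:
    """MTF = |FFT(PSF)|, normalized so the DC component equals 1."""
    otf = np.fft.fft2(psf)      # optical transfer function
    mtf = np.abs(otf)           # modulus gives the modulation transfer
    return mtf / mtf[0, 0]      # normalize: MTF at zero frequency = 1

# Assumed example PSF: an isotropic Gaussian. Its MTF is again
# Gaussian-shaped, falling off toward high spatial frequencies.
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
mtf = mtf_from_psf(psf)
```

Because the PSF is non-negative, the MTF attains its maximum of 1 at zero frequency; a sharper lens keeps the MTF high out to higher spatial frequencies.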
Fast Two-step Blind Optical Aberration Correction
The optics of any camera degrades the sharpness of photographs, which is a
key visual quality criterion. This degradation is characterized by the
point-spread function (PSF), which depends on the wavelengths of light and is
variable across the imaging field. In this paper, we propose a two-step scheme
to correct optical aberrations in a single raw or JPEG image, i.e., without any
prior information on the camera or lens. First, we estimate local Gaussian blur
kernels for overlapping patches and sharpen them with a non-blind deblurring
technique. Based on the measurements of the PSFs of dozens of lenses, these
blur kernels are modeled as RGB Gaussians defined by seven parameters. Second,
we remove the remaining lateral chromatic aberrations (not contemplated in the
first step) with a convolutional neural network, trained to minimize the
red/green and blue/green residual images. Experiments on both synthetic and
real images show that the combination of these two stages yields a fast
state-of-the-art blind optical aberration compensation technique that competes
with commercial non-blind algorithms. (Comment: 28 pages, 20 figures, accepted at ECCV'22 as a poster.)
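As a hedged sketch of the first stage's blur model: the abstract describes local blur kernels as RGB Gaussians defined by seven parameters. The parameterization below (one isotropic sigma per channel, plus a 2D shift each for red and blue relative to green) is an illustrative assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def rgb_gaussian_kernel(sr, sg, sb, dxr, dyr, dxb, dyb, size=15):
    """Build one normalized Gaussian kernel per color channel.

    Assumed 7 parameters: sigmas (sr, sg, sb) for R/G/B and chromatic
    shifts (dxr, dyr), (dxb, dyb) of red and blue relative to green.
    """
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)

    def g(sigma, dx, dy):
        k = np.exp(-((xx - dx) ** 2 + (yy - dy) ** 2) / (2 * sigma ** 2))
        return k / k.sum()  # each channel's kernel sums to 1

    # Green is centered; red and blue are shifted to mimic lateral
    # chromatic aberration, which the second (CNN) stage removes.
    return np.stack([g(sr, dxr, dyr), g(sg, 0.0, 0.0), g(sb, dxb, dyb)])

kernels = rgb_gaussian_kernel(1.2, 1.0, 1.4, 0.5, 0.0, -0.5, 0.0)
```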
Learning Lens Blur Fields
Optical blur is an inherent property of any lens system and is challenging to
model in modern cameras because of their complex optical elements. To tackle
this challenge, we introduce a high-dimensional neural representation of blur
and a practical method for acquiring it. The lens blur field is a multilayer
perceptron (MLP) designed to (1)
accurately capture variations of the lens 2D point spread function over image
plane location, focus setting and, optionally, depth and (2) represent these
variations parametrically as a single, sensor-specific function. The
representation models the combined effects of defocus, diffraction, aberration,
and accounts for sensor features such as pixel color filters and pixel-specific
micro-lenses. To learn the real-world blur field of a given device, we
formulate a generalized non-blind deconvolution problem that directly optimizes
the MLP weights using a small set of focal stacks as the only input. We also
provide a first-of-its-kind dataset of 5D blur fields for smartphone cameras,
camera bodies equipped with a variety of lenses, etc. Lastly, we show that
acquired 5D blur fields are expressive and accurate enough to reveal, for the
first time, differences in optical behavior of smartphone devices of the same
make and model.
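A minimal sketch of what such a neural blur field looks like, assuming a plain ReLU MLP in NumPy: the network maps a query (image-plane position, focus setting, depth, and a PSF-domain offset) to a positive PSF intensity. The query layout and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class BlurFieldMLP:
    """Toy blur field: query -> PSF value, with randomly initialized
    weights standing in for weights fitted from focal stacks."""

    def __init__(self, in_dim=6, hidden=64, layers=3):
        dims = [in_dim] + [hidden] * layers + [1]
        self.W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(dims, dims[1:])]
        self.b = [np.zeros(b) for b in dims[1:]]

    def __call__(self, q):
        h = np.atleast_2d(q)
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = np.maximum(h @ W + b, 0.0)  # ReLU hidden layers
        out = h @ self.W[-1] + self.b[-1]
        return np.exp(out)                  # PSF values kept positive

field = BlurFieldMLP()
# Query: image position (0.3, -0.2), focus 0.5, depth 2.0 m,
# PSF-domain offset (u, v) = (1, 0) — all assumed coordinates.
val = field(np.array([0.3, -0.2, 0.5, 2.0, 1.0, 0.0]))
```

Evaluating the MLP over a grid of (u, v) offsets at a fixed image location would render the full 2D PSF there, which is how a single compact function can represent blur across the whole field.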
Low-Cost Vision Based Autonomous Underwater Vehicle for Abyssal Ocean Ecosystem Research
The oceans have a major impact on the planet: they store 28% of the CO2 produced by humans, they act as the world's thermal damper for temperature changes, and more than 17,000 species call the deep oceans their home. Scientific drivers, like climate change, and commercial applications, like deep-sea fisheries and underwater mining, are pushing the need to know more about the oceans at depths beyond 1000 meters. However, the high cost associated with autonomous underwater vehicles (AUVs) capable of operating beyond the depth of 1000 meters has limited the study of the deep ocean.
Traditional AUVs used for deep-sea navigation are large and typically weigh upwards of 1000 kg, thus requiring careful planning before deployment and multi-person teams to operate. This thesis proposes a new vehicle design built around a low-cost oceanographic glass sphere as the main pressure enclosure, reducing size and cost while maintaining the ability for deep-sea operation. This novel housing concept, together with a minimal sensor suite, enables environmental research at depths previously inaccessible at this price point. The key characteristic that enables the cost reduction of this platform is the removal of the Doppler velocity log (DVL) sensor, which is replaced by optical cameras. Cameras not only allow the vehicle to estimate its motion in the water, but also enable scientific applications such as identification of habitat types or population density estimation of benthic species. After each survey, images can be further processed to produce full, dense 3D models of the survey area.
While underwater optical cameras are frequently placed inside pressure housings
behind flat or domed viewports and used for visual navigation or 3D reconstruction,
the underlying assumptions of those algorithms do not hold in the underwater
domain. Refraction at the housing viewport, together with the wavelength-dependent
attenuation of light in water, renders the ubiquitous pinhole camera model invalid.
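The refraction effect can be sketched with Snell's law: for a flat port, a ray crossing the air/glass/water interfaces exits at an angle that a single-focal-length pinhole model cannot reproduce. The refractive indices below are standard approximate values, used purely for illustration.

```python
import math

# Approximate refractive indices (assumed standard values).
N_AIR, N_GLASS, N_WATER = 1.000, 1.49, 1.333

def refract(theta_i, n1, n2):
    """Snell's law: refracted angle (radians) across an n1 -> n2 interface."""
    return math.asin(n1 * math.sin(theta_i) / n2)

# A ray leaving the camera at 30 degrees in air is bent toward the
# normal in glass and again in water; for parallel flat interfaces the
# glass index cancels, but the air-to-water bending remains.
theta_air = math.radians(30.0)
theta_glass = refract(theta_air, N_AIR, N_GLASS)
theta_water = refract(theta_glass, N_GLASS, N_WATER)
```

Because the bending varies with the incidence angle, the distortion is field-dependent and cannot be absorbed into a single pinhole focal length, which motivates the explicit refractive modeling pursued in the thesis.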
This thesis presents a quantitative evaluation of the errors introduced by underwater effects for 3D reconstruction applications, comparing low- and high-cost camera systems to quantify the trade-off between equipment cost and performance.
Although the distortion effects created by underwater refraction of light have been extensively studied for more traditional viewports, the proposed design necessitates new research into modeling the lensing effect of its off-axis domed viewport. A novel calibration method is presented that explicitly models the effect of the glass interface on image formation, based on a characterization of the optical distortions. The method accurately locates the camera within the dome and further enables the use of deconvolution to improve the quality of the captured images.
Finally, this thesis presents the validation of the designed vehicle for optical surveying tasks and introduces an end-to-end ocean mapping pipeline to streamline AUV deployments, enabling efficient use of time and resources.
PhD thesis, Naval Architecture & Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155225/1/eiscar_1.pd