Transcending shift-invariance in the paraxial regime via end-to-end inverse design of freeform nanophotonics
Traditional optical elements and conventional metasurfaces obey
shift-invariance in the paraxial regime. For imaging systems obeying paraxial
shift-invariance, a small shift in input angle causes a corresponding shift in
the sensor image. Shift-invariance has deep implications for the design and
functionality of optical devices, such as the necessity of free space between
components (as in compound objectives made of several curved surfaces). We
present a method for nanophotonic inverse design of compact imaging systems
whose resolution is not constrained by paraxial shift-invariance. Our method is
end-to-end, in that it integrates density-based full-Maxwell topology
optimization with a fully iterative elastic-net reconstruction algorithm. By
designing nanophotonic structures that scatter light in a non-shift-invariant
manner, our optimized nanophotonic imaging system overcomes the limitations of
paraxial shift-invariance, achieving accurate, noise-robust image
reconstruction beyond the shift-invariant resolution limit.
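The iterative elastic-net reconstruction mentioned above can be sketched generically as a proximal-gradient (ISTA) iteration; this is a minimal stand-in, not the paper's solver, and the system matrix, penalty weights, and sizes below are illustrative assumptions:

```python
import numpy as np

def elastic_net_ista(A, y, alpha=0.01, beta=0.01, n_iter=500):
    """Proximal-gradient (ISTA) solver for the elastic net:
    minimize 0.5*||A x - y||^2 + alpha*||x||_1 + 0.5*beta*||x||^2."""
    L = np.linalg.norm(A, 2) ** 2 + beta   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + beta * x
        z = x - grad / L
        # soft-thresholding: proximal operator of the l1 term
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))        # stand-in for a (non-shift-invariant) system matrix
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.0, -0.5, 0.8]   # sparse scene
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = elastic_net_ista(A, y)
```

The l1 term promotes sparsity while the l2 term stabilizes the inversion against noise, which is the usual motivation for the elastic net in noise-robust reconstruction.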
Regularization for Uniform Spatial Resolution Properties in Penalized-Likelihood Image Reconstruction
Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space-variant, asymmetric, and object-dependent even for space-invariant imaging systems. The authors propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. The authors have developed computationally efficient methods for PET systems with shift-invariant geometric responses. They demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
Analytical Approach to Regularization Design for Isotropic Spatial Resolution
In emission tomography, conventional quadratic regularization methods lead to nonuniform and anisotropic spatial resolution in penalized-likelihood (or MAP) reconstructed images, even for idealized shift-invariant imaging systems. Previous methods for designing regularizers that improve resolution uniformity have used matrix manipulations and discrete Fourier transforms. This paper describes a simpler approach for designing data-dependent, shift-variant regularizers. We replace the usual discrete system models used in statistical image reconstruction with locally shift-invariant, continuous-space approximations, and design the regularizer using analytical Fourier transforms. We discretize the final analytical solution to compute the regularizer coefficients. This new approach requires even less computation than previous approaches that used FFTs, and provides additional insight into the problem of designing regularization methods to achieve uniform, isotropic spatial resolution.
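The local impulse response at the center of both abstracts above can be computed directly in a 1-D toy model. The sketch below (illustrative, not from either paper) builds a circulant blur as the idealized shift-invariant system, applies quadratic penalized least squares x_hat = (A'A + beta*R)^{-1} A'y with a first-difference penalty R, and forms the mean-response operator F = (A'A + beta*R)^{-1} A'A, whose columns are the local impulse responses:

```python
import numpy as np

n = 64
h = (0.25, 0.5, 0.25)
A = np.zeros((n, n))
for i in range(n):
    for k, c in zip((-1, 0, 1), h):
        A[i, (i + k) % n] = c                        # circulant blur (shift-invariant system)

D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)        # cyclic first differences
R = D.T @ D                                          # standard quadratic roughness penalty
beta = 4.0

# F maps the true object to the estimator's mean; its columns are the
# local impulse responses that quantify smoothing at each pixel.
F = np.linalg.inv(A.T @ A + beta * R) @ (A.T @ A)
l20 = F @ np.eye(n)[:, 20]                           # impulse response at pixel 20
l40 = F @ np.eye(n)[:, 40]                           # impulse response at pixel 40
```

Because A and R are both circulant here, F is circulant and the impulse responses at different pixels are shifted copies of one another; with a realistic (nonuniform-weighting) likelihood, F loses this property, which is exactly the nonuniformity the shift-variant regularizer designs are meant to correct.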
Alignment, Classification, and Three-Dimensional Reconstruction of Single Particles Embedded in Ice
Cryo-electron microscopy of single biological particles poses new challenges to digital image processing due to the low signal-to-noise ratio of the data. New tools have been devised to deal with important aspects of 3-D reconstruction following the random-conical data collection scheme: (a) a new shift-invariant function has been derived, which promises to facilitate alignment and classification of single particle projections; (b) a new method of orientation search is proposed, which makes it possible to relate random-conical data sets to one another prior to reconstruction; and (c) the foundation is laid for a 3-D variance estimation which utilizes the oversampling of 3-D angular space by projections in the random-conical reconstruction scheme.
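The appeal of a shift-invariant function for alignment is that projections can be compared before any translational registration. As a generic stand-in for the function derived in the paper, the modulus of the 2-D discrete Fourier transform is invariant to cyclic shifts:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))                       # toy projection image
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))  # cyclically translated copy

# |FFT| discards the phase ramp introduced by translation, so the two
# spectra are identical: a shift-invariant descriptor of the image.
spec = np.abs(np.fft.fft2(img))
spec_shifted = np.abs(np.fft.fft2(shifted))
```

Such descriptors let noisy particle projections be classified into homogeneous groups first and aligned afterwards, which matters at the very low signal-to-noise ratios of cryo-EM data.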
Distributed Deblurring of Large Images of Wide Field-Of-View
Image deblurring is an economical way to reduce certain degradations (blur and
noise) in acquired images, and it has therefore become an essential tool in
high-resolution imaging in many applications, e.g., astronomy, microscopy, or
computational photography. In applications such as astronomy and satellite
imaging, acquired images can be extremely large (up to gigapixels), covering a
wide field of view and suffering from shift-variant blur. Most existing image
deblurring techniques are designed and implemented to work efficiently on a
centralized computing system with multiple processors and shared memory, so
the largest image that can be handled is limited by the physical memory
available on the system. In this paper, we propose a distributed non-blind
image deblurring algorithm in which several connected processing nodes (with
reasonable computational resources) simultaneously process different portions
of a large image while maintaining a certain coherency among them, to finally
obtain a single crisp image. Unlike existing centralized techniques, image
deblurring in a distributed fashion raises several issues. To tackle them, we
consider approximations that trade off the quality of the deblurred image
against the computational resources required to achieve it. Experimental
results show that our algorithm produces images of quality similar to existing
centralized techniques while allowing distribution, and is thus cost-effective
for extremely large images.
Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
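The core idea behind processing "different portions of a large image while maintaining coherency" can be sketched with overlapping tiles: each node receives its tile plus a halo of neighboring pixels wide enough to cover the filter support, processes it independently, and keeps only its interior. This is an illustrative sketch, not the paper's algorithm; a simple box filter stands in for the per-tile deblurring step:

```python
import numpy as np

def local_filter(block, r=1):
    """Box filter of radius r with zero padding (stand-in for a deblurring step)."""
    out = np.zeros_like(block, dtype=float)
    p = np.pad(block, r)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + block.shape[0],
                     r + dx : r + dx + block.shape[1]]
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(2)
img = rng.random((64, 64))
ref = local_filter(img)                   # centralized result on the full image

# "Distribute" across two nodes splitting the rows; each tile carries a
# halo of rows from its neighbor, and the halo is discarded after filtering.
halo = 1                                  # must cover the filter support radius
top = local_filter(img[: 32 + halo])[:32]
bot = local_filter(img[32 - halo :])[halo:]
stitched = np.vstack([top, bot])
```

With a local, shift-invariant filter this overlap-and-discard scheme reproduces the centralized result exactly; for the shift-variant, wide-support blurs targeted by the paper, the halo must grow with the blur support, which is where the quality-versus-resources trade-off arises.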