Better than a lens -- Increasing the signal-to-noise ratio through pupil splitting
Lenses are designed to fulfill Fermat's principle such that all light
interferes constructively at the focus, guaranteeing its maximum concentration.
It can be shown that imaging via an unmodified full pupil yields the maximum
transfer strength for all spatial frequencies transferable by the system.
The signal-to-noise ratio (SNR) would therefore also seem optimal. The achievable SNR
at a given photon budget is critical especially if that budget is strictly
limited as in the case of fluorescence microscopy. In this work we propose a
general method which achieves a better SNR for high spatial frequency
information of an optical imaging system, without the need to capture more
photons. This is achieved by splitting the pupil of an incoherent imaging
system such that two (or more) sub-images are simultaneously acquired and
computationally recombined. We compare the theoretical performance of split
pupil imaging to the non-split scenario and implement the splitting using a
tilted elliptical mirror placed at the back-focal-plane (BFP) of a fluorescence
widefield microscope.
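The core idea of the abstract above can be illustrated with a toy 1D simulation: the incoherent optical transfer function (OTF) is the normalized autocorrelation of the pupil, and splitting the pupil yields sub-OTFs whose spectra can be recombined computationally. The sketch below is an illustrative assumption, not the paper's actual recombination algorithm; the matched-filter-style weighting is a stand-in for whatever weighting the authors use.

```python
import numpy as np

def otf_1d(pupil):
    """Incoherent OTF = normalized autocorrelation of the 1D pupil function."""
    ac = np.correlate(pupil, pupil, mode="full")
    return ac / ac.max()

n = 64
full = np.ones(n)                                          # unmodified full pupil
left = np.concatenate([np.ones(n // 2), np.zeros(n // 2)]) # one sub-pupil
right = 1.0 - left                                         # complementary sub-pupil

otf_full = otf_1d(full)
otf_left = otf_1d(left)

# Hypothetical recombination step: weight each sub-image's spectrum by its
# sub-OTF magnitude (matched-filter style) before summing. With real data the
# weights would multiply the measured image spectra, not the OTF itself.
weights = np.abs(otf_left)
recombined = weights * otf_left  # schematic placeholder for image spectra
```

Comparing `otf_full` against the recombined sub-OTFs at high spatial frequencies is the kind of analysis the paper's theoretical comparison performs.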
Spectral imaging in preclinical research and clinical pathology.
Spectral imaging methods are attracting increased interest from researchers and practitioners in basic science, pre-clinical and clinical arenas. A combination of better labeling reagents and better optics creates opportunities to detect and measure multiple parameters at the molecular and cellular level. These tools can provide valuable insights into the basic mechanisms of life, and yield diagnostic and prognostic information for clinical applications. There are many multispectral technologies available, each with its own advantages and limitations. This chapter will present an overview of the rationale for spectral imaging, and discuss the hardware, software and sample labeling strategies that can optimize its usefulness in clinical settings.
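A standard computation behind measuring multiple labels per pixel is linear spectral unmixing: each measured pixel spectrum is modeled as a weighted sum of known reference spectra, and the weights (abundances) are recovered by least squares. The reference spectra and values below are synthetic, chosen purely for illustration of this common technique, which the chapter's software discussion would build on.

```python
import numpy as np

# Hypothetical reference spectra (columns): two labels sampled over 5 bands.
S = np.array([[0.9, 0.1],
              [0.7, 0.2],
              [0.4, 0.5],
              [0.2, 0.8],
              [0.1, 0.9]])

# A measured pixel spectrum: a mixture of 0.3 * label A + 0.6 * label B.
measured = S @ np.array([0.3, 0.6])

# Linear unmixing: solve S @ a = measured for the abundance vector a.
abundances, *_ = np.linalg.lstsq(S, measured, rcond=None)
```

In practice the same least-squares solve is applied independently at every pixel of the multispectral image cube.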
Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes
The depth images from consumer depth cameras (e.g., structured-light/ToF devices) exhibit a substantial amount of artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot remove them entirely and are slow. This thesis proposes a new real-time spatio-temporal depth image enhancement filter that completely removes flickering and ghosting, and significantly reduces holes. This thesis also presents a novel depth-data capture setup and two data reduction methods to optimize the performance of the proposed enhancement method.
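The general shape of a spatio-temporal depth enhancement filter can be sketched as follows. This is a deliberately minimal illustration of the two ideas named in the abstract (hole reduction and flicker suppression), not the thesis' actual filter; the hole convention (zeros) and the blending weight are assumptions.

```python
import numpy as np

def enhance_depth(current, previous, alpha=0.7):
    """Toy spatio-temporal depth filter (illustrative only):
    1. fill holes (depth == 0) from the previous frame (spatial/temporal fusion),
    2. blend with the previous frame to suppress temporal flickering."""
    filled = np.where(current == 0, previous, current).astype(float)
    return alpha * filled + (1 - alpha) * previous

prev = np.full((4, 4), 100.0)   # previous depth frame, in arbitrary units
cur = np.full((4, 4), 102.0)    # current depth frame
cur[1, 2] = 0.0                 # a missing-depth hole
out = enhance_depth(cur, prev)
```

A real-time implementation would additionally use spatial neighborhoods and per-pixel confidence, as the thesis' full pipeline does.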
FVV Live: A real-time free-viewpoint video system with consumer electronics hardware
FVV Live is a novel end-to-end free-viewpoint video system, designed for low
cost and real-time operation, based on off-the-shelf components. The system has
been designed to yield high-quality free-viewpoint video using consumer-grade
cameras and hardware, which enables low deployment costs and easy installation
for immersive event-broadcasting or videoconferencing.
The paper describes the architecture of the system, including acquisition and
encoding of multiview plus depth data in several capture servers and virtual
view synthesis on an edge server. All the blocks of the system have been
designed to overcome the limitations imposed by hardware and network, which
impact directly on the accuracy of depth data and thus on the quality of
virtual view synthesis. The design of FVV Live allows for an arbitrary number
of cameras and capture servers, and the results presented in this paper
correspond to an implementation with nine stereo-based depth cameras.
FVV Live presents low motion-to-photon and end-to-end delays, which enables
seamless free-viewpoint navigation and bilateral immersive communications.
Moreover, the visual quality of FVV Live has been assessed through subjective
assessment with satisfactory results, and additional comparative tests show
that it is preferred over state-of-the-art DIBR alternatives.
Plenoptic Signal Processing for Robust Vision in Field Robotics
This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
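The additive style of light field processing the abstract alludes to is exemplified by classic shift-and-sum refocusing over a 4D light field: shift each sub-aperture view by an amount proportional to its pupil offset, then average. This standard technique is shown as context; it is not the thesis' frequency-domain wide-depth-of-field filter, and the integer-pixel shifts and `L[u, v, y, x]` layout are simplifying assumptions.

```python
import numpy as np

def refocus(lightfield, slope):
    """Shift-and-sum refocusing of a 4D light field L[u, v, y, x].
    `slope` selects the focal depth: each (u, v) view is shifted
    proportionally to its offset from the central view, then averaged."""
    U, V, Y, X = lightfield.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

lf = np.ones((3, 3, 8, 8))   # trivial constant light field, 3x3 views of 8x8
img = refocus(lf, slope=1.0)
```

A frequency-domain equivalent of this operation is a planar filter in the 4D spectrum; the thesis' contribution is a volumetric ("hyperfan"-style) generalization that keeps a whole depth range in focus at once.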
State-of-the-art active optical techniques for three-dimensional surface metrology: a review [Invited]
This paper reviews recent developments of non-contact three-dimensional (3D) surface metrology using an active structured optical probe. We focus primarily on those active non-contact 3D surface measurement techniques that could be applicable to the manufacturing industry. We discuss principles of each technology, and its advantageous characteristics as well as limitations. Towards the end, we discuss our perspectives on the current technological challenges in designing and implementing these methods in practical applications.
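Among the active structured-light techniques such a review covers, four-step phase-shifting profilometry is a canonical example: project four sinusoidal patterns shifted by 90 degrees and recover the wrapped phase per pixel with a closed-form arctangent. The numbers below are synthetic, used only to demonstrate the standard formula.

```python
import numpy as np

# Four-step phase-shifting at a single pixel (synthetic illustration).
phase_true = 1.2          # phase to recover, encodes surface height
A, B = 0.5, 0.4           # background intensity and fringe modulation
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [A + B * np.cos(phase_true + s) for s in shifts]

# Wrapped phase from the four intensity samples:
#   I4 - I2 = 2B sin(phi),  I1 - I3 = 2B cos(phi)
phase = np.arctan2(I[3] - I[1], I[0] - I[2])
```

The recovered phase is wrapped to (-pi, pi]; a full profilometry pipeline follows this with phase unwrapping and a phase-to-height calibration.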
GREGOR Fabry-Perot Interferometer - status report and prospects
The GREGOR Fabry-Perot Interferometer (GFPI) is one of three first-light
instruments of the German 1.5-meter GREGOR solar telescope at the Observatorio
del Teide, Tenerife, Spain. The GFPI allows fast narrow-band imaging and
post-factum image restoration. The retrieved physical parameters will be a
fundamental building block for understanding the dynamic Sun and its magnetic
field at spatial scales down to 50 km on the solar surface. The GFPI is a
tunable dual-etalon system in a collimated mounting. It is designed for
spectropolarimetric observations over the wavelength range from 530-860 nm with
a theoretical spectral resolution of R ~ 250,000. The GFPI is equipped with a
full-Stokes polarimeter. Large-format, high-cadence CCD detectors with powerful
computer hard- and software enable the scanning of spectral lines in time spans
equivalent to the evolution time of solar features. The field-of-view of 50" x
38" covers a significant fraction of the typical area of active regions. We
present the main characteristics of the GFPI including advanced and automated
calibration and observing procedures. We discuss improvements in the optical
design of the instrument and show first observational results. Finally, we lay
out first concrete ideas for the integration of a second FPI, the Blue Imaging
Solar Spectrometer, which will explore the blue spectral region below 530 nm.
Comment: 18 pages, 9 figures, 4 tables, "Astronomical Telescopes and
Instrumentation", Amsterdam, 1-6 July 2012, SPIE Proc. 8446-276, in press