
    Accurate depth from defocus estimation with video-rate implementation

    Get PDF
    The science of measuring depth from images at video rate using defocus has been investigated. The method requires two differently focused images acquired from a single viewpoint using a single camera. The relative blur between the images is used to determine the in-focus axial point of each pixel and hence its depth. The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software required five 2D convolutions to be processed in parallel; these convolutions were implemented efficiently on an FPGA using a two-channel, five-stage pipelined architecture, although the precision of the filter coefficients and variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated depth maps of 400 x 400 pixels in 13.06 ms, which is faster than video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
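    As a rough illustration of the frequency-domain registration step mentioned above, the sketch below estimates the shift between two images by phase correlation. This is a minimal NumPy sketch under stated assumptions, not the thesis implementation: the function name is invented, sub-pixel refinement is omitted, and the thesis measures radial shifts due to magnification rather than the pure translation recovered here.

        import numpy as np

        def phase_correlation_shift(a, b):
            """Estimate the integer-pixel translation between images a and b
            via phase correlation (illustrative sketch only)."""
            A = np.fft.fft2(a)
            B = np.fft.fft2(b)
            R = A * np.conj(B)
            R /= np.abs(R) + 1e-12                 # normalised cross-power spectrum
            corr = np.fft.ifft2(R).real            # peak location encodes the shift
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap shifts larger than half the image size to negative values
            if dy > a.shape[0] // 2:
                dy -= a.shape[0]
            if dx > a.shape[1] // 2:
                dx -= a.shape[1]
            return dy, dx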

    Coherence Gated Laser Ray Tracing Based on a High Speed FPGA Platform

    Get PDF
    …field, rendering them incapable of differentiating light returned from targets with many layers (such as the human retina). Instead, the measured wavefront is the superposition of the wavefronts returned from each layer. By combining principles from low-coherence interferometry and wavefront sensing, a depth-resolved wavefront sensor may be realised. This allows only light from within the coherence gate of the interferometer to be measured by the wavefront sensing device. By adjusting the axial position of the coherence gate, wavefronts from distinct layers of a multi-layer object may be measured. This method has been demonstrated for the Shack-Hartmann wavefront sensor but requires an external PC for image processing and wavefront reconstruction. This dissertation presents, for the first time, a depth-resolved laser ray tracing wavefront sensor. Results are shown, in the form of Zernike modes, which demonstrate the ability of the instrument to resolve wavefronts from a multi-layer target (two stacked microscope slides and a mirror). An FPGA-based embedded system was also developed for all command, control, image processing and wavefront reconstruction functions. This highly specialised system is able to perform these operations in real time, limited only by the frame rate of the available camera. Specific attention is given to the portion of the system focused on wavefront reconstruction. Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them, and the large inverse-matrix calculation needed for the linear fit. This dissertation presents an efficient parallel method for calculating Zernike coefficients from phase gradients, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8 x 8. Analysis of the processor element's internal word-length requirements shows that 24-bit precision in pre-calculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of less than 1%. The design has been synthesized on a…
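    The reconstruction step described above amounts to a linear least-squares fit of Zernike derivative modes to measured phase gradients. The sketch below shows that modal fit in NumPy under stated assumptions: the derivative matrix D is a placeholder input (a real sensor derives it analytically from the Zernike definitions), the pseudo-inverse stands in for the stored matrix subsections, and the symmetry exploitation described in the abstract is not shown.

        import numpy as np

        def fit_zernike_coefficients(D, slopes):
            """Modal least-squares fit: Zernike coefficients from phase gradients.

            D      : (2N, M) matrix whose columns hold the x- and y-partial
                     derivatives of each of the M Zernike modes, sampled at
                     the N measurement points (placeholder input here).
            slopes : (2N,) stacked x- and y-gradient measurements.
            """
            D_pinv = np.linalg.pinv(D)   # computed once offline and stored,
                                         # mirroring the pre-calculation idea above
            return D_pinv @ slopes       # runtime cost: one matrix-vector product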

    Scalable control of mounting and attack by Esr1^+ neurons in the ventromedial hypothalamus

    Get PDF
    Social behaviours, such as aggression or mating, proceed through a series of appetitive and consummatory phases that are associated with increasing levels of arousal. How such escalation is encoded in the brain, and linked to behavioural action selection, remains an unsolved problem in neuroscience. The ventrolateral subdivision of the murine ventromedial hypothalamus (VMHvl) contains neurons whose activity increases during male–male and male–female social encounters. Non-cell-type-specific optogenetic activation of this region elicited attack behaviour, but not mounting. We have identified a subset of VMHvl neurons marked by the oestrogen receptor 1 (Esr1), and investigated their role in male social behaviour. Optogenetic manipulations indicated that Esr1^+ (but not Esr1^−) neurons are sufficient to initiate attack, and that their activity is continuously required during ongoing agonistic behaviour. Surprisingly, weaker optogenetic activation of these neurons promoted mounting behaviour, rather than attack, towards both males and females, as well as sniffing and close investigation. Increasing photostimulation intensity could promote a transition from close investigation and mounting to attack, within a single social encounter. Importantly, time-resolved optogenetic inhibition experiments revealed requirements for Esr1^+ neurons in both the appetitive (investigative) and the consummatory phases of social interactions. Combined optogenetic activation and calcium imaging experiments in vitro, as well as c-Fos analysis in vivo, indicated that increasing photostimulation intensity increases both the number of active neurons and the average level of activity per neuron. These data suggest that Esr1^+ neurons in VMHvl control the progression of a social encounter from its appetitive through its consummatory phases, in a scalable manner that reflects the number or type of active neurons in the population.

    The Visible and Near Infrared module of EChO

    Full text link
    The Visible and Near Infrared (VNIR) instrument is one of the modules of EChO, the Exoplanet Characterization Observatory proposed to ESA as an M-class mission. EChO is aimed at observing planets as they transit their host stars, so the instrument had to be designed to ensure high efficiency over the whole spectral range. It has to be able to observe stars with an apparent magnitude Mv = 9-12 and to detect contrasts of the order of 10^-4 to 10^-5, as required to reveal the characteristics of the atmospheres of the exoplanets under investigation. VNIR is a spectrometer in a cross-dispersed configuration, covering the 0.4-2.5 micron spectral range with a resolving power of about 330 and a field of view of 2 arcsec. It is functionally split into two channels working in the 0.4-1.0 and 1.0-2.5 micron spectral ranges, respectively. Such a solution is imposed by the fact that the light at short wavelengths has to be shared with the EChO Fine Guiding System (FGS) devoted to the pointing of the stars under observation. The spectrometer makes use of a HgCdTe detector of 512 by 512 pixels with an 18 micron pitch, working at a temperature of 45 K, as does the entire VNIR optical bench. The instrument is interfaced to the telescope optics by two optical fibers, one per channel, to allow easier coupling and easier accommodation of the instrument inside the EChO optical bench.
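    To get a feel for what the quoted resolving power implies, the short check below (an illustrative calculation, not taken from the paper) evaluates the spectral resolution element Δλ = λ/R at the band edges quoted in the abstract:

        R = 330                         # resolving power quoted for VNIR
        for lam_um in (0.4, 1.0, 2.5):  # band edges in microns
            dlam_nm = lam_um / R * 1e3  # resolution element, in nanometres
            print(f"lambda = {lam_um} um -> dlambda ~ {dlam_nm:.1f} nm")

    This gives resolution elements of roughly 1.2 nm at 0.4 micron and 7.6 nm at 2.5 micron.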

    Topics in Adaptive Optics

    Get PDF
    Advances in adaptive optics technology and applications move forward at a rapid pace. The basic idea of wavefront compensation in real time has been around since the mid-1970s. The first widely used application of adaptive optics was compensating atmospheric turbulence effects in astronomical imaging and laser beam propagation. While some topics have been researched and reported for years, even decades, new applications and advances in the supporting technologies occur almost daily. This book brings together 11 original chapters related to adaptive optics, written by an international group of invited authors. Topics include atmospheric turbulence characterization, astronomy with large telescopes, image post-processing, high power laser distortion compensation, adaptive optics and the human eye, wavefront sensors, and deformable mirrors.