    Antenna pointing compensation based on precision optical measurement techniques

    The pointing control loops of the Deep Space Network 70-meter antennas extend only to the Intermediate Reference Structure (IRS). Distortion of the structure forward of the IRS due to unpredictable environmental loads can therefore result in uncompensated boresight shifts that degrade blind pointing accuracy. A system is described that can provide real-time bias commands to the pointing control system to compensate for environmental effects on blind pointing performance. The bias commands are computed in real time from optical ranging measurements of the structure, from the IRS to a number of selected points on the primary and secondary reflectors.
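
    The abstract does not spell out how the ranging data become bias commands, so the following is only a rough sketch: a least-squares fit of a piston-plus-tilt model to the measured axial deflections, with the tilt reported in arcseconds. All names and the deflection model are assumptions for illustration, not the algorithm used on the 70-meter antennas.

```python
import numpy as np

def tilt_bias_from_ranging(points_xyz, delta_range):
    """Hypothetical least-squares estimate of a structural tilt from ranging data.

    points_xyz  : (N, 3) nominal coordinates of the ranged targets on the
                  reflectors, in an IRS-fixed frame [m]
    delta_range : (N,) measured changes in range from the IRS [m]

    Fits piston plus tilts about x and y to the axial deflections and returns
    the tilt angles in arcseconds; mapping tilt to an elevation /
    cross-elevation pointing bias would additionally need a beam-deviation
    factor. Illustrative simplification only, not the paper's method.
    """
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    # Axial deflection model: dz ~ piston + rx*y - ry*x
    A = np.column_stack([np.ones_like(x), y, -x])
    _piston, rx, ry = np.linalg.lstsq(A, delta_range, rcond=None)[0]
    to_arcsec = np.degrees(1.0) * 3600.0   # radians -> arcseconds
    return rx * to_arcsec, ry * to_arcsec
```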

    Refractive shape from light field distortion

    Acquiring transparent, refractive objects is challenging because such objects can only be observed by analyzing the distortion of reference background patterns. We present a new, single-image approach to reconstructing thin transparent surfaces, such as thin solids or surfaces of fluids. Our method is based on observing the distortion of light field background illumination. Light field probes can encode up to four dimensions in varying colors and intensities: spatial and angular variation on the probe surface. Commonly employed reference patterns, by contrast, are only two-dimensional, coding either position or angle on the probe. We show that the additional information can be used to reconstruct refractive surface normals and a sparse set of control points from a single photograph.
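
    The geometric relation the reconstruction rests on is standard single-interface refraction: one decoded ray correspondence determines the surface normal, because the tangential component of (refractive index times ray direction) is preserved across the interface. A minimal sketch under that assumption (function name and the water index are illustrative; the probe decoding and the paper's thin-surface model are not reproduced):

```python
import numpy as np

def normal_from_ray_pair(d1, d2, n1=1.0, n2=1.33):
    """Surface normal from one refraction event (vector form of Snell's law).

    d1 : unit ray direction on the camera side (refractive index n1)
    d2 : unit direction of the same ray on the probe side (index n2),
         both taken along the same sense of propagation.

    The tangential components of n1*d1 and n2*d2 are equal across the
    interface, so n1*d1 - n2*d2 is parallel to the surface normal.
    Minimal single-interface sketch only.
    """
    d1 = np.asarray(d1, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 /= np.linalg.norm(d2)
    n = n1 * d1 - n2 * d2
    return n / np.linalg.norm(n)
```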

    Computational Schlieren Photography with Light Field Probes

    We introduce a new approach to capturing refraction in transparent media, which we call light field background-oriented Schlieren photography. By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Our prototype capture setup consists of inexpensive off-the-shelf hardware, including inkjet-printed transparencies, lenslet arrays, and a conventional camera. By carefully encoding the color and intensity variations of 4D light field probes, we show how to code both spatial and angular information of refractive phenomena. Such coding schemes are demonstrated to allow for a new, single-image approach to reconstructing transparent surfaces, such as thin solids or surfaces of fluids. The captured visual information is used to reconstruct refractive surface normals and a sparse set of control points independently from a single photograph. Funding: Natural Sciences and Engineering Research Council of Canada; Alfred P. Sloan Foundation; United States Defense Advanced Research Projects Agency Young Faculty Award.
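
    For context, the quantitative backbone of conventional background-oriented schlieren, which the light field probe extends by also coding ray angle, is the textbook relation between apparent background displacement and the line-of-sight integrated refractive index gradient (symbols defined in the comments; this is generic BOS optics, not an equation quoted from the paper):

```latex
% Standard background-oriented schlieren relation (textbook optics).
% Symbols: \Delta y = apparent background displacement,
%          Z_D = distance from the refractive volume to the probe,
%          \varepsilon_y = ray deflection angle,
%          n_0 = ambient refractive index.
\varepsilon_y \approx \frac{\Delta y}{Z_D},
\qquad
\varepsilon_y = \frac{1}{n_0} \int \frac{\partial n}{\partial y}\,\mathrm{d}z
```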

    Analysis of Uncertainty in Underwater Multiview Reconstruction

    Multiview reconstruction, a method for creating 3D models from multiple images taken from different views, has been a popular topic of research in the field of computer vision over the last two decades. The increased availability of high-quality cameras has led to the development of advanced techniques and algorithms. However, little attention has been paid to multiview reconstruction in underwater conditions. Researchers in a wide variety of fields (e.g. marine biology, archaeology, and geology) could benefit from having 3D models of the seafloor and underwater objects. Cameras, designed to operate in air, must be put in protective housings to work underwater, which affects the image formation process. The largest source of underwater image distortion is refraction of light, which occurs when light rays cross boundaries between media with different refractive indices. This study addresses methods for accounting for light refraction when using a static rig with multiple cameras. We define a set of procedures to achieve optimal underwater reconstruction results, and we analyze the expected quality of the 3D models' measurements.
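
    The refraction the study must account for can be made concrete with the standard vector form of Snell's law: a camera ray crossing a flat port bends twice, once at the air/glass boundary and once at the glass/water boundary, so an in-air pinhole model no longer back-projects correctly. A small sketch under assumed nominal indices and port geometry (names and values are illustrative, not the rig described in the thesis):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n.
    n points from medium 2 back toward medium 1 (i.e. against d). Returns
    the refracted unit direction, or None for total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A camera ray leaving a flat port: air -> glass -> water.
# Indices and geometry are nominal, hypothetical values.
normal = np.array([0.0, 0.0, -1.0])          # port normal, facing the camera
ray_air = np.array([0.2, 0.0, 1.0])          # ray leaving the camera
ray_glass = refract(ray_air, normal, 1.0, 1.49)
ray_water = refract(ray_glass, normal, 1.49, 1.33)
```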

    Refraction Wiggles for Measuring Fluid Depth and Velocity from Video

    We present principled algorithms for measuring the velocity and 3D location of refractive fluids, such as hot air or gas, from natural videos with textured backgrounds. Our main observation is that intensity variations related to movements of refractive fluid elements, as observed by one or more video cameras, are consistent over small space-time volumes. We call these intensity variations “refraction wiggles”, and use them as features for tracking and stereo fusion to recover the fluid motion and depth from video sequences. We give algorithms for 1) measuring the (2D, projected) motion of refractive fluids in monocular videos, and 2) recovering the 3D position of points on the fluid from stereo cameras. Unlike pixel intensities, wiggles can be extremely subtle and cannot be known with the same level of confidence for all pixels, depending on factors such as background texture and physical properties of the fluid. We thus carefully model uncertainty in our algorithms for robust estimation of fluid motion and depth. We show results on controlled sequences, synthetic simulations, and natural videos. Different from previous approaches for measuring refractive flow, our methods operate directly on videos captured with ordinary cameras, do not require auxiliary sensors, light sources or designed backgrounds, and can correctly detect the motion and location of refractive fluids even when they are invisible to the naked eye. Funding: Shell Research; Motion Sensing Wi-Fi Sensor Networks Co. (Grant 6925133); National Science Foundation (U.S.) Graduate Research Fellowship (Grant 1122374); Microsoft Research (PhD Fellowship).
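
    As a rough illustration of the uncertainty modelling described above, the sketch below solves a weighted least-squares (Lucas-Kanade-style) flow problem for one small window and reports a covariance that grows where the background texture is weak. Function names and the noise model are assumptions; the paper's probabilistic formulation is richer than this.

```python
import numpy as np

def window_flow_with_uncertainty(Ix, Iy, It, w):
    """Weighted least-squares flow for one small window, with covariance.

    Ix, Iy, It : spatial and temporal image derivatives in the window (1-D arrays)
    w          : non-negative per-pixel weights (e.g. down-weighting weak texture)

    Lucas-Kanade-style sketch of per-location uncertainty, not the paper's model.
    """
    sw = np.sqrt(w)
    A = np.column_stack([Ix, Iy]) * sw[:, None]
    b = -It * sw
    AtA = A.T @ A
    flow = np.linalg.solve(AtA, A.T @ b)          # (u, v) for this window
    resid = A @ flow - b
    sigma2 = resid @ resid / max(len(b) - 2, 1)
    cov = sigma2 * np.linalg.inv(AtA)             # larger where texture is weak
    return flow, cov
```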

    Shape Perception of Clear Water in Photo-Realistic Images

    Light plays a vital role in the perception of the transparency, depth, and shape of liquids. Perceiving liquid surfaces is made possible by an understanding of the refraction of light and knowledge of the underlying texture geometry. Given this, what specific characteristics of the natural optical environment are essential to the perception of transparent liquids, specifically with respect to efficiency and realism? In this thesis, a light path triangulation method for recovering transparent surface shape and a system to estimate the perceived shape of any arbitrarily shaped object with a refractive surface are proposed. A psychophysical experiment was conducted to investigate this question by measuring the perceived shape of water in stereo images with a real-time stereoscopic 3-D depth gauge. The results suggest that people are able to consistently perceive the shape of liquids from photo-realistic images and that regularity in the underlying texture facilitates human judgement of surface shape.

    Optical instrumentation for fluid flow in gas turbines

    Both a novel shearing interferometer and the first demonstration of particle image velocimetry (PIV) in the stator-rotor gap of a spinning turbine cascade are presented. Each of these techniques is suitable for measuring gas-turbine-representative flows. The simple interferometric technique has been demonstrated on a compressor-representative flow in a 2-D wind tunnel. The interferometer has obvious limitations, as it requires a clear line of sight for the integration of refractive index along an optical path. Despite this, it is a credible alternative to schlieren or shadowgraph in that it provides both qualitative visualisation and a quantitative measurement of refractive index and the variables on which it depends, without the vibration isolation requirements of beam-splitting interferometry. The 2-D PIV measurements have been made in the stator-rotor gap of the MTI high-pressure turbine stage within DERA's Isentropic Light Piston Facility (ILPF). The measurements were made at full engine-representative conditions adjacent to a rotor spinning at 8200 rpm. This is a particularly challenging application due to the complex geometry and the random and periodic effects generated as the stator wake interacts with the adjacent spinning rotor. The application is further complicated by the transient nature of the facility. The measurements represent a 2-D, instantaneous, quantitative description of the unsteady flow field and reveal evidence of shocks and wakes. The estimated accuracy, after scaling, timing, particle centroid, and particle lag errors have been considered, is ± 5%. Non-smoothed, non-time-averaged measurements are qualitatively compared with a numerical prediction generated using a 2-D unsteady flow solver (prediction supplied by DERA), and very close agreement has been achieved. A novel approach to characterising the third component of velocity from the diffraction rings of a defocused particle viewed through a single camera has been explored. This 3-D PIV technique has been demonstrated on a nozzle flow, but issues concerning the aberrations of the curved test-section window of the turbine cascade could not be resolved in time for testing on the facility. Suggestions have been made towards solving this problem. Recommendations are also made towards the eventual goal of revealing a temporally and spatially resolved 3-D velocity distribution of the stator wake impinging on the passing rotor.
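
    The PIV displacement estimates described above are conventionally obtained by cross-correlating interrogation windows from the two exposures; a generic textbook sketch of that step (not the thesis' processing chain, and without the sub-pixel peak fit a real code would add) is shown below.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement of one interrogation-window pair by FFT cross-correlation.

    win_a, win_b : same-size 2-D arrays from consecutive PIV exposures.
    Returns (dy, dx) in pixels at the correlation peak, to integer precision.
    Generic textbook PIV, for illustration only.
    """
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular-correlation indices to signed displacements.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx
```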

    A virtual object point model for the calibration of underwater stereo cameras to recover accurate 3D information

    The focus of this thesis is on recovering accurate 3D information from underwater images. Underwater 3D reconstruction differs significantly from 3D reconstruction in air due to the refraction of light. In this thesis, the concepts of stereo 3D reconstruction in air are extended to underwater environments by explicitly considering refractive effects with the aid of a virtual object point model. Within underwater stereo 3D reconstruction, the focus of this thesis is on the refractive calibration of underwater stereo cameras.
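
    To make the refractive effect concrete, the sketch below solves the forward projection of an underwater point through a flat port by bisecting on the interface crossing point, and compares the observed viewing angle with the naive in-air angle. The 2-D geometry, names, and indices are assumptions for illustration; the thesis' virtual object point model itself is not reproduced.

```python
import numpy as np

def project_through_flat_port(X, Z, h, n_air=1.0, n_water=1.33):
    """Where a flat-port camera actually sees an underwater point (2-D toy).

    Camera at the origin looking along +z, flat interface at z = h,
    true point at (X, 0, Z) with X >= 0 and Z > h.  Solves Snell's law for
    the interface crossing x by bisection and returns the observed in-air
    viewing angle together with the naive (no-refraction) angle.
    """
    def snell_mismatch(x):
        sin_a = x / np.hypot(x, h)                    # sine of in-air angle
        sin_w = (X - x) / np.hypot(X - x, Z - h)      # sine of in-water angle
        return n_air * sin_a - n_water * sin_w

    lo, hi = 0.0, X
    for _ in range(80):                               # bisection on the crossing
        mid = 0.5 * (lo + hi)
        if snell_mismatch(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    x_cross = 0.5 * (lo + hi)
    observed = np.arctan2(x_cross, h)                 # direction the camera records
    naive = np.arctan2(X, Z)                          # direction without refraction
    return observed, naive
```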