
    A Polynomial Model with Line-of-Sight Constraints for Lagrangian Particle Tracking Under Interface Refraction

    This paper introduces an improvement of the Shake-The-Box (STB) technique (Schanz, Gesemann, and Schröder, Exp. Fluids 57.5, 2016) that uses a polynomial calibration model and line-of-sight constraints (LOSC) to overcome refractive-interface issues in Lagrangian particle tracking (LPT) measurements. The method, named LOSC-LPT, draws inspiration from two-plane polynomial camera calibration in tomographic particle image velocimetry (Tomo-PIV) (Worth and Nickels, Thesis, 2010) and from the STB-based open-source Lagrangian particle tracking (OpenLPT) framework (Tan, Salibindla, Masuk, and Ni, Exp. Fluids 61.2, 2019). LOSC-LPT introduces polynomial mapping functions into STB calibration for conditions involving gas-solid-liquid interfaces at container walls with large refractive index variations, which enables particle stereo matching, three-dimensional (3D) triangulation, iterative particle reconstruction, and further refinement of 3D particle positions by shaking along the lines of sight. Performance evaluation on synthetic noise-free images with a particle image density of 0.05 particles per pixel (ppp) in the presence of refractive interfaces demonstrates that LOSC-LPT detects more particles and exhibits lower position uncertainty in the reconstructed particles, yielding higher accuracy and robustness than OpenLPT. Applied to an elliptical jet flow in an octagonal tank with refractive interfaces, the polynomial mapping yields smaller errors (mean calibration error 1.0 px). Moreover, 3D flow-field reconstructions demonstrate that the LOSC-LPT framework recovers a more accurate 3D Eulerian flow field and captures more complete coherent structures in the flow, and thus holds great potential for widespread application in 3D experimental fluid measurements.
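    The two-plane polynomial calibration mentioned in this abstract maps world coordinates on a known calibration plane to image coordinates with a least-squares polynomial fit, which absorbs refractive distortion without an explicit interface model. A minimal sketch in Python (illustrative only; the function names and the cubic order are assumptions, not the paper's implementation):

```python
import numpy as np

def poly_features(x, y, order=3):
    """Build 2D polynomial terms x^i * y^j with i + j <= order."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_plane_mapping(world_xy, pixel_uv, order=3):
    """Least-squares fit of a polynomial map (x, y) -> (u, v) on one
    calibration plane; repeating this on two planes lets each pixel
    define a line of sight through the measurement volume."""
    A = poly_features(world_xy[:, 0], world_xy[:, 1], order)
    coeffs, *_ = np.linalg.lstsq(A, pixel_uv, rcond=None)
    return coeffs  # shape: (n_terms, 2)

def map_to_pixels(world_xy, coeffs, order=3):
    """Apply a fitted mapping to new world-plane coordinates."""
    return poly_features(world_xy[:, 0], world_xy[:, 1], order) @ coeffs
```

    Because the mapping is fitted per plane from observed target points, any smooth distortion introduced by the refractive interfaces is folded into the polynomial coefficients.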

    A virtual object point model for the calibration of underwater stereo cameras to recover accurate 3D information

    The focus of this thesis is on recovering accurate 3D information from underwater images. Underwater 3D reconstruction differs significantly from 3D reconstruction in air due to the refraction of light. In this thesis, the concepts of stereo 3D reconstruction in air are extended to underwater environments through explicit consideration of refractive effects with the aid of a virtual object point model. Within underwater stereo 3D reconstruction, the focus is on the refractive calibration of underwater stereo cameras.

    Calibration of multiple cameras for large-scale experiments using a freely moving calibration target

    Obtaining accurate experimental data from Lagrangian tracking and tomographic velocimetry requires an accurate camera calibration that is consistent over multiple views. Established calibration procedures are often challenging to implement when the length scale of the measurement volume exceeds that of a typical laboratory experiment. Here, we combine tools developed in computer vision with the non-linear camera mappings used in experimental fluid mechanics to calibrate a four-camera setup imaging inside a large tank of dimensions ∼10 × 25 × 6 m³. The calibration procedure uses a planar checkerboard that is arbitrarily positioned at unknown locations and orientations, and the method can be applied to any number of cameras. The calibration parameters yield direct estimates of the positions and orientations of the four cameras as well as the focal lengths of the lenses; these parameters are used to assess the quality of the calibration. The calibration allows us to perform accurate and consistent linear ray-tracing, which we use to triangulate and track fish inside the large tank. An open-source implementation of the calibration in Matlab is available.
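    Once the cameras are calibrated, a target (here, a fish) is triangulated as the 3D point closest, in the least-squares sense, to the rays back-projected from each camera. A hedged sketch of this standard linear step (not the authors' Matlab implementation; the function name is hypothetical):

```python
import numpy as np

def triangulate_rays(origins, dirs):
    """Return the 3D point minimizing the sum of squared distances to a
    set of rays, each given by an origin and a direction vector."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

    With two or more rays of distinct directions the normal matrix is invertible, and the residual distance to each ray is a convenient per-point quality check on the calibration.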

    Distortion Estimation Through Explicit Modeling of the Refractive Surface

    Precise calibration is a must for reliable 3D computer vision algorithms. A challenging case arises when the camera is behind protective glass or a transparent object: due to refraction, the image is heavily distorted; the pinhole camera model alone cannot be used, and a distortion correction step is required. By directly modeling the geometry of the refractive media, we build the image generation process by tracing individual light rays from the camera to a target. Comparing the generated images to their distorted, observed counterparts, we estimate the geometry parameters of the refractive surface via model inversion by employing an RBF neural network. We present an image collection methodology that produces data suited for finding the distortion parameters, and we test our algorithm on synthetic and real-world data. We analyze the results of the algorithm.
    Comment: Accepted to ICANN 201
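    Tracing individual light rays through a refractive surface, as this abstract describes, relies on the vector form of Snell's law at each interface. An illustrative implementation (the function name and sign conventions are assumptions, not the paper's code):

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray); eta = n1 / n2 is the refractive index ratio.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)                 # cosine of incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)    # Snell: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

    Applying this once per interface (e.g. air-to-glass, then glass-to-water) yields the bent ray path from the camera to the target that the forward image-generation model needs.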

    Multi-aperture foveated imaging

    Foveated imaging, such as that evolved by biological systems to provide high angular resolution with a reduced space–bandwidth product, also offers advantages for man-made, task-specific imaging. However, foveated imaging systems that rely exclusively on optical distortion are complex, bulky, and costly. We demonstrate foveated imaging using a planar array of identical cameras combined with a prism array and superresolution reconstruction of a mosaicked image, achieving a foveal variation in angular resolution of 5.9:1 and a quadrupling of the field of view. The combination of low-cost, mass-produced cameras and optics with computational image recovery offers an enhanced capability for achieving large foveal ratios from compact, low-cost imaging systems.

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need to model computer graphics content grows stronger. To achieve photo-realism, detailed scenes have to be modeled, often with a significant amount of manual labour. Interdisciplinary research combining the fields of Computer Graphics, Computer Vision, and Scientific Computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. The modeling of animated content is especially challenging: realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content, and augmented-reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke, and fluid flows.

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capacity of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. In particular, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system; deep color features and geodesic distance features are then used to perform face recognition. To handle underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation; we also develop a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction; we formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals within an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
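    The photometric cues mentioned in this abstract can be illustrated with classical Lambertian photometric stereo, where per-pixel intensities under known light directions determine the surface normal by least squares. This is a textbook sketch under simplifying assumptions (Lambertian reflectance, calibrated distant lights), not the dissertation's polarimetric method:

```python
import numpy as np

def photometric_normals(I, L):
    """Lambertian photometric stereo for one pixel: intensities
    I (n_lights,) satisfy I = L @ (albedo * normal), where rows of
    L (n_lights, 3) are unit light directions. Returns (normal, albedo)."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

    With three or more non-coplanar lights the system is well posed; uncontrolled environment light violates the model, which is why the dissertation refines a lighting proxy map iteratively.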

    Depth recovery and parameter analysis using single-lens prism based stereovision system

    Ph.D. thesis, Doctor of Philosophy