
    Flaw reconstruction in NDE using a limited number of x-ray radiographic projections

    One of the major problems in nondestructive evaluation (NDE) is the evaluation of flaw sizes and locations in a limited-inspectability environment. In NDE x-ray radiography, this frequently occurs when the geometry of the part under test does not allow x-ray penetration in certain directions. At other times, the inspection setup in the field does not allow for inspection at all angles around the object. This dissertation presents a model-based reconstruction technique which requires only a small number of x-ray projections from one side of the object under test. Estimating and reconstructing model parameters rather than the flaw distribution itself requires much less information, thereby reducing the number of required projections. Crack-like flaws are modeled as piecewise linear curves (connected points) and are reconstructed stereographically from at least two projections by matching corresponding endpoints of the linear segments. Volumetric flaws are modeled as ellipsoids and as elliptical slices through ellipsoids. The elliptical principal axis lengths, orientation angles, and locations are estimated by fitting a forward model to the projection data; the fitting procedure is highly nonlinear and requires stereographic projections to obtain initial estimates of the model parameters. The methods are tested on both simulated and experimental data, and comparisons are made with models from the field of stereology. Finally, an analysis of reconstruction errors is presented for both models.
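
    As a rough illustration of the volumetric-flaw fitting step described above, the sketch below fits the centre, semi-axes and orientation of a uniform ellipse to a few parallel-beam projection profiles by nonlinear least squares. The forward model is the standard Radon transform of an ellipse; all names, values and the use of scipy are illustrative assumptions, not the dissertation's code.

```python
# Hypothetical sketch: fit ellipse parameters (center, semi-axes, angle) to
# parallel-beam x-ray projection profiles by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def ellipse_projection(s, phi, x0, y0, a, b, theta, density=1.0):
    """Path length through a uniform ellipse along rays at angle phi,
    sampled at detector coordinates s (standard Radon-transform formula)."""
    rho2 = (a * np.cos(phi - theta))**2 + (b * np.sin(phi - theta))**2
    s0 = x0 * np.cos(phi) + y0 * np.sin(phi)
    under = rho2 - (s - s0)**2
    p = np.zeros_like(s)
    inside = under > 0
    p[inside] = 2.0 * density * a * b * np.sqrt(under[inside]) / rho2
    return p

def residuals(params, s, phis, profiles):
    x0, y0, a, b, theta = params
    return np.concatenate([ellipse_projection(s, phi, x0, y0, a, b, theta) - prof
                           for phi, prof in zip(phis, profiles)])

# A few limited-angle views from "one side" of the object (illustrative values).
s = np.linspace(-10, 10, 256)
phis = np.deg2rad([0.0, 15.0, 30.0])
true = (1.0, -0.5, 3.0, 1.5, np.deg2rad(20))
profiles = [ellipse_projection(s, phi, *true) for phi in phis]

# Initial estimate (in the dissertation, obtained stereographically), then refine.
fit = least_squares(residuals, x0=(0.5, 0.0, 2.5, 1.0, 0.1), args=(s, phis, profiles))
print(fit.x)
```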

    Trifocal Relative Pose from Lines at Points and its Efficient Solution

    We present a new minimal problem for relative pose estimation mixing point features with lines incident at points observed in three views, and its efficient homotopy continuation solver. We demonstrate the generality of the approach by analyzing and solving an additional problem with mixed point and line correspondences in three views. The minimal problems include correspondences of (i) three points and one line and (ii) three points and two lines through two of the points, which is reported and analyzed here for the first time. These are difficult to solve, as they have 216 and, as shown here, 312 solutions, but they cover important practical situations when line and point features appear together, e.g., in urban scenes or when observing curves. We demonstrate that even such difficult problems can be solved robustly using a suitable homotopy continuation technique, and we provide an implementation optimized for minimal problems that can be integrated into engineering applications. Our simulated and real experiments demonstrate our solvers in the camera geometry computation task in structure from motion. We show that the new solvers allow for reconstructing challenging scenes where the standard two-view initialization of structure from motion fails.
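
    The homotopy continuation idea behind such solvers can be illustrated on a toy univariate problem: start from a system with known roots and numerically track each root as the system is deformed into the target one, using a random complex "gamma" factor to steer the paths away from singularities. The sketch below is a simple predictor-corrector tracker, not the authors' solver, and omits the safeguards (adaptive step size, endgames) a real solver needs; the same principle, applied to multivariate polynomial systems, handles the 216- and 312-solution problems.

```python
# Toy homotopy continuation: track roots of g(x) = x^3 - 1 (known) to the
# roots of a target polynomial f(x) along H(x,t) = (1-t)*gamma*g(x) + t*f(x).
import numpy as np

rng = np.random.default_rng(0)
gamma = np.exp(2j * np.pi * rng.random())   # random complex factor ("gamma trick")

def track_root(f, df, g, dg, x, steps=400, newton_iters=5):
    """Track one solution path of H(x,t) from t = 0 to t = 1."""
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        # Euler predictor: dx/dt = -(dH/dt) / (dH/dx)
        Hx = (1 - t0) * gamma * dg(x) + t0 * df(x)
        x = x - (t1 - t0) * (f(x) - gamma * g(x)) / Hx
        # Newton corrector at t = t1
        for _ in range(newton_iters):
            H = (1 - t1) * gamma * g(x) + t1 * f(x)
            Hx = (1 - t1) * gamma * dg(x) + t1 * df(x)
            x = x - H / Hx
    return x

# Target: f(x) = x^3 - 2x + 5; start system g(x) = x^3 - 1 with known roots.
f, df = (lambda x: x**3 - 2*x + 5), (lambda x: 3*x**2 - 2)
g, dg = (lambda x: x**3 - 1), (lambda x: 3*x**2)
start_roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
print([np.round(track_root(f, df, g, dg, x0), 6) for x0 in start_roots])
```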

    Novel Approaches in Structured Light Illumination

    Among the various approaches to 3-D imaging, structured light illumination (SLI) is widespread. SLI employs a digital projector paired with a digital camera so that correspondences can be found from the projection and capture of a set of designed light patterns. As an active sensing method, SLI is known for its robustness and high accuracy. In this dissertation, I study the phase shifting method (PSM), one of the most widely employed strategies in SLI, and propose three novel approaches to it. First, by regarding pattern design as placing points in an N-dimensional space, I take phase measuring profilometry (PMP) as an example and propose the edge-pattern strategy, which achieves the maximum signal-to-noise ratio (SNR) for the projected patterns. Second, I develop a novel period-information-embedded pattern strategy for fast, reliable 3-D data acquisition and reconstruction. The proposed period-coded phase shifting strategy removes the depth ambiguity associated with traditional phase shifting patterns without reducing phase accuracy or increasing the number of projected patterns, so it can be employed in high-accuracy, real-time 3-D systems. Third, I propose a hybrid approach for high-quality 3-D reconstructions with only a small number of illumination patterns, maximizing the use of correspondence information from the phase, texture, and modulation data derived from multi-view, PMP-based SLI images, without rigorously synchronizing the cameras and projectors or calibrating the device gammas. Experimental results demonstrate the advantages of the proposed novel strategies for 3-D SLI systems.
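
    For reference, the core of the phase shifting method mentioned above is the recovery of a wrapped phase from N shifted sinusoidal patterns. The sketch below assumes captured patterns of the form I_n = A + B*cos(phi - 2*pi*n/N); variable names are illustrative, and the period-coding and multi-view strategies of the dissertation build on top of this basic step rather than being reproduced here.

```python
# Minimal sketch of N-step phase-shifting (PMP) wrapped-phase recovery.
import numpy as np

def wrapped_phase(images):
    """images: N camera frames, shape (N, H, W) -> wrapped phase in (-pi, pi]."""
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    delta = 2 * np.pi * np.arange(N) / N          # phase shifts of the patterns
    num = np.tensordot(np.sin(delta), I, axes=1)  # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(delta), I, axes=1)  # sum_n I_n * cos(delta_n)
    phase = np.arctan2(num, den)                  # wrapped phase per pixel
    modulation = 2.0 / N * np.hypot(num, den)     # B, useful as a quality mask
    return phase, modulation
```

    The recovered phase is still wrapped modulo 2π; resolving that ambiguity without extra patterns is exactly what the period-coded strategy in the dissertation addresses.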

    Ricerche di Geomatica 2011

    This volume collects the contributions submitted for the AUTeC Prize 2011. The prize was established in 2005 and is awarded each year to a doctoral thesis judged particularly significant in the subject areas of the SSD ICAR/06 disciplinary sector (Surveying and Cartography), across the doctoral programmes active in Italy.

    Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System

    Autonomously operating UAVs demand fast localization for navigation, for actively exploring unknown areas, and for creating maps. For pose estimation, many UAV systems use a combination of GPS receivers and inertial measurement units (IMUs). However, GPS signal coverage may drop out occasionally, especially in the close vicinity of objects, and precise IMUs are too heavy to be carried by lightweight UAVs. This, together with the high cost of high-quality IMUs, motivates the use of inexpensive vision-based sensors for localization using visual odometry or visual SLAM (simultaneous localization and mapping) techniques. The first contribution of this thesis is a more general approach to bundle adjustment with an extended version of the projective coplanarity equation, which enables the use of omnidirectional multi-camera systems, for example systems of fisheye cameras that capture a large field of view in one shot. We use ray directions as observations instead of image points, so our approach does not rely on a specific projection model beyond the assumption of a central projection. In addition, our approach allows the integration and estimation of points at infinity, which classical bundle adjustments are not capable of. We show that the integration of far or infinitely far points stabilizes the estimation of the rotation angles of the camera poses. In the second contribution, we employ this approach to bundle adjustment in a highly integrated system for incremental pose estimation and mapping on lightweight UAVs. Based on the image sequences of a multi-camera system, our system uses tracked feature points to incrementally build a sparse map and incrementally refines this map using the iSAM2 algorithm. Our system is able to optionally integrate GPS information at the level of carrier phase observations, even in underconstrained situations, e.g. if only two satellites are visible, for georeferenced pose estimation. This way, we are able to use all available information in underconstrained GPS situations to keep the mapped 3D model accurate and georeferenced. In the third contribution, we present an approach for re-using existing methods for dense stereo matching with fisheye cameras, which has the advantage that highly optimized existing methods can be applied as a black box, without modification, even with cameras that have a field of view of more than 180 degrees. We provide a detailed accuracy analysis of the obtained dense stereo results. The accuracy analysis shows the growing uncertainty of observed image points of fisheye cameras due to increasing blur towards the image border. The core of this contribution is a rigorous variance component estimation which makes it possible to estimate the variance of the observed disparities at an image point as a function of the distance of that point to the principal point. We show that this improved stochastic model provides a more realistic prediction of the uncertainty of the triangulated 3D points.
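
    A hedged sketch of the core modelling idea in the first contribution: using ray directions rather than image coordinates as bundle-adjustment observations, so that any central camera (including fisheye lenses) and points at infinity fit the same residual. This is an illustration of the principle only, not the thesis implementation.

```python
# Angular residual between an observed unit ray (camera frame) and the
# direction to a homogeneous scene point; w = 0 represents a point at infinity.
import numpy as np

def ray_residual(R, t, X_h, ray_obs):
    """R, t: camera rotation and translation; X_h = (X, Y, Z, w) homogeneous point;
    ray_obs: observed ray direction in the camera frame."""
    d = R @ X_h[:3] + X_h[3] * t               # direction in camera frame
    d = d / np.linalg.norm(d)
    r = ray_obs / np.linalg.norm(ray_obs)
    return np.arccos(np.clip(d @ r, -1.0, 1.0))  # angular error in radians
```

    Because the residual is an angle between rays, the same term serves perspective and fisheye cameras alike, and setting w = 0 keeps infinitely far points well defined instead of producing unbounded coordinates.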

    Surfaces from the visual past: recovering high-resolution terrain data from historic aerial imagery for multitemporal landscape analysis

    Historic aerial images are invaluable aids to archaeological research. Often collected with large-format photogrammetric-quality cameras, these images are potential archives of multidimensional data that can be used to recover information about historic landscapes that have been lost to modern development. However, a lack of camera information for many historic images, coupled with physical degradation of their media, has often made it difficult to compute geometrically rigorous 3D content from such imagery. While advances in photogrammetry and computer vision over the last two decades have made possible the extraction of accurate and detailed 3D topographical data from high-quality digital images emanating from uncalibrated or unknown cameras, the target source material for these algorithms is normally digital content and thus not negatively affected by the passage of time. In this paper, we present refinements to a computer-vision-based workflow for the extraction of 3D data from historic aerial imagery, using readily available software, specific image preprocessing techniques, and in-field measurement observations to mitigate some shortcomings of archival imagery and improve the extraction of historical digital elevation models (hDEMs) for use in landscape archaeological research. We apply the developed method to a series of historic image sets and modern topographic data covering a period of over 70 years in western Sicily (Italy) and evaluate the outcome. The resulting series of hDEMs forms a temporal data stack which is compared with modern high-resolution terrain data using a geomorphic change detection approach, providing a quantification of landscape change through time in extent and depth, and of the impact of this change on archaeological resources.
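
    The geomorphic change detection step mentioned above is essentially a thresholded DEM of difference. The sketch below is a minimal illustration of that idea, assuming co-registered elevation grids and per-surface uncertainty estimates; array names and the 95% threshold are assumptions, not values from the paper.

```python
# DEM-of-difference with a level-of-detection threshold for significant change.
import numpy as np

def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, k=1.96):
    """Return elevation change (m), masked where it is not significant at ~95%."""
    dod = dem_new - dem_old                           # raw elevation change
    lod = k * np.sqrt(sigma_new**2 + sigma_old**2)    # propagated level of detection
    return np.where(np.abs(dod) > lod, dod, np.nan)   # keep only significant change

# Erosion/deposition volumes then follow by summing negative/positive cells
# of the returned grid and multiplying by the cell area.
```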

    Automatic face recognition using stereo images

    Face recognition is an important pattern recognition problem in the study of both natural and artificial learning. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, from human-computer interaction and access control to law enforcement and crowd surveillance. In typical optical-image-based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured under different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical-image-based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity, either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment. A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated, off-the-shelf digital cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare; this was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2½D depth maps. Recognition experiments are performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
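
    A minimal sketch of the depth-from-disparity step in a calibrated, rectified stereo rig such as the one described above; parameter names are illustrative. The error-propagation note in the closing comment is one reason stereoscopic depth is as noisy as the abstract reports.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Return per-pixel depth in metres; invalid (NaN) where disparity ~ 0."""
    d = np.asarray(disparity, dtype=float)
    safe = np.maximum(d, min_disp)                     # avoid division by zero
    return np.where(d > min_disp, focal_px * baseline_m / safe, np.nan)

# Error propagation: dZ = (Z**2 / (f * B)) * d(disparity), so small disparity
# errors translate into depth errors that grow quadratically with distance.
```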

    Volumetric error modelling of a stereo vision system for error correction in photogrammetric three-dimensional coordinate metrology

    Optical three-dimensional coordinate measurement using stereo vision has systematic errors that affect measurement quality. This paper presents a scheme for measuring, modelling and correcting these errors. The position and orientation of a linear stage are measured with a laser interferometer while a stereo vision system tracks target points on the moving stage. With reference to the higher-accuracy laser interferometer measurement, the displacement errors of the tracked points are evaluated. Regression using a neural network is used to generate a volumetric error model from the evaluated displacement errors; the regression model is shown to outperform other interpolation methods. The volumetric error model is validated by correcting the three-dimensional coordinates of the point cloud from a photogrammetry instrument that uses the stereo vision system. The corrected points from the measurement of a calibrated spherical artefact are shown to have size and form errors of less than 50 µm and 110 µm respectively. A reduction of up to 30% in the magnitude of the probing size error is observed after error correction is applied.
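
    The regression step above maps measured 3D coordinates to displacement errors. The sketch below illustrates that idea with scikit-learn's MLPRegressor standing in for the paper's network; the training arrays, layer sizes and correction function are placeholder assumptions, not the paper's configuration.

```python
# Learn a volumetric error field (coordinates -> displacement error), then
# subtract the predicted error from newly measured points.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X_meas: (N, 3) stereo-vision coordinates of tracked targets
# err:    (N, 3) displacement errors referenced to the laser interferometer
X_meas = np.random.rand(500, 3)          # placeholder training data
err = 0.05 * np.random.rand(500, 3)      # placeholder displacement errors

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
model.fit(X_meas, err)

def correct(points):
    """Apply the learned volumetric error model to a measured point cloud."""
    return points - model.predict(points)
```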

    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common at one time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific design choices made in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
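
    One of the design decisions such a survey compares is when to insert a keyframe. The heuristic below is an illustrative composite of commonly used criteria (all thresholds are assumptions), not a rule taken from any specific system discussed in the paper.

```python
# Illustrative keyframe-selection heuristic for a keyframe-based monocular SLAM.
def is_new_keyframe(frames_since_kf, tracked_ratio, median_parallax_deg,
                    min_gap=20, min_ratio=0.7, min_parallax=1.0):
    """Insert a keyframe when tracking weakens or enough parallax has accrued."""
    if frames_since_kf < min_gap:
        return False                           # avoid redundant, nearby keyframes
    if tracked_ratio < min_ratio:
        return True                            # tracking is degrading, anchor a keyframe
    return median_parallax_deg > min_parallax  # enough baseline to triangulate new points
```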

    A Tomographic-PIV Investigation of Vapor-Induced Flow Structures in Confined Jet Impingement Boiling

    Tomographic particle image velocimetry (PIV) is used to study the effect of confinement gap height on the liquid flow characteristics in jet impingement boiling. This first application of tomographic PIV to flow boiling is significant given the complexity of confined two-phase jet impingement. A jet of subcooled water at a Reynolds number of 5,000 impinges onto a circular heat source undergoing boiling heat transfer at a constant heat input. Confinement gap heights of 8, 4, and 2 jet diameters are investigated. A visual hull method is used to reconstruct the time-varying regions of the vapor in the flow. The vapor motion is found to govern the liquid flow pattern and turbulence generation in the confinement gap. Time-averaged velocities and regions of turbulent kinetic energy in the liquid are highest for a confinement gap height of 8 jet diameters, with lower velocity magnitude and turbulence being observed for the smaller spacings. Coherent vortical structures identified with the λ2-criterion are found to occur most frequently near the moving vapor interface. The most intense regions of turbulent kinetic energy do not coincide with the location of coherent structures within the flow. Irrotational velocity fluctuations in the liquid phase caused by vapor bubble pinch-off are the primary cause of the high turbulent kinetic energy measured in these regions. At a gap height of H/d = 2 the vapor plume is constrained as it grows from the heat source, causing bulk flow oscillations in the downstream region of the confinement gap.
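
    For reference, the λ2-criterion used above flags a vortex core where the middle eigenvalue of S² + Ω² (the strain-rate and rotation parts of the velocity gradient tensor) is negative. The sketch below is a generic, simplified implementation on a uniform grid, not the authors' processing code.

```python
# Lambda-2 vortex criterion from a 3-D velocity field on a uniform grid.
import numpy as np

def lambda2(u, v, w, dx, dy, dz):
    """u, v, w: 3-D velocity component arrays; returns lambda_2 at each point."""
    grads = [np.gradient(c, dx, dy, dz) for c in (u, v, w)]            # rows of J
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)       # (..., 3, 3)
    S = 0.5 * (J + np.swapaxes(J, -1, -2))                             # strain-rate tensor
    Om = 0.5 * (J - np.swapaxes(J, -1, -2))                            # rotation tensor
    M = S @ S + Om @ Om                                                # symmetric matrix
    eig = np.linalg.eigvalsh(M)                                        # ascending eigenvalues
    return eig[..., 1]                                                 # lambda_2 < 0 marks a vortex
```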