
    Surface reconstruction of a blast plate using stereo vision

    Includes bibliographical references. This thesis presents a method for reconstructing and measuring the profile of a blast metal plate. Among the many methods in computer vision, stereo vision using two cameras is chosen as the range-finding method in this thesis because it is a non-contact method and hence eliminates the need to calibrate moving parts. A stereo-rig consists of two calibrated cameras and hence provides a two-view geometry. Stereoscopic reconstruction relies on epipolar geometry to constrain the relationship between the views. The 3-D point is then estimated by triangulating the corresponding points from the two views. The blast plates that are reconstructed have highly reflective surfaces, which causes problems due to specular reflection. This thesis further studies the reflective properties of the metal plate surface. Different methods of scanning the plate using the stereo-rig are investigated, and the reconstructions obtained from these methods are analysed for accuracy and consistency. Since low-cost cameras are used in constructing the stereo-rig, the point cloud data obtained are further investigated for consistency by aligning different instances of the reconstruction. This is done using the Iterative Closest Point (ICP) algorithm, which iteratively aligns two sets of data.
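The core step described in this abstract, triangulating a 3-D point from corresponding rays in two calibrated views, can be sketched as follows. The midpoint method shown here is one common variant; the thesis does not specify which triangulation it uses, so the function name and ray parameterisation are illustrative only.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: return the point halfway between the closest
    points on two viewing rays p_i(t) = c_i + t * d_i (directions need not
    be unit length). Illustrative stand-in for the thesis's triangulation."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = tuple(p - q for p, q in zip(c1, c2))  # offset between camera centres
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    # ray parameters of the mutually closest points (least-squares solution)
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(ci + t1 * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t2 * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(p1, p2))
```

With perfect correspondences the two rays intersect and the midpoint is exact; with noisy matches it returns the point closest to both rays.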

    Registration and Recognition in 3D

    The simplest Computer Vision algorithm can tell you what color it sees when you point it at an object, but asking that computer what it is looking at is a much harder problem. Camera and LiDAR (Light Detection And Ranging) sensors generally provide streams of pixel values, and sophisticated algorithms must be engineered to recognize objects or the environment. There has been significant effort expended by the computer vision community on recognizing objects in color images; however, LiDAR sensors, which sense depth values for pixels instead of color, have been studied less. Recently we have seen a renewed interest in depth data with the democratization provided by consumer depth cameras. Detecting objects in depth data is more challenging in some ways because of the lack of texture and the increased complexity of processing unordered point sets. We present three systems that contribute to solving the object recognition problem from the LiDAR perspective: calibration, registration, and object recognition systems. We propose a novel calibration system that works with both line- and raster-based LiDAR sensors and calibrates them with respect to image cameras. Our system can be extended to calibrate LiDAR sensors that do not give intensity information. We demonstrate a novel system that produces registrations between different LiDAR scans by transforming the input point cloud into a Constellation Extended Gaussian Image (CEGI) and then using this CEGI to estimate the rotational alignment of the scans independently. Finally, we present a method for object recognition which uses local (Spin Images) and global (CEGI) information to recognize cars in a large urban dataset. We present real-world results from these three systems. Compelling experiments show that object recognition systems can gain much information using only 3D geometry. There are many object recognition and navigation algorithms that work on images; the work we propose in this thesis is more complementary to those image-based methods than competitive. This is an important step along the way to more intelligent robots.
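The rotational-alignment idea behind an Extended Gaussian Image, histogramming surface normals and correlating the histograms of two scans, can be illustrated in a drastically simplified 1-D form (rotation about a single vertical axis only; the CEGI itself is a richer spherical representation). All names and the binning scheme here are illustrative assumptions, not the thesis's implementation.

```python
import math

def azimuth_histogram(normals, bins=36):
    # crude 1-D "EGI": histogram surface normals by azimuth angle only
    h = [0] * bins
    for nx, ny, nz in normals:
        a = math.atan2(ny, nx) % (2.0 * math.pi)
        h[int(a / (2.0 * math.pi) * bins) % bins] += 1
    return h

def best_rotation_z(normals_a, normals_b, bins=36):
    # circular cross-correlation of the two histograms; the peak shift
    # estimates the yaw taking scan A onto scan B
    ha = azimuth_histogram(normals_a, bins)
    hb = azimuth_histogram(normals_b, bins)
    best_shift, best_score = 0, float("-inf")
    for shift in range(bins):
        score = sum(ha[i] * hb[(i + shift) % bins] for i in range(bins))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * 2.0 * math.pi / bins
```

The appeal of this family of methods is that rotation is estimated independently of translation, which is what the CEGI registration system above exploits on the full sphere.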

    ANALYSIS OF UNCERTAINTY IN UNDERWATER MULTIVIEW RECONSTRUCTION

    Multiview reconstruction, a method for creating 3D models from multiple images taken from different views, has been a popular topic of research in the field of computer vision over the last two decades. The increased availability of high-quality cameras has led to the development of advanced techniques and algorithms. However, little attention has been paid to multiview reconstruction in underwater conditions. Researchers in a wide variety of fields (e.g. marine biology, archaeology, and geology) could benefit from having 3D models of the seafloor and underwater objects. Cameras, designed to operate in air, must be put in protective housings to work underwater. This affects the image formation process. The largest source of underwater image distortion results from refraction of light, which occurs when light rays travel through boundaries between media with different refractive indices. This study addresses methods for accounting for light refraction when using a static rig with multiple cameras. We define a set of procedures to achieve optimal underwater reconstruction results, and we analyze the expected quality of the 3D models' measurements.
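The refraction effect described above follows Snell's law, n1·sin(θ1) = n2·sin(θ2), applied at each housing interface. A minimal vector form of a single flat-interface refraction is sketched below; the function name and conventions are my own, not the study's.

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a flat interface with unit normal n
    (n points back toward the incoming ray), going from refractive index
    n1 into n2. Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -sum(a * b for a, b in zip(d, n))       # cosine of incidence angle
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)     # 1 - sin^2(theta_t)
    if k < 0.0:
        return None  # total internal reflection
    cos_t = math.sqrt(k)
    # standard vector form of Snell's law; result is again a unit vector
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))
```

Tracing each pixel's ray through the air/glass/water interfaces of a flat housing port with this relation, instead of assuming a single pinhole in air, is the essence of refraction-aware multiview reconstruction.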

    Fusion of LIDAR with stereo camera data - an assessment

    This thesis explores data fusion of LIDAR (laser range-finding) with stereo matching, with a particular emphasis on close-range industrial 3D imaging. Recently there has been interest in improving the robustness of stereo matching using data fusion with active range data. These range data have typically been acquired using time-of-flight cameras (ToFCs); however, ToFCs offer poor spatial resolution and are noisy. Comparatively little work has been performed using LIDAR. It is argued that stereo and LIDAR are complementary and that there are numerous advantages to integrating LIDAR into stereo systems. For instance, camera calibration is a necessary prerequisite for stereo 3D reconstruction, but the process is often tedious and requires precise calibration targets. It is shown that a visible-beam LIDAR enables automatic, accurate (sub-pixel) extrinsic and intrinsic camera calibration without any explicit targets. Two methods for using LIDAR to assist dense disparity map computation in featureless scenes were investigated. The first involved using the LIDAR to provide high-confidence seed points for a region-growing stereo matching algorithm. It is shown that these seed points allow dense matching in scenes which fail to match using stereo alone. Secondly, the LIDAR was used to provide artificial texture in featureless image regions. Texture was generated by combining real or simulated images of every point the laser hits to form a pseudo-random pattern. Machine learning was used to determine the image regions that are most likely to be stereo-matched, reducing the number of LIDAR points required. Results are compared to competing techniques such as laser speckle, data projection and diffractive optical elements.
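The first fusion method, seeding a region-growing stereo matcher with high-confidence LIDAR disparities, can be sketched as below. The data layout (`costs[y][x][d]`) and the single-step growth constraint are illustrative assumptions; a real matcher like the one in the thesis would also apply a cost threshold to stop growth in ambiguous regions.

```python
from collections import deque

def grow_disparities(costs, seeds, max_step=1):
    """Seeded region-growing stereo sketch (hypothetical data layout).
    costs[y][x][d] is the matching cost of assigning disparity d to pixel
    (x, y); seeds maps (x, y) -> disparity from high-confidence LIDAR hits."""
    h, w = len(costs), len(costs[0])
    ndisp = len(costs[0][0])
    disp = dict(seeds)            # accepted disparities, starting from seeds
    queue = deque(disp)
    while queue:
        x, y = queue.popleft()
        d0 = disp[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in disp:
                # search only near the neighbour's disparity: the growing
                # constraint that makes featureless regions tractable
                cand = range(max(0, d0 - max_step), min(ndisp, d0 + max_step + 1))
                disp[(nx, ny)] = min(cand, key=lambda d: costs[ny][nx][d])
                queue.append((nx, ny))
    return disp
```

The LIDAR seeds anchor the growth in regions where unconstrained stereo matching would otherwise fail for lack of texture.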

    Challenges in 3D scanning: Focusing on Ears and Multiple View Stereopsis


    Assessment of Camera Pose Estimation Using Geo-Located Images from Simultaneous Localization and Mapping

    This research proposes a method for enabling low-cost camera localization using geo-located images generated with factor-graph-based Simultaneous Localization And Mapping (SLAM). The SLAM results are paired with panoramic image data to generate geo-located images, which can be used to locate and orient low-cost cameras. This study determines the efficacy of using a spherical camera and LIDAR sensor to enable localization for a wide range of cameras with low size, weight, power, and cost. This includes determining the accuracy of SLAM when geo-referencing images, along with introducing a promising method for extracting range measurements from monocular images of known features.
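Extracting range from a monocular image of a known feature reduces, in the simplest fronto-parallel case, to the pinhole similar-triangles relation Z = f·W/w, with focal length f in pixels, real feature width W, and imaged width w in pixels. A toy sketch with illustrative names (the abstract does not give the study's exact formulation):

```python
def range_from_known_width(focal_px, real_width_m, imaged_width_px):
    # pinhole similar triangles: depth Z = f * W / w for a fronto-parallel
    # feature of known physical width W that spans w pixels in the image
    return focal_px * real_width_m / imaged_width_px
```

For example, a 0.5 m wide target spanning 100 pixels under an 800-pixel focal length sits at 4 m; obliquely viewed or poorly localized features would of course need a fuller model.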

    Visual Human-Computer Interaction


    Geometrical Analysis and Rectification of Thermal Infrared Video Frame Scanner Imagery and Its Potential Applications to Topographic Mapping

    This thesis is concerned with an investigation into the possibilities of generating metric information and carrying out topographic mapping operations from thermal frame scanner video images. The main aspects discussed within the context of this thesis are: (i) the construction and operational characteristics of video frame scanners; (ii) the geometry of frame scanners; (iii) geometric calibration of thermal video frame scanners; (iv) the devising, construction and integration of a video-based monocomparator for video image coordinate measurements; (v) devising and implementing suitable analytical photogrammetric techniques to be applied to frame scanner imagery; (vi) the use of such frame scanners to acquire airborne video images for a pre-selected test area; (vii) the interpretation of thermal video frame scanner imagery for topographic mapping; (viii) digital rectification of frame scanner imagery; and (ix) the creation of a three-dimensional stereo model on a video monitor screen using the digitally rectified video images.
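For ordinary frame imagery, digital rectification amounts to resampling the image through a projective mapping; a frame *scanner* additionally needs a sensor-specific, time-dependent model, so the plain 3x3 homography below is only the simplest illustrative building block, not the thesis's method.

```python
def apply_homography(H, x, y):
    # map pixel (x, y) through a 3x3 homography H (row-major nested lists),
    # then dehomogenise; in rectification this is evaluated per output pixel
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

A rectifier iterates over output pixels, maps each back into the source image with such a transform, and interpolates the source intensity there.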

    The use of consumer depth cameras for calculating body segment parameters.

    Body segment parameters (BSPs) are pivotal to a number of key analyses within sports and healthcare. Accuracy is paramount, as investigations have shown small errors in BSPs to have a significant impact upon subsequent analyses, particularly when analysing the dynamics of high-acceleration movements. There are many techniques with which to estimate BSPs; however, the majority are complex, time consuming, and make large assumptions about the underlying structure of the human body, leading to considerable errors. Interest is increasingly turning towards obtaining person-specific BSPs from 3D scans; however, the majority of current scanning systems are expensive, complex, require skilled operators, and require lengthy post-processing of the captured data. The purpose of this study was to develop a low-cost 3D scanning system capable of estimating accurate and reliable person-specific segmental volume, forming a fundamental first step towards calculation of the full range of BSPs. A low-cost 3D scanning system was developed, comprising four Microsoft Kinect RGB-D sensors and capable of estimating person-specific segmental volume in a scanning operation taking less than one second. Individual sensors were calibrated prior to first use, overcoming inherent distortion of the 3D data. Scans from each of the sensors were aligned with one another via an initial extrinsic calibration process, producing 360° colour-rendered 3D scans. A scanning protocol was developed, designed to limit movement due to postural sway and breathing throughout the scanning operation. Scans were post-processed to remove discontinuities at edges, and parameters of interest were calculated using a combination of manual digitisation and automated algorithms. The scanning system was validated using a series of geometric objects representative of human body segments, showing high reliability and a systematic overestimation of scan-derived measurements. Scan-derived volumes of living human participants were also compared to those calculated using a typical geometric BSP model. Results showed close agreement; however, absolute differences could not be quantified owing to the lack of gold-standard data. The study suggests the scanning system would be well received by practitioners, offering many advantages over current techniques. However, future work is required to further characterise the scanning system's absolute accuracy.
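Segmental volume from a closed 3D scan is commonly computed by summing signed tetrahedron volumes over a watertight triangle mesh (an application of the divergence theorem). The study does not state its exact algorithm, so the sketch below is illustrative.

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a watertight, consistently wound triangle mesh,
    as the sum of signed volumes of (origin, v1, v2, v3) tetrahedra."""
    total = 0.0
    for i, j, k in faces:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (
            vertices[i], vertices[j], vertices[k])
        # signed tetrahedron volume = det([v1, v2, v3]) / 6
        total += (x1 * (y2 * z3 - y3 * z2)
                  - x2 * (y1 * z3 - y3 * z1)
                  + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return abs(total)
```

Holes or inconsistent winding in the scanned surface break this estimate, which is one reason the post-processing step to remove edge discontinuities matters.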