Surface reconstruction of a blast plate using stereo vision
This thesis presents a method for reconstructing and measuring the profile of a blast metal plate. Among the many methods in computer vision, stereo vision using two cameras is chosen as the range-finding method in this thesis, because it is a non-contact method and hence eliminates the need to calibrate moving parts. A stereo-rig consists of two calibrated cameras and hence gives rise to two-view geometry. Stereoscopic reconstruction relies on epipolar geometry to constrain the relationship between the views. The 3-D point is then estimated by triangulating the corresponding points from the two views. The blast plates that are reconstructed have highly reflective surfaces, which causes a problem due to specular reflection. This thesis further studies the reflective properties of the metal plate surface. Different methods of scanning the plate using the stereo-rig are investigated, and the reconstructions obtained from these methods are analyzed for accuracy and consistency. Since low-cost cameras are used in constructing the stereo-rig, the point cloud data obtained are further checked for consistency by aligning different instances of the reconstruction. This is done using the Iterative Closest Point (ICP) algorithm, which aligns two sets of data iteratively.
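The ICP alignment step described above can be sketched in a few lines of NumPy. This is a minimal point-to-point ICP (brute-force nearest neighbours, Kabsch/SVD pose update), a generic sketch rather than the thesis's implementation; the iteration count and tolerance are illustrative:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=50, tol=1e-8):
    """Align `source` to `target` by iterating closest-point matching + Kabsch."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matches = target[d.argmin(axis=1)]
        err = d.min(axis=1).mean()
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```

In practice a k-d tree replaces the brute-force matching, and outlier rejection is added so that partially overlapping scans can still be aligned.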
Registration and Recognition in 3D
The simplest Computer Vision algorithm can tell you what color it sees when you point it at an object, but asking that computer what it is looking at is a much harder problem. Camera and LiDAR (Light Detection And Ranging) sensors generally provide streams of pixel values, and sophisticated algorithms must be engineered to recognize objects or the environment. There has been significant effort expended by the computer vision community on recognizing objects in color images; however, LiDAR sensors, which sense depth values for pixels instead of color, have been studied less. Recently we have seen a renewed interest in depth data with the democratization provided by consumer depth cameras. Detecting objects in depth data is more challenging in some ways because of the lack of texture and the increased complexity of processing unordered point sets. We present three systems that contribute to solving the object recognition problem from the LiDAR perspective: calibration, registration, and object recognition systems. We propose a novel calibration system that works with both line- and raster-based LiDAR sensors, and calibrates them with respect to image cameras. Our system can be extended to calibrate LiDAR sensors that do not give intensity information. We demonstrate a novel system that produces registrations between different LiDAR scans by transforming the input point cloud into a Constellation Extended Gaussian Image (CEGI) and then using this CEGI to estimate the rotational alignment of the scans independently. Finally, we present a method for object recognition which uses local (Spin Images) and global (CEGI) information to recognize cars in a large urban dataset. We present real-world results from these three systems. Compelling experiments show that object recognition systems can gain much information using only 3D geometry.
There are many object recognition and navigation algorithms that work on images; the work we propose in this thesis is complementary to those image-based methods rather than competitive with them. This is an important step along the way to more intelligent robots.
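The Constellation EGI builds on the classical Extended Gaussian Image, which is essentially a histogram of surface normal directions over the sphere. Below is a minimal plain-EGI sketch; the constellation structure and distance bookkeeping of the thesis's CEGI are omitted, and the azimuth/elevation bin counts are illustrative:

```python
import numpy as np

def extended_gaussian_image(normals, n_az=16, n_el=8):
    """Histogram unit surface normals over azimuth/elevation bins (plain EGI)."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    az = np.arctan2(normals[:, 1], normals[:, 0])          # [-pi, pi)
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))      # [-pi/2, pi/2]
    ai = np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)
    hist = np.zeros((n_az, n_el))
    np.add.at(hist, (ai, ei), 1.0)                         # unbuffered accumulate
    return hist / hist.sum()
```

Because the histogram depends only on normal directions, rotating the cloud permutes/shifts the histogram, which is what makes EGI-style representations useful for estimating rotational alignment between scans.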
Development of a Robotic Positioning and Tracking System for a Research Laboratory
Measurement of residual stress using neutron or synchrotron diffraction relies on the accurate alignment of the sample in relation to the gauge volume of the instrument. Automatic sample alignment can be achieved using kinematic models of the positioning system provided the relevant kinematic parameters are known, or can be determined, to a suitable accuracy.
The main problem addressed in this thesis is improving the repeatability and accuracy of sample positioning for strain scanning, through the use of techniques from robotic calibration theory to generate kinematic models of both off-the-shelf and custom-built positioning systems. The approach is illustrated using a positioning system in use on the ENGIN-X instrument at the UK’s ISIS pulsed neutron source, comprising a traditional XYZΩ table augmented with a triple-axis manipulator. Accuracies better than 100 microns were achieved for this compound system. Although discussed here in terms of sample positioning systems, these methods are entirely applicable to other moving instrument components such as beam-shaping jaws and detectors.
Several factors could lead to inaccurate positioning on a neutron or synchrotron diffractometer. It is therefore essential to validate the accuracy of positioning, especially during experiments which require a high level of accuracy. In this thesis, a stereo camera system is developed to monitor the sample and other moving parts of the diffractometer. The camera metrology system is designed to measure the positions of retroreflective markers attached to any object that is being monitored. A fully automated camera calibration procedure is developed with an emphasis on accuracy. The potential accuracy of this system is demonstrated and problems that limit accuracy are discussed. It is anticipated that the camera system would be used to correct the positioning system when the error is minimal, or to notify the user of the error when it is significant.
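The measurement core of such a stereo metrology system is triangulation: given each calibrated camera's projection matrix and a marker's pixel position in both views, the marker's 3-D position is recovered. A minimal linear (DLT) triangulation sketch, a generic textbook method rather than the thesis's calibrated pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D marker from two views.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the marker's
    pixel coordinates in each view.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # null vector of A is the point
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenise
```

With noisy detections this linear estimate is typically refined by minimising reprojection error, but the linear solution already shows how two calibrated views pin down a marker in 3-D.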
Analysis of Uncertainty in Underwater Multiview Reconstruction
Multiview reconstruction, a method for creating 3D models from multiple images from different views, has been a popular topic of research in the field of computer vision in the last two decades. Increased availability of high-quality cameras led to the development of advanced techniques and algorithms. However, little attention has been paid to multiview reconstruction in underwater conditions. Researchers in a wide variety of fields (e.g. marine biology, archaeology, and geology) could benefit from having 3D models of the seafloor and underwater objects. Cameras, designed to operate in air, must be put in protective housings to work underwater. This affects the image formation process. The largest source of underwater image distortion results from refraction of light, which occurs when light rays travel through boundaries between media with different refractive indices. This study addresses methods for accounting for light refraction when using a static rig with multiple cameras. We define a set of procedures to achieve optimal underwater reconstruction results, and we analyze the expected quality of the 3D models' measurements.
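The refraction effect described above follows the vector form of Snell's law. A minimal sketch of refracting a ray at a flat air/water interface; a full underwater camera model would also trace through the housing glass and intersect the refracted rays in the water:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray direction d at an interface with unit normal n
    (pointing toward the incoming ray), from index n1 into n2."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n
```

Because rays bend toward the normal when entering water (n2 ≈ 1.33), the standard pinhole model no longer holds exactly, which is why refraction must be modeled explicitly for accurate underwater reconstruction.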
Fusion of LIDAR with stereo camera data - an assessment
This thesis explores data fusion of LIDAR (laser range-finding) with stereo matching, with a particular emphasis on close-range industrial 3D imaging. Recently there has been interest in improving the robustness of stereo matching using data fusion with active range data. These range data have typically been acquired using time-of-flight cameras (ToFCs); however, ToFCs offer poor spatial resolution and are noisy. Comparatively little work has been performed using LIDAR. It is argued that stereo and LIDAR are complementary and there are numerous advantages to integrating LIDAR into stereo systems. For instance, camera calibration is a necessary prerequisite for stereo 3D reconstruction, but the process is often tedious and requires precise calibration targets. It is shown that a visible-beam LIDAR enables automatic, accurate (sub-pixel) extrinsic and intrinsic camera calibration without any explicit targets. Two methods for using LIDAR to assist dense disparity estimation in featureless scenes were investigated. The first involved using a LIDAR to provide high-confidence seed points for a region-growing stereo matching algorithm. It is shown that these seed points allow dense matching in scenes which fail to match using stereo alone. Secondly, LIDAR was used to provide artificial texture in featureless image regions. Texture was generated by combining real or simulated images of every point the laser hits to form a pseudo-random pattern. Machine learning was used to determine the image regions that are most likely to be stereo-matched, reducing the number of LIDAR points required. Results are compared to competing techniques such as laser speckle, data projection and diffractive optical elements.
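The first fusion method, using LIDAR returns as high-confidence seeds for region-growing stereo, can be sketched as follows. This is a minimal SAD-cost region grower, not the thesis's matcher; the window size, cost threshold and per-neighbour disparity search range are illustrative:

```python
import numpy as np
from collections import deque

def grow_disparities(left, right, seeds, win=3, max_cost=20.0, drange=1):
    """Region-grow a disparity map from sparse high-confidence seeds.

    `seeds` maps (row, col) -> disparity (e.g. from projected LIDAR returns).
    Neighbours of accepted pixels are matched only over [d-drange, d+drange],
    which is what lets near-featureless regions match at all.
    """
    h, w = left.shape
    disp = np.full((h, w), np.nan)
    q = deque()
    for (r, c), d in seeds.items():
        disp[r, c] = d
        q.append((r, c))
    half = win // 2

    def cost(r, c, d):
        # mean absolute difference between left and disparity-shifted right window
        if not (half <= r < h - half and half <= c - d and c + half < w):
            return np.inf
        a = left[r - half:r + half + 1, c - half:c + half + 1]
        b = right[r - half:r + half + 1, c - d - half:c - d + half + 1]
        return np.abs(a.astype(float) - b.astype(float)).mean()

    while q:
        r, c = q.popleft()
        d0 = int(disp[r, c])
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and np.isnan(disp[rr, cc]):
                best, d = min((cost(rr, cc, d), d)
                              for d in range(d0 - drange, d0 + drange + 1))
                if best <= max_cost:
                    disp[rr, cc] = d
                    q.append((rr, cc))
    return disp
```

The narrow search range inherited from each accepted neighbour is the key idea: a handful of trusted LIDAR seeds constrains the match enough to propagate through regions where unconstrained stereo would be ambiguous.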
Assessment of Camera Pose Estimation Using Geo-Located Images from Simultaneous Localization and Mapping
This research proposes a method for enabling low-cost camera localization using geo-located images generated with factor-graph-based Simultaneous Localization And Mapping (SLAM). The SLAM results are paired with panoramic image data to generate geo-located images, which can be used to locate and orient low-cost cameras. This study determines the efficacy of using a spherical camera and LIDAR sensor to enable localization for a wide range of cameras with low size, weight, power, and cost. This includes determining the accuracy of SLAM when geo-referencing images, along with introducing a promising method for extracting range measurements from monocular images of known features.
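In its simplest pinhole form, extracting range from a monocular image of a known feature reduces to similar triangles: the pixel extent a feature subtends is inversely proportional to its distance. A hedged one-liner illustrating that geometry; a real system must also account for feature orientation and lens distortion:

```python
def range_from_known_size(focal_px, real_size_m, pixel_size_px):
    """Pinhole range estimate: a feature of known physical size (metres)
    spanning `pixel_size_px` pixels, imaged at focal length `focal_px`
    (pixels), lies at distance focal * size / span."""
    return focal_px * real_size_m / pixel_size_px
```

For example, with a 1000-pixel focal length, a 2 m feature spanning 100 pixels implies a range of 20 m.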
Geometrical Analysis and Rectification of Thermal Infrared Video Frame Scanner Imagery and Its Potential Applications to Topographic Mapping
This thesis is concerned with an investigation into the possibilities of generating metric information and carrying out topographic mapping operations from thermal frame scanner video images. The main aspects discussed within the context of this thesis are: (i) the construction and operational characteristics of video frame scanners; (ii) the geometry of frame scanners; (iii) geometric calibration of thermal video frame scanners; (iv) the devising, construction and integration of a video-based monocomparator for video image coordinate measurements; (v) devising and implementing suitable analytical photogrammetric techniques to be applied to frame scanner imagery; (vi) the use of such frame scanners to acquire airborne video images for a pre-selected test area; (vii) the interpretation of thermal video frame scanners for topographic mapping; (viii) digital rectification of frame scanner imagery; and (ix) creation of a three-dimensional stereo model on a video monitor screen using the digitally rectified video images.
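Digital rectification of a near-planar scene can be illustrated with a planar projective transform estimated from control points. This DLT homography sketch is a generic illustration only; the thesis's scanner-specific rectification must additionally model the frame scanner's dynamic imaging geometry:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)             # null vector of A, reshaped
    return H / H[2, 2]
```

Once H is known, each output pixel of the rectified image is filled by mapping its coordinates through H (or its inverse) and resampling the source image.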
The use of consumer depth cameras for calculating body segment parameters.
Body segment parameters (BSPs) are pivotal to a number of key analyses within sports and healthcare. Accuracy is paramount, as investigations have shown small errors in BSPs to have significant impact upon subsequent analyses, particularly when analysing the dynamics of high-acceleration movements. There are many techniques with which to estimate BSPs; however, the majority are complex, time consuming, and make large assumptions about the underlying structure of the human body, leading to considerable errors. Interest is increasingly turning towards obtaining person-specific BSPs from 3D scans; however, the majority of current scanning systems are expensive, complex, require skilled operators, and require lengthy post-processing of the captured data. The purpose of this study was to develop a low-cost 3D scanning system capable of estimating accurate and reliable person-specific segmental volume, forming a fundamental first step towards calculation of the full range of BSPs. A low-cost 3D scanning system was developed, comprising four Microsoft Kinect RGB-D sensors, and capable of estimating person-specific segmental volume in a scanning operation taking less than one second. Individual sensors were calibrated prior to first use, overcoming inherent distortion of the 3D data. Scans from each of the sensors were aligned with one another via an initial extrinsic calibration process, producing 360° colour-rendered 3D scans. A scanning protocol was developed, designed to limit movement due to postural sway and breathing throughout the scanning operation. Scans were post-processed to remove discontinuities at edges, and parameters of interest calculated using a combination of manual digitisation and automated algorithms. The scanning system was validated using a series of geometric objects representative of human body segments, showing high reliability and systematic overestimation of scan-derived measurements.
Scan-derived volumes of living human participants were also compared to those calculated using a typical geometric BSP model. Results showed close agreement; however, absolute differences could not be quantified owing to the lack of gold-standard data. The study suggests the scanning system would be well received by practitioners, offering many advantages over current techniques. However, future work is required to further characterise the scanning system's absolute accuracy.
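A crude slice-based estimate illustrates how segmental volume can be computed from an aligned point cloud. The thesis does not specify its volume computation, so this sketch assumes roughly circular cross-sections along the segment's long axis:

```python
import numpy as np

def segment_volume(points, n_slices=50):
    """Estimate a segment's volume from its surface point cloud by slicing
    along z and summing circular cross-sections of each slice's mean radius.

    Assumes roughly convex, roughly circular cross-sections (an assumption,
    not the thesis's method).
    """
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    dz = edges[1] - edges[0]
    vol = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        if len(sl) < 3:
            continue                      # skip empty/degenerate slices
        centre = sl[:, :2].mean(axis=0)
        r = np.linalg.norm(sl[:, :2] - centre, axis=1).mean()
        vol += np.pi * r**2 * dz
    return vol
```

Real limbs are not circular in cross-section, so a production system would instead compute per-slice polygon areas or mesh the cloud and integrate the enclosed volume; the slicing structure stays the same.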