
    Comparison of bundle adjustment software for camera calibration in close range photogrammetry

    Camera calibration has always been an essential component of photogrammetric measurement, and self-calibration is nowadays an integral and routinely applied operation within photogrammetric triangulation, especially in high-accuracy close-range measurement. Photogrammetric camera calibration is usually carried out by a self-calibrating bundle adjustment, in which the interior orientation parameters, such as the principal distance, principal point, and lens distortion, are determined together with the exterior orientation and the 3D object coordinates. A variety of bundle adjustment software packages for camera calibration is available on the market, each with its own calibration capabilities, and the user has to select the software appropriate to their needs. This paper discusses the investigation and assessment of several bundle adjustment software packages used to calibrate digital cameras. In this study, a test field was designed and fabricated, and a digital camera was calibrated with each software package. The quality of the result depends on many factors; the network configuration is among the most important. The differences between the camera parameters determined by the self-calibrating bundle adjustment packages are reported in this paper. The results showed that Australis was the most flexible and powerful tool for camera calibration by the bundle block adjustment method.
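    A minimal sketch of the self-calibrating bundle adjustment idea described above, assuming a simple pinhole model with two radial (truncated Brown) distortion terms and SciPy's nonlinear least squares; the parameter layout and model are illustrative choices, not those of any particular package:

```python
# Minimal self-calibrating bundle adjustment sketch (illustrative, not the
# model of Australis or any other package). All points are assumed visible
# in all cameras; obs holds the measured image coordinates.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, n_cams, n_pts):
    # Interior orientation: principal distance c, principal point (x0, y0),
    # two radial distortion coefficients (Brown model, truncated).
    c, x0, y0, k1, k2 = params[:5]
    poses = params[5:5 + 6 * n_cams].reshape(n_cams, 6)  # rvec + tvec per camera
    X = params[5 + 6 * n_cams:].reshape(n_pts, 3)        # 3D object coordinates
    uv = []
    for pose in poses:
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        Xc = X @ R.T + pose[3:]                          # camera frame
        x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]  # normalized coords
        r2 = x**2 + y**2
        d = 1 + k1 * r2 + k2 * r2**2                     # radial distortion
        uv.append(np.stack([x0 + c * x * d, y0 + c * y * d], axis=1))
    return np.concatenate(uv)

def residuals(params, obs, n_cams, n_pts):
    # Reprojection error over all cameras and points.
    return (project(params, n_cams, n_pts) - obs).ravel()

# obs: (n_cams * n_pts, 2) image measurements; p0: initial parameter guess.
# result = least_squares(residuals, p0, args=(obs, n_cams, n_pts))
```

    A production adjustment would add datum constraints (the free network is otherwise gauge-deficient), robust weighting, and further distortion parameters; the network configuration the abstract highlights enters through which points each camera actually observes.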

    Towards Visual Ego-motion Learning in Robots

    Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to a particular type of camera optics or to the underlying motion manifold observed. We envision robots being able to learn and perform these tasks, in a minimally supervised setting, as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. We propose a visual ego-motion learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion-induced scene flow. Additionally, our proposed model is especially amenable to bootstrapped ego-motion learning in robots, where the supervision in ego-motion estimation for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling the concept of self-supervised learning for visual ego-motion estimation in autonomous robots.
    Comment: Conference paper; submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables
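    A compact sketch of the MDN component, assuming PyTorch and a diagonal-Gaussian mixture over a 6-DoF ego-motion vector; the layer sizes, pose parameterization, and loss are illustrative rather than the paper's exact architecture (which further wraps the estimator in a C-VAE):

```python
# Mixture Density Network sketch: flattened optical flow -> K-component
# Gaussian mixture over ego-motion. Sizes are placeholders.
import torch
import torch.nn as nn

class EgoMotionMDN(nn.Module):
    def __init__(self, flow_dim, pose_dim=6, k=5, hidden=128):
        super().__init__()
        self.k, self.pose_dim = k, pose_dim
        self.trunk = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, k)                    # mixture logits
        self.mu = nn.Linear(hidden, k * pose_dim)         # component means
        self.log_sigma = nn.Linear(hidden, k * pose_dim)  # diagonal log-stds

    def forward(self, flow):
        h = self.trunk(flow)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.k, self.pose_dim)
        sigma = self.log_sigma(h).view(-1, self.k, self.pose_dim).exp()
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    # Negative log-likelihood of the true ego-motion under the mixture.
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)  # (B, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

    Predicting a density rather than a point estimate is what enables the introspective reasoning mentioned above: the mixture spread signals how confident the network is in its ego-motion estimate.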

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.
    Comment: Accepted for publication in CVI
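    A structural sketch, under heavy simplification, of the hierarchical strategy: views are organized into a balanced dendrogram (here via SciPy's agglomerative clustering over a hypothetical match-overlap distance) and partial models are merged bottom-up instead of being grown one view at a time. `merge` is a placeholder for the real align-and-bundle-adjust step:

```python
# Structural sketch only: the geometry of merging two partial
# reconstructions is reduced to a set union.
from scipy.cluster.hierarchy import linkage, to_tree

def merge(a, b):
    # Placeholder: a real pipeline would align the two partial models with a
    # similarity transform and run a local bundle adjustment on the union.
    return a | b

def hierarchical_sfm(pairwise_distance):
    # pairwise_distance: condensed vector, e.g. 1 - (match overlap) per pair.
    tree = to_tree(linkage(pairwise_distance, method="average"))

    def solve(node):
        if node.is_leaf():
            return {node.id}                 # a single-view "model"
        return merge(solve(node.left), solve(node.right))

    return solve(tree)
```

    The balanced tree keeps most merges small and local, which is where the complexity and error-containment advantages over sequential growth come from.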

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm for a regular camera was considered. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovery of the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The utilization of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of the algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first is a polydioptric camera, while the second is a catadioptric camera.
    Comment: 6 pages, 9 figures
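    A sketch of the DTM step such an approach relies on, assuming a simple grid DEM, bilinear elevation sampling, and fixed-step ray marching; for a non-central omnidirectional system each pixel would supply its own ray origin as well as direction. The step size and in-grid indexing are simplifying assumptions:

```python
# Ray/terrain intersection sketch: march a viewing ray until it first passes
# below the DEM surface. Assumes the ray stays inside the grid.
import numpy as np

def dem_height(dem, cell, x, y):
    # Bilinear interpolation of the elevation grid dem[row, col], cell = grid spacing.
    j, i = x / cell, y / cell
    j0, i0 = int(j), int(i)
    fj, fi = j - j0, i - i0
    return ((1 - fi) * (1 - fj) * dem[i0, j0] + (1 - fi) * fj * dem[i0, j0 + 1] +
            fi * (1 - fj) * dem[i0 + 1, j0] + fi * fj * dem[i0 + 1, j0 + 1])

def intersect_ray_dtm(origin, direction, dem, cell, step=1.0, max_range=5000.0):
    # origin: ray start (x, y, z); direction: viewing ray. A non-central
    # omnidirectional camera provides a distinct origin per pixel.
    p = np.asarray(origin, float)
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    t = 0.0
    while t < max_range:
        q = p + t * d
        if q[2] <= dem_height(dem, cell, q[0], q[1]):
            return q                         # first terrain crossing
        t += step
    return None                              # ray never hits the terrain
```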

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness that is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, the success of image deblurring, especially in the blind case, is still limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
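    As a concrete instance of the Bayesian inference family in the non-blind case, here is a short Richardson–Lucy deconvolution sketch (the classic maximum-likelihood scheme under Poisson noise); blind methods would additionally have to estimate `psf`:

```python
# Richardson-Lucy deconvolution sketch. Assumes a non-negative float image
# and a normalized blur kernel (psf sums to 1).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]               # adjoint of the blur operator
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)  # data-fit correction factor
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

    The multiplicative update keeps the estimate non-negative, and because the kernel appears only through convolutions, the same loop extends naturally to spatially invariant blur of any shape; the spatially variant case the review discusses requires replacing the global convolution with per-region or homography-based operators.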

    Calibration routine for a telecentric stereo vision system considering affine mirror ambiguity

    A robust calibration approach for a telecentric stereo camera system for three-dimensional (3-D) surface measurements is presented, considering the effect of affine mirror ambiguity. By optimizing the parameters of a rigid body transformation between two marker planes and transforming the two-dimensional (2-D) data into one coordinate frame, a 3-D calibration object is obtained, avoiding high manufacturing costs. Based on recent contributions in the literature, the calibration routine consists of an initial parameter estimation by affine reconstruction, which provides good starting values for a subsequent nonlinear stereo refinement based on a Levenberg–Marquardt optimization. To this end, the coordinates of the calibration target are reconstructed in 3-D using the Tomasi–Kanade factorization algorithm for affine cameras with a Euclidean upgrade. The reconstructed result is not properly scaled and is not unique, owing to the affine ambiguity. To correct the erroneous scaling, the similarity transformation between the points of one of the 2-D calibration planes and the corresponding 3-D points is estimated, and the resulting scaling factor is used to rescale the 3-D point data. In combination with the 2-D calibration plane data, this then allows determination of the starting values for the nonlinear stereo refinement. As the rigid body transformation between the 2-D calibration planes is also obtained, a possible affine mirror ambiguity in the affine reconstruction result can be robustly corrected. The calibration routine is validated by an experimental calibration and various plausibility tests. Owing to the use of a calibration object with metric information, the determined camera projection matrices allow triangulation of correctly scaled metric 3-D points without the need for an individual determination of camera magnification.
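    The core of the Tomasi–Kanade step mentioned above can be sketched in a few lines: rank-3 SVD factorization of the centered measurement matrix. The Euclidean upgrade, scale recovery, and mirror-ambiguity correction described in the abstract are deliberately omitted:

```python
# Tomasi-Kanade affine factorization sketch: W ~ M @ X up to an affine
# transform. W stacks the image coordinates of all points in all views.
import numpy as np

def affine_factorization(W):
    # W: (2 * n_views, n_points) measurement matrix.
    t = W.mean(axis=1, keepdims=True)             # per-view image centroids
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                 # stacked 2x3 affine cameras
    X = np.sqrt(s[:3])[:, None] * Vt[:3]          # 3D points, affine frame
    return M, X.T, t
```

    Because M and X are only determined up to an invertible 3x3 matrix A (M A, A^-1 X reproduce W equally well), the result is neither metric nor unique, which is exactly why the routine above needs the Euclidean upgrade, the similarity-based rescaling, and the mirror-ambiguity check.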

    Integration, Testing, And Analysis Of Multispectral Imager On Small Unmanned Aerial System For Skin Detection

    Small Unmanned Aerial Systems (SUAS) have been utilized by the military, geological researchers, and first responders to provide information about the environment in real time. Hyperspectral Imagery (HSI) provides high-resolution data in the spatial and spectral dimensions; all objects, including skin, have unique spectral signatures. However, little research has been done to integrate HSI into SUAS because of cost and form factor. Multispectral Imagery (MSI) has proven capable of dismount detection with several distinct wavelengths. This research proposes a spectral imaging system that can detect dismounts from an SUAS, and explores factors that pertain to accurate dismount detection with an SUAS. Dismount skin detection from an aerial platform also has inherent difficulties compared to ground-based platforms. Computer vision registration, stereo camera calibration, and geolocation from autopilot telemetry are utilized to design a dismount detection platform following the Systems Engineering methodology. An average difference of 5.112% in ROC AUC values was recorded when comparing a line-scan spectral imager to the prototype area-scan imager. Results indicated that SUAS-based spectral imagers are capable tools in dismount detection protocols. Deficiencies associated with the test-expedient prototype are discussed, and recommendations for further improvements are provided.
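    A sketch of the kind of ROC AUC comparison reported above, with placeholder scores standing in for the two imagers' skin-detector outputs; only the `roc_auc_score` usage is meant literally here:

```python
# Compare two detectors on the same labeled pixels via ROC AUC.
# The label and score arrays below are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 10_000)                    # 1 = skin pixel (truth)
score_line = labels + rng.normal(0, 0.8, labels.size)  # line-scan detector output
score_area = labels + rng.normal(0, 0.9, labels.size)  # area-scan detector output

auc_line = roc_auc_score(labels, score_line)
auc_area = roc_auc_score(labels, score_area)
print(f"line-scan AUC={auc_line:.3f}  area-scan AUC={auc_area:.3f}  "
      f"diff={100 * abs(auc_line - auc_area) / auc_line:.2f}%")
```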

    Calibration by correlation using metric embedding from non-metric similarities

    This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-viewpoint camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that, if the camera undergoes a random uniform motion, then the pairwise correlation of any pixel pair is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from non-metric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how to do so?). We show that the observability depends both on the local geometric properties (curvature) and on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from non-metric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional), and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
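    For flavor, a non-metric embedding step in the spirit of this formulation, assuming scikit-learn's non-metric MDS: correlations are mapped to dissimilarities by an arbitrary monotone function, embedded in R^3, and projected onto the unit sphere to read off pixel directions. The paper's actual algorithm differs in that it works on the sphere directly and recovers the scale of the point distribution:

```python
# Non-metric MDS sketch: recover pixel directions from pairwise luminance
# correlations known only up to a monotone function of angular distance.
import numpy as np
from sklearn.manifold import MDS

def directions_from_correlation(corr):
    # corr: (n_pix, n_pix) symmetric correlation matrix with unit diagonal,
    # so the derived dissimilarity has a zero diagonal as MDS requires.
    dissim = 1.0 - corr                # any monotone decreasing map would do
    mds = MDS(n_components=3, metric=False, dissimilarity="precomputed",
              random_state=0)
    pts = mds.fit_transform(dissim)
    # Project onto the visual sphere to obtain unit pixel directions.
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)
```

    Non-metric MDS only uses the rank order of the dissimilarities, which mirrors the key premise above: the correlation-to-distance function is unknown, so only its monotonicity can be exploited.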