
    Projector calibration method based on optical coaxial camera

    This paper presents a novel method to accurately calibrate a DLP projector by using an optical coaxial camera to capture the needed images. A plate beam splitter is used to make the imaging axis of the CCD camera and the projecting axis of the DLP projector coaxial, so the DLP projector can be treated as a true inverse camera. A plate with discrete markers on its surface is designed and manufactured to calibrate the DLP projector. By projecting vertical and horizontal sinusoidal fringe patterns onto the plate surface from the projector, the absolute phase of each marker's center can be obtained. The corresponding projector pixel coordinate of each marker is determined from the obtained absolute phase. The internal and external parameters of the DLP projector are calibrated from the corresponding point pairs between the projector coordinates and the world coordinates of the discrete markers. Experimental results show that the proposed method accurately obtains the parameters of the DLP projector. One advantage of the method is that the calibrated internal and external parameters have high accuracy, because the camera itself does not need to be calibrated. The other is that the coaxial optical geometry yields a true inverse camera, so the calibrated parameters are more accurate than those of a crossed-optical-axes arrangement, especially the principal points and the radial distortion coefficients of the projector lens.
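    The phase-to-pixel step described above can be sketched as follows (a minimal sketch; the variable names and the fringe-pitch convention are illustrative assumptions, not taken from the paper). Once a marker's absolute phases from the vertical and horizontal fringe sets are known, its projector pixel coordinate follows directly from the fringe period.

```python
import numpy as np

def phase_to_projector_pixel(phi_abs_v, phi_abs_h, pitch_px):
    """Convert absolute fringe phases to a projector pixel coordinate.

    phi_abs_v: absolute phase from the vertical fringe set (encodes column u).
    phi_abs_h: absolute phase from the horizontal fringe set (encodes row v).
    pitch_px:  fringe period in projector pixels (one period = 2*pi of phase).
    """
    u = phi_abs_v / (2.0 * np.pi) * pitch_px
    v = phi_abs_h / (2.0 * np.pi) * pitch_px
    return u, v
```

    For example, with an 18-pixel fringe pitch, an absolute vertical-fringe phase of 4π maps to projector column 36.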

    Structured light techniques for 3D surface reconstruction in robotic tasks

    Robotic tasks such as navigation and path planning can be greatly enhanced by a vision system capable of providing depth perception through fast and accurate 3D surface reconstruction. Focusing on robotic welding tasks, we present a comparative analysis of a novel mathematical formulation for 3D surface reconstruction and discuss image processing requirements for reliable detection of patterns in the image. Models are presented for parallel and angled configurations of the light source and image sensor. It is shown that the parallel arrangement requires 35% fewer arithmetic operations to compute a 3D point cloud, and is thus more appropriate for real-time applications. Experiments show that the technique is suitable for scanning a variety of surfaces and, in particular, the intended metallic parts for robotic welding tasks.
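    The computational advantage of the parallel arrangement comes from the simplicity of its depth relation. A minimal sketch of the generic parallel-configuration triangulation formula (not the paper's exact formulation; the symbols are assumptions):

```python
def depth_from_disparity(x_px, baseline_m, focal_px):
    """Depth for a parallel light-source/camera arrangement.

    Standard triangulation: z = f * b / x, where x is the image
    displacement of the projected stripe in pixels, b the baseline in
    metres, and f the focal length in pixels.
    """
    if x_px == 0:
        raise ValueError("zero disparity: point at infinity")
    return focal_px * baseline_m / x_px
```

    With a 0.1 m baseline and a 1000 px focal length, a 50 px stripe displacement corresponds to a depth of 2 m; an angled configuration would add trigonometric terms to this relation, hence the extra arithmetic.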

    Practical and precise projector-camera calibration

    Projectors are important display devices for large-scale augmented reality applications. However, precisely calibrating projectors with large focus distances implies a trade-off between practicality and accuracy: one either needs a huge calibration board or a precise 3D model [12]. In this paper, we present a practical projector-camera calibration method to solve this problem. The user only needs a small calibration board to calibrate the system, regardless of the focus distance of the projector. Results show that the root-mean-squared re-projection error (RMSE) for a 450 cm projection distance is only about 4 mm, even though the system is calibrated using a small B4 (250 × 353 mm) calibration board.
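    The reported RMSE is computed in the usual way from projected versus observed calibration-point positions; a minimal sketch (not the authors' code):

```python
import numpy as np

def reprojection_rmse(projected, observed):
    """Root-mean-squared re-projection error between projected and
    observed 2D points, given as N x 2 arrays of (x, y) positions."""
    d = np.asarray(projected, dtype=float) - np.asarray(observed, dtype=float)
    # Mean of squared Euclidean distances, then square root.
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```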

    Flexible optical digitization system with an arbitrary number of cameras (Fleksibilni optički digitalizacijski sustav s proizvoljnim brojem kamera)

    The development of a flexible multi-camera optical surface digitization system that projects non-coherent coded light in two perpendicular directions is presented. By introducing an absolute method for stereo-pair indexing, the need for twofold searching through the phase images is eliminated, as is the influence of discontinuities. Critical areas responsible for outlier generation are eliminated prior to triangulation by combining modulation filtering of the phase images with gradient filtering of the absolute phase images. A sequential triangulation process enables triangulation of points that are not visible in all cameras, thus providing a means for digitizing partially occluded areas. A free-form calibration object eliminates the need for specialized planar calibration objects, which, combined with variable external camera parameters, results in a system that can be adjusted to the measurement problem. In comparison to commercial single- and stereo-camera systems, our approach reduces the number of projections required for digitization of complete objects.

    Temporal phase unwrapping using deep learning

    The multi-frequency temporal phase unwrapping (MF-TPU) method, a classical phase unwrapping algorithm for fringe projection profilometry (FPP), is capable of eliminating phase ambiguities even in the presence of surface discontinuities or spatially isolated objects. In the simplest and most efficient case, two sets of 3-step phase-shifting fringe patterns are used: the high-frequency set is for 3D measurement and the unit-frequency set is for unwrapping the phase obtained from the high-frequency set. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that the phase can be successfully unwrapped without triggering fringe-order errors. Consequently, in order to guarantee a reasonable unwrapping success rate, the fringe number (or period number) of the high-frequency fringe patterns is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap phases of higher frequency, but at the expense of a prolonged pattern sequence. Inspired by recent successes of deep learning techniques in computer vision and computational imaging, in this work we report that deep neural networks can learn to perform TPU after appropriate training, an approach termed deep-learning-based temporal phase unwrapping (DL-TPU), which can substantially improve unwrapping reliability compared with MF-TPU even in the presence of different types of error sources, e.g., intensity noise, low fringe modulation, and projector nonlinearity. We further demonstrate experimentally, for the first time to our knowledge, that the high-frequency phase obtained from 64-period 3-step phase-shifting fringe patterns can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU.
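    The classical MF-TPU step that DL-TPU learns to replace can be sketched as follows (a minimal NumPy sketch of standard two-frequency unwrapping; variable names are assumptions, not from the paper):

```python
import numpy as np

def unwrap_temporal(phi_high, phi_unit, num_fringes):
    """Two-frequency temporal phase unwrapping (MF-TPU).

    phi_high:    wrapped phase of the high-frequency pattern, in [0, 2*pi).
    phi_unit:    absolute phase of the unit-frequency pattern, in [0, 2*pi).
    num_fringes: number of fringe periods in the high-frequency pattern.
    Returns the absolute (unwrapped) high-frequency phase.
    """
    # Fringe order: the scaled unit-frequency phase predicts the absolute
    # high-frequency phase; rounding the difference to the wrapped phase
    # onto whole periods gives the integer fringe order k.
    k = np.round((num_fringes * phi_unit - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k
```

    The rounding step is exactly where noise causes fringe-order errors: as num_fringes grows, the tolerable error in phi_unit shrinks proportionally, which is why the abstract notes the usual limit of about 16 periods.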

    Efficient generic calibration method for general cameras with single centre of projection

    Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality and is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm are acquired using active grids, the performance of which is characterised. A new linear estimation stage for the generic algorithm is proposed, incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data from a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.

    Calibration of structured light system using unidirectional fringe patterns

    3D shape measurement has a variety of applications in many areas, such as manufacturing, design, medicine and entertainment. Many technologies have been successfully implemented over the past decades to measure the three-dimensional information of an object. Measurement techniques can be broadly classified into contact and non-contact methods. One of the most widely used contact methods is the Coordinate Measuring Machine (CMM), which dates back to the late 1950s. It is by far one of the most accurate methods, as it can achieve sub-micrometer accuracy. However, it is difficult to use this technique for soft objects, as the probe might deform the surface of the object being measured; scanning can also be a time-consuming process. To address the problems of contact methods, non-contact methods such as time of flight (TOF), triangulation-based laser scanning, depth from defocus and stereo vision were invented. The main limitation of the time-of-flight laser scanner is that it does not give a high depth resolution. Triangulation-based laser scanning, on the other hand, scans the object line by line, which can be time consuming. The depth-from-defocus method obtains 3D information about the object by relating depth to defocus blur analysis; however, it is difficult to capture the 3D geometry of objects that do not have a rich texture. The stereo vision system imitates human vision. It uses two cameras to capture pictures of the object from different angles, and the 3D coordinate information is obtained using triangulation. The main limitation of this technology is that when the object has a uniform texture, it becomes difficult to find corresponding pairs between the two cameras. Therefore, the structured light system (SLS) was introduced to address the above-mentioned limitations. An SLS is an extension of a stereo vision system with one of the cameras replaced by a projector.
    The pre-designed structured patterns are projected onto the object using a video projector. The main advantage of this system is that it does not rely on the object's texture to identify corresponding pairs. However, the patterns have to be coded in a certain way so that the camera-projector correspondence can be established. There are many codification techniques, such as pseudo-random, binary and N-ary codification. Pseudo-random codification uses laser speckles or structure-coded speckle patterns that vary in both directions; however, the resolution is limited because each coded structure occupies multiple pixels in order to be unique. Binary codification, on the other hand, projects a sequence of binary patterns. The main advantage of such a codification is that it is robust to noise, as only two intensity levels are used (0 and 255). However, the resolution is limited because the width of the narrowest coding stripe must be larger than the pixel size; moreover, it takes many images to encode a scene that occupies a large number of pixels. To address this, N-ary codification makes use of multiple intensity levels between 0 and 255, so the total number of coded patterns can be reduced; its main limitation is that the intensity-ratio analysis may be subject to noise. The Digital Fringe Projection (DFP) system was developed to address the limitations of binary and N-ary codifications. In DFP, computer-generated sinusoidal patterns are projected onto the object and the camera captures the distorted patterns from another angle. The main advantage of this method is that it is robust to noise, ambient light and reflectivity, as phase information is used instead of intensity. Despite the merit of using phase, achieving highly accurate 3D geometric reconstruction also crucially depends on calibrating the camera-projector system. Unlike camera calibration, projector calibration is difficult.
    This is mainly because the projector cannot capture images like a camera. Early attempts were made to calibrate the camera-projector system using a reference plane; the object geometry was reconstructed by comparing the phase difference between the object and the reference plane. However, the chosen reference plane needs to simultaneously possess high planarity and good optical properties, which is typically difficult to achieve. Such calibration may also be inaccurate if non-telecentric lenses are used. The projector can instead be calibrated by treating it as the inverse of a camera. This method addresses the limitations of the reference-plane-based method, as the exact intrinsic and extrinsic parameters of the imaging lenses are obtained, so a perfect reference plane is no longer required. The calibration method typically requires projecting orthogonal patterns onto the object. However, this method of calibration can only be used for structured light systems with a video projector; grating slits and interferometers cannot be calibrated by it, as such systems cannot produce orthogonal patterns. In this research, we introduce a novel calibration method that uses patterns in only a single direction. We theoretically prove that there exists one degree of freedom of redundancy in conventional calibration methods, making it possible to use unidirectional patterns instead of orthogonal fringe patterns. Experiments show that under a measurement range of 200 mm x 150 mm x 120 mm, our measurement results are comparable to those obtained using the conventional calibration method. Evaluated by repeatedly measuring a sphere with a 147.726 mm diameter, our measurement accuracy on average can be as high as 0.20 mm, with a standard deviation of 0.12 mm.
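    The 3-step phase-shifting retrieval underlying DFP is standard; a minimal sketch using the 2π/3-shift formula (not code from this thesis):

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase from three fringe images with 2*pi/3 phase shifts.

    Assumes the fringe model I_k = I' + I'' * cos(phi + delta_k) with
    delta_k = -2*pi/3, 0, +2*pi/3. Returns the wrapped phase in (-pi, pi].
    """
    # I1 - I3 = sqrt(3) * I'' * sin(phi); 2*I2 - I1 - I3 = 3 * I'' * cos(phi),
    # so atan2 recovers phi independently of the background I' and modulation I''.
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

    Because both the background intensity and the fringe modulation cancel, the result depends only on phase, which is why DFP is robust to ambient light and surface reflectivity.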

    Wearable Structured Light System in Non-Rigid Configuration

    Traditionally, structured light methods have been studied in rigid configurations, in which the position and orientation between the light emitter and the camera are fixed and known beforehand. In this paper we break with this rigidity and present a new structured light system in a non-rigid configuration. The system is composed of a wearable standard perspective camera and a simple laser emitter. Our non-rigid configuration permits free motion of the light emitter with respect to the camera. The point-based pattern emitted by the laser allows us to easily establish correspondences between the image from the camera and a virtual one generated from the light emitter. Using these correspondences, our method computes the rotation and translation (up to scale) of the planes of the scene onto which the point pattern is projected and reconstructs them. This constitutes a very useful tool for navigation applications in indoor environments, which are mainly composed of planar surfaces.
