18 research outputs found

    Application of Detecting Part's Size Online Based on Machine Vision

    Abstract: In order to measure a part's size without contact, a detection system is built. The research covers characteristic-point detection, CCD camera calibration, and distance measurement between two spatial points. First, the theory of edge detection based on a Gaussian point spread function is introduced. Second, the system is calibrated using the characteristic points on the edge, and a simple CCD calibration method is deduced to solve for the related parameters of the model. Finally, the 3D spatial coordinates of the detected characteristic points are obtained from the model, and the part's size is measured online by calculating the distance between two spatial points. The experimental results show that when the physical length is about 0.6 times the focal length, the measurement precision can reach nearly 0.006 mm, and the image precision remains stable.
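    As a rough illustration of the final distance step, the Python sketch below localizes a feature with sub-pixel accuracy (using a simple parabolic peak fit as a stand-in for the paper's Gaussian point-spread-function edge model, which is not reproduced here) and computes the distance between two already reconstructed 3D points. All names and numbers are illustrative assumptions, not the paper's implementation.

import numpy as np

def subpixel_peak(profile):
    """Sub-pixel peak of a 1-D gradient profile via a parabolic fit
    around the sample maximum (stand-in for a Gaussian PSF fit)."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2
    return i + (0.5 * (y0 - y2) / denom if denom != 0 else 0.0)

def distance_3d(p, q):
    """Euclidean distance between two reconstructed 3-D points (mm)."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# Synthetic 1-D edge response peaking near pixel 4.3 (made-up data).
profile = np.exp(-0.5 * ((np.arange(9) - 4.3) / 1.2) ** 2)
print(subpixel_peak(profile))                                  # ~4.3

# Two calibrated points whose separation is on the order of the reported precision.
print(distance_3d([10.0, 5.0, 120.0], [10.0, 5.0, 120.006]))   # ~0.006 mm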

    VIDEOGRAMMETRIC RECONSTRUCTION APPLIED TO VOLCANOLOGY: PERSPECTIVES FOR A NEW MEASUREMENT TECHNIQUE IN VOLCANO MONITORING


    Calibration of structured light system using unidirectional fringe patterns

    3D shape measurement has a variety of applications in many areas, such as manufacturing, design, medicine and entertainment. Many technologies have been successfully implemented over the past decades to measure the three-dimensional information of an object. These measurement techniques can be broadly classified into contact and non-contact methods. One of the most widely used contact methods is the Coordinate Measuring Machine (CMM), which dates back to the late 1950s. It is by far one of the most accurate methods, capable of sub-micrometer accuracy. However, it is difficult to use this technique for soft objects, as the probe might deform the surface being measured, and the scanning can be a time-consuming process.

    To address the problems of contact methods, non-contact methods such as time of flight (TOF), triangulation-based laser scanning, depth from defocus and stereo vision were developed. The main limitation of time-of-flight laser scanners is that they do not provide high depth resolution. Triangulation-based laser scanning, on the other hand, scans the object line by line, which can be time consuming. The depth-from-defocus method obtains 3D information by relating depth to defocus blur, but it is difficult to capture the 3D geometry of objects that do not have a rich texture. The stereo vision system imitates human vision: it uses two cameras to capture pictures of the object from different angles, and the 3D coordinates are obtained by triangulation. Its main limitation is that when the object has a uniform texture, it becomes difficult to find corresponding pairs between the two cameras.

    The structured light system (SLS) was therefore introduced to address the above limitations. An SLS is an extension of a stereo vision system in which one of the cameras is replaced by a projector. Pre-designed structured patterns are projected onto the object by a video projector. The main advantage of this system is that it does not rely on the object's texture to identify corresponding pairs, but the patterns have to be coded so that the camera-projector correspondence can be established. There are many codification techniques, such as pseudo-random, binary and N-ary codification. Pseudo-random codification uses laser speckles or structure-coded speckle patterns that vary in both directions; however, the resolution is limited because each coded structure occupies multiple pixels in order to be unique. Binary codification projects a sequence of binary patterns. Its main advantage is robustness to noise, since only two intensity levels are used (0 and 255); however, the resolution is limited because the width of the narrowest coding stripe must be larger than the pixel size, and many images are needed to encode a scene that occupies a large number of pixels. To address this, N-ary codification uses multiple intensity levels between 0 and 255, so the total number of coded patterns can be reduced; its main limitation is that the intensity-ratio analysis may be subject to noise.

    The Digital Fringe Projection (DFP) system was developed to address the limitations of binary and N-ary codification. In DFP, computer-generated sinusoidal patterns are projected onto the object and the camera captures the distorted patterns from another angle. The main advantage of this method is that it is robust to noise, ambient light and reflectivity, since phase information is used instead of intensity.

    Despite the merit of using phase, achieving highly accurate 3D geometric reconstruction also requires calibrating the camera-projector system. Unlike camera calibration, projector calibration is difficult, mainly because the projector cannot capture images like a camera. Early attempts calibrated the camera-projector system using a reference plane, and the object geometry was reconstructed by comparing the phase difference between the object and the reference plane. However, the chosen reference plane needs to simultaneously possess high planarity and good optical properties, which is typically difficult to achieve, and such calibration may be inaccurate if non-telecentric lenses are used. The projector can also be calibrated by treating it as the inverse of a camera. This method addresses the limitations of the reference-plane approach, since the exact intrinsic and extrinsic parameters of the imaging lenses are obtained and a perfect reference plane is no longer required. However, this calibration typically requires projecting orthogonal patterns onto the object, so it can only be used for structured light systems with a video projector; grating slits and interferometers cannot be calibrated this way because such systems cannot produce orthogonal patterns.

    In this research we introduce a novel calibration method that uses patterns in only a single direction. We theoretically prove that there exists one degree of freedom of redundancy in the conventional calibration methods, making it possible to use unidirectional patterns instead of orthogonal fringe patterns. Experiments show that, over a measurement range of 200 mm x 150 mm x 120 mm, our measurement results are comparable to those obtained with the conventional calibration method. Evaluated by repeatedly measuring a sphere of 147.726 mm diameter, our measurement accuracy on average can be as high as 0.20 mm with a standard deviation of 0.12 mm.
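    Since the thesis builds on phase-shifted unidirectional fringes, here is a minimal, hedged sketch of the standard three-step phase-shifting step that such a DFP pipeline relies on: it synthesizes vertical sinusoidal fringes and recovers the wrapped phase. The image size, fringe period, and phase shifts are illustrative assumptions, not the thesis's actual patterns or calibration.

import numpy as np

H, W, period = 480, 640, 18            # illustrative image size and fringe period (pixels)
x = np.arange(W)
phase_gt = 2 * np.pi * x / period      # ground-truth phase varying in one direction only

# Three unidirectional patterns with phase shifts of -2*pi/3, 0, +2*pi/3.
shifts = [-2 * np.pi / 3, 0.0, 2 * np.pi / 3]
I1, I2, I3 = [np.tile(0.5 + 0.5 * np.cos(phase_gt + d), (H, 1)) for d in shifts]

# Wrapped phase from the standard three-step phase-shifting formula.
phi = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)
print(phi.shape, float(phi.min()), float(phi.max()))   # values wrapped to (-pi, pi]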

    Silhouette Coherence for Camera Calibration under Circular Motion


    Flexible and User-Centric Camera Calibration using Planar Fiducial Markers

    The benefit of accurate camera calibration for recovering 3D structure from images is a well-studied topic. Recently, 3D vision tools for end-user applications have become popular among large audiences, mostly unskilled in computer vision. This motivates the need for a flexible and user-centric camera calibration method which drastically relaxes the critical requirements on the calibration target and ensures that low-quality or faulty images provided by end users do not degrade the overall calibration and, in effect, the resulting 3D model. In this paper we present and advocate an approach to camera calibration using fiducial markers, aiming at the accuracy of target-based calibration techniques without the requirement for a precise calibration pattern, to ease the calibration effort for the end user. An extensive set of experiments with real images is presented which demonstrates improvements in the estimation of the parameters of the camera model as well as accuracy in the multi-view stereo reconstruction of large-scale scenes. Pixel re-projection errors and ground-truth errors obtained by our method are significantly lower compared to popular calibration routines, even though paper-printable and easy-to-use targets are employed.
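    As a hedged illustration of the re-projection error metric used to compare calibrations, the Python sketch below projects the four corners of a planar fiducial marker through a pinhole model and reports the RMS pixel error against noisy detections. The intrinsics, pose, marker size, and noise level are invented for the example; the paper's marker detection and optimization are not reproduced.

import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into the image with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t                 # world -> camera coordinates
    uv = Xc @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide -> Nx2 pixel coordinates

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])          # camera 1 m in front of the target

# Four corners of a 40 mm planar fiducial marker (in metres).
corners = np.array([[-0.02, -0.02, 0.0], [0.02, -0.02, 0.0],
                    [ 0.02,  0.02, 0.0], [-0.02, 0.02, 0.0]])

# Simulated noisy corner detections (0.3 px standard deviation).
observed = project(K, R, t, corners) + np.random.normal(0, 0.3, (4, 2))
residuals = project(K, R, t, corners) - observed
rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
print(f"re-projection RMSE: {rmse:.2f} px")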

    High-quality 3D shape measurement with binarized dual phase-shifting method

    ABSTRACT: 3-D technology is commonplace in today's world and is used in many different aspects of life. Researchers have been keen on 3-D shape measurement and 3-D reconstruction techniques in past decades, inspired by applications ranging from manufacturing and medicine to entertainment. The techniques can be broadly divided into contact and non-contact techniques. Contact techniques such as the coordinate measuring machine (CMM) date back to the 1950s and have been used extensively in industry since then. The CMM has become predominant in industrial inspection owing to its high accuracy, on the order of micrometers, and since quality control is an important part of modern industry, the technology enjoys great popularity. However, its main disadvantage is slow speed due to the requirement of point-by-point touch, and since it is a contact process, it might deform a soft object while performing measurements. Such limitations led researchers to explore non-contact measurement technologies (optical metrology techniques).

    A variety of optical techniques have been developed to date; well-known technologies include laser scanners, stereo vision, and structured light systems. The main limitation of laser scanners is their limited speed due to the point-by-point or line-by-line scanning process. Stereo vision uses two cameras that take pictures of the object from two different angles; epipolar geometry is then used to determine the 3-D coordinates of points in the real world. This technology imitates human vision, but it has limitations too, such as the difficulty of correspondence detection for uniform or periodic textures. Hence structured light systems were introduced, which address the aforementioned limitations. Various techniques have been developed, including 2-D pseudo-random codification, binary codification, N-ary codification and digital fringe projection (DFP). The limitation of 2-D pseudo-random codification is its inability to achieve high spatial resolution, since any uniquely generated and projected feature requires a span of several projector pixels. Binary codification reduces the requirement of 2-D features to 1-D ones; however, since there are only two intensities, it is difficult to differentiate between individual pixels within each black or white stripe. The other disadvantage is that n patterns are required to encode 2^n pixels, meaning that measurement speed is severely affected if a scene is to be coded at high resolution. In contrast, DFP uses continuous sinusoidal patterns, which resolves the main disadvantage of binary codification (the inability to differentiate between pixels within a stripe); thus, the spatial resolution is increased up to camera-pixel level. On the other hand, since the DFP technique uses 8-bit sinusoidal patterns, the measurement speed is limited to the maximum refresh rate of 8-bit images for many video projectors (e.g., 120 Hz), which makes it inapplicable to measurements of highly dynamic scenes. To overcome this speed limitation, the binary defocusing technique was proposed, which uses 1-bit patterns to produce a sinusoidal profile by projector defocusing.

    Although this technique has significantly boosted the measurement speed up to kHz level, if the patterns are not properly defocused (nearly focused or overly defocused), increased phase noise or harmonic errors will deteriorate the reconstructed surface quality. In this thesis research, two techniques are proposed to overcome the limitations of both the DFP and binary defocusing techniques: the binarized dual phase-shifting (BDPS) technique and the Hilbert binarized dual phase-shifting (HBDPS) technique. Both techniques achieve high-quality 3-D shape measurements even when the projector is not sufficiently defocused. The harmonic error was reduced by 47% by the BDPS method and by 74% by the HBDPS method. Moreover, both methods use binary patterns, which preserves the speed advantage of the binary technology, hence they are potentially applicable to simultaneous high-speed and high-accuracy 3D shape measurements.
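    To make the defocusing argument concrete, here is a small, hedged sketch (assuming NumPy and SciPy are available) in which a Gaussian blur stands in for projector defocus applied to a 1-bit square fringe; it reports how the third-harmonic content, the main source of the harmonic error mentioned above, falls as the blur grows. The fringe period and blur widths are illustrative, not the thesis's settings.

import numpy as np
from scipy.ndimage import gaussian_filter1d

period = 18
x = np.arange(10 * period)                                      # ten fringe periods
square = (np.cos(2 * np.pi * x / period) >= 0).astype(float)    # 1-bit binary fringe

for sigma in (0.5, 2.0, 4.0):                                   # small to large simulated defocus
    blurred = gaussian_filter1d(square, sigma, mode='wrap')
    spectrum = np.abs(np.fft.rfft(blurred - blurred.mean()))
    k = 10                                                       # bin of the fundamental (10 cycles)
    ratio = spectrum[3 * k] / spectrum[k]                        # residual 3rd harmonic vs fundamental
    print(f"sigma={sigma}: 3rd/1st harmonic = {ratio:.3f}")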

    Machine vision systems : automated inspection & metrology

    The purpose of the project was to develop a high-speed, high-accuracy measuring device to aid the engineering technology department at Western Carolina University. When something requires measurement with a high degree of accuracy, a coordinate measuring machine is used. This process can be very time consuming, especially when multiple iterations are required. A machine vision system is capable of making the same type of measurements in a matter of seconds rather than minutes. This study covers the development and testing of a machine vision system. Several tests were conducted to help develop and improve the system through changes to the test fixture, lighting, programming, and test object. The results of these tests are only valid for the specific set-up and equipment used, and cannot be transferred to any other system. Even slight changes to the equipment during testing showed significant changes in the data being gathered. This contributed to the final conclusion that the measurements gathered by the machine vision system are not comparable to those of the coordinate measuring machine at any level of accuracy. Improvements to the machine vision system setup must be made to improve accuracy.

    High quality three-dimensional (3D) shape measurement using intensity-optimized dithering technique

    In past decades, there has been an upsurge in the development of three-dimensional (3D) shape measurement and its applications. Over the years, a variety of technologies have been developed, including laser scanning, stereo vision, and structured light. Among these, the structured-light technique has the advantages of fast computation speed and high measurement resolution, and has therefore been extensively studied. Nowadays, with the rapid development of digital devices, different kinds of patterns can easily be generated by a video projector. As a result, digital fringe projection (DFP), a variation of the structured light method, has found many applications owing to its speed and accuracy.

    Typically, for a DFP system, ideal sinusoidal fringe pattern projection is required for high-accuracy 3D information retrieval. Since traditional DFP projects 8-bit sinusoidal fringe patterns, it suffers from some major limitations, such as the speed limit (e.g., 120 Hz), the requirement for nonlinear gamma calibration, and the rigid synchronization requirement between the projector and the camera. To overcome these limitations, the binary defocusing technology was developed, which projects 1-bit square binary patterns and generates ideal sinusoidal patterns through projector defocusing. In the past few years, the binary defocusing technique has shown great potential for many applications owing to its speed breakthroughs, its freedom from nonlinear gamma calibration, and the absence of a rigid synchronization requirement between the camera and the projector. However, a typical square binary pattern suffers from some major limitations: (1) high-order harmonics introduced by the square wave, which affect measurement accuracy, cannot be completely eliminated by projector defocusing; (2) the measurement volume is reduced, since the projector needs to be properly defocused to generate the desired high-quality sinusoidal patterns; and (3) it is difficult to achieve high-quality measurements with wider square binary patterns.

    The binary dithering technique, originally developed for printing, is found to have great potential for overcoming these limitations of the square binary method. However, binary dithering, which simply applies a matrix operation to the whole image, still has great room for improvement, especially when the fringe patterns are not sufficiently defocused. Although past efforts have been made to improve the performance of dithering techniques for 3D shape measurement, those approaches are either computationally expensive or fail to improve quality across different amounts of defocusing.

    In this research, we aim at further improving the binary dithering technique by optimizing the dithered patterns in the intensity domain. We have developed both global and local optimization frameworks for improving dithered patterns. Our simulation and experimental results demonstrate that the global optimization framework improves the Bayer-order dithering technique by approximately 25% overall and by up to 50% for narrower fringe patterns (e.g., a fringe period of T = 18 pixels), while the local optimization framework improves the performance of the more advanced error-diffusion dithering technique by 20% overall and by up to 40% for narrower fringe patterns (e.g., T = 18 pixels). Moreover, since the local algorithm involves optimizing a small image block and building up the desired-size patterns using symmetry and periodicity, it is much faster in terms of optimization time than the global algorithm.
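    For reference, the sketch below shows plain Bayer-order dithering of an ideal 8-bit sinusoidal fringe into a 1-bit pattern, i.e., the un-optimized baseline that the global and local intensity-domain optimizations improve upon. The matrix size, image size, and fringe period (T = 18 pixels, matching the example above) are illustrative; the optimization itself is not shown.

import numpy as np

# Standard 4x4 Bayer threshold matrix, scaled to [0, 1).
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

H, W, period = 64, 128, 18
x = np.arange(W)
sinusoid = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * x / period), (H, 1))  # ideal fringe in [0, 1]

# Tile the threshold matrix over the image and binarize.
thresholds = np.tile(bayer4, (H // 4, W // 4))
dithered = (sinusoid > thresholds).astype(np.uint8)       # 1-bit pattern for the projector
print(dithered.shape, dithered.dtype, dithered.mean())    # mean ~0.5: average intensity preserved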

    Facilitating sensor interoperability and incorporating quality in fingerprint matching systems

    This thesis addresses the issues of sensor interoperability and quality in the context of fingerprints and makes a three-fold contribution. The first contribution is a method to facilitate fingerprint sensor interoperability that involves the comparison of fingerprint images originating from multiple sensors. The proposed technique models the relationship between images acquired by two different sensors using a Thin Plate Spline (TPS) function. Such a calibration model is observed to enhance the inter-sensor matching performance on the MSU dataset containing images from optical and capacitive sensors. Experiments indicate that the proposed calibration scheme improves the inter-sensor Genuine Accept Rate (GAR) by 35% to 40% at a False Accept Rate (FAR) of 0.01%. The second contribution is a technique to incorporate the local image quality information in the fingerprint matching process. Experiments on the FVC 2002 and 2004 databases suggest the potential of this scheme to improve the matching performance of a generic fingerprint recognition system. The final contribution of this thesis is a method for classifying fingerprint images into 3 categories: good, dry and smudged. Such a categorization would assist in invoking different image processing or matching schemes based on the nature of the input fingerprint image. A classification rate of 97.45% is obtained on a subset of the FVC 2004 DB1 database
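    As a hedged sketch of the calibration idea in the first contribution, the Python snippet below fits a thin-plate-spline mapping between corresponding minutiae coordinates from two sensors using SciPy's RBFInterpolator (assumed available, SciPy 1.7+), then warps new points from one sensor's coordinate frame into the other's before matching. The correspondences are synthetic, not the MSU data, and this is not the thesis's exact model.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src = rng.uniform(0, 300, size=(30, 2))                  # minutiae from sensor A (pixels)

# Simulated sensor-B coordinates: a mild nonlinear distortion of sensor A plus noise.
dst = src * 1.05 + 4.0 * np.sin(src / 60.0) + rng.normal(0, 0.5, src.shape)

# Fit the 2-D thin-plate-spline warp from sensor A to sensor B coordinates.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Map new sensor-A minutiae into sensor-B coordinates before matching.
query = rng.uniform(0, 300, size=(5, 2))
print(tps(query))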