
    Camera-based close-range coordinate metrology

    The main focus of this thesis was the development of techniques and methodologies to allow the effective application of photogrammetry to coordinate metrology. The current understanding of the propagation of uncertainty through camera-based measurement systems is limited due to the highly complex and non-linear nature of the techniques. Additionally, the application of existing verification standards to high-accuracy photogrammetry systems is unclear and difficult. The aim of this work was, therefore, to develop techniques to evaluate the coordinate measurement uncertainty of photogrammetry systems, as well as methods that allow existing verification standards to be effectively applied. Based on Monte Carlo simulations, an evaluation of the contributing factors to the expanded uncertainty of measurements made by a stereo photogrammetry system was performed. A traceable scaling methodology was also applied to the stereo system, allowing the key contributing factors to the stereo system measurement to be identified and targeted in future work. Additionally, the effect of systematic errors on the measurement volume was simulated and then verified through experimental observations. A laser speckle texture projection methodology was also developed to allow existing verification standards to be applied to conventional photogrammetry systems. By projecting artificial texture onto the verification artefact surface, the verification procedures outlined in VDI/VDE 2634 Part 3 were applied. The results of the verification tests demonstrated the high levels of accuracy that can be achieved by photogrammetry-based coordinate measurement systems. Through the use of fringe projection techniques, an additional method of applying verification standards to a stereo photogrammetry system was also demonstrated. By using phase encoding to find correspondences between cameras, the verification tests were applied and were in agreement with predicted values. The use of phase-encoded correspondence also presents a promising method to vastly improve the accuracy of the characterisation of stereo system properties. Finally, the principles of photogrammetry were applied to several case studies: the methods developed in this thesis were used to develop data fusion techniques that greatly improve the bandwidth of measurements, to use laser speckle to produce material-agnostic measurements of part geometry, and to calibrate reconstruction scale factors using light-field imaging principles.
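
    The thesis abstract above gives no implementation detail, but the Monte Carlo approach it describes can be illustrated with a minimal sketch: perturb the image observations and camera parameters of an idealised rectified stereo pair with assumed noise levels, triangulate each sample, and take the spread of the reconstructed coordinates as the measurement uncertainty. All numerical values below (focal length, baseline, noise levels) are illustrative assumptions, not values from the thesis.

        # Minimal Monte Carlo sketch of uncertainty propagation through an
        # idealised rectified stereo triangulation (illustrative values only).
        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed nominal stereo parameters (not taken from the thesis).
        f = 4000.0              # focal length in pixels
        B = 200.0               # baseline in mm
        cx, cy = 1024.0, 768.0  # principal point in pixels

        # Nominal image observation of one target (left/right columns, common row).
        uL, uR, v = 1500.0, 1400.0, 800.0

        # Assumed standard uncertainties of the inputs.
        sigma_px = 0.05         # image localisation noise in pixels
        sigma_f = 1.0           # focal length uncertainty in pixels
        sigma_B = 0.02          # baseline (scale) uncertainty in mm

        def triangulate(uL, uR, v, f, B):
            d = uL - uR                  # disparity in pixels
            Z = f * B / d
            X = (uL - cx) * Z / f
            Y = (v - cy) * Z / f
            return np.stack([X, Y, Z], axis=-1)

        N = 100_000
        samples = triangulate(
            uL + rng.normal(0, sigma_px, N),
            uR + rng.normal(0, sigma_px, N),
            v + rng.normal(0, sigma_px, N),
            f + rng.normal(0, sigma_f, N),
            B + rng.normal(0, sigma_B, N),
        )

        u_std = samples.std(axis=0)      # standard uncertainty per axis (mm)
        print("standard uncertainty (mm):", u_std)
        print("expanded uncertainty, k=2 (mm):", 2 * u_std)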

    Verification of micro-scale photogrammetry for smooth three-dimensional object measurement

    By using sub-millimetre laser speckle pattern projection, we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations of less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found and, as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards was also utilised to provide a complete verification method and determine the quality parameters for the system under test. Using the proposed method, a photogrammetry system measured a 5 mm radius sphere with an expanded uncertainty of 8.5 μm for sizing errors and 16.6 μm for form errors at a 95 % confidence level. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system and were found to have expanded uncertainties of around 20 μm at a 95 % confidence level.
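
    As a rough illustration of how an expanded uncertainty of this kind is obtained, the sketch below computes a probing size error and its expanded uncertainty (coverage factor k = 2, approximately a 95 % interval) from repeated least-squares sphere radius estimates. The repeated radii are synthetic; none of the numbers reproduce the paper's data.

        # Illustrative probing size error and expanded uncertainty from repeated
        # sphere fits (synthetic data, not the paper's measurements).
        import numpy as np

        rng = np.random.default_rng(0)

        R_cal = 5.000                                          # calibrated radius, mm
        R_meas = R_cal + rng.normal(0.002, 0.004, size=25)     # synthetic fitted radii, mm

        size_errors = R_meas - R_cal        # probing size error per repeat
        mean_error = size_errors.mean()
        u = size_errors.std(ddof=1)         # type A standard uncertainty
        U = 2 * u                           # expanded uncertainty, k = 2 (~95 %)

        print(f"mean sizing error: {mean_error * 1e3:.1f} um")
        print(f"expanded uncertainty (k=2): {U * 1e3:.1f} um")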

    Volumetric error modelling of a stereo vision system for error correction in photogrammetric three-dimensional coordinate metrology

    Optical three-dimensional coordinate measurement using stereo vision has systematic errors that affect measurement quality. This paper presents a scheme for measuring, modelling and correcting these errors. The position and orientation of a linear stage are measured with a laser interferometer while a stereo vision system tracks target points on the moving stage. With reference to the higher-accuracy laser interferometer measurement, the displacement errors of the tracked points are evaluated. Regression using a neural network is used to generate a volumetric error model from the evaluated displacement errors, and the regression model is shown to outperform other interpolation methods. The volumetric error model is validated by correcting the three-dimensional coordinates of the point cloud from a photogrammetry instrument that uses the stereo vision system. The corrected points from the measurement of a calibrated spherical artefact are shown to have size and form errors of less than 50 μm and 110 μm, respectively. A reduction of up to 30 % in the magnitude of the probing size error is observed after error correction is applied.
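
    The abstract does not specify the network architecture, so the sketch below uses scikit-learn's MLPRegressor as a stand-in to show the overall scheme: learn a regression from measured coordinates to displacement errors, then subtract the predicted error from newly measured points. The training data here are synthetic, not the interferometer data from the paper.

        # Sketch of volumetric error correction by regression on synthetic data.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic training data: stage positions measured by the stereo system
        # (inputs) and displacement errors relative to the interferometer (targets).
        X_train = rng.uniform(-200, 200, size=(500, 3))                 # mm
        true_field = 1e-4 * X_train**2 - 5e-3 * X_train                 # made-up error field
        y_train = true_field + rng.normal(0, 0.005, X_train.shape)      # plus noise

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
        )
        model.fit(X_train, y_train)

        # Correct a new point cloud by subtracting the predicted volumetric error.
        points = rng.uniform(-200, 200, size=(1000, 3))
        corrected = points - model.predict(points)

        residual = np.abs(model.predict(X_train) - y_train).mean()
        print(f"mean training residual: {residual:.4f} mm")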

    Design and characterisation of an additive manufacturing benchmarking artefact following a design-for-metrology approach

    We present the design and characterisation of a high-speed sintering additive manufacturing benchmarking artefact following a design-for-metrology approach. In an important improvement over conventional approaches, the specifications and operating principles of the instruments that would be used to measure the manufactured artefact were taken into account during its design. With the design-for-metrology methodology, we aim to improve and facilitate measurements of parts produced using additive manufacturing. The benchmarking artefact has a number of geometrical features, including sphericity, cylindricity, coaxiality and minimum feature size, all of which are measured using contact, optical and X-ray computed tomography coordinate measuring systems. The results highlight the differences between the measuring methods and the need to establish specification standards and guidance for the dimensional assessment of additive manufacturing parts.

    Multi-view fringe projection system for surface topography measurement during metal powder bed fusion

    Metal powder bed fusion (PBF) methods need in-process measurement methods to increase user confidence and encourage further adoption in high-value manufacturing sectors. In this paper, a novel measurement method for PBF systems is proposed that uses multi-view fringe projection to acquire high-resolution surface topography information from the powder bed. Measurements were made using a mock-up of a commercial PBF system to assess the system’s accuracy and precision in comparison to conventional single-view fringe projection techniques for the same application. Results show that the multi-view system is more accurate, but less precise, than single-view fringe projection on a point-by-point basis. The multi-view system also achieves a high degree of surface coverage by using alternate views to access areas not measured by a single camera.
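
    A minimal sketch of the point-by-point comparison described above, assuming accuracy is taken as the mean systematic deviation from a reference surface and precision as the mean repeatability over repeated measurements; the height maps are synthetic and do not reproduce the paper's results.

        # Illustrative point-by-point accuracy/precision comparison of two methods
        # against a reference surface (synthetic height maps).
        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.uniform(0.0, 0.1, size=(100, 100))       # reference height map, mm

        def repeats(bias, noise, n=10):
            # Simulate n repeated measurements with a systematic bias and random noise.
            return reference + bias + rng.normal(0, noise, size=(n, *reference.shape))

        single_view = repeats(bias=0.010, noise=0.002)            # larger bias, lower noise
        multi_view = repeats(bias=0.004, noise=0.005)             # smaller bias, more noise

        for name, reps in [("single-view", single_view), ("multi-view", multi_view)]:
            accuracy = np.abs(reps.mean(axis=0) - reference).mean()   # systematic deviation
            precision = reps.std(axis=0).mean()                        # repeatability
            print(f"{name}: accuracy {accuracy * 1e3:.1f} um, precision {precision * 1e3:.1f} um")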

    Flexible decoupled camera and projector fringe projection system using inertial sensors

    Measurement of objects with complex geometry and many self-occlusions is increasingly important in many fields, including additive manufacturing. In a conventional fringe projection system, the camera and the projector cannot move independently with respect to each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup in which the camera can move independently of the projector, thus minimising the effects of self-occlusion. The angular motion of the camera is tracked, and the system recalibrated, using an on-board inertial angular sensor, which additionally enables automated point cloud registration.
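
    A minimal sketch, assuming the angular sensor reports the camera's rotation in the fixed projector frame: the calibrated extrinsics are updated with the reported rotation, and the point cloud from the new viewpoint is mapped into the projector frame as a coarse, automated registration step. Poses, angles and points are illustrative, not the paper's values.

        # Sketch: update a moved camera's pose from an inertial angle reading and
        # pre-register its point cloud in the fixed projector frame.
        import numpy as np
        from scipy.spatial.transform import Rotation as R

        rng = np.random.default_rng(0)

        # Calibrated camera pose (camera-to-projector frame): x_proj = R_pose @ x_cam + t_pose.
        R_pose = np.eye(3)
        t_pose = np.array([100.0, 0.0, 0.0])        # mm

        # Rotation reported by the inertial sensor after the camera is repositioned
        # (assumed here to be expressed in the projector frame).
        delta = R.from_euler("y", 25.0, degrees=True).as_matrix()
        R_pose_new = delta @ R_pose                 # updated camera orientation

        # Points reconstructed in the camera frame at the new viewpoint.
        points_cam = rng.uniform(-10.0, 10.0, size=(500, 3))

        # Coarse registration into the common projector frame, ready for fine
        # registration (e.g. ICP) against point clouds from earlier viewpoints.
        points_proj = points_cam @ R_pose_new.T + t_pose
        print(points_proj[:3])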

    Optimisation of camera positions for optical coordinate measurement based on visible point analysis

    In optical coordinate measurement using cameras, the number of images, and the positions and orientations of the cameras, are critical to object accessibility and to the accuracy of a measurement. In this paper, we propose a technique to optimise the number and positions of the cameras for the measurement of a given object using visible point analysis of the object's computer-aided design data. The visible point analysis technique is based on a hidden point removal approach, which is used to detect which surface points on the object are visible from a given camera position. A genetic algorithm is then used to find the set of positions that provides optimum surface point density and overlap between views while minimising the total number of camera images required, and hence the measurement data processing time. We test this optimisation procedure on four artefacts, and the measurements are shown to be comparable to those from a traceable contact coordinate measuring machine. We show that using our procedure improves measurement quality compared to the more conventional approach of using equally spaced images. This work is part of a larger effort to fully automate and optimise optical coordinate measurement techniques.
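
    The visible point analysis and genetic algorithm can be sketched as follows, using the hidden point removal operator of Katz et al. for visibility and a simple binary-genome genetic algorithm whose fitness rewards surface coverage and penalises the number of images. The artefact (a sampled sphere), the candidate viewpoints and the GA settings are illustrative assumptions, not the paper's implementation.

        # Sketch: camera position selection by visible point analysis and a
        # simple genetic algorithm (illustrative object and parameters).
        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)

        # Surface points sampled from the object's CAD model (here: a unit sphere).
        n_pts = 2000
        surf = rng.normal(size=(n_pts, 3))
        surf /= np.linalg.norm(surf, axis=1, keepdims=True)

        # Candidate camera positions on a surrounding sphere of radius 5.
        n_cand = 30
        cand = rng.normal(size=(n_cand, 3))
        cand = 5.0 * cand / np.linalg.norm(cand, axis=1, keepdims=True)

        def visible(points, cam, r_factor=100.0):
            # Hidden point removal (Katz et al. 2007): indices visible from cam.
            p = points - cam
            d = np.linalg.norm(p, axis=1)
            radius = d.max() * r_factor
            flipped = p + 2.0 * (radius - d)[:, None] * (p / d[:, None])
            hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
            idx = set(hull.vertices.tolist())
            idx.discard(len(points))            # drop the appended viewpoint itself
            return frozenset(idx)

        vis = [visible(surf, c) for c in cand]   # visibility set per candidate position

        def fitness(genome, camera_cost=0.03):
            # Reward surface coverage, penalise the number of images required.
            chosen = np.flatnonzero(genome)
            if chosen.size == 0:
                return -1.0
            covered = frozenset().union(*(vis[i] for i in chosen))
            return len(covered) / n_pts - camera_cost * chosen.size

        # Minimal genetic algorithm over binary genomes (one bit per candidate).
        pop = rng.integers(0, 2, size=(40, n_cand))
        for _ in range(60):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[::-1][:20]]     # truncation selection
            children = []
            for _ in range(20):
                a, b = parents[rng.integers(0, 20, 2)]
                cut = rng.integers(1, n_cand)
                child = np.concatenate([a[:cut], b[cut:]])   # single-point crossover
                flip = rng.random(n_cand) < 0.05             # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.vstack([parents, np.array(children)])

        best = pop[np.argmax([fitness(g) for g in pop])]
        print("selected viewpoints:", np.flatnonzero(best), "fitness:", fitness(best))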

    Fusion of photogrammetry and coherence scanning interferometry data for all-optical coordinate measurement

    Multisensor data fusion is an approach to enlarge the potential applicability of measuring techniques and improve accuracy by taking advantage of the strengths of different techniques. In this work, we present a new method for the fusion of photogrammetry and coherence scanning interferometry (CSI) data. This method allows the photogrammetry data to be accurately scaled with reference to the CSI data, and in turn the exact locations of multiple CSI measurements can be determined in the coordinate system defined by photogrammetry. The culmination of this work is to allow high-accuracy three-dimensional optical coordinate measurement and surface topography measurement to be performed simultaneously.
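
    One way to realise the scaling step described above is to estimate a least-squares similarity transform (scale, rotation, translation) between a few points known in both coordinate systems and apply it to the full photogrammetry point cloud. The sketch below uses the Umeyama method with synthetic correspondences and does not reproduce the paper's registration pipeline.

        # Sketch: rescale photogrammetry data against reference coordinates using a
        # least-squares similarity transform (synthetic correspondences).
        import numpy as np
        from scipy.spatial.transform import Rotation as Rot

        def umeyama(src, dst):
            # Similarity transform (scale, R, t) such that dst ~ scale * R @ src + t.
            mu_s, mu_d = src.mean(0), dst.mean(0)
            S, D = src - mu_s, dst - mu_d
            U, sigma, Vt = np.linalg.svd(D.T @ S / len(src))
            d = np.ones(3)
            d[-1] = np.sign(np.linalg.det(U @ Vt))
            R = U @ np.diag(d) @ Vt
            scale = (sigma * d).sum() / S.var(0).sum()
            t = mu_d - scale * R @ mu_s
            return scale, R, t

        rng = np.random.default_rng(0)

        # Reference coordinates of a few fiducials (e.g. located in the CSI data), mm.
        ref = rng.uniform(0.0, 50.0, size=(6, 3))

        # The same fiducials as reconstructed by photogrammetry: rotated, offset and
        # wrongly scaled (synthetic ground truth for the demonstration).
        R_true = Rot.from_euler("xyz", [10, -5, 30], degrees=True).as_matrix()
        photo = 1.03 * ref @ R_true.T + np.array([5.0, 2.0, -3.0])

        scale, R, t = umeyama(photo, ref)
        rescaled = scale * photo @ R.T + t       # photogrammetry data in the reference frame
        print("recovered scale factor:", scale)
        print("maximum residual (mm):", np.abs(rescaled - ref).max())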

    Characterisation of a multi-view fringe projection system based on the stereo matching of rectified phase maps

    Multi-view fringe projection systems can be effective solutions to the limitations of single camera-projector systems, such as restricted field of view, line-of-sight issues and occlusions, when measuring the geometry of complex objects. However, characterisation of a multi-view system is challenging since it requires the cameras and projectors to be placed in a common global coordinate system. We present a method for characterising a multi-view fringe projection system that does not require characterisation of the projector. The novelty of the method lies in determining the correspondences in the phase domain using the rectified unwrapped phase maps and triangulating the matched phase values to reconstruct the three-dimensional shape of the object. A benefit of the method is that it does not require registration of the point clouds acquired from multiple perspectives. The proposed method is validated by experiment and by comparison with a conventional system and a contact coordinate measuring machine.
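
    A minimal sketch of the correspondence search in the phase domain: after rectification, each pixel in the left unwrapped phase map is matched to the sub-pixel column in the right map having the same phase value, and the resulting disparity is triangulated. The phase maps and stereo parameters below are synthetic and purely illustrative.

        # Sketch: stereo matching of rectified unwrapped phase maps (synthetic data).
        import numpy as np

        # Assumed rectified stereo parameters (illustrative, not from the paper).
        f, B = 1500.0, 150.0                    # focal length (px), baseline (mm)

        rows, cols = 4, 512
        col = np.arange(cols, dtype=float)

        # Synthetic unwrapped phase maps for a fronto-parallel surface: both views
        # observe the same projected phase, offset by the true disparity.
        true_disp = 150.0
        phase_left = 0.05 * col[None, :].repeat(rows, axis=0)
        phase_right = 0.05 * (col[None, :] + true_disp).repeat(rows, axis=0)

        depths = []
        for r in range(rows):
            # Phase is monotonic along a rectified row, so the correspondence can be
            # found by inverting the right-hand phase profile with 1-D interpolation.
            right_col = np.interp(phase_left[r], phase_right[r], col)
            valid = (phase_left[r] >= phase_right[r, 0]) & (phase_left[r] <= phase_right[r, -1])
            disparity = (col - right_col)[valid]
            depths.append(f * B / disparity)    # triangulated depth per matched pixel

        Z = np.concatenate(depths)
        print(f"recovered depth: {Z.mean():.1f} mm (expected {f * B / true_disp:.1f} mm)")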