The concept of virtual landmarks in 3D multi-view fringe projection
For a 360-deg 3D measurement of an object, the optical 3D sensor scans the object from different positions, and the resulting single patches have to be transformed into a common global coordinate system so that the point clouds can be merged into the final complete 3D data set. Here we summarize and give some system realizations of the method we call the "method of virtual landmarks" /1, 2/, which realizes this local-to-global coordinate transformation without accurate mechanical sensor handling, sensor tracking, markers fixed on the object, or point-cloud-based registration techniques. Instead, the coordinates and orientation of the sensor, and thereby the local-to-global coordinate transformation, are calculated by bundle adjustment methods, whereby the pixels of the so-called connecting camera form 'virtual landmarks' for the registration of the single views in order to obtain a complete all-around image. This flexibility makes the method useful for a wide range of system realizations, which are shown in the paper, such as robot-guided, handheld /3/, and tripod-based systems for the flexible measurement of complex and/or large objects.
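The core of the local-to-global registration step is estimating a rigid transform from corresponding 3D points. The following is a minimal illustrative sketch of that sub-step only (the Kabsch/Procrustes solution, not the authors' full bundle adjustment); all names and the synthetic data are fabricated for illustration.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with dst ≈ R @ src + t
    from corresponding 3D points (Kabsch/Procrustes, least squares)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (det = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known pose from noiseless correspondences.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 5.0])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
```

In the virtual-landmark setting, the correspondences would come from connecting-camera pixels observed in overlapping views rather than from physical markers.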
View planning for 3D reconstruction using time-of-flight camera data as a-priori information
Solving the next best view (NBV) problem is an important task for automated 3D reconstruction. An NBV algorithm provides sensor positions from which maximal information gain about the measurement object can be expected during the next scan. With no or only limited information available during the first views, automatic data-driven view planning performs suboptimally. In order to overcome these inefficiencies during the startup phase, we examined the use of time-of-flight (TOF) camera data to improve view planning. The additional low-resolution 3D information, gathered during sensor movement, makes it possible to plan even the first scans customized to previously unknown objects. Measurement examples using a robot-mounted fringe projection stereo 3D scanner with a TOF camera are presented.
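The NBV selection step can be sketched as a greedy choice over candidate sensor poses, each scored by how many still-unknown voxels it would observe. This toy assumes a precomputed boolean visibility table; in a real system that table would come from ray casting against the coarse TOF point cloud, and the scoring function would be richer than a plain count.

```python
import numpy as np

def next_best_view(visibility, known):
    """Greedy NBV: pick the candidate view revealing the most unknown voxels.
    visibility: (n_views, n_voxels) bool, view i sees voxel j
    known:      (n_voxels,) bool, voxel already measured"""
    gain = (visibility & ~known).sum(axis=1)   # unknown voxels each view reveals
    return int(np.argmax(gain)), int(gain.max())

# Tiny fabricated example: 3 candidate views, 5 voxels, 2 already known.
visibility = np.array([[1, 1, 0, 0, 0],
                       [0, 1, 1, 1, 0],
                       [0, 0, 0, 1, 1]], dtype=bool)
known = np.array([1, 1, 0, 0, 0], dtype=bool)
view, gain = next_best_view(visibility, known)  # view 1 reveals 2 new voxels
```

The paper's point is exactly that without the TOF prior, `known` and `visibility` are empty or unreliable during the first scans, so any such scoring degenerates.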
High Performance, low latency 3D sensor network for live full object reconstruction
With recent advances in high-speed 3D measurement sensor technologies, the focus shifts from merely acquiring 3D sensor data quickly to processing it further. An advanced application area is the fusion of multiple sensor streams into a complete object representation without occlusions. Even more challenging is processing the high-speed 3D streams online, instead of using the current offline processing approaches. To this end, we combine our cost-effective GOBO slide-based pattern projector (GOBO: GOes Before Optics) with commodity GigE Vision network sensors into a multi-sensor system with complete online monitoring capabilities. The targeted use case has to deal with partial occlusions and low-latency requirements for machine control. Specifically, three active NIR stereo 3D sensors are aggregated through a 10 Gb Ethernet switch and processed by a single GPU-assisted workstation. Thus, a combined continuous data stream of up to 78 million 3D points per second is calculated online out of a raw 2D data stream of up to approximately 1250 Mb/s. The system's latency for simpler 3D analysis tasks, such as movement tracking, is ≤ 200 ms.
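A quick back-of-envelope check of the stated rates: with three sensors contributing equally (an assumption; the abstract only gives the combined figure), each sensor delivers 26 million 3D points per second, and a 200 ms end-to-end latency budget implies roughly 15.6 million points in flight in the worst case.

```python
# Sanity arithmetic on the figures quoted in the abstract.
# The equal per-sensor split is an assumption for illustration.
sensors = 3
total_points_per_s = 78e6                  # combined 3D point rate
per_sensor_points = total_points_per_s / sensors      # 26e6 points/s/sensor
latency_budget_s = 0.200                   # ≤ 200 ms for tracking tasks
points_in_flight = total_points_per_s * latency_budget_s   # ≈ 15.6e6 points
```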
Comparison and evaluation of correspondence finding methods in 3D measurement systems using fringe projection
Three different methods to realize point correspondences in 3D measurement systems based on fringe projection are described and compared with respect to accuracy, sensitivity, and handling. Advantages and disadvantages of the three techniques are discussed. A suggestion is made to combine the principles in order to achieve an improved completeness of the measurements. The principle of a virtual image point raster, which is the basis of the combination of the methods, is explained. A model describing the random error of a 3D point measurement for the three methods is established. Simulations and real measurements confirm this error model. Experiments are described and results are presented.
Low-latency 3D sensor network for real-time object reconstruction
The fusion of multiple 3D data sets, acquired simultaneously by different sensors in different positions relative to the measurement object in order to capture its surface completely, and the real-time processing of these data sets are demanding fields of optical 3D metrology. To this end, a multi-sensor system consisting of three IR stereo sensors was developed for complete real-time acquisition and low-latency presentation of the 3D results. Using the cost-effective GOBO pattern projection technique and a GPU computing unit, real-time processing of a continuous data stream of up to 78 million 3D points per second with a latency below 200 ms was realized. In particular, the measurement accuracy of the sensor network was characterized in a measurement volume of approx. 1000 mm x 2200 mm x 500 mm by determining the length measurement error and the probing error, both for the individual sensors and for the entire sensor network. Measurement examples are given and possibilities for further development are discussed.
Phase unwrapping in fringe projection systems using epipolar geometry
A new method for phase unwrapping is introduced which unwraps the phase images produced by fringe projection systems without binary codes, using at least two cameras and one projector. The novelty of the method is the use of the epipolar geometry between the two cameras and the projector in order to achieve a unique point correspondence. The method is suited for systems that require a short recording time for the image sequence acquisition. It is very robust even at positions with abrupt changes of depth.
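The idea can be illustrated in a deliberately minimal 1D toy: a camera-1 pixel with wrapped phase φ has one candidate correspondence on camera 2's epipolar line per possible fringe order k, and only the true candidate shows a matching wrapped phase. Everything below (candidate positions, the second camera's phase profile, the tolerance) is fabricated for illustration and stands in for what calibration and the real scene would provide.

```python
import numpy as np

TWO_PI = 2 * np.pi

def unwrap_fringe_order(phi1, phi2_row, candidate_pos, n_orders=8, tol=0.2):
    """Resolve the fringe order k of a camera-1 pixel with wrapped phase phi1.
    phi2_row:      wrapped phase sampled along camera 2's epipolar line
    candidate_pos: maps order k to a position on that line (in a real rig
                   this comes from the calibrated epipolar geometry)."""
    consistent = []
    for k in range(n_orders):
        x2 = candidate_pos(k)
        if not 0 <= x2 < len(phi2_row):
            continue                     # candidate outside camera 2's image
        # wrapped phase difference, folded into (-pi, pi]
        err = abs(np.angle(np.exp(1j * (phi2_row[x2] - phi1))))
        if err < tol:
            consistent.append(k)
    # a unique phase-consistent candidate resolves the 2*pi ambiguity
    return consistent[0] if len(consistent) == 1 else None

# Toy line: candidates for orders k = 0..7 land 25 px apart; only the true
# correspondence (order 5 at position 137) carries a matching wrapped phase.
n = 200
phi1 = 0.48 * TWO_PI
phi2_row = np.full(n, np.mod(phi1 + np.pi, TWO_PI))  # anti-phase everywhere...
phi2_row[137] = phi1                                 # ...except the true match
candidate_pos = lambda k: 12 + 25 * k
k = unwrap_fringe_order(phi1, phi2_row, candidate_pos)
```

In the paper's setup the same consistency check additionally involves the projector, which is what makes the correspondence unique without any binary Gray-code sequence.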
Calibration evaluation and calibration stability monitoring of fringe projection based 3D scanners
In this work, a simple new method for calibration evaluation and calibration stability monitoring of fringe projection based 3D scanners is introduced. The method is based on high-precision point correspondence finding of fringe projection sensors, using phase values in two perpendicular directions and the epipolar geometry derived from the calibration data of the stereo sensors. The calibration evaluation method can be applied during the measurement process and does not require any additional effort or equipment. It allows the evaluation of the current set of calibration parameters and an assessment of the stability of the current calibration over time. Additionally, the quality of the distortion correction can be scored. The behavior of three types of fringe projection based 3D stereo scanners was analyzed by experimental measurements. The results show that the calibration may be stable over a long time period; on the other hand, suddenly occurring disturbances can be detected well. Additionally, the calibration error usually shows a significant drift during the warm-up phase until the operating temperature is reached.
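A natural score for this kind of monitoring is the distance of each measured correspondence from its epipolar line: near zero while the calibration holds, drifting as the rig warms up or decalibrates. The sketch below is an illustrative version of that score using a fundamental matrix; the rectified-stereo F and the point values are fabricated for the example and are not taken from the paper.

```python
import numpy as np

def epipolar_error(F, x1, x2):
    """Per-point distance (in pixels) of x2 from the epipolar line F @ x1.
    F:      3x3 fundamental matrix from the stored calibration
    x1, x2: (n, 2) corresponding pixel coordinates in the two cameras"""
    h1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    lines = h1 @ F.T                              # epipolar lines in image 2
    num = np.abs(np.sum(lines * h2, axis=1))      # |l . x2|
    den = np.hypot(lines[:, 0], lines[:, 1])      # normalize line (a, b)
    return num / den

# Rectified-stereo toy: correspondences should share the same row, so the
# error reduces to the row disagreement. Second pair is 2.5 px off.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
x1 = np.array([[100.0, 50.0], [200.0, 80.0]])
x2 = np.array([[ 90.0, 50.0], [185.0, 82.5]])
err = epipolar_error(F, x1, x2)
```

Logging a robust statistic of `err` (e.g. the median) per measurement would expose both the warm-up drift and sudden disturbances described above.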