Uncertainty on fringe projection technique: a Monte-Carlo-based approach
Error estimation for optical full-field techniques (OFFT) is a milestone in the diffusion of OFFT. The present work describes a generic way to estimate the overall error in fringe projection, whether due to random sources (phase error, basically related to the quality of the camera and of the fringe extraction algorithm) or to bias (calibration errors). Here, a high-level calibration procedure based on a pinhole model has been implemented. This model compensates for the divergence effects of both the video-projector and the camera. The work is based on a Monte Carlo procedure; to this end, complete models of the calibration procedure and of a reference experiment are necessary. Here, the reference experiment consists of multiple-step out-of-plane displacements of a plane surface. The main conclusions of this work are: (1) the uncertainties in the calibration procedure lead to a global rotation of the plane; (2) the overall error has been calculated in two situations, ranging from 104 µm down to 10 µm; (3) the main error source is the phase error, even though errors due to the calibration are not always negligible.
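As a rough illustration of the Monte Carlo idea described above, the sketch below propagates a random phase error and a calibration error through a deliberately simplified phase-to-height model (the model, period, angle and noise levels are our illustrative assumptions, not the paper's full pinhole calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simplified phase-to-height model for fringe projection
# profilometry; period, angle and noise levels are illustrative assumptions.
period = 10.0               # fringe period on the object (mm)
theta = np.deg2rad(30.0)    # triangulation angle (rad)

def height_from_phase(phase, angle):
    """Convert unwrapped phase to out-of-plane height."""
    return phase * period / (2.0 * np.pi * np.tan(angle))

true_phase = 1.2                     # nominal unwrapped phase (rad)
phase_noise_std = 0.01               # random phase error (rad)
calib_angle_std = np.deg2rad(0.1)    # calibration uncertainty on theta (rad)

# Monte Carlo: draw many noisy realizations and look at the height spread.
n_trials = 100_000
noisy_phase = true_phase + rng.normal(0.0, phase_noise_std, n_trials)
noisy_theta = theta + rng.normal(0.0, calib_angle_std, n_trials)
heights = height_from_phase(noisy_phase, noisy_theta)

print(f"mean height: {heights.mean():.4f} mm")
print(f"overall uncertainty (1 sigma): {heights.std():.4f} mm")
```

The histogram of `heights` combines both error sources; comparing runs with each noise term switched off separates the random and calibration contributions, which is the kind of budget the paper reports.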
Camera Calibration from Dynamic Silhouettes Using Motion Barcodes
Computing the epipolar geometry between cameras with very different
viewpoints is often problematic as matching points are hard to find. In these
cases, it has been proposed to use information from dynamic objects in the
scene for suggesting point and line correspondences.
We propose a speed up of about two orders of magnitude, as well as an
increase in robustness and accuracy, to methods computing epipolar geometry
from dynamic silhouettes. This improvement is based on a new temporal
signature: motion barcode for lines. Motion barcode is a binary temporal
sequence for lines, indicating for each frame the existence of at least one
foreground pixel on that line. The motion barcodes of two corresponding
epipolar lines are very similar, so the search for corresponding epipolar lines
can be limited only to lines having similar barcodes. The use of motion
barcodes leads to increased speed, accuracy, and robustness in computing the
epipolar geometry.
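The motion-barcode construction described above can be sketched in a few lines (function names and the per-pixel line sampling are our assumptions; the original method additionally searches over candidate epipolar line pairs):

```python
import numpy as np

def motion_barcode(foreground_masks, line_points):
    """Binary sequence over frames: 1 if any foreground pixel lies on the line.

    foreground_masks: (T, H, W) boolean array of per-frame foreground masks.
    line_points: iterable of (row, col) pixel coordinates along the line.
    """
    rows = np.array([p[0] for p in line_points])
    cols = np.array([p[1] for p in line_points])
    # For each frame, check whether any sampled pixel on the line is foreground.
    return foreground_masks[:, rows, cols].any(axis=1).astype(np.uint8)

def barcode_similarity(a, b):
    """Normalized correlation between two zero-mean barcodes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Candidate epipolar line pairs whose barcode similarity falls below a threshold can be discarded without any geometric verification, which is where the two-orders-of-magnitude speed-up comes from.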
Intelligent composite layup by the application of low cost tracking and projection technologies
Hand layup is still the dominant forming process for the creation of the widest range of complex-geometry and mixed-material composite parts. However, this process is still poorly understood and informed, limiting productivity. This paper seeks to address this issue by proposing a novel and low cost system enabling a laminator to be guided in real-time, based on a predetermined instruction set, thus improving the standardisation of produced components. Within this paper the current methodologies are critiqued and future trends are predicted, prior to introducing the required inputs and outputs and developing the implemented system. As a demonstrator, a U-shaped component typical of the complex geometry found in many difficult-to-manufacture composite parts was chosen, and its drapeability was assessed using a kinematic drape simulation tool. An experienced laminator's knowledge base was then used to divide the tool into a finite number of features, with layup conducted by projecting and sequentially highlighting target features while tracking the laminator's hand movements across the ply. The system has been implemented with affordable hardware and demonstrates tangible benefits in comparison to currently employed laser-based systems. It has shown remarkable success to date, with rapid Technology Readiness Level advancement. This is a major stepping stone towards augmenting manual labour, with further benefits including more appropriate automation.
Frequency-based image analysis of random patterns: an alternative way to classical stereocorrelation
The paper presents an alternative to classical stereocorrelation. First, 2D image processing of random patterns is described; sub-pixel displacements are determined using phase analysis. Then distortion evaluation is presented: the distortion is identified without any assumption on the lens model, thanks to the use of a grid-technique approach. Last, shape measurement and shape variations are captured by fringe projection, with the analysis based on pinhole assumptions for both the video-projector and the camera. Fringe projection is then coupled with in-plane displacement measurement to give rise to a 3D measurement set-up. Metrological characterization shows a resolution comparable to classical (stereo)correlation techniques (1/100th pixel). Spatial resolution seems to be an advantage of the method, because of the use of temporal phase stepping (shape measurement, 1 pixel) and windowed Fourier transform (in-plane displacement measurement, 9 pixels). Two examples are given: the first is a study of skin properties; the second is a study of leather fabric. In both cases, the results are convincing and have been exploited to give a mechanical interpretation.
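A minimal sketch of sub-pixel displacement estimation by phase analysis, in the spirit of the technique described above (this uses a global cross-power-spectrum fit rather than the paper's windowed Fourier transform, and all names are ours):

```python
import numpy as np

def subpixel_shift_1d(f, g):
    """Estimate the sub-pixel shift d such that g(x) ~ f(x - d),
    from the phase slope of the cross-power spectrum."""
    F = np.fft.fft(f)
    G = np.fft.fft(g)
    cross = F * np.conj(G)          # phase(cross) = +2*pi*nu*d
    freqs = np.fft.fftfreq(len(f))
    phase = np.angle(cross)
    # Weighted linear fit of the phase slope over low positive frequencies;
    # the |cross| weights suppress bins with no signal energy.
    idx = np.arange(1, len(f) // 4)
    w = np.abs(cross)[idx]
    slope = np.sum(w * phase[idx] * freqs[idx]) / np.sum(w * freqs[idx] ** 2)
    return slope / (2.0 * np.pi)
```

Applied patch-wise to images of a random pattern, the same phase-slope idea yields the dense sub-pixel displacement fields the abstract refers to.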
Temporal phase unwrapping using deep learning
The multi-frequency temporal phase unwrapping (MF-TPU) method, as a classical
phase unwrapping algorithm for fringe projection profilometry (FPP), is capable
of eliminating the phase ambiguities even in the presence of surface
discontinuities or spatially isolated objects. For the simplest and most
efficient case, two sets of 3-step phase-shifting fringe patterns are used: the
high-frequency one is for 3D measurement and the unit-frequency one is for
unwrapping the phase obtained from the high-frequency pattern set. The final
measurement precision or sensitivity is determined by the number of fringes
used within the high-frequency pattern, under the precondition that the phase
can be successfully unwrapped without triggering the fringe order error.
Consequently, in order to guarantee a reasonable unwrapping success rate, the
fringe number (or period number) of the high-frequency fringe patterns is
generally restricted to about 16, resulting in limited measurement accuracy. On
the other hand, using additional intermediate sets of fringe patterns can
unwrap the phase with higher frequency, but at the expense of a prolonged
pattern sequence. Inspired by recent successes of deep learning techniques for
computer vision and computational imaging, in this work we report that deep
neural networks can learn to perform TPU after appropriate training, an
approach termed deep-learning-based temporal phase unwrapping (DL-TPU), which can
substantially improve the unwrapping reliability compared with MF-TPU even in
the presence of different types of error sources, e.g., intensity noise, low
fringe modulation, and projector nonlinearity. We further experimentally
demonstrate for the first time, to our knowledge, that the high-frequency phase
obtained from 64-period 3-step phase-shifting fringe patterns can be directly
and reliably unwrapped from one unit-frequency phase using DL-TPU.
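The classical MF-TPU step that DL-TPU learns to replace — determining the fringe order of the high-frequency wrapped phase from the unit-frequency absolute phase — can be sketched as follows (standard two-frequency formulation; variable names are ours, and the network itself is not reproduced here):

```python
import numpy as np

def mf_tpu_unwrap(phi_high_wrapped, phi_unit, freq):
    """Two-frequency temporal phase unwrapping.

    phi_high_wrapped: wrapped phase in (-pi, pi] from the high-frequency patterns
    phi_unit: absolute phase in [0, 2*pi) from the unit-frequency patterns
    freq: number of fringe periods in the high-frequency pattern

    The unit-frequency phase, scaled by freq, predicts the absolute
    high-frequency phase; rounding the difference to a multiple of 2*pi
    gives the fringe order k.
    """
    k = np.round((freq * phi_unit - phi_high_wrapped) / (2.0 * np.pi))
    return phi_high_wrapped + 2.0 * np.pi * k
```

The failure mode the abstract describes is visible in this formula: any noise on `phi_unit` is amplified by `freq` before the rounding, so for large `freq` (e.g. 64) the rounding picks the wrong integer k, which is why classical MF-TPU is usually limited to about 16 fringes.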
PC-HMR: Pose Calibration for 3D Human Mesh Recovery from 2D Images/Videos
The end-to-end Human Mesh Recovery (HMR) approach has been successfully used
for 3D body reconstruction. However, most HMR-based frameworks reconstruct
human body by directly learning mesh parameters from images or videos, while
lacking explicit guidance of 3D human pose in visual data. As a result, the
generated mesh often exhibits incorrect pose for complex activities. To tackle
this problem, we propose to exploit 3D pose to calibrate human mesh.
Specifically, we develop two novel Pose Calibration frameworks, i.e., Serial
PC-HMR and Parallel PC-HMR. By coupling advanced 3D pose estimators and HMR in
a serial or parallel manner, these two frameworks can effectively correct human
mesh with guidance of a concise pose calibration module. Furthermore, since the
calibration module is designed via non-rigid pose transformation, our PC-HMR
frameworks can flexibly tackle bone length variations to alleviate misplacement
in the calibrated mesh. Finally, our frameworks are based on generic and
complementary integration of data-driven learning and geometrical modeling. Via
plug-and-play modules, they can be efficiently adapted for both
image/video-based human mesh recovery. Additionally, they have no requirement
of extra 3D pose annotations in the testing phase, which releases inference
difficulties in practice. We perform extensive experiments on the popular
benchmarks, i.e., Human3.6M, 3DPW and SURREAL, where our PC-HMR frameworks
achieve state-of-the-art results.
Comment: 9 pages, 7 figures. AAAI202