Micro Fourier Transform Profilometry (FTP): 3D shape measurement at 10,000 frames per second
Recent advances in imaging sensors and digital light projection technology
have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces
of complex-shaped objects to be captured with improved resolution and accuracy.
However, due to the large number of projection patterns required for phase
recovery and disambiguation, the maximum frame rates of current 3D shape
measurement techniques are still limited to the range of hundreds of frames per
second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro
Fourier Transform Profilometry (FTP), which can capture 3D surfaces of
transient events at up to 10,000 fps based on our newly developed high-speed
fringe projection system. Compared with existing techniques, FTP has the
prominent advantage of recovering an accurate, unambiguous, and dense 3D point
cloud with only two projected patterns. Furthermore, the phase information is
encoded within a single high-frequency fringe image, thereby allowing
motion-artifact-free reconstruction of transient events with temporal
resolution of 50 microseconds. To show FTP's broad utility, we use it to
reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating
fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a
flying dart, scenes that were previously difficult or even impossible to capture with
conventional approaches.
Comment: This manuscript was originally submitted on 30th January 1
Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
One solution for depth imaging of a moving scene is to project a static
pattern onto the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we impose multiple projection patterns into each single
captured image to realize temporal super resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object
with higher fps than possible with a camera. In this case, the observed pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted.
Comment: 9 pages, published at the International Conference on Computer Vision (ICCV 2017)
An intelligent real time 3D vision system for robotic welding tasks
MARWIN is a top-level robot control system that has been designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models, which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user by simply confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect
Recently, Microsoft released the new Kinect One, providing the next
generation of real-time range sensing devices based on the Time-of-Flight (ToF)
principle. Since the first Kinect version used a structured-light approach,
one would expect various differences in the characteristics of the range data
delivered by both devices. This paper presents a detailed and in-depth
comparison between both devices. In order to conduct the comparison, we propose
a framework of seven different experimental setups, which is a generic basis
for evaluating range cameras such as Kinect. The experiments have been designed
to capture the individual effects of the Kinect devices in as isolated a manner
as possible, and so that they can be adapted to any other range sensing device.
The overall goal of this paper is to provide
a solid insight into the pros and cons of either device. Thus, scientists who
are interested in using Kinect range sensing cameras in their specific
application scenario can directly assess the expected benefits and
potential problems of either device.
Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU)
Structured light techniques for 3D surface reconstruction in robotic tasks
Robotic tasks such as navigation and path planning can be greatly enhanced by a vision system capable of providing depth perception through fast and accurate 3D surface reconstruction. Focusing on robotic welding tasks, we present a comparative analysis of a novel mathematical formulation for 3D surface reconstruction and discuss image processing requirements for reliable detection of patterns in the image. Models are presented for parallel and angled configurations of light source and image sensor. It is shown that the parallel arrangement requires 35% fewer arithmetic operations to compute a 3D point cloud and is thus more appropriate for real-time applications. Experiments show that the technique is appropriate for scanning a variety of surfaces and, in particular, the intended metallic parts for robotic welding tasks.
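One way to see why a parallel light-source/sensor arrangement is computationally cheap: for such a geometry, depth reduces to the standard triangulation relation Z = f*b/d, one multiply and one divide per point. A hedged sketch under that standard model (the function and parameter names are mine, not the paper's formulation):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth along the optical axis for a parallel camera/light-source
    arrangement: Z = f * b / d (classic structured-light triangulation).

    focal_px     -- camera focal length, in pixels
    baseline_mm  -- distance between camera and light source, in mm
    disparity_px -- horizontal shift of the detected pattern, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Example: an 800 px focal length, 100 mm baseline, 40 px shift
# gives a depth of 800 * 100 / 40 = 2000 mm.
```

An angled configuration adds trigonometric terms per point, which is consistent with the abstract's claim that the parallel arrangement needs fewer arithmetic operations.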
Projector calibration method based on optical coaxial camera
This paper presents a novel method to accurately calibrate a DLP projector by using an optical coaxial camera to capture
the needed images. A plate beam splitter is used to make the imaging axis of the CCD camera and the projecting axis of the
DLP projector coaxial, so the DLP projector can be treated as a true inverse camera. A plate having discrete markers on the
surface is designed and manufactured to calibrate the DLP projector. By projecting vertical and horizontal
sinusoidal fringe patterns on the plate surface from the projector, the absolute phase of each marker’s center can be
obtained. The corresponding projector pixel coordinate of each marker is determined from the obtained absolute phase.
The internal and external parameters of the DLP projector are calibrated from the corresponding point pairs between the
projector coordinates and the world coordinates of the discrete markers. Experimental results show that the proposed method
accurately obtains the parameters of the DLP projector. One advantage of the method is that the calibrated internal and
external parameters have high accuracy because the camera itself does not need to be calibrated. The other is that the
coaxial optical geometry yields a true inverse camera, so the calibrated parameters are more accurate than those obtained
with a crossed-optical-axes arrangement, especially the principal points and the radial distortion coefficients of the
projector lens.
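The step that turns each marker's absolute phase into a projector pixel coordinate is, in the usual fringe-projection convention, a linear scaling: one fringe period (2*pi of phase) spans one fringe pitch in projector pixels. A minimal sketch under that assumption (the names and the pitch parameter are illustrative, not from the paper):

```python
import numpy as np

def phase_to_projector_coord(abs_phase, fringe_pitch_px):
    """Convert an unwrapped absolute phase to a projector pixel coordinate,
    assuming one 2*pi fringe period covers `fringe_pitch_px` projector pixels."""
    return np.asarray(abs_phase) * fringe_pitch_px / (2.0 * np.pi)
```

With vertical fringes this gives the marker center's x-coordinate on the projector; repeating with horizontal fringes gives y, yielding the camera-free 2D-3D correspondences the calibration needs.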