
    Dynamic 3D shape measurement based on the phase-shifting moiré algorithm

    In order to increase the efficiency of phase retrieval, Wang proposed a high-speed moiré phase retrieval method, but it can only measure tiny objects. In view of this limitation of Wang's method, we propose a dynamic three-dimensional (3D) measurement method based on the phase-shifting moiré algorithm. First, four sinusoidal fringe patterns with a pi/2 phase shift are projected onto the reference plane, and the four deformed fringe patterns of the reference plane are acquired in advance. Then only a single-shot deformed fringe pattern of the tested object is captured during the measurement process. Four moiré fringe patterns can be obtained by numerical multiplication between the AC component of the object pattern and the AC components of the reference patterns, respectively. The four low-frequency components corresponding to the moiré fringe patterns are calculated by the complex-encoded Fourier transform (FT), spectrum filtering, and inverse FT. Thus the wrapped phase of the object can be determined in tangent form from the four phase-shifting moiré fringe patterns using the four-step phase-shifting algorithm. The continuous phase distribution can be obtained by a conventional unwrapping algorithm. Finally, experiments were conducted to prove the validity and feasibility of the proposed method. The results are analyzed and compared with those of Wang's method, demonstrating that our method not only expands the measurement scope but also improves accuracy. Comment: 14 pages, 5 figures. ams.or
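The four-step phase-shifting recovery described in the abstract reduces, per pixel, to an arctangent of pattern differences. A minimal NumPy sketch (the function name and synthetic check are illustrative, not the authors' code):

```python
import numpy as np

def wrapped_phase_four_step(i1, i2, i3, i4):
    """Wrapped phase from four patterns I_n = A + B*cos(phi + n*pi/2), n = 0..3.

    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so the phase follows
    in tangent form, wrapped into (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check on a known phase ramp.
x = np.linspace(0.0, 4.0 * np.pi, 256)
patterns = [0.5 + 0.4 * np.cos(x + n * np.pi / 2.0) for n in range(4)]
phi = wrapped_phase_four_step(*patterns)
# phi equals x wrapped into (-pi, pi]
```

The arctangent form cancels both the background intensity A and the modulation B, which is why phase-shifting methods tolerate uneven illumination.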

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, which were previously difficult or even impossible to capture with conventional approaches. Comment: This manuscript was originally submitted on 30th January 1
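The single-image phase extraction that Fourier-transform profilometry (which μFTP builds on) relies on is an FFT, a band-pass filter around the carrier frequency, and an inverse FFT. A 1D toy sketch (bin indices and signal parameters are illustrative):

```python
import numpy as np

def ftp_phase(fringe, carrier_bin, halfwidth):
    """Single-shot Fourier-transform phase extraction (1D sketch).

    Keeping only the positive-frequency lobe around the carrier makes the
    inverse FFT an analytic signal (B/2)*exp(1j*(carrier + phi)).
    """
    spec = np.fft.fft(fringe)
    band = np.zeros_like(spec)
    lo, hi = carrier_bin - halfwidth, carrier_bin + halfwidth + 1
    band[lo:hi] = spec[lo:hi]
    return np.angle(np.fft.ifft(band))  # wrapped carrier + phi

# Synthetic fringe: carrier at bin 16 of 256 samples, slow phase modulation.
n = 256
t = np.arange(n) / n
phi_true = 0.3 * np.sin(2.0 * np.pi * t)
fringe = 0.5 + 0.4 * np.cos(2.0 * np.pi * 16 * t + phi_true)
wrapped = ftp_phase(fringe, carrier_bin=16, halfwidth=5)
phi_rec = np.angle(np.exp(1j * (wrapped - 2.0 * np.pi * 16 * t)))  # remove carrier
```

The filter width trades off spatial resolution against noise rejection; real FTP implementations window the spectrum smoothly rather than with the hard crop used here.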

    Temporal phase unwrapping using deep learning

    The multi-frequency temporal phase unwrapping (MF-TPU) method, as a classical phase unwrapping algorithm for fringe projection profilometry (FPP), is capable of eliminating phase ambiguities even in the presence of surface discontinuities or spatially isolated objects. In the simplest and most efficient case, two sets of 3-step phase-shifting fringe patterns are used: the high-frequency set is for 3D measurement and the unit-frequency set is for unwrapping the phase obtained from the high-frequency set. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency patterns, under the precondition that the phase can be successfully unwrapped without triggering fringe-order errors. Consequently, in order to guarantee a reasonable unwrapping success rate, the fringe number (or period number) of the high-frequency fringe patterns is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap the phase at higher frequency, but at the expense of a prolonged pattern sequence. Inspired by recent successes of deep learning techniques in computer vision and computational imaging, in this work we report that deep neural networks can learn to perform TPU after appropriate training, an approach termed deep learning-based temporal phase unwrapping (DL-TPU), which can substantially improve the unwrapping reliability compared with MF-TPU even in the presence of different types of error sources, e.g., intensity noise, low fringe modulation, and projector nonlinearity. We further demonstrate experimentally, for the first time to our knowledge, that the high-frequency phase obtained from 64-period 3-step phase-shifting fringe patterns can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU.
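The classical MF-TPU step being replaced by the network can be written in a few lines: the unit-frequency phase, scaled by the fringe number, predicts the fringe order of the high-frequency wrapped phase. A sketch (names and the synthetic check are illustrative):

```python
import numpy as np

def temporal_unwrap(phi_high, phi_unit, freq):
    """Unwrap a wrapped high-frequency phase using a unit-frequency phase.

    freq * phi_unit approximates the absolute high-frequency phase, so the
    fringe order is k = round((freq*phi_unit - phi_high) / (2*pi)). Noise in
    phi_unit beyond about pi/freq flips k, which is why large freq (e.g. 64)
    is hard for classical MF-TPU.
    """
    k = np.round((freq * phi_unit - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k

# Noise-free check on an absolute phase ramp spanning 64 fringes.
phi_abs = np.linspace(0.0, 2.0 * np.pi * 64, 4096, endpoint=False)
phi_unit = phi_abs / 64.0
phi_high = np.angle(np.exp(1j * phi_abs))   # wrapped high-frequency phase
```
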

    Acquisition of 3D shapes of moving objects using fringe projection profilometry

    Three-dimensional (3D) shape measurement for object surface reconstruction has potential applications in many areas, such as security, manufacturing and entertainment. As an effective non-contact technique for 3D shape measurement, fringe projection profilometry (FPP) has attracted significant research interest because of its high measurement speed, high measurement accuracy and ease of implementation. Conventional FPP analysis approaches are applicable to the calculation of phase differences for static objects. However, 3D shape measurement for dynamic objects remains a challenging task, even though it is highly demanded in many applications. This thesis aims to enhance the measurement accuracy of FPP techniques for the 3D shape of objects subject to movement in 3D space. The 3D movement of an object changes not only its position but also its height information with respect to the measurement system, resulting in motion-induced errors with existing FPP technology. The thesis presents the work conducted toward solutions to this challenging problem.

    Three Dimensional Shape Reconstruction with Dual-camera Measurement Fusion

    Recently, three-dimensional (3D) shape measurement technologies have been extensively researched in fields such as computer science and medical engineering. They have been applied in various industries and commercial uses, including robot navigation, reverse engineering, and face and gesture recognition. Optical 3D shape measurement is one of the most popular methods, which can be divided into two categories: passive 3D shape reconstruction and active 3D shape imaging. Passive 3D shape measurement techniques use cameras to capture the object with only ambient light. Stereo vision (SV) is one of the typical passive 3D measurement approaches. This method uses two cameras to take photos of the scene from different viewpoints and extracts the 3D information by establishing the correspondence between the photos captured. To translate the correspondence to the depth map, epipolar geometry is applied to determine the depth of each pixel. Active 3D shape imaging methods add diverse active light sources to project on the object and use the camera to capture the scene with pre-defined patterns on the object's surface. The fringe projection profilometry (FPP) is a representative technique among active 3D reconstruction methods. It replaces one of the cameras in stereo vision with a projector, and projects the fringe patterns onto the object before the camera captures it. The depth map can be built via triangulation by analysing the phase difference between patterns distorted by the object's surface and the original one. These two mainstream techniques work alone in different scenarios and have various advantages and disadvantages. Active stereo vision (ASV) has excellent dynamic performance, yet its accuracy and spatial resolution are limited. On the other hand, 3D shape measurement methods like FPP have higher accuracy and speed; however, their dynamic performance varies depending on the codification schemes chosen.
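For the rectified stereo case described above, depth follows from disparity by similar triangles, Z = f·B/d; FPP replaces one camera's correspondence search with the projected phase but triangulates the same way. A minimal sketch (all numbers illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Rectified stereo triangulation: Z = f * B / d.

    disparity_px: pixel disparity between the two views,
    focal_px: focal length expressed in pixels,
    baseline_mm: separation between the two cameras.
    """
    return focal_px * baseline_mm / disparity_px

# e.g. an 800 px focal length, 60 mm baseline, 10 px disparity
z = depth_from_disparity(10.0, 800.0, 60.0)   # depth in mm
```

The inverse relation between depth and disparity is why stereo depth resolution degrades quadratically with distance.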
This thesis presents the research on developing a fusion method that contains both passive and active 3D shape reconstruction algorithms in one system to combine their advantages and reduce the cost of building a high-precision 3D shape measurement system with good dynamic performance. Specifically, in the thesis, we propose a fusion method that combines the epipolar geometry in ASV and triangulation in the FPP system by a specially designed cost function. This way, the information obtained from each system alone is combined, leading to better accuracy. Furthermore, the correlation of the object surface is exploited with the autoregressive (AR) model to improve the precision of the fusion system. In addition, the expectation maximization (EM) framework is employed to address the issue of estimating variables with unknown parameters introduced by AR. Moreover, the fusion cost function derived before is embedded into the EM framework. Next, the message passing algorithm is applied to implement the EM efficiently on large image sizes. A factor graph is derived to fit the EM approach. To implement belief propagation to solve the problem, it is divided into two sub-graphs: the E-Step factor graph and the M-Step factor graph. Based on the two factor graphs, belief propagation is implemented on each of them to estimate the unknown parameters and EM messages. In the last iteration, the height of the object surface can be obtained with the forward and backward messages. Due to the consideration of the object's surface correlation, the fusion system's precision is further improved. Simulation and experimental results are presented to examine the performance of the proposed system. It is found that the accuracy of the depth map of the fusion method is improved compared to fringe projection profilometry or a stereo vision system alone. The limitations of the current study are discussed, and potential future work is presented.
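Stripped of the AR surface prior, the EM iterations, and message passing, the fusion idea reduces to a per-pixel minimum-variance combination of the two depth estimates under a quadratic cost. A simplified stand-in sketch (the thesis's actual cost function and graphical model are richer than this):

```python
import numpy as np

def fuse_depths(z_sv, var_sv, z_fpp, var_fpp):
    """Minimize (z - z_sv)^2/var_sv + (z - z_fpp)^2/var_fpp per pixel.

    The closed-form minimizer is the inverse-variance weighted mean; the
    fused variance 1/(1/var_sv + 1/var_fpp) is never worse than either input,
    which is the basic reason fusing the two modalities helps.
    """
    w_sv = 1.0 / np.asarray(var_sv, dtype=float)
    w_fpp = 1.0 / np.asarray(var_fpp, dtype=float)
    return (w_sv * z_sv + w_fpp * z_fpp) / (w_sv + w_fpp)
```

With equal variances this is a plain average; as one modality's variance grows, its estimate is smoothly ignored.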

    Single-shot compressed ultrafast photography: a review

    Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields
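At its core, CUP reconstruction solves a sparsity-regularized inverse problem y = Ax with far fewer measurements than unknowns. Production CUP solvers (e.g. TwIST-style algorithms) operate on 3D spatiotemporal cubes with richer regularizers; a toy ISTA sketch on a random 1D problem conveys the idea:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))       # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

# Underdetermined toy problem: 20 measurements, 50 unknowns, 3 nonzeros.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x0 = np.zeros(50)
x0[[5, 17, 40]] = [1.0, -1.5, 2.0]
y = A @ x0
x_hat = ista(A, y)
```

The soft-threshold step is what enforces sparsity, allowing many frames to be recovered from a single compressed streak-camera exposure.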

    Portal-s: High-resolution real-time 3D video telepresence

    The goal of telepresence is to allow a person to feel as if they are present in a location other than their true location; a common application of telepresence is video conferencing, in which live video of a user is transmitted to a remote location for viewing. In conventional two-dimensional (2D) video conferencing, loss of correct eye gaze commonly occurs, due to a disparity between the capture and display optical axes. Newer systems are being developed which allow for three-dimensional (3D) video conferencing, circumventing issues with this disparity, but new challenges are arising in the capture, delivery, and redisplay of 3D content across existing infrastructure. To address these challenges, a novel system is proposed which allows for 3D video conferencing across existing networks while delivering full resolution 3D video and establishing correct eye gaze. During the development of Portal-s, many innovations to the field of 3D scanning and its applications were made; specifically, this dissertation research has achieved the following innovations: a technique to realize 3D video processing entirely on a graphics processing unit (GPU), methods to compress 3D videos on a GPU, and the combination of the aforementioned innovations with a special holographic display hardware system to enable the novel 3D telepresence system entitled Portal-s. The first challenge this dissertation addresses is the cost of real-time 3D scanning technology, both from a monetary and computing power perspective. New advancements in 3D scanning and computation technology continue to emerge, simplifying the acquisition and display of 3D data. These advancements are allowing users new methods of interaction and analysis of the 3D world around them.
Although the acquisition of static 3D geometry is becoming easy, the same cannot be said of dynamic geometry, since all aspects of the 3D processing pipeline (capture, processing, and display) must be realized in real-time simultaneously. Conventional approaches to solve these problems utilize workstation computers with powerful central processing units (CPUs) and GPUs to supply the large amount of processing power required for a single 3D frame. A challenge arises when trying to realize real-time 3D scanning on commodity hardware such as a laptop computer. To address the cost of a real-time 3D scanning system, an entirely parallel 3D data processing pipeline that makes use of a multi-frequency phase-shifting technique is presented. This novel processing pipeline can achieve simultaneous 3D data capturing, processing, and display at 30 frames per second (fps) on a laptop computer. By implementing the pipeline within the OpenGL Shading Language (GLSL), nearly any modern computer with a dedicated graphics device can run the pipeline. Making use of multiple threads sharing GPU resources and direct memory access transfers, high frame rates on low compute power devices can be achieved. Although these advancements allow low compute power devices such as a laptop to achieve real-time 3D scanning, this technique is not without challenges. The main challenge is selecting frequencies that allow for high quality phase, yet do not include phase jumps in equivalent frequencies. To address this issue, a new modified multi-frequency phase shifting technique was developed that allows phase jumps to be introduced in equivalent frequencies yet unwrapped in parallel, increasing phase quality and reducing reconstruction error. Utilizing these techniques, a real-time 3D scanner was developed that captures 3D geometry at 30 fps with a root mean square error (RMSE) of 0.00081 mm for a measurement area of 100 mm × 75 mm at a resolution of 800 × 600 on a laptop computer.
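The "equivalent frequency" the pipeline depends on is the heterodyne of two wrapped phases: subtracting the wrapped phase at frequency f2 from that at f1 and rewrapping yields a fringe of frequency f1 − f2, which is where unwanted phase jumps can appear if the frequencies are chosen poorly. A sketch (frequencies are illustrative):

```python
import numpy as np

def equivalent_phase(phi1, phi2):
    """Heterodyne two wrapped phases: wrap(phi1 - phi2) has frequency f1 - f2."""
    return np.angle(np.exp(1j * (phi1 - phi2)))

# Wrapped phases at 18 and 16 fringes across the field.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
w18 = np.angle(np.exp(1j * 2.0 * np.pi * 18 * t))
w16 = np.angle(np.exp(1j * 2.0 * np.pi * 16 * t))
w2 = equivalent_phase(w18, w16)   # behaves like a 2-fringe pattern
```

Cascading such differences down to a single-fringe equivalent is what lets the wrapped phases be unwrapped without extra projected patterns.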
With the above-mentioned pipeline, the CPU is nearly idle, freeing it to perform additional tasks such as image processing and analysis. The second challenge this dissertation addresses is associated with delivering huge amounts of 3D video data in real-time across existing network infrastructure. As the speed of 3D scanning continues to increase, and real-time scanning is achieved on low compute power devices, a way of compressing the massive amounts of 3D data being generated is needed. At a scan resolution of 800 × 600, streaming a 3D point cloud at 30 fps would require a throughput of over 1.3 Gbps. This amount of throughput is large for a PCIe bus, and too much for most commodity network cards. Conventional approaches involve serializing the data into a compressible state such as a polygon file format (PLY) or Wavefront object (OBJ) file. While this technique works well for structured 3D geometry, such as that created with computer aided drafting (CAD) or 3D modeling software, this does not hold true for 3D scanned data as it is inherently unstructured. A challenge arises when trying to compress this unstructured 3D information in such a way that it can be easily utilized with existing infrastructure. To address the need for real-time 3D video compression, new techniques entitled Holoimage and Holovideo are presented, which have the ability to compress, respectively, 3D geometry and 3D video into 2D counterparts and apply both lossless and lossy encoding. Similar to the aforementioned 3D scanning pipeline, these techniques make use of a completely parallel pipeline for encoding and decoding; this affords high speed processing on the GPU, as well as compression before streaming the data over the PCIe bus. Once in the compressed 2D state, the information can be streamed and saved until the 3D information is needed, at which point the 3D geometry can be reconstructed while maintaining a low amount of reconstruction error.
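The Holoimage/Holovideo idea is to render depth as synthetic fringe images so that standard 2D codecs apply. A simplified sketch of sine/cosine-plus-stair packing (the channel layout, period, and scaling here are illustrative, not the published format):

```python
import numpy as np

PERIOD = 0.125  # fringe period in normalized depth units (illustrative)

def encode_depth(z):
    """Pack depth z in [0, 1) into three planes: sine, cosine, stair."""
    phase = 2.0 * np.pi * z / PERIOD
    stair = np.floor(z / PERIOD) * PERIOD          # coarse fringe order
    return 0.5 + 0.5 * np.sin(phase), 0.5 + 0.5 * np.cos(phase), stair

def decode_depth(s, c, stair):
    """Recover depth: wrapped phase from atan2, fringe order from the stair."""
    phi = np.arctan2(s - 0.5, c - 0.5)             # wrapped into (-pi, pi]
    k = np.round(stair / PERIOD)
    # pixels exactly on a period boundary need an order-consistency check,
    # omitted in this sketch
    return PERIOD * (k + np.mod(phi, 2.0 * np.pi) / (2.0 * np.pi))

# Round trip on depth values that avoid exact period boundaries.
z = (np.arange(200) + 0.3) / 200.0
s, c, b = encode_depth(z)
z_rec = decode_depth(s, c, b)
```

Because the sine/cosine planes are smooth images, they survive lossy 2D compression far better than raw depth or point-cloud serializations.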
Further enhancements of the technique have allowed additional information, such as texture information, to be encoded by reducing the bit rate of the data through image dithering. This allows both the 3D video and associated 2D texture information to be interlaced and compressed into 2D video, synchronizing the streams automatically. The third challenge this dissertation addresses is achieving correct eye gaze in video conferencing. In 2D video conferencing, loss of correct eye gaze commonly occurs, due to a disparity between the capture and display optical axes. Conventional approaches to mitigate this issue involve either reducing the angle of disparity between the axes by increasing the distance of the user to the system, or merging the axes through the use of beam splitters. Newer approaches to this issue make use of 3D capture and display technology, as the angle of disparity can be corrected through transforms of the 3D data. Challenges arise when trying to create such novel systems, as all aspects of the pipeline (capture, transmission, and redisplay) must be achieved simultaneously in real-time with the massive amounts of 3D data. Finally, the Portal-s system is presented, which is an integration of all the aforementioned technologies into a holistic software and hardware system that enables real-time 3D video conferencing with correct mutual eye gaze. To overcome the loss of eye contact in conventional video conferencing, Portal-s makes use of dual structured-light scanners that capture through the same optical axis as the display. The real-time 3D video frames generated on the GPU are then compressed using the Holovideo technique. This allows the 3D video to be streamed across a conventional network or the Internet, and redisplayed at a remote node for another user on the holographic display glass. Utilizing two connected Portal-s nodes, users of the systems can engage in 3D video conferencing with natural eye gaze established.
    In conclusion, this dissertation research substantially advances the field of real-time 3D scanning and its applications. Contributions of this research span both academic and industrial practice, allowing users new methods of interaction with, and analysis of, the 3D world around them.