
Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, all of which were previously difficult or even impossible to capture with conventional approaches.
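The single-fringe phase recovery at the heart of Fourier Transform Profilometry can be illustrated with a minimal sketch of the classic Fourier-transform step. This is textbook FTP, not the authors' exact two-pattern μFTP disambiguation; the carrier period and filter width below are illustrative assumptions:

```python
import numpy as np

def ftp_wrapped_phase(fringe, carrier_px=16):
    """Recover the wrapped phase from a single high-frequency fringe image
    via classic Fourier Transform Profilometry (a sketch, not the exact
    muFTP pipeline). carrier_px is the assumed fringe period in pixels."""
    rows, cols = fringe.shape
    f0 = cols / carrier_px                  # carrier frequency in FFT bins
    spectrum = np.fft.fft(fringe, axis=1)   # 1-D FFT along the fringe direction
    # Band-pass filter: keep only a window around the +f0 carrier lobe.
    mask = np.zeros(cols)
    lo, hi = int(f0 - f0 / 2), int(f0 + f0 / 2)
    mask[lo:hi] = 1.0
    side_lobe = spectrum * mask
    # The inverse FFT gives the analytic signal; its angle is the wrapped
    # phase (carrier included; subtracting a reference-plane phase removes
    # the tilt before unwrapping and phase-to-height conversion).
    analytic = np.fft.ifft(side_lobe, axis=1)
    return np.angle(analytic)
```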

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

Commercial off-the-shelf digital projection systems are commonly used in active structured illumination photogrammetry of macro-scale surfaces because of their relatively low cost, accessibility, and ease of use; they can be described by an inverse pinhole model. The calibration pipeline for a 3D sensor that uses pinhole devices in a projector-camera configuration is already well established. Recently, projection systems have emerged that offer projection speeds greater than those available from conventional off-the-shelf digital projectors. However, these systems are chip-less and lensless, so they cannot be calibrated with established techniques based on the pinhole assumption. This work utilizes such unconventional projection systems, known as array projectors, which contain not one but multiple projection channels, each projecting a temporal sequence of illumination patterns; none of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras instead. However, a monocular setup is desirable because a single-camera configuration reduces cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed, so that the calibration can be re-used in arbitrary measurement camera positions and the intrinsic calibration does not have to be repeated.
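The extrinsic stage described above pairs a global search with a local refiner. A minimal sketch of that strategy, assuming a user-supplied reprojection cost over a 6-DoF camera pose and hypothetical swarm hyper-parameters:

```python
import numpy as np
from scipy.optimize import minimize

def pso_then_simplex(cost, bounds, n_particles=30, n_iters=50, seed=0):
    """Coarse-to-fine pose search: a small particle swarm explores the
    pose space, then SciPy's Nelder-Mead (downhill simplex) refines the
    best particle. A sketch of the optimisation strategy only; the swarm
    hyper-parameters are assumptions, and `cost` must map a pose vector
    to a scalar reprojection error."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard inertia + cognitive + social velocity update.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    # Downhill simplex refinement from the swarm's best pose.
    return minimize(cost, gbest, method="Nelder-Mead").x

# Hypothetical usage: 3 rotation + 3 translation parameters.
# pose = pso_then_simplex(reprojection_cost,
#                         [(-np.pi, np.pi)] * 3 + [(-1.0, 1.0)] * 3)
```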

    Real Time Structured Light and Applications


    Crosstalk in stereoscopic displays

Crosstalk is an important image quality attribute of stereoscopic 3D displays. The research presented in this thesis examines the presence, mechanisms, simulation, and reduction of crosstalk for a selection of stereoscopic display technologies. High levels of crosstalk degrade the perceived quality of stereoscopic displays, so it is important to minimise it. This thesis provides new insights that are critical to a detailed understanding of crosstalk and, consequently, to the development of effective crosstalk reduction techniques.
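A common way to simulate crosstalk, one of the topics the thesis examines, is an additive leakage model applied in linear luminance. The sketch below uses a single symmetric leakage coefficient, a simplification of the level- and position-dependent behaviour of real displays:

```python
import numpy as np

def simulate_crosstalk(left, right, leakage=0.05, gamma=2.2):
    """Simulate system crosstalk as additive leakage of the unintended
    channel. `left` and `right` are assumed to be gamma-encoded images
    normalised to [0, 1]; the single symmetric leakage coefficient is a
    simplifying assumption."""
    # Work in linear light: leakage adds photometrically, not in gamma space.
    L, R = left ** gamma, right ** gamma
    seen_left = np.clip(L + leakage * R, 0.0, 1.0) ** (1.0 / gamma)
    seen_right = np.clip(R + leakage * L, 0.0, 1.0) ** (1.0 / gamma)
    return seen_left, seen_right
```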

    Perceived Depth Control in Stereoscopic Cinematography

Despite the recent explosion of interest in stereoscopic 3D (S3D) technology, widespread adoption of the S3D medium is still significantly hindered by adverse effects associated with S3D viewing discomfort. This thesis attempts to improve the S3D viewing experience by investigating perceived depth control methods in stereoscopic cinematography on desktop 3D displays. The main contributions of this work are: (1) A new method was developed to carry out human factors studies identifying the practical limits of the 3D Comfort Zone on a given 3D display. Our results suggest that cinematographers need to identify the specific limits of the 3D Comfort Zone on the target 3D display, as different 3D systems have different Comfort Zone ranges. (2) A new dynamic depth mapping approach was proposed to improve depth perception in stereoscopic cinematography. The results of a human factors experiment confirmed its advantages over existing depth mapping methods in controlling the perceived depth when viewing 3D motion pictures. (3) The practicability of employing the Depth of Field (DoF) blur technique in S3D was also investigated. Our results indicate that applying DoF blur simulation to stereoscopic content may not improve the S3D viewing experience without real-time information about what the viewer is looking at. Finally, a basic guideline for stereoscopic cinematography was introduced to summarise the new findings of this thesis alongside several well-known key factors in 3D cinematography. We expect this guideline to be of particular interest not only for 3D filmmaking but also for 3D gaming, sports broadcasting, and TV production.
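The kind of depth mapping that contribution (2) improves upon can be illustrated by a static linear remapping of screen disparities into a display's measured Comfort Zone. The thesis's dynamic approach adapts such a mapping over time, so this fixed form is only a simplified stand-in:

```python
def remap_disparity(d, scene_min, scene_max, comfort_min, comfort_max):
    """Linearly remap a screen disparity d (e.g. in pixels) from the
    captured scene's disparity range into the display's measured 3D
    Comfort Zone. A static baseline; a dynamic mapper would update the
    scene range shot by shot or frame by frame."""
    t = (d - scene_min) / (scene_max - scene_min)
    return comfort_min + t * (comfort_max - comfort_min)

# Hypothetical usage: squeeze a [-40, 80] px scene range into a
# [-20, 30] px Comfort Zone measured for the target display.
# d_mapped = remap_disparity(d, -40, 80, -20, 30)
```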

    Expanding Dimensionality in Cinema Color: Impacting Observer Metamerism through Multiprimary Display

Television and cinema displays are both trending towards greater ranges and saturation of reproduced colors, made possible by near-monochromatic RGB illumination technologies. Through current broadcast and digital cinema standards work, system designs employing laser light sources, narrow-band LEDs, quantum dots, and others are being actively endorsed in promotion of Wide Color Gamut (WCG). Despite the artistic benefits brought to creative content producers, spectrally selective excitation of naturally differing human color response functions exacerbates variability of observer experience. Such exaggerated variation in color sensing runs explicitly counter to the exhaustive controls and calibrations employed in modern motion picture pipelines. Further, singular standard-observer summaries of human color vision, such as the CIE's 1931 and 1964 color matching functions used extensively in motion picture color management, fail to capture expected human vision variability. Many researchers have confirmed the magnitude of observer metamerism in color matching for both uniform colors and imagery, but few have demonstrated explicit color management aimed at minimizing variability in observer perception. This research shows not only that observer metamerism influences can be quantitatively predicted and confirmed psychophysically, but that intentionally engineered multiprimary displays employing more than three primaries can offer an increased color gamut with drastically improved consistency of experience. To this end, a seven-channel prototype display was constructed based on observer metamerism models and color difference indices derived from the latest color vision demographic research. In forced-choice paired comparison tests across a large population of color-normal observers, this display delivered superior color matching to reference stimuli versus both contemporary standard RGB cinema projection and recently ratified standard laser projection.
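The observer-metamerism calculation underlying this work reduces to integrating a display's spectral output against different observers' color matching functions. A minimal sketch, assuming the SPD and CMFs are sampled on a common wavelength grid:

```python
import numpy as np

def tristimulus(spd, cmfs, wavelength_step=1.0):
    """Integrate a spectral power distribution against a set of colour
    matching functions to get XYZ. `spd` has shape (N,), `cmfs` has shape
    (N, 3); sampling on the same wavelength grid is an assumption."""
    return wavelength_step * (spd @ cmfs)

# Two stimuli that match under the CIE 1931 standard observer can
# disagree under an individual observer's CMFs; comparing
# tristimulus(spd, cmfs_1931) with tristimulus(spd, cmfs_individual)
# for narrow-band primaries quantifies the spread that a multiprimary
# display can be engineered to minimise.
```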

    VR-LAB: A Distributed Multi-User Environment for Educational Purposes and Presentations

Over the last three years our research has focused on a new distributed multi-user environment. All components were finally integrated into a system called the VR-Lab, which is described in the following pages. The VR-Lab provides the hardware and software for a distributed presentation system, with elements often found in Computer Supported Cooperative Work (CSCW) environments. In contrast to other projects, the VR-Lab integrates a distributed system into the familiar environment of a lecture room rather than generating a virtual conference room inside a computer system, allowing inexperienced users to work with the VR-Lab and benefit from its multimedia tools in a familiar setting. To build the VR-Lab we developed a range of hardware and software and integrated it into a lecture room for distributed presentations, conferences, and teaching. Additional software components were developed to connect to the VR-Lab, control its components, and distribute content between VR-Lab installations. Besides standard software for video and audio transmission, we developed and integrated a distributed 3D VRML browser, MRT-VR, to present three-dimensional content to a distributed audience. One of the notable features of this browser is its object-oriented distributed scene graph: by coupling a high-speed rendering system with a database, we distribute objects, rather than polygon data, to other participants, so the semantic properties of any geometric or control object are kept and can be used by the remote participant. Because transporting objects instead of triangles achieves high compression, considerable bandwidth is saved, and each participant can select a display quality appropriate to their hardware. A further part of this work describes the development of an inexpensive immersive 3D environment for visualizing the 3D data at appealing quality.
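The bandwidth argument for the object-oriented scene graph can be made concrete with a toy comparison. The JSON encoding below is only a stand-in for the VRML object transport actually used by MRT-VR, and the node fields are hypothetical:

```python
import json

# A parametric scene-graph node keeps its semantics and is tiny on the
# wire; the receiver re-tessellates it at whatever quality its hardware
# allows. Field names here are illustrative, not MRT-VR's actual schema.
node = {"type": "cylinder", "radius": 0.5, "height": 2.0,
        "material": "steel", "behaviour": "door_handle"}
object_bytes = len(json.dumps(node).encode())  # on the order of 100 bytes

# The same shape sent as raw geometry (e.g. 64 segments -> 256 triangles
# at 3 vertices x 12 bytes each, roughly 9 KB) fixes the tessellation
# quality and discards the semantic properties of the object.
triangle_bytes = 256 * 3 * 12
print(object_bytes, "vs", triangle_bytes)
```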