
    Stereo uparivanje iz video isječka (Stereo Matching from a Video Clip)

    This paper proposes a novel method for stereo matching based on a combination of active and passive stereo 3D reconstruction approaches. A laser line is used to scan the reconstructed scene, and a stereo camera pair is used for image acquisition. Each image pixel is scanned at a specific scan time, so the intensity-over-time patterns of corresponding pixels are highly correlated. This yields a highly confident and accurate disparity map and also allows the reconstruction of poorly textured as well as extremely textured surfaces, which are very hard to handle with conventional passive stereo approaches. Occluded regions are also detected successfully. The method is not computationally intensive and can be used to turn a smartphone into a practical 3D scanner, as presented in this work.
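The matching criterion described above, correlating per-pixel intensity profiles over scan time, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name and the brute-force search over candidate disparities are assumptions.

```python
import numpy as np

def match_by_time_profiles(left_seq, right_seq, row, x_left, max_disp):
    """Find the right-image match for pixel (row, x_left) by correlating
    intensity-over-time profiles. left_seq/right_seq: (frames, H, W) arrays
    from rectified cameras, so the search runs along one scanline."""
    profile_l = left_seq[:, row, x_left].astype(float)
    best_x, best_corr = -1, -1.0
    for d in range(max_disp + 1):
        x_r = x_left - d
        if x_r < 0:
            break
        profile_r = right_seq[:, row, x_r].astype(float)
        # normalized cross-correlation of the two temporal profiles
        a = profile_l - profile_l.mean()
        b = profile_r - profile_r.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        corr = (a @ b) / denom if denom > 0 else 0.0
        if corr > best_corr:
            best_corr, best_x = corr, x_r
    return best_x, best_corr
```

Because the laser line hits each scene point at one instant, the temporal profile acts as a unique per-pixel code, which is why the correlation peak is so unambiguous even on textureless surfaces.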

    State of the art 3D technologies and MVV end to end system design

    This thesis analyses and reviews 3D technologies, both existing and under development, for home environments, taking multiview video (MVV) technologies as the point of reference. All sections of the chain, from the capture stage to reproduction, are analysed. The goal is to design a possible satellite architecture for a future MVV television system, under two possible scenarios, broadcast or interactive. The analysis covers technical considerations as well as commercial limitations

    Rectification Strategies for a Binary Coded Structured Light 3D Scanner

    Making a computer able to see exactly as a human being does was for many years one of the most interesting and challenging tasks, involving many experts and pioneers in fields such as Computer Science and Artificial Intelligence. As a result, a whole field called Computer Vision has emerged, very soon becoming part of our daily life. The successful methodologies of this discipline have been applied in countless areas, and their use is still in continuous expansion. On the other hand, in an increasing number of applications, extracting information from simple 2D images is not enough; what is requested instead is to use three-dimensional imaging techniques to reconstruct the 3D shape of the imaged objects and scene. The techniques developed in this context include both active systems, where some form of illumination is projected onto the scene, and passive systems, where the natural illumination of the scene is used. Among the active systems, one of the most reliable approaches for recovering the surface of objects is the use of structured light. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. In particular, the performance of this kind of 3D scanner is determined by two key aspects: accuracy and acquisition time. This thesis aims to design and evaluate rectification strategies for a prototype of a binary coded structured light 3D scanner.
Rectification is a commonly used technique for stereo vision systems which, in the case of structured light, facilitates the establishment of correspondences between a projected pattern and an acquired image and reduces the number of pattern images to be projected, finally resulting in a speed-up of the acquisition times
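The correspondence step that rectification accelerates can be illustrated with a minimal Gray-code decoder: each projector column carries an N-bit code across N projected patterns, and thresholding the captured images at every camera pixel recovers that code directly. This sketch assumes a Gray-code scheme with full-on/full-off reference images; it is not the thesis prototype's code.

```python
import numpy as np

def decode_binary_patterns(captures, white, black):
    """Decode per-pixel projector-column indices from N captured
    Gray-code pattern images (each HxW, MSB pattern first).
    white/black: full-on and full-off reference captures for thresholding."""
    thresh = (white.astype(float) + black.astype(float)) / 2.0
    bits = [(img.astype(float) > thresh).astype(np.uint32) for img in captures]
    gray = np.zeros_like(bits[0])
    for b in bits:                      # pack bits, MSB first
        gray = (gray << 1) | b
    # convert Gray code to a plain binary column index
    col = gray.copy()
    shift = gray >> 1
    while shift.any():
        col ^= shift
        shift >>= 1
    return col
```

With rectified geometry, the decoded column plus the pixel's own row is enough to triangulate, which is exactly why rectification cuts the number of patterns (and hence the acquisition time): no second coordinate needs to be coded.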

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints via depth-image-based rendering (DIBR). To maintain high quality of synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are more error-resiliently coded in one view only, given that adaptive blending will erase errors in the other view. Further, synthesized view distortion sensitivities to texture versus depth errors are analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames
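The receiver-side adaptive blending can be sketched as a per-pixel weighted average of the two DIBR-warped views, where the weights encode how reliably each view was received. The paper's reliability model and weighting are more elaborate; the names below are placeholders for illustration.

```python
import numpy as np

def blend_views(warped_l, warped_r, reliability_l, reliability_r):
    """Blend two DIBR-warped views into the virtual view, weighting each
    pixel by the per-pixel reliability of its source view (e.g. lower in
    regions affected by packet loss). All inputs are HxW arrays."""
    w_l = reliability_l.astype(float)
    w_r = reliability_r.astype(float)
    total = w_l + w_r
    total[total == 0] = 1.0          # avoid divide-by-zero in hole regions
    return (w_l * warped_l + w_r * warped_r) / total
```

This is also what justifies the sender-side RPS choice described above: since a doubly-visible voxel's error in one stream is averaged away against the clean sample from the other, only one of the two copies needs the expensive error-resilient coding.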

    Towards Highly-Integrated Stereovideoscopy for in vivo Surgical Robots

    When compared to traditional surgery, laparoscopic procedures result in better patient outcomes: shorter recovery, reduced post-operative pain, and less trauma to incisioned tissue. Unfortunately, laparoscopic procedures require specialized training for surgeons, as these minimally-invasive procedures provide an operating environment that has limited dexterity and limited vision. Advanced surgical robotics platforms can make minimally-invasive techniques safer and easier for the surgeon to complete successfully. The most common type of surgical robotics platforms -- the laparoscopic robots -- accomplish this with multi-degree-of-freedom manipulators that are capable of a diversified set of movements when compared to traditional laparoscopic instruments. Also, these laparoscopic robots allow for advanced kinematic translation techniques that allow the surgeon to focus on the surgical site, while the robot calculates the best possible joint positions to complete any surgical motion. An important component of these systems is the endoscopic system used to transmit a live view of the surgical environment to the surgeon. Coupled with 3D high-definition endoscopic cameras, the entirety of the platform, in effect, eliminates the peculiarities associated with laparoscopic procedures, which allows less-skilled surgeons to complete minimally-invasive surgical procedures quickly and accurately. A much newer approach to performing minimally-invasive surgery is the idea of using in-vivo surgical robots -- small robots that are inserted directly into the patient through a single, small incision; once inside, an in-vivo robot can perform surgery at arbitrary positions, with a much wider range of motion. While laparoscopic robots can harness traditional endoscopic video solutions, these in-vivo robots require a fundamentally different video solution that is as flexible as possible and free of bulky cables or fiber optics. 
This requires a miniaturized videoscopy system that incorporates an image sensor with a transceiver; because of severe size constraints, this system should be deeply embedded into the robotics platform. Here, early results are presented from the integration of a miniature stereoscopic camera into an in-vivo surgical robotics platform. A 26 mm × 24 mm stereo camera was designed and manufactured. The proposed device features USB connectivity and 1280 × 720 resolution at 30 fps. Resolution testing indicates the device performs much better than similarly-priced analog cameras. Suitability of the platform for 3D computer vision tasks -- including stereo reconstruction -- is examined. The platform was also tested in a living porcine model at the University of Nebraska Medical Center. Results from this experiment suggest that while the platform performs well in controlled, static environments, further work is required to obtain usable results in true surgeries. Concluding, several ideas for improvement are presented, along with a discussion of core challenges associated with the platform. Adviser: Lance C. Pérez

    Real-Time High-Resolution Multiple-Camera Depth Map Estimation Hardware and Its Applications

    Depth information is used in a variety of 3D-based signal processing applications such as autonomous navigation of robots and driving systems, object detection and tracking, computer games, 3D television, and free-viewpoint synthesis. These applications require high accuracy and speed for depth estimation. Depth maps can be generated using disparity estimation methods, obtained from stereo matching between multiple images. The computational complexity of disparity estimation algorithms and the need for large size and bandwidth of external and internal memory make real-time processing of disparity estimation challenging, especially for high-resolution images. This thesis proposes high-resolution, high-quality multiple-camera depth map estimation hardware. The proposed hardware is verified in real-time with a complete system from the initial image capture to the display and applications. The details of the complete system are presented. The proposed binocular and trinocular adaptive-window-size disparity estimation algorithms are carefully designed to suit real-time hardware implementation by allowing efficient parallel and local processing while providing high-quality results. The proposed binocular and trinocular disparity estimation hardware implementations can process 55 frames per second on a Virtex-7 FPGA at a 1024 x 768 XGA video resolution for a 128-pixel disparity range. The proposed binocular disparity estimation hardware provides the best quality compared to existing real-time high-resolution disparity estimation hardware implementations. A novel compressed-look-up-table-based rectification algorithm and its real-time hardware implementation are presented. The low-complexity decompression process of the rectification hardware utilizes a negligible amount of LUT and DFF resources of the FPGA and does not require external memory. 
The first real-time high-resolution free viewpoint synthesis hardware utilizing three-camera disparity estimation is presented. The proposed hardware generates high-quality free viewpoint video in real-time for any horizontally aligned arbitrary camera positioned between the leftmost and rightmost physical cameras. The full embedded system of the depth estimation is explained. The presented embedded system transfers disparity results together with synchronized RGB pixels to the PC for application development. Several real-time applications are developed on a PC using the obtained RGB+D results. The implemented depth estimation based real-time software applications are: depth based image thresholding, speed and distance measurement, head-hands-shoulders tracking, virtual mouse using hand tracking and face tracking integrated with free viewpoint synthesis. The proposed binocular disparity estimation hardware is implemented in an ASIC. The ASIC implementation of disparity estimation imposes additional constraints with respect to the FPGA implementation. These restrictions, their implemented efficient solutions and the ASIC implementation results are presented. In addition, a very high-resolution (82.3 MP) 360°x90° omnidirectional multiple camera system is proposed. The hemispherical camera system is able to view the target locations close to horizontal plane with more than two cameras. Therefore, it can be used in high-resolution 360° depth map estimation and its applications in the future
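As a point of reference for the disparity search that the hardware accelerates, a baseline fixed-window sum-of-absolute-differences (SAD) block matcher is sketched below. The thesis hardware uses adaptive window sizes and deep parallelism; this plain-Python fixed-window version is only meant to illustrate the underlying search and why it is so memory- and compute-intensive at high resolutions.

```python
import numpy as np

def sad_disparity(left, right, max_disp, half_win=2):
    """Baseline fixed-window SAD block matching over rectified grayscale
    images. For each left pixel, test every candidate disparity and keep
    the one with the lowest sum of absolute differences."""
    H, W = left.shape
    disp = np.zeros((H, W), np.int32)
    left = left.astype(float)
    right = right.astype(float)
    for y in range(half_win, H - half_win):
        for x in range(half_win + max_disp, W - half_win):
            block = left[y-half_win:y+half_win+1, x-half_win:x+half_win+1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                cand = right[y-half_win:y+half_win+1,
                             x-d-half_win:x-d+half_win+1]
                cost = np.abs(block - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The triple loop makes the cost per frame proportional to width × height × disparity range × window area, which is exactly the product the thesis attacks with local, parallel processing to reach 55 fps at XGA resolution over a 128-pixel range.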

    Three-dimensional media for mobile devices

    This paper aims at providing an overview of the core technologies enabling the delivery of 3-D media to next-generation mobile devices. To succeed in the design of the corresponding system, a profound knowledge of the human visual system and the visual cues that form the perception of depth, combined with an understanding of the user requirements for designing the user experience for mobile 3-D media, is required. These aspects are addressed first and related to the critical parts of the generic system within a novel user-centered research framework. Next-generation mobile devices are characterized through their portable 3-D displays, as those are considered critical for enabling a genuine 3-D experience on mobiles. Quality of 3-D content is emphasized as the most important factor for the adoption of the new technology. Quality is characterized through the most typical 3-D-specific visual artifacts on portable 3-D displays and through subjective tests addressing the acceptance and satisfaction of different 3-D video representation, coding, and transmission methods. An emphasis is put on 3-D video broadcast over digital video broadcasting-handheld (DVB-H) in order to illustrate the importance of the joint source-channel optimization of 3-D video for its efficient compression and robust transmission over error-prone channels. The comparative results obtained identify the best coding and transmission approaches and shed light on the interaction between video quality and depth perception, along with the influence of the context of media use. Finally, the paper speculates on the role and place of 3-D multimedia mobile devices in the future internet continuum, involving the users in co-creation and refining of rich 3-D media content