13 research outputs found

    Joint view expansion and filtering for automultiscopic 3D displays

    Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. Such content, however, cannot easily be obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as input and converts it to multi-view, filtered video streams suitable for driving multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be implemented efficiently on current GPUs to yield near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior to those of state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.
    Funding: Quanta Computer (Firm); National Science Foundation (U.S.) (NSF IIS-1111415); National Science Foundation (U.S.) (NSF IIS-1116296)
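    The core idea, expanding a stereo pair into many views by scaling per-frequency phase differences, can be illustrated with a deliberately simplified sketch. The paper works on a complex steerable pyramid with interperspective antialiasing folded into the same filter; the version below uses a single global Fourier transform and plain NumPy, so the function name and the linear phase-scaling model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def expand_views(left, right, num_views):
    """Synthesize views from a stereo pair by interpolating per-frequency
    phase (a crude, global-FFT stand-in for the paper's steerable-pyramid,
    phase-based approach)."""
    L = np.fft.fft2(left.astype(np.float64))
    R = np.fft.fft2(right.astype(np.float64))
    # Phase difference between the two views, wrapped to [-pi, pi);
    # for small disparities it approximates a per-frequency shift.
    dphi = np.angle(R) - np.angle(L)
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    views = []
    for k in range(num_views):
        a = k / (num_views - 1)          # 0.0 = left view, 1.0 = right view
        mag = (1.0 - a) * np.abs(L) + a * np.abs(R)
        phase = np.angle(L) + a * dphi   # scaled phase = shifted viewpoint
        views.append(np.real(np.fft.ifft2(mag * np.exp(1j * phase))))
    return views
```

    Choosing `a` outside [0, 1] extrapolates beyond the captured baseline, which is how two cameras can feed a display that needs many views; the actual method also band-limits each synthesized view (interperspective antialiasing) within the same filtering pass.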

    Digital Holography Data Compression

    Digital holography processing is a research topic tied to the development of novel immersive visual applications. The huge amount of information conveyed by a digital hologram, and the properties of holographic data that differ from conventional photographic data, call for an understanding of the performance and limitations of current standard image and video compression techniques. This paper proposes an architecture for objectively evaluating the performance of state-of-the-art compression techniques applied to digital holographic data.
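    A minimal version of such an objective evaluation loop, compressing one plane of a hologram (e.g., the real or imaginary part quantized to 8 bits) at several quality settings and recording rate-distortion pairs, might look like the sketch below. Using JPEG via Pillow and PSNR as the metric are assumptions of this sketch; the paper's architecture may cover other codecs and quality measures.

```python
import io
import numpy as np
from PIL import Image

def psnr(ref, dec, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit planes."""
    mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rate_distortion_curve(plane, qualities=(10, 30, 50, 70, 90)):
    """Compress one hologram plane at several JPEG qualities and return
    (quality, bits-per-pixel, PSNR) triples for plotting an R-D curve."""
    img = Image.fromarray(plane)          # plane: 2-D uint8 array
    curve = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        bpp = 8.0 * buf.tell() / (img.width * img.height)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf))
        curve.append((q, bpp, psnr(plane, decoded)))
    return curve
```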

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects popping far out from the screen plane without glasses impresses even viewers who have experienced other 3D displays before.

    Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to those of rendering techniques for 2D displays. While rendering can serve many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

    To provide the required realism, light field displays aim for a wide field of view (up to 180°) while reproducing up to ~80 megapixels today; building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves many synchronized cameras that cover the display's wide field of view and provide the same high pixel count. Light field capture and content creation therefore have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

    The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In practical terms, this boils down to finding the equivalent spatial resolution, analogous to the screen resolution of 2D displays, and the angular resolution, the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic sampling theory provides the conditions required to sample and reconstruct light fields. Light field displays can consequently be characterized in the Fourier domain by the effective display bandwidth they support. The thesis proposes a methodology for display-specific light field analysis: it regards the display as a signal-processing channel and analyzes it as such in the spectral domain. As a result, one can derive the display throughput (i.e., the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them. While the geometrical topology of the optical light sources in projection-based light field displays can be used to derive the display bandwidth and its spatial and angular resolution theoretically, in many cases this topology is not available to the user.
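    As a back-of-the-envelope illustration of what display bandwidth implies for capture: the ray budget of a display is its spatial sample count times its angular sample count, and the number of views the camera rig must supply follows directly. The function and numbers below are hypothetical, chosen only to land near the ~80-megapixel scale mentioned above.

```python
import math

def ray_budget(fov_deg, angular_res_deg, width_px, height_px):
    """Total rays a display reproduces = spatial samples x directions.
    One camera view per distinguishable direction is a hypothetical
    simplification of the plenoptic-sampling argument."""
    num_directions = math.ceil(fov_deg / angular_res_deg)
    total_rays = width_px * height_px * num_directions
    return total_rays, num_directions

# Hypothetical HPO display: 80-degree FOV, 0.8-degree angular resolution.
rays, views = ray_budget(80.0, 0.8, 1024, 768)
print(f"{views} views, {rays / 1e6:.1f} million rays")  # 100 views, ~78.6 M
```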
    In practice, moreover, many implementation details cause a display to deviate from its theoretical model. In such cases, profiling a light field display in terms of spatial and angular resolution has to be done by measurement. The thesis proposes measurement methods in which the display shows specific test patterns that are captured by a single static or moving camera. Determining the effective spatial and angular resolution of the display is then based on an automated frequency-domain analysis of the captured images as reproduced by the display. The analysis reveals the empirical pass-band limits of the display in both the spatial and the angular dimension. The spatial resolution measurements are further validated by subjective tests, confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

    The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. This clearly requires effective and efficient compression techniques to fit the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted, and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capture, transmission, and rendering with a brute-force approach, limitations became apparent. Still, because dense multi-camera capture with light ray interpolation achieves the best possible image quality, this approach was chosen as the basis of further work, despite the massive bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach is proposed that reduces the amount of data required in the rendering nodes: each node only needs rectangular parts (typically vertical bars, in the case of a Horizontal Parallax Only light field display) of the captured images, which can be exploited to reduce the time spent decompressing video streams (see the sketch after this abstract). However, partial decoding is not readily supported by common image/video codecs. The thesis proposes and compares approaches for achieving partial decoding in H.264, HEVC, JPEG, and JPEG 2000.

    The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs for a compelling visual experience.
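    Two of the quantities in this summary are easy to sanity-check with hypothetical numbers: the uncompressed super-multiview bitrate, and the vertical bar of each camera image that a rendering node actually needs on an HPO display. The linear screen-to-image mapping with a fixed disparity margin below is an assumption of this sketch, not the thesis's actual access-pattern model.

```python
def uncompressed_gbps(num_cams, width, height, fps, bytes_per_px=3):
    """Raw bitrate of a super-multiview capture rig in gigabits per second."""
    return num_cams * width * height * bytes_per_px * fps * 8 / 1e9

# Hypothetical rig: 27 cameras at 1280x720, 30 fps -> ~17.9 Gbit/s,
# the same order as the ~20 Gbit/s figure quoted above.
print(f"{uncompressed_gbps(27, 1280, 720, 30):.1f} Gbit/s")

def vertical_bar(seg_x0, seg_x1, screen_w, img_w, margin_frac=0.1):
    """Image columns a rendering node needs from one camera when driving
    the screen segment [seg_x0, seg_x1): a proportional mapping plus a
    disparity-dependent safety margin (both assumptions of this sketch)."""
    x0 = int(seg_x0 / screen_w * img_w)
    x1 = int(seg_x1 / screen_w * img_w)
    margin = int(margin_frac * img_w)
    return max(0, x0 - margin), min(img_w, x1 + margin)

# A node driving the left quarter of the screen needs roughly columns 0..448.
print(vertical_bar(0, 320, 1280, 1280))
```

    Decoding only such bars is what motivates the partial-decoding schemes for H.264, HEVC, JPEG, and JPEG 2000 examined in the thesis.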

    Modeling and Simulation in Engineering

    This book provides an open platform for establishing and sharing knowledge developed by scholars, scientists, and engineers from all over the world about applications of modeling and simulation in the product design process, across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques are applied, along with some of the most accurate and sophisticated software for treating complex systems. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.

    Contribution to the analysis and coding of image arrays for three-dimensional imaging

    Three-dimensional (3D) imaging systems are today the primary means of visualization for a number of specialized applications and, as their technological parameters and the underlying network infrastructures evolve, they are expected to become in the near future the main display method for an even larger number of everyday applications. The research carried out in this dissertation is an advanced study of a particular 3D imaging method called Integral Photography (IP). In the first part of the study, the capabilities of the method were examined and a prototype digital system was developed for capturing Integral Photography images of real objects in the near field of the device using a flatbed scanner, capable of producing images of considerably higher resolution than previously proposed digital systems. In the second part of this research, an automatic system was developed, for the first time, for aligning the sensors with the optical parts of the system; it requires no prior knowledge of the system's characteristics and employs a number of image analysis and pattern recognition techniques. The research concludes with the development of specialized coding algorithms for IP images, which succeed in dramatically reducing the inherent redundancy these images contain.
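    The redundancy referred to here arises because an integral photograph is a grid of elemental images, each a slightly shifted view of the same scene, so neighboring elemental images are highly correlated. A minimal DPCM-style illustration of exploiting that correlation is sketched below; the tiling and left-neighbor prediction are assumptions of this sketch, not the dissertation's actual coding algorithms.

```python
import numpy as np

def elemental_residuals(integral_img, tile_h, tile_w):
    """Split a grayscale integral photograph into its grid of elemental
    images and predict each one from its left neighbour; the residuals
    are mostly near zero and far more compressible than the raw tiles."""
    H, W = integral_img.shape
    assert H % tile_h == 0 and W % tile_w == 0
    # Reshape to (rows, cols, tile_h, tile_w): one slice per elemental image.
    tiles = (integral_img
             .reshape(H // tile_h, tile_h, W // tile_w, tile_w)
             .swapaxes(1, 2))
    residuals = tiles.astype(np.int16)
    residuals[:, 1:] -= tiles[:, :-1]   # horizontal neighbour prediction
    return residuals                    # first column keeps the raw tiles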

    Spectacularly Binocular: Exploiting Binocular Luster Effects for HCI Applications

    Ph.D. thesis (Doctor of Philosophy)