12 research outputs found

    Multiperspective Modeling and Rendering Using General Linear Cameras

    Analysis of the depth of field of integral imaging displays based on wave optics

    In this paper, we analyze the depth of field (DOF) of integral imaging displays based on wave optics. Taking the diffraction effect into account, we analyze the intensity distribution of light through multiple microlenses and derive a DOF calculation formula for the integral imaging display system. We study how the DOF varies with different system parameters. Experimental results are provided to verify the accuracy of the theoretical analysis. The analyses and experimental results presented in this paper could be beneficial for a better understanding and design of integral imaging displays.
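    The abstract does not reproduce the derived formula; as a rough, hypothetical illustration of the trade-off it describes, the sketch below estimates a display's DOF numerically by finding the depth range over which a single microlens's combined geometric-defocus and diffraction (Airy) blur stays below an assumed resolution limit. The blur model and all parameter values are assumptions, not the paper's results.

        # Rough illustration (not the paper's derivation): estimate the depth of field
        # of an integral-imaging display as the depth range over which the combined
        # geometric-defocus blur and diffraction blur of one microlens stays below a
        # chosen resolution limit. All parameter values are hypothetical.
        import numpy as np

        wavelength = 550e-9        # m, green light
        pitch      = 1.0e-3        # m, microlens pitch (assumed)
        central_depth = 30e-3      # m, distance of the central depth plane (assumed)
        resolution_limit = 0.5e-3  # m, acceptable spot size on the depth plane (assumed)

        def spot_size(depth):
            """Combined blur at a plane displaced from the central depth plane."""
            dz = abs(depth - central_depth)
            geometric   = pitch * dz / central_depth          # ray-optics defocus blur
            diffraction = 2.44 * wavelength * depth / pitch   # Airy-disk diameter
            return geometric + diffraction

        depths = np.linspace(5e-3, 80e-3, 2000)
        in_focus = depths[np.array([spot_size(z) for z in depths]) < resolution_limit]
        if in_focus.size:
            print(f"approximate DOF: {in_focus.min()*1e3:.1f} mm to {in_focus.max()*1e3:.1f} mm")
        else:
            print("no depth satisfies the chosen resolution limit")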

    GPS-MIV: The General Purpose System for Multi-display Interactive Visualization

    The information age has produced inventions such as the internet that give us access to tremendous quantities of data. With this increase in information comes the need to make sense of it by manipulating and filtering it to reveal hidden patterns. Data visualization systems provide the tools to reveal patterns and filter information, aiding the processes of insight and decision making. The purpose of this thesis is to develop and test a data visualization system, the General Purpose System for Multi-display Interactive Visualization (GPS-MIV). GPS-MIV is a software system that allows the user to visualize data graphically and interact with it. At the core of the system is a graphics engine that displays computer-generated scenes from multiple perspectives and with multiple views. Additionally, GPS-MIV provides interaction for the user to explore the scene.
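    As an illustration of the multi-perspective idea only (not the actual GPS-MIV code), the sketch below builds one view matrix per physical display so that each screen renders the shared scene from its own camera position; the display positions are hypothetical.

        # Minimal sketch: one camera per display, all looking at the same shared scene,
        # so each screen presents its own perspective of the same data.
        import numpy as np

        def look_at(eye, target, up=(0.0, 1.0, 0.0)):
            """Build a right-handed view matrix for a camera at `eye` looking at `target`."""
            eye, target, up = map(np.asarray, (eye, target, up))
            f = target - eye; f = f / np.linalg.norm(f)       # forward axis
            s = np.cross(f, up); s = s / np.linalg.norm(s)    # right axis
            u = np.cross(s, f)                                # true up axis
            view = np.eye(4)
            view[0, :3], view[1, :3], view[2, :3] = s, u, -f
            view[:3, 3] = -view[:3, :3] @ eye
            return view

        # Hypothetical display positions, all aimed at the scene origin.
        displays = {"left": (-1.5, 0.0, 2.0), "center": (0.0, 0.0, 2.5), "right": (1.5, 0.0, 2.0)}
        for name, eye in displays.items():
            view = look_at(eye, target=(0.0, 0.0, 0.0))
            # In a real system this view matrix would be handed to the renderer for `name`.
            print(name, np.round(view, 2))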

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that regulate turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems that aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, this time for multiple observers at multiple viewpoints. Thirdly, we demonstrate the further improvement of a random-hole autostereoscopic multiview telepresence system in conveying gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis

    Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact, and engagement by adapting dynamically in real time to a single moving viewer's viewpoint, but at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections that combines anamorphic rendering of selected objects with normal perspective rendering of the rest of the scene. Our study defines a scene consisting of five characters, one of which is selectively rendered in anamorphic perspective. We conducted an evaluation experiment and demonstrate that the tracked viewer-centric imagery for the selected character results in improved gaze and engagement estimation. Critically, this is achieved without sacrificing the other viewers' viewing experience. In addition, we present findings on the perception of gaze direction for characters rendered in regular perspective and located off-center from the origin, where the perceived gaze shifts increasingly from alignment to misalignment as the distance between the viewer and the character increases. Finally, we discuss different viewpoints and the spatial relationship between objects.
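    A minimal sketch of the viewer-centric off-axis projection that anamorphic rendering of this kind typically relies on, following the standard generalized perspective projection construction; it is not the authors' implementation, and the screen geometry and eye position in the example are assumptions.

        # Given the tracked eye position and the physical screen corners, build the
        # off-axis projection so a selected object appears geometrically correct from
        # that viewpoint (standard generalized perspective projection).
        import numpy as np

        def viewer_centric_projection(pa, pb, pc, pe, near=0.1, far=100.0):
            """pa, pb, pc: lower-left, lower-right, upper-left screen corners (metres).
            pe: tracked eye position. Returns a 4x4 viewer-centric projection matrix."""
            pa, pb, pc, pe = map(np.asarray, (pa, pb, pc, pe))
            vr = pb - pa; vr = vr / np.linalg.norm(vr)          # screen right axis
            vu = pc - pa; vu = vu / np.linalg.norm(vu)          # screen up axis
            vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn) # screen normal (towards viewer)
            va, vb, vc = pa - pe, pb - pe, pc - pe              # eye-to-corner vectors
            d = -np.dot(va, vn)                                 # eye-to-screen distance
            l, r = np.dot(vr, va) * near / d, np.dot(vr, vb) * near / d
            b, t = np.dot(vu, va) * near / d, np.dot(vu, vc) * near / d
            proj = np.array([
                [2 * near / (r - l), 0, (r + l) / (r - l), 0],
                [0, 2 * near / (t - b), (t + b) / (t - b), 0],
                [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
                [0, 0, -1, 0]])
            Mt = np.eye(4); Mt[0, :3], Mt[1, :3], Mt[2, :3] = vr, vu, vn  # screen space -> axis-aligned
            T = np.eye(4); T[:3, 3] = -pe                                 # move the eye to the origin
            return proj @ Mt @ T

        # Example: 0.52 m x 0.32 m display in the z=0 plane, viewer tracked off-axis 0.6 m away.
        P = viewer_centric_projection(pa=(-0.26, -0.16, 0.0), pb=(0.26, -0.16, 0.0),
                                      pc=(-0.26, 0.16, 0.0), pe=(0.20, 0.05, 0.60))
        print(np.round(P, 3))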

    A Review and Selective Analysis of 3D Display Technologies for Anatomical Education

    The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays. This study analyzes educational performance, confidence, cognitive load, visual-spatial ability, and technology acceptance in participants using autostereoscopic 3D visualization (holograms), monoscopic 3D visualization (3DPDFs), and a control visualization (2D printed images). Participants were randomized into three treatment groups: holograms (n=60), 3DPDFs (n=60), and printed images (n=59). Participants completed a pre-test followed by a self-study period using the treatment visualization. Immediately following the study period, participants completed the NASA TLX cognitive load instrument, a technology acceptance instrument, visual-spatial ability instruments, a confidence instrument, and a post-test. Post-test results showed that the hologram treatment group (Mdn=80.0) performed significantly better than both the 3DPDF (Mdn=66.7, p=.008) and printed image (Mdn=66.7, p=.007) groups. Participants in the hologram and 3DPDF treatment groups reported lower cognitive load compared to the printed image treatment (p < .01). Participants also responded more positively towards the holograms than the printed images (p < .001). Overall, the holograms demonstrated a significant learning improvement over printed images and monoscopic 3DPDF models. This finding suggests that the additional depth cues from holographic visualization, notably head-motion parallax and stereopsis, provide substantial benefit towards understanding spatial anatomy. The reduction in cognitive load suggests that monoscopic and autostereoscopic 3D may utilize the visual system more efficiently than printed images, thereby reducing mental effort during the learning process. Finally, participants reported positive perceptions of holograms, suggesting that the implementation of holographic displays would be met with enthusiasm from student populations. These findings highlight the need for additional studies regarding the effect of novel 3D technologies on learning performance.
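    For readers unfamiliar with how such median-based results are tested, the sketch below runs the same kind of nonparametric two-group comparison (Mann-Whitney U) on hypothetical score samples; the generated data are placeholders, not the study's dataset.

        # Illustrative sketch only (hypothetical scores, not the study's data): the kind
        # of nonparametric comparison behind results reported as medians with p-values,
        # e.g. hologram (Mdn=80.0) vs. 3DPDF (Mdn=66.7).
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)
        hologram = rng.normal(78, 12, 60).clip(0, 100)   # n=60, assumed score distribution
        pdf3d    = rng.normal(67, 12, 60).clip(0, 100)   # n=60, assumed score distribution

        stat, p = mannwhitneyu(hologram, pdf3d, alternative="two-sided")
        print(f"median hologram={np.median(hologram):.1f}, median 3DPDF={np.median(pdf3d):.1f}, "
              f"U={stat:.0f}, p={p:.4f}")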

    Vers un modèle unifié pour l'affichage autostéréoscopique d'images

    In the first chapter, we describe a model of image formation on a display screen, revisiting the underlying concepts of light, geometry, and optics. We then detail the various stereoscopic display techniques in use today, covering stereoscopy, autostereoscopy, and in particular the principle of integral imaging. The second chapter introduces a new model of stereoscopic image formation. This model lets us observe the two images of a stereoscopic pair subjected to possible transformations and to the effect of one or more specific optical elements, in order to reproduce the perception of three dimensions. We discuss the unifying nature of this model: it can describe and explain many existing stereoscopic display techniques. Finally, in the third chapter, we discuss a particular method for creating stereoscopic image pairs from a light field.
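    As a minimal sketch of one common way to obtain a stereoscopic pair from a light field (not necessarily the exact method of the thesis), the code below selects two horizontally separated sub-aperture views of a 4D light field L[u, v, s, t]; the light field itself is placeholder data.

        # Two sub-aperture views of a 4D light field, separated horizontally by a few
        # angular samples, form a stereoscopic pair with a fixed virtual baseline.
        import numpy as np

        U, V, S, T = 9, 9, 128, 128                 # angular (u, v) and spatial (s, t) resolution
        light_field = np.random.rand(U, V, S, T)    # placeholder for captured/rendered data

        def stereo_pair(lf, baseline_in_views=4):
            """Pick two horizontally separated sub-aperture images around the centre view."""
            centre_u, centre_v = lf.shape[0] // 2, lf.shape[1] // 2
            half = baseline_in_views // 2
            left_eye  = lf[centre_u, centre_v - half]   # image seen slightly from the left
            right_eye = lf[centre_u, centre_v + half]   # image seen slightly from the right
            return left_eye, right_eye

        left, right = stereo_pair(light_field)
        print(left.shape, right.shape)   # each is a (128, 128) view of the scene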

    Multidimensional Optical Sensing and Imaging Systems (MOSIS): From Macro to Micro Scales

    Multidimensional optical imaging systems for information processing and visualization technologies have numerous applications in fields such as manufacturing, medical sciences, entertainment, robotics, surveillance, and defense. Among different three-dimensional (3-D) imaging methods, integral imaging is a promising multiperspective sensing and display technique. Compared with other 3-D imaging techniques, integral imaging can capture a scene using an incoherent light source and generate real 3-D images for observation without any special viewing devices. This review paper describes passive multidimensional imaging systems combined with different integral imaging configurations. One example is the integral-imaging-based multidimensional optical sensing and imaging systems (MOSIS), which can be used for 3-D visualization, seeing through obscurations, material inspection, and object recognition from microscales to long range imaging. This system utilizes many degrees of freedom such as time and space multiplexing, depth information, polarimetric, temporal, photon flux and multispectral information based on integral imaging to record and reconstruct the multidimensionally integrated scene. Image fusion may be used to integrate the multidimensional images obtained by polarimetric sensors, multispectral cameras, and various multiplexing techniques. The multidimensional images contain substantially more information compared with two-dimensional (2-D) images or conventional 3-D images. In addition, we present recent progress and applications of 3-D integral imaging including human gesture recognition in the time domain, depth estimation, mid-wave-infrared photon counting, 3-D polarimetric imaging for object shape and material identification, dynamic integral imaging implemented with liquid-crystal devices, and 3-D endoscopy for healthcare applications. B. Javidi wishes to acknowledge support by the National Science Foundation (NSF) under Grant NSF/IIS-1422179, and DARPA and US Army under contract number W911NF-13-1-0485. The work of P. Latorre Carmona, A. Martínez-Uso, J. M. Sotoca and F. Pla was supported by the Spanish Ministry of Economy under the project ESP2013-48458-C4-3-P, by MICINN under the project MTM2013-48371-C2-2-PDGI, by Generalitat Valenciana under the project PROMETEO-II/2014/062, and by Universitat Jaume I through project P11B2014-09. The work of M. Martínez-Corral and G. Saavedra was supported by the Spanish Ministry of Economy and Competitiveness under the grant DPI2015-66458-C2-1R, and by the Generalitat Valenciana, Spain under the project PROMETEOII/2014/072.
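    As a minimal sketch of the standard shift-and-sum computational reconstruction used in integral imaging (not the MOSIS code), the snippet below shifts each elemental image in proportion to a chosen depth and averages them, so points at that depth reinforce while the rest blur out; the array sizes and lens parameters are illustrative.

        # Shift-and-sum reconstruction of one depth slice from a set of elemental images.
        import numpy as np

        K, H, W = 5, 64, 64                                  # 5x5 elemental images of 64x64 px
        elemental = np.random.rand(K, K, H, W)               # placeholder for captured images
        pitch_px, focal_px = 64.0, 100.0                     # lenslet pitch and focal length in pixels

        def reconstruct(elemental, depth_px):
            """Reconstruct the scene slice at distance `depth_px` behind the lenslet array."""
            K, _, H, W = elemental.shape
            out = np.zeros((H, W))
            shift = pitch_px * focal_px / depth_px            # per-lenslet disparity at this depth
            for i in range(K):
                for j in range(K):
                    dy = int(round((i - K // 2) * shift))
                    dx = int(round((j - K // 2) * shift))
                    # np.roll wraps at the borders; a real implementation would pad instead.
                    out += np.roll(elemental[i, j], (dy, dx), axis=(0, 1))
            return out / (K * K)

        slice_at_depth = reconstruct(elemental, depth_px=400.0)
        print(slice_at_depth.shape)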

    Coherent and Holographic Imaging Methods for Immersive Near-Eye Displays

    Near-eye displays have been designed to provide realistic 3D viewing experiences, which are in strong demand in applications such as remote machine operation, entertainment, and 3D design. However, contemporary near-eye displays still generate conflicting visual cues that degrade the immersive experience and hinder comfortable use. Approaches using coherent light, e.g., laser light, for display illumination are considered promising for tackling these deficiencies. In particular, coherent illumination enables holographic imaging, and holographic displays are expected to accurately recreate the true light waves of a desired 3D scene. However, using coherent light to drive displays introduces additional high-contrast noise in the form of speckle patterns, which has to be dealt with. Furthermore, imaging methods for holographic displays are computationally demanding and pose new challenges in analysis, speckle noise, and light modelling. This thesis examines computational methods for near-eye displays in the coherent imaging regime using signal processing, machine learning, and geometrical (ray) and physical (wave) optics modelling. In the first part of the thesis, we concentrate on the analysis of holographic imaging modalities and develop corresponding computational methods. To tackle the high computational demands of holography, we adopt holographic stereograms as an approximate holographic data representation. We address the visual correctness of this representation by developing a framework for analyzing the accuracy of the accommodation cues provided by a holographic stereogram in relation to its design parameters. Additionally, we propose a signal processing solution for speckle noise reduction that overcomes issues in existing light-modelling methods which cause visual artefacts. We also develop a novel holographic imaging method to accurately model lighting effects in challenging conditions, such as mirror reflections. In the second part of the thesis, we approach the computational complexity of coherent display imaging through deep learning. We develop a coherent accommodation-invariant near-eye display framework that jointly optimizes static display optics and a display image pre-processing network. Finally, we accelerate the proposed holographic imaging method via deep learning for real-time applications. This includes developing an efficient procedure for generating functional random 3D scenes to form a large synthetic data set of multiperspective images, and training a neural network to approximate the holographic imaging method under real-time processing constraints. Altogether, the methods developed in this thesis are shown to be highly competitive with state-of-the-art computational methods for coherent-light near-eye displays. The results demonstrate two alternative approaches for resolving the existing near-eye display problems of conflicting visual cues, using either static or dynamic optics and computational methods suitable for real-time use. The presented results are therefore instrumental for the next generation of immersive near-eye displays.
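    As a minimal sketch of one standard speckle-suppression idea in coherent display imaging, time-averaging over independent random-phase hologram realizations, the code below compares the speckle contrast of a single phase-only Fourier hologram reconstruction with a 32-frame average; this is not the specific signal-processing method proposed in the thesis, and the target image is a dummy.

        # Speckle contrast falls roughly as 1/sqrt(N) when N independent speckle
        # realizations of the same target are averaged in intensity.
        import numpy as np

        target = np.zeros((128, 128)); target[48:80, 48:80] = 1.0   # dummy target amplitude

        def reconstruct_once(target_amplitude, rng):
            """Fourier hologram with a random phase; returns one speckled reconstruction."""
            field = target_amplitude * np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
            hologram = np.angle(np.fft.fft2(field))                 # phase-only hologram
            recon = np.fft.ifft2(np.exp(1j * hologram))             # replay the hologram
            return np.abs(recon) ** 2

        rng = np.random.default_rng(1)
        single = reconstruct_once(target, rng)
        averaged = np.mean([reconstruct_once(target, rng) for _ in range(32)], axis=0)

        def contrast(img, mask):
            """Speckle contrast (std/mean) inside the target region."""
            vals = img[mask > 0]
            return vals.std() / vals.mean()

        print(f"speckle contrast: single frame {contrast(single, target):.2f}, "
              f"32-frame average {contrast(averaged, target):.2f}")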