    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies are applicable to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
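
    The perspective-corrected rendering described above relies on computing an off-axis projection from the tracked eye position and the known display geometry. As a hedged illustration only (the paper gives no code, and its displays are curved or enclosed rather than flat), the sketch below shows the standard generalized perspective projection for a single flat, head-tracked panel; the corner positions, near/far planes, and function name are assumptions for the example.

```python
import numpy as np

def offaxis_projection(pa, pb, pc, eye, near=0.05, far=10.0):
    """Off-axis (generalized perspective) projection for a flat, head-tracked screen.

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (metres, tracker frame).
    eye: tracked eye position in the same frame. Returns a 4x4 clip-space matrix.
    """
    pa, pb, pc, eye = (np.asarray(v, dtype=float) for v in (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal, toward the viewer
    va, vb, vc = pa - eye, pb - eye, pc - eye         # corners relative to the eye
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents at the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2 * near / (r - l), 0, (r + l) / (r - l), 0],
                  [0, 2 * near / (t - b), (t + b) / (t - b), 0],
                  [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
                  [0, 0, -1, 0]])
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])  # rotate world into screen axes
    T = np.eye(4); T[:3, 3] = -eye                       # move the eye to the origin
    return P @ M @ T

# Example: a 0.5 m x 0.3 m screen with its lower-left corner at the origin,
# viewed from 0.6 m in front of its centre (all values are illustrative).
proj = offaxis_projection([0, 0, 0], [0.5, 0, 0], [0, 0.3, 0], [0.25, 0.15, 0.6])
```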

    Volumetric display

    A volumetric display device is a graphical display device that forms a visual representation of an object in three physical dimensions, as opposed to the planar image of traditional screens that simulate depth through a number of different visual effects. One definition offered by pioneers in the field is that volumetric displays create 3D imagery via the emission, scattering, or relaying of illumination from well-defined regions in (x,y,z) space. Though there is no consensus among researchers in the field, it may be reasonable to admit holographic and highly multiview displays to the volumetric display family if they do a reasonable job of projecting a three-dimensional light field within a volume. When citing this document, please use the following link: http://essuir.sumdu.edu.ua/handle/123456789/3354
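
    As a loose, assumed analogy for the "(x,y,z) regions" criterion above (not a description of any specific display hardware), a single volumetric frame can be represented as a grid of addressable voxels, each holding an emitted intensity:

```python
import numpy as np

class VolumetricFrame:
    """Toy voxel-grid representation of one frame of a volumetric display."""

    def __init__(self, nx=128, ny=128, nz=128):
        self.voxels = np.zeros((nx, ny, nz), dtype=np.float32)  # emission per (x, y, z) cell

    def set_point(self, x, y, z, intensity=1.0):
        """Light up the voxel containing normalised coordinates (x, y, z) in [0, 1)."""
        i, j, k = (int(c * n) for c, n in zip((x, y, z), self.voxels.shape))
        self.voxels[i, j, k] = intensity

frame = VolumetricFrame()
frame.set_point(0.5, 0.5, 0.5)  # a single lit point at the centre of the volume
```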

    Incorporating 3-dimensional models in online articles

    Introduction The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles will now give readers and authors better understanding and visualization of the results.
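
    Outside the Elsevier viewer, a colour-coded surface-distance map like the ones described above can be approximated from two already-registered surface models. The sketch below is illustrative only: variable names are assumptions, and nearest-vertex distance is used as a simple stand-in for true point-to-surface distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_map(verts_a, verts_b):
    """Closest-point distance from every vertex of model A to model B.

    Both inputs are (N, 3) vertex arrays (e.g. read from the .vtk/.stl surface
    models) in the same, already-registered coordinate system.
    """
    tree = cKDTree(verts_b)
    distances, _ = tree.query(verts_a)          # nearest B-vertex for each A-vertex
    return distances

# Toy example with random vertices; real use would load two registered meshes.
rng = np.random.default_rng(0)
verts_a, verts_b = rng.random((500, 3)), rng.random((500, 3))
d = surface_distance_map(verts_a, verts_b)
bands = np.digitize(d, [0.5, 1.0])              # e.g. <0.5, 0.5-1.0, >1.0 colour bands
```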

    Volumetric reach-through displays for direct manipulation of 3D content

    In my PhD, I aim at developing a reach-through volumetric display in which points of light are emitted from each 3D position of the display volume while still allowing people to introduce their hands inside to interact directly with the rendered content. Here, I present TomoLit, an inverse tomographic display in which multiple emitters project rays of different intensities at each angle, rendering a target image in mid-air. We have analysed the effect of the number of emitters, their locations, the angular resolution, and the number of intensity levels on image quality. We have developed a simple emitter and are in the process of assembling multiple emitters. I also outline what I plan to do next, e.g. moving from 2D to 3D and exploring interaction techniques. The feedback obtained at this symposium will help resolve some of my doubts and guide my research career. This work has been funded by the Government of Navarre (FEDER) 0011-1365-2019-000086 and by Jóvenes Investigadores UPNA PJUPNA1923.
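
    The inverse-tomographic step, choosing one intensity per emitter and angle so that the superposed rays reproduce a target image, can be posed as a non-negative least-squares problem. The toy 2D sketch below only illustrates that formulation; the emitter layout, grid size, and solver are assumptions and not the TomoLit implementation.

```python
import numpy as np
from scipy.optimize import nnls

N = 24                                                     # target image is N x N pixels
target = np.zeros((N, N)); target[8:16, 10:14] = 1.0       # toy pattern to render in mid-air

emitters = [(-0.01, y) for y in np.linspace(0.1, 0.9, 8)]  # emitters along the left edge
angles = np.deg2rad(np.linspace(-40, 40, 21))              # fan of ray angles per emitter
rays = [(ex, ey, a) for (ex, ey) in emitters for a in angles]

# Build the projection matrix: column k holds how much ray k covers each pixel.
A = np.zeros((N * N, len(rays)))
ts = np.linspace(0.0, 1.5, 400)                            # sample points along each ray
for k, (ex, ey, a) in enumerate(rays):
    xs, ys = ex + ts * np.cos(a), ey + ts * np.sin(a)
    inside = (xs >= 0) & (xs < 1) & (ys >= 0) & (ys < 1)
    idx = np.floor(ys[inside] * N).astype(int) * N + np.floor(xs[inside] * N).astype(int)
    np.add.at(A[:, k], idx, 1.0)

# Solve for non-negative per-ray intensities and reconstruct the rendered image.
intensities, _ = nnls(A, target.ravel())
rendered = (A @ intensities).reshape(N, N)
```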

    Free-Space Graphics with Electrically Driven Levitated Light Scatterers

    Levitation of optical scatterers provides a new means of developing free-space volumetric displays. The principle is to illuminate a levitating particle displaced at high velocity in three dimensions (3D) to create images based on persistence of vision (POV). Light scattered by the particle can be observed all around the volumetric display and therefore provides a true 3D image that does not rely on interference effects and remains insensitive to the angle of observation. The challenge is to control the trajectory of the particle in three dimensions with high accuracy and at high speed. Systems that use light to generate free-space images, either in plasma or with a bead, are strictly dependent on the scanning method used: mechanical systems are required to scan the particles through the volume, which limits the temporal dynamics. Here we use electrically driven planar Paul traps (PPTs) to control the trajectory of electrically charged particles. A single colloidal gold particle is manipulated in three dimensions through AC and DC voltages applied to a PPT. The electric voltages can be modulated at high frequencies (150 kHz) and allow for high-speed displacement of particles without moving any other system component. The optical scattering of the levitated particle yields free-space images that are captured with conventional optics. The trajectory of the particle is entirely encoded in the electric voltage and driven through stationary planar electrodes. In this paper, we show a proof of concept for the generation of 3D free-space graphics with a single electrically scanned particle.
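
    To illustrate the idea that the particle's trajectory is entirely encoded in the electrode voltages, the sketch below resamples a desired persistence-of-vision path into per-axis DC-offset samples at a high update rate. The linear position-to-voltage mapping, the gains, and the rates are all assumptions for illustration, not the actual trap drive.

```python
import numpy as np

UPDATE_RATE_HZ = 150_000                       # voltage update rate (order of the 150 kHz modulation)
FRAME_RATE_HZ = 25                             # POV refresh: the path is retraced 25 times per second
GAIN_V_PER_MM = np.array([2.0, 2.0, 5.0])      # assumed DC volts per mm of displacement, per axis

def path_to_voltages(path_mm):
    """Resample a closed 3D path ((P, 3) array, in mm) into one POV frame of DC offsets."""
    samples = UPDATE_RATE_HZ // FRAME_RATE_HZ  # voltage samples per retrace of the path
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    s = np.linspace(0.0, 1.0, len(path_mm), endpoint=False)
    resampled = np.column_stack([np.interp(t, s, path_mm[:, k], period=1.0)
                                 for k in range(3)])
    return resampled * GAIN_V_PER_MM           # (samples, 3) volts for the x, y, z electrodes

# Example: trace a 2 mm-radius circle in the x-y plane, 1 mm above the trap surface.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle_mm = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta), np.ones_like(theta)])
voltages = path_to_voltages(circle_mm)
```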

    Channelized Hotelling observers for signal detection in stack-mode reading of volumetric images on medical displays with slow response time

    Volumetric medical images are commonly read in stack-browsing mode. However, previous studies suggest that the slow temporal response of medical liquid crystal displays may degrade diagnostic accuracy (lesion detectability) at browsing rates as low as 10 frames per second (fps). Recently, a multi-slice channelized Hotelling observer (msCHO) model was proposed to estimate detection performance in 3D images. This implementation of the msCHO restricted the analysis to the luminance of a display pixel at the end of the frame time (end-of-frame luminance) while ignoring the luminance transition within the frame time (intra-frame luminance). Such an approach fails to differentiate between, for example, the commonly found case of two displays with different temporal luminance profiles whose end-of-frame luminance levels are the same. To overcome this limitation of the msCHO, we propose a new upsampled msCHO (umsCHO) which acts on images obtained using both the intra-frame and the end-of-frame luminance information. The two models are compared on a set of synthesized 3D images for a range of browsing rates (16.67, 25, and 50 fps). Our results demonstrate that, depending on the details of the luminance transition profiles, neglecting the intra-frame luminance information may lead to over- or underestimation of lesion detectability. Therefore, we argue that using the umsCHO rather than the msCHO model is more appropriate for estimating detection performance in the stack-browsing mode.
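
    For readers unfamiliar with the observer model, the sketch below shows a basic single-slice channelized Hotelling observer with difference-of-Gaussians channels; it is the textbook CHO construction, not the authors' msCHO or umsCHO implementation, and the channel parameters and image sizes are illustrative assumptions.

```python
import numpy as np

def dog_channels(n_pix=64, n_ch=5, sigma0=2.0, alpha=1.67, q=1.67):
    """Difference-of-Gaussians channel profiles, one flattened channel per column."""
    y, x = np.mgrid[:n_pix, :n_pix] - (n_pix - 1) / 2.0
    r = np.hypot(x, y)
    U = np.empty((n_pix * n_pix, n_ch))
    for j in range(n_ch):
        s = sigma0 * alpha ** j
        U[:, j] = (np.exp(-0.5 * (r / (q * s)) ** 2) - np.exp(-0.5 * (r / s) ** 2)).ravel()
    return U

def cho_detectability(signal_imgs, noise_imgs, U):
    """Train a channelized Hotelling observer and return its detectability index d'."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U   # channel outputs, signal present
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U     # channel outputs, signal absent
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))   # Hotelling template
    ts, tn = vs @ w, vn @ w                              # decision variables per image
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

# Toy example: a faint Gaussian blob in white noise, 200 images per class.
rng = np.random.default_rng(0)
U = dog_channels()
yy, xx = np.mgrid[:64, :64] - 31.5
blob = 0.8 * np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
noise = rng.normal(size=(400, 64, 64))
d_prime = cho_detectability(noise[:200] + blob, noise[200:], U)
```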