1,137 research outputs found

    Head tracked retroreflecting 3D display

    In this paper, we describe a single-user glasses-free (autostereoscopic) 3D display in which images from a pair of picoprojectors are projected onto a retroreflecting screen. Real images of the projector lenses formed at the viewer's eyes produce exit pupils that follow the eye positions as the projectors move laterally under the control of a head tracker. This provides the viewer with a comfortable degree of head movement. The retroreflecting screen, display hardware, infrared head tracker, and the means of stabilizing the image position on the screen are explained. The performance of the display in terms of crosstalk, resolution, image distortion, and other parameters is described. Finally, applications of this display type are suggested.
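
    A minimal sketch of the exit-pupil steering idea described above, assuming a hypothetical calibrated linear mapping between the lateral projector stage position and the exit-pupil position in the viewing field; the names and numbers are illustrative, not taken from the paper.

        def pupil_to_projector(eye_x_mm, gain=1.0, offset_mm=0.0):
            """Map a tracked eye x-coordinate to a projector stage x-coordinate.
            The gain and offset would come from a one-off optical calibration."""
            return gain * eye_x_mm + offset_mm

        def smooth(prev, new, alpha=0.3):
            """Exponential smoothing to suppress head-tracker jitter."""
            return (1 - alpha) * prev + alpha * new

        # toy control loop over a stream of head-tracker samples (millimetres)
        left_stage, right_stage = 0.0, 0.0
        for left_eye_x, right_eye_x in [(-32.0, 33.0), (-30.5, 34.5), (-28.0, 37.0)]:
            left_stage = smooth(left_stage, pupil_to_projector(left_eye_x))
            right_stage = smooth(right_stage, pupil_to_projector(right_eye_x))
            print(f"move projector stages to {left_stage:.1f} mm / {right_stage:.1f} mm")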

    Assessment of VR Technology and its Applications to Engineering Problems

    Virtual reality applications are making valuable contributions to the field of product realization. This paper presents an assessment of the hardware and software capabilities of VR technology needed to support a meaningful integration of VR applications into product life cycle analysis. Several examples of VR applications for the various stages of product life cycle engineering are presented as case studies. These case studies describe research results, fielded systems, technical issues, and implementation issues in the areas of virtual design, virtual manufacturing, virtual assembly, engineering analysis, visualization of analysis results, and collaborative virtual environments. Current issues and problems related to the creation, use, and implementation of virtual environments for engineering design, analysis, and manufacturing are also discussed.

    Focus 3D: Compressive Accommodation Display

    We present a glasses-free 3D display design with the potential to provide viewers with nearly correct accommodative depth cues, as well as motion parallax and binocular cues. Building on multilayer attenuator and directional backlight architectures, the proposed design achieves the high angular resolution needed for accommodation by placing spatial light modulators about a large lens: one conjugate to the viewer's eye, and one or more near the plane of the lens. Nonnegative tensor factorization is used to compress a high angular resolution light field into a set of masks that can be displayed on a pair of commodity LCD panels. By constraining the tensor factorization to preserve only those light rays seen by the viewer, we effectively steer narrow high-resolution viewing cones into the user's eyes, allowing binocular disparity, motion parallax, and the potential for nearly correct accommodation over a wide field of view. We verify the design experimentally by focusing a camera at different depths about a prototype display, establish formal upper bounds on the design's accommodation range and diffraction-limited performance, and discuss practical limitations that must be overcome to allow the device to be used with human observers.
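
    A minimal sketch of the compressive-factorization idea behind such multilayer displays: a flattened light field is approximated by a small set of nonnegative mask pairs shown on two attenuating layers. This uses a generic multiplicative-update NMF rather than the authors' weighted tensor solver, and the sizes and data are placeholders.

        import numpy as np

        def nmf(L, rank=3, iters=200, eps=1e-9):
            n, m = L.shape
            F = np.random.rand(n, rank)   # masks for layer 1 (one column per time-multiplexed frame)
            G = np.random.rand(rank, m)   # masks for layer 2 (one row per frame)
            for _ in range(iters):
                # multiplicative updates keep both factors nonnegative
                G *= (F.T @ L) / (F.T @ F @ G + eps)
                F *= (L @ G.T) / (F @ G @ G.T + eps)
            return F, G

        L = np.random.rand(64, 25)        # toy light field: 64 pixels x 25 views
        F, G = nmf(L, rank=3)
        print("relative error:", np.linalg.norm(L - F @ G) / np.linalg.norm(L))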

    INTERFACE DESIGN FOR A VIRTUAL REALITY-ENHANCED IMAGE-GUIDED SURGERY PLATFORM USING SURGEON-CONTROLLED VIEWING TECHNIQUES

    An initiative has been taken to develop a VR-guided cardiac interface that will display and deliver information without affecting the surgeons' natural workflow, while yielding better accuracy and task completion time than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes, and an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and the use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for stereoscopic and monoscopic biplanar display conditions, as well as differences between the three types of control modalities. One particular interface prototype demonstrated significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for prospective development iterations.

    HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics

    Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, which is an important property that is still not provided efficiently by competing technologies, such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed by either rendering multiple perspective images, then combining them into a stereogram [4], or simulating the optical interference and calculating the interference pattern [5]. Once computed, such a system dynamically visualizes the fringes with a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal potential acceleration possibilities that can overcome these limitations [6].
    In parallel to the development of computer graphics, and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains. Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues (perspective, binocular disparity, motion parallax, convergence, and accommodation) and theoretically can be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings for teaching, training, experimentation and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural color holographic silver halide emulsion with grain sizes of 8 nm is today's state-of-the-art [14]. Today, computer graphics and raster displays offer megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide terapixel resolution and are able to present an information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next few years, even if Moore's law continues to hold.
    Obviously, one has to make a decision between interactivity and quality when choosing a display technology for a particular application. While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies to create a powerful tool for science, industry and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e. extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e. interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
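
    As an illustration of the second fringe-computation approach mentioned above (simulating optical interference), the following sketch computes an interference pattern from a few object points and a tilted plane reference wave. The wavelength, pixel pitch and geometry are illustrative values only, not those of any system described in the chapter.

        import numpy as np

        wavelength = 532e-9                      # green laser, metres
        k = 2 * np.pi / wavelength
        pitch = 8e-6                             # hologram-plane sample spacing, metres
        N = 512                                  # hologram is N x N samples

        xs = (np.arange(N) - N / 2) * pitch
        X, Y = np.meshgrid(xs, xs)

        # a few object points (x, y, z) in front of the hologram plane, with amplitudes
        points = [((0.0, 0.0, 0.05), 1.0), ((2e-3, -1e-3, 0.07), 0.8)]

        field = np.zeros((N, N), dtype=complex)
        for (px, py, pz), amp in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += amp * np.exp(1j * k * r) / r            # spherical wave per point

        reference = np.exp(1j * k * X * np.sin(np.deg2rad(1.0)))  # tilted plane reference wave
        fringes = np.abs(field + reference) ** 2                  # recorded fringe intensity
        print(fringes.shape, fringes.min(), fringes.max())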

    Contributions to virtual reality

    The thesis contributes to three Virtual Reality areas. Visual perception: a calibration algorithm is proposed to estimate stereo projection parameters in head-mounted displays, so that correct shapes and distances can be perceived, and calibration and control procedures are proposed to obtain the desired accommodation stimuli at different virtual distances. Immersive scenarios: the thesis analyzes several use cases demanding varying degrees of immersion, and special, innovative visualization solutions are proposed to fulfil their requirements; contributions focus on machinery simulators, weather radar volumetric visualization and manual arc welding simulation. Ubiquitous visualization: contributions are presented for scenarios where users access interactive 3D applications remotely; the thesis follows the evolution of Web3D standards and technologies to propose original visualization solutions for volume rendering of weather radar data, e-learning on energy efficiency, virtual e-commerce and visual product configurators.

    Design and Evaluation of a Contact-Free Interface for Minimally Invasive Robotics Assisted Surgery

    Robotic-assisted minimally invasive surgery (RAMIS) is becoming increasingly common for many surgical procedures. These minimally invasive techniques offer the benefit of reduced patient recovery time, mortality and scarring compared to traditional open surgery. Teleoperated procedures have the added advantages of increased visualization and enhanced accuracy for the surgeon through tremor filtering and scaling down of hand motions. There are, however, still limitations in these techniques that prevent the widespread growth of the technology. In RAMIS, the surgeon is limited in their movement by the operating console or master device, and the cost of robotic surgery is often too high to justify for many procedures. Sterility issues arise as well, as the surgeon must be in contact with the master device, preventing a smooth transition between traditional and robotic modes of surgery. This thesis outlines the design and analysis of a novel method of interaction with the da Vinci Surgical Robot. Using the da Vinci Research Kit (DVRK), an open-source research platform for the da Vinci robot, an interface was developed for controlling the robotic arms with the Leap Motion Controller. This small device uses infrared LEDs and two cameras to detect the 3D positions of the hand and fingers. The hand data are mapped to the da Vinci surgical tools in real time, providing the surgeon with an intuitive method of controlling the instruments. An analysis of the tracking workspace is provided to give a solution to occlusion issues: multiple sensors are fused together in order to increase the range of trackable motion over a single sensor. Additional work involves replacing the current viewing screen with a virtual reality (VR) headset (Oculus Rift), to provide the surgeon with a stereoscopic 3D view of the surgical site without the need for a large monitor. The headset also provides the user with a more intuitive and natural method of positioning the camera during surgery, using the natural motions of the head. The large master console of the da Vinci system has been replaced with an inexpensive vision-based tracking system and VR headset, allowing the surgeon to operate the da Vinci Surgical Robot with more natural movements. A preliminary evaluation of the system is provided, with recommendations for future work.
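
    A minimal sketch of the kind of hand-to-tool mapping described above, combining motion scaling with a simple low-pass filter for tremor attenuation. The class and its interface are hypothetical and do not use the actual Leap Motion or DVRK APIs; values are illustrative.

        import numpy as np

        class HandToToolMapper:
            def __init__(self, motion_scale=0.2, alpha=0.25):
                self.motion_scale = motion_scale   # scale hand motion down (e.g. 5:1)
                self.alpha = alpha                 # low-pass factor for tremor filtering
                self.hand_origin = None
                self.tool_origin = np.zeros(3)
                self.filtered = np.zeros(3)

            def update(self, hand_pos_mm):
                hand = np.asarray(hand_pos_mm, dtype=float)
                if self.hand_origin is None:       # first sample defines the clutch origin
                    self.hand_origin = hand
                target = self.tool_origin + self.motion_scale * (hand - self.hand_origin)
                # exponential low-pass filter attenuates high-frequency tremor
                self.filtered = (1 - self.alpha) * self.filtered + self.alpha * target
                return self.filtered

        mapper = HandToToolMapper()
        for sample in [[10.0, 0.0, 5.0], [12.0, 1.0, 4.0], [11.0, 0.5, 4.5]]:
            print(mapper.update(sample))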

    Head Tracked Multi User Autostereoscopic 3D Display Investigations

    The research covered in this thesis encompasses a consideration of 3D television requirements and a survey of stereoscopic and autostereoscopic methods. This confirms that although there is a lot of activity in this area, very little of this work could be considered suitable for television. The principle of operation, the design of the components of the optical system, and the evaluation of two EU-funded glasses-free (autostereoscopic) displays, developed in the MUTED and HELIUM3D projects, are described. Four iterations of the display were built in MUTED, with the results of the first used in designing the second, third and fourth versions. The first three versions of the display use two 49-element arrays, one for the left eye and one for the right. A pattern of spots is projected onto the back of the arrays and these are converted into a series of collimated beams that form exit pupils after passing through the LCD. An exit pupil is a region in the viewing field where either a left or a right image is seen across the complete area of the screen; the positions of these are controlled by a multi-user head tracker. A laser projector was used in the first two versions and, although this projector operated on holographic principles in order to obtain the spot pattern required to produce the exit pupils, it should be noted that the images seen by the viewers are not produced holographically, so the overall display cannot be described as holographic. In the third version, the laser projector is replaced with a conventional LCOS projector to address the stability and brightness issues discovered in the second version. In 2009, true 120 Hz displays became available; this led to the development of a fourth version of the MUTED display that uses a 120 Hz projector and LCD to overcome the problems of projector instability, produce full-resolution images and simplify the display hardware. HELIUM3D, a multi-user autostereoscopic display based on laser scanning, is also described in this thesis. This display also operates by providing head-tracked exit pupils. It incorporates a red, green and blue (RGB) laser illumination source that illuminates a light engine. Light directions are controlled by a spatial light modulator and are directed to the users' eyes via a front screen assembly incorporating a novel Gabor superlens. The work described here covered the development of demonstrators that showed the principle of temporal multiplexing, and of a version of the final display with limited functionality; the reason for this was the delivery of the components required for a display with full functionality.
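
    A minimal sketch of the exit-pupil bookkeeping implied above: for each tracked viewer, a left-image and a right-image exit pupil must be steered toward the corresponding eye positions. The geometry helper and viewer data below are illustrative only and are not taken from the thesis.

        import math

        def exit_pupil_angles(viewers, screen_centre=(0.0, 0.0)):
            """Return, per tracked viewer, the horizontal steering angles (degrees)
            from the screen centre to the left and right eyes."""
            cx, cz = screen_centre
            schedule = []
            for name, (lx, rx, z) in viewers.items():
                left = math.degrees(math.atan2(lx - cx, z - cz))
                right = math.degrees(math.atan2(rx - cx, z - cz))
                schedule.append((name, "L", left))
                schedule.append((name, "R", right))
            return schedule

        # two viewers: (left-eye x, right-eye x, viewing distance), metres
        viewers = {"viewer1": (-0.28, -0.215, 1.5), "viewer2": (0.30, 0.365, 2.0)}
        for entry in exit_pupil_angles(viewers):
            print(entry)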

    Interactive interrogation of computational mixing data in a virtual environment

    Mixing processes are essential in the chemical process industries, including food processors, consumer products corporations, and pharmaceutical manufacturers. The increased use of computational fluid dynamics (CFD) during the design and analysis of static and stirred mixers has provided increased insight into mixing processes. However, the velocities, temperatures, and pressures are insufficient to completely quantify a mixing process. A more complete understanding of mixing processes is given by the spatial distribution of massless particles as they move through the flow field. This research seeks to combine surround-screen virtual reality and the tracing of massless particles into an interactive virtual environment, to explore the benefits these tools bring to engineers seeking to understand the behavior of fluids in mixing processes. Surround-screen virtual reality (VR) provides a means to immerse users in the mixing data, where they can collaboratively investigate the flow features displayed on a large-scale stereo-projection system. This work integrates the particle-tracing computational power of the HyperTrace™ commercial software application with new data interrogation techniques made possible by the use of virtual reality technology. Parallel processing to facilitate interactive placement of particles in the flow, volume data selection using a convex-hull approach, cutting-plane generation, and the integration of voice control and a tablet PC will be presented. Both a stirred mixing vessel and flow through a duct will be used as examples. Finally, the benefits of VR applied to mixing analysis are presented, along with some suggestions for future work in this area.
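
    A minimal sketch of tracing a massless particle through a steady velocity field with a fourth-order Runge-Kutta integrator; the analytic swirl field below stands in for interpolated CFD data and is not part of the HyperTrace software.

        import numpy as np

        def velocity(p):
            """Placeholder velocity field (a simple swirl about the z-axis)."""
            x, y, z = p
            return np.array([-y, x, 0.1])

        def trace(p0, dt=0.01, steps=500):
            path = [np.asarray(p0, dtype=float)]
            for _ in range(steps):
                p = path[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5 * dt * k1)
                k3 = velocity(p + 0.5 * dt * k2)
                k4 = velocity(p + dt * k3)
                path.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
            return np.array(path)

        path = trace([1.0, 0.0, 0.0])
        print(path.shape, path[-1])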

    The matrix revisited: A critical assessment of virtual reality technologies for modeling, simulation, and training

    A convergence of affordable hardware, current events, and decades of research has advanced virtual reality (VR) from the research lab into the commercial marketplace. Since its inception in the 1960s, and over the next three decades, the technology was portrayed as a rarely used, high-end novelty for special applications. Despite the high cost, applications have expanded into defense, education, manufacturing, and medicine. The promise of VR for entertainment arose in the early 1990s, and by 2016 several consumer VR platforms had been released. With VR now accessible in the home, and with the isolationist lifestyle adopted during the COVID-19 global pandemic, VR is viewed as a potential tool to enhance remote education. Drawing upon over 17 years of experience across numerous VR applications, this dissertation examines the optimal use of VR technologies in the areas of visualization, simulation, training, education, art, and entertainment. It will be demonstrated that VR is well suited for education and training applications, with modest advantages in simulation. Using this context, the case is made that VR can play a pivotal role in the future of education and training in a globally connected world.