    Visual Distortions in 360-degree Videos

    Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user, wearing a head-mounted display, can interactively navigate through a scene with three degrees of freedom. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and on the immersive experience at large is still unknown (and thus remains an open research topic), this review proposes a taxonomy of the visual distortions that can be encountered in 360° signals and identifies their underlying causes in the end-to-end 360° content distribution pipeline. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.
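    As one concrete example of the projection-induced geometric distortions such a taxonomy covers, the sketch below (a minimal illustration assuming NumPy, not code from the paper) maps spherical coordinates onto the equirectangular image plane, where content near the poles is strongly oversampled:

        import numpy as np

        def sphere_to_equirect(lon, lat, width, height):
            """Map spherical coordinates (radians) to equirectangular pixel
            coordinates: lon in [-pi, pi], lat in [-pi/2, pi/2]."""
            u = (lon / (2.0 * np.pi) + 0.5) * width
            v = (0.5 - lat / np.pi) * height
            return u, v

        # Near the poles (|lat| -> pi/2) a fixed solid angle is stretched
        # across the full image width -- one classic 360-degree distortion.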

    Casimir probe based upon metallized high Q SiN nanomembrane resonator

    We present the instrumentation and measurement scheme of a new Casimir force probe that bridges Casimir force measurements at the microscale and macroscale. A metallized high-Q silicon nitride nanomembrane resonator is employed as a sensitive force probe. The high tensile stress present in the nanomembrane not only enhances the quality factor but also maintains high flatness over a large area, serving as the bottom electrode in a sphere-plane configuration. A fiber interferometer is used to read out the oscillation of the nanomembrane, and a phase-locked loop scheme is applied to track the change of the resonance frequency. Because of the high quality factor of the nanomembrane and the high stability of the setup, a fractional frequency resolution down to $2\times10^{-9}$ and a corresponding force gradient resolution of 3 μN/m are achieved. Besides sensitive measurement of the Casimir force, our measurement technique simultaneously offers Kelvin probe measurement capability that allows in situ imaging of surface potentials.
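    For context, these two resolution figures are linked by the standard small-shift relation for a resonant force probe (a textbook relation, not spelled out in the abstract): a force gradient detunes the resonator as

        \[
          \frac{\delta f}{f_0} \;\approx\; -\,\frac{1}{2k}\,\frac{\partial F}{\partial z}
          \qquad\Longrightarrow\qquad
          \delta\!\left(\frac{\partial F}{\partial z}\right) \;=\; 2k\,\frac{\delta f}{f_0},
        \]

    so a fractional frequency resolution of $2\times10^{-9}$ together with a 3 μN/m gradient resolution would correspond to an effective stiffness of roughly k ≈ 750 N/m; that stiffness value is inferred here, not stated in the abstract.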

    From Capture to Display: A Survey on Volumetric Video

    Volumetric video, which offers immersive viewing experiences, is gaining increasing prominence. With its six degrees of freedom, it provides viewers with greater immersion and interactivity compared to traditional videos. Despite their potential, volumetric video services pose significant challenges. This survey conducts a comprehensive review of the existing literature on volumetric video. We first provide a general framework of volumetric video services, followed by a discussion of prerequisites for volumetric video, encompassing representations, open datasets, and quality assessment metrics. We then delve into the current methodologies for each stage of the volumetric video service pipeline, detailing capturing, compression, transmission, rendering, and display techniques. Lastly, we explore various applications enabled by this pioneering technology and present an array of research challenges and opportunities in the domain of volumetric video services. This survey aspires to provide a holistic understanding of this burgeoning field and shed light on potential future research trajectories, aiming to bring the vision of volumetric video to fruition.
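    As a concrete taste of the representation and compression stages such a pipeline includes, here is a minimal voxel-grid downsampling sketch for point-cloud content (assuming NumPy; an illustrative preprocessing step, not a method from the survey):

        import numpy as np

        def voxel_downsample(points, voxel_size):
            """Collapse an (N, 3) point cloud onto a voxel grid, keeping one
            centroid per occupied voxel -- a common step before point-cloud
            compression in volumetric pipelines."""
            keys = np.floor(points / voxel_size).astype(np.int64)
            _, inverse = np.unique(keys, axis=0, return_inverse=True)
            counts = np.bincount(inverse).astype(float)
            out = np.empty((counts.size, 3))
            for dim in range(3):
                out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
            return out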

    Substrate curvature measurement system

    Industry often requires, in a variety of processes, the measurement of deformation induced in a solid object by mechanical stress. One such process is the manufacture of very large scale integrated (VLSI) circuits. During this process a substrate is coated with a thin film to protect the microcircuitry formed on the substrate. Due to the difference in thermal expansion between film and substrate, mechanical stresses can develop which may lead to deformation of the substrate surface. Any deformation of the substrate surface will result in mechanical stress in the interconnections of the circuitry, which could severely impair the operation of the circuit. Different measurement techniques are available to measure the spherical deformation of substrates, the latest known technique being a combination of laser beam deflection and light scattering techniques. Many of the existing techniques reveal shortcomings, one of which is the lack of a 2-dimensional scanning capability with a minimum of moving components. Another shortcoming is the inability of previous techniques to calculate the relative error that the measuring technique itself introduces into the results. The aim of this study has been to develop an electro-optical system embodying the successful principles of these techniques in a system which eliminates these shortcomings and produces results exceeding those previously recorded. In this work, we have concentrated on the development of a system for in situ, real-time monitoring of mechanical stresses in a solid. The system includes the minimization of system-induced errors through the calculation of error voltage gains, and the introduction of a 2-dimensional scanning capability to determine the true position of the laser beam without prior knowledge of the initial substrate curvature. A four-quadrant position sensitive detector (PSD) with the relevant LabVIEW software and programs was also introduced into the system.
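    For reference, curvature measurements of this kind are conventionally converted to film stress with the Stoney relation (standard in the thin-film stress literature, though not derived in this abstract):

        \[
          \sigma_f \;=\; \frac{E_s\,h_s^{2}}{6\,(1-\nu_s)\,h_f\,R},
        \]

    where E_s and ν_s are the substrate's Young's modulus and Poisson ratio, h_s and h_f are the substrate and film thicknesses, and R is the measured radius of curvature of the substrate.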

    Investigation of a Space Data Technology Facility (SDTF) for Spacelab

    The Space Data Technology Facility (SDTF) would have the role of supporting a wide range of data technology related demonstrations which might be performed on Spacelab. The SDTF design is incorporated primarily in one single-width standardized Spacelab rack. It consists of various display, control, and data handling components together with interfaces to the demonstration-specific equipment and to Spacelab. To arrive at this design, a wide range of data related technologies and potential demonstrations were also investigated. One demonstration, concerned with online image rectification and registration, was developed in some depth.

    Exploratory Visualization of Astronomical Data on Ultra-high-resolution Wall Displays

    Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well-suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays, that enables astronomers to navigate in large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.
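    For readers unfamiliar with the underlying data format, the sketch below shows the kind of FITS handling such an application builds on (assuming the astropy library; the file name is hypothetical):

        from astropy.io import fits
        from astropy.wcs import WCS

        # Hypothetical file; any FITS image with WCS keywords works.
        with fits.open("m31_field.fits") as hdul:
            image = hdul[0].data        # 2-D pixel array
            wcs = WCS(hdul[0].header)   # world coordinate system

        # Convert a pixel position to sky coordinates (RA/Dec, degrees).
        ra, dec = wcs.wcs_pix2world(512.0, 512.0, 0)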

    Efficient rendering for three-dimensional displays

    This thesis explores more efficient methods for visualizing point data sets on three-dimensional (3D) displays. Point data sets are used in many scientific applications, e.g. cosmological simulations. Visualizing these data sets in 3D is desirable because it can more readily reveal structure and unknown phenomena. However, cutting-edge scientific point data sets are very large, and producing/rendering even a single image is expensive. Furthermore, current literature suggests that the ideal number of views for 3D (multiview) displays can be in the hundreds, which compounds the costs. The accepted notion that many views are required for 3D displays is challenged by carrying out a novel human factors trial. The results suggest that humans are actually surprisingly insensitive to the number of viewpoints with regard to their task performance, when occlusion in the scene is not a dominant factor. Existing stereoscopic rendering algorithms can have high set-up costs, which limits their use, and none are tuned for uncorrelated 3D point rendering. This thesis shows that it is possible to improve rendering speeds for a low number of views by perspective reprojection. The novelty of the approach lies in delaying the reprojection and generation of the viewpoints until the fragment stage of the pipeline and streamlining the rendering pipeline for points only. Theoretical analysis suggests a fragment reprojection scheme will render at least 2.8 times faster than naïvely re-rendering the scene from multiple viewpoints. Building upon the fragment reprojection technique, further rendering performance is shown to be possible (at the cost of some rendering accuracy) by restricting the amount of reprojection required according to the stereoscopic resolution of the display. A significant benefit is that the scene depth can be mapped arbitrarily to the perceived depth range of the display at no extra cost compared to a single region mapping approach. Using an average case study (rendering 500k points for a 9-view high-definition 3D display), theoretical analysis suggests that this new approach is capable of twice the performance gains of simply reprojecting every single fragment, and quantitative measures show the algorithm to be 5 times faster than a naïve rendering approach. Further detailed quantitative results, under varying scenarios, are provided and discussed.
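    To make the reprojection idea concrete, here is a schematic NumPy sketch of shifting a fragment rendered in a central view into neighbouring views by a depth-dependent horizontal disparity (the thesis performs this in the GPU fragment stage; the disparity model and parameters below are illustrative assumptions):

        import numpy as np

        def reproject_fragment(x, y, depth, n_views, eye_sep, focal):
            """Reuse one rendered fragment across n_views by applying a
            per-view horizontal disparity instead of re-rendering the
            point set once per viewpoint."""
            view_idx = np.arange(n_views) - (n_views - 1) / 2.0  # signed index
            disparity = eye_sep * focal * view_idx / depth       # pinhole parallax
            return x + disparity, np.full(n_views, y)            # y unchanged

        # Example: spread one HD fragment across a 9-view display.
        xs, ys = reproject_fragment(960.0, 540.0, depth=2.5,
                                    n_views=9, eye_sep=0.06, focal=700.0)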

    Collaborating on Affinity Diagrams Using Large Displays

    Gathering and understanding user requirements is an essential part of design. Techniques like affinity diagramming are useful for gathering and understanding user data, but they have shortcomings, such as the difficulty of preserving the diagram after its creation, problems during the process itself (such as searching for notes), and loss of shared awareness. We propose an early prototype that addresses these problems in the process of creating an affinity diagram and enhances it using a large-screen display in combination with individual PDAs.