
    Towards remote pixelless displays

    Next-generation displays have to resolve major design challenges to provide frictionless user experiences. To address these issues, we introduce two concepts named “Beaming Displays” and “Patch Scanning Displays”.

    Learned holographic light transport: Invited

    Computer-generated holography algorithms often fall short in matching simulations with results from a physical holographic display. Our work addresses this mismatch by learning the holographic light transport in holographic displays. Using a camera and a holographic display, we capture the image reconstructions of optimized holograms that rely on ideal simulations to generate a dataset. Inspired by the ideal simulations, we learn a complex-valued convolution kernel that can propagate given holograms to captured photographs in our dataset. Our method can dramatically improve simulation accuracy and image quality in holographic displays while paving the way for physically informed learning approaches.
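
    As a rough illustration of the learning step, the sketch below assumes PyTorch, a toy synthetic dataset, and a single learned complex kernel applied in Fourier space; it is a minimal sketch of the idea, not the authors' released implementation.

```python
# Minimal sketch: learn a complex-valued propagation kernel from
# (hologram phase, captured photograph) pairs. Shapes and the dataset
# are toy stand-ins for illustration.
import torch

resolution = (512, 512)
dataset = [(torch.rand(*resolution) * 2 * torch.pi,    # phase-only hologram
            torch.rand(*resolution))                   # captured photograph
           for _ in range(4)]                          # toy stand-in for captured pairs

# Learnable complex kernel; in practice it would be initialized from an ideal
# (e.g. angular spectrum) propagation kernel rather than random noise.
kernel = torch.randn(*resolution, dtype=torch.complex64, requires_grad=True)
optimizer = torch.optim.Adam([kernel], lr=2e-3)

def propagate(phase, kernel):
    """Propagate a phase-only hologram by a convolution applied in Fourier space."""
    field = torch.exp(1j * phase)                      # complex field leaving the SLM
    recon = torch.fft.ifft2(torch.fft.fft2(field) * torch.fft.fft2(kernel))
    return recon.abs() ** 2                            # intensity seen by the camera

for phase, photograph in dataset:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(propagate(phase, kernel), photograph)
    loss.backward()
    optimizer.step()
```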

    Beaming Displays: Towards Displayless Augmented Reality Near-eye Displays

    Augmented Reality (AR) near-eye displays promise new human-computer interactions that can positively impact people’s lives. However, the current generation of AR near-eye displays fails to provide ergonomic solutions that counter design trade-offs such as form factor, weight, computational requirements, and battery life. Unfortunately, these design trade-offs are significant obstacles on the path towards an all-day usable near-eye display. We argue that a new way of designing AR near-eye displays that removes active components from the near-eye display could be key to solving these trade-off related issues. We propose the beaming display, a new near-eye display system that uses a projector and an all-passive wearable headset. In our proposal, we project images from a distance to a passive wearable near-eye display while tracking the location of that near-eye display. This presentation covers the latest version of our prototype and discusses potential future directions for beaming displays.

    Unrolled primal-dual networks for lensless cameras

    Conventional models for lensless imaging assume that each measurement results from convolving a given scene with a single, experimentally measured point-spread function. These models fail to simulate lensless cameras truthfully, as they do not account for optical aberrations or scenes with depth variations. Our work shows that learning a supervised primal-dual reconstruction method results in image quality matching the state of the art in the literature without demanding a large network capacity. We show that embedding learnable forward and adjoint models improves the reconstruction quality of lensless images (+5 dB PSNR) compared to works that assume a fixed point-spread function.
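
    A minimal sketch of one unrolled primal-dual stage follows, assuming PyTorch and using plain convolutions as stand-ins for the learnable forward and adjoint operators; layer sizes, step sizes, and the number of stages are illustrative, not the paper's architecture.

```python
# Minimal sketch of an unrolled primal-dual (Chambolle-Pock style) reconstruction
# with learnable forward/adjoint operators instead of a fixed point-spread function.
import torch
import torch.nn as nn

class PrimalDualStage(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        # Learnable stand-ins for the forward model A and its adjoint A^T.
        self.forward_op = nn.Conv2d(channels, channels, 11, padding=5, bias=False)
        self.adjoint_op = nn.Conv2d(channels, channels, 11, padding=5, bias=False)
        self.sigma = nn.Parameter(torch.tensor(0.5))   # dual step size
        self.tau = nn.Parameter(torch.tensor(0.5))     # primal step size

    def forward(self, x, y, measurement):
        y = y + self.sigma * (self.forward_op(x) - measurement)   # dual update
        x = x - self.tau * self.adjoint_op(y)                     # primal update
        return x, y

# Unrolling a handful of stages forms the reconstruction network.
stages = nn.ModuleList([PrimalDualStage() for _ in range(5)])
x = torch.zeros(1, 1, 256, 256)            # current image estimate
y = torch.zeros(1, 1, 256, 256)            # dual variable
measurement = torch.rand(1, 1, 256, 256)   # toy lensless measurement
for stage in stages:
    x, y = stage(x, y, measurement)
```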

    SensiCut: Material-Aware Laser Cutting Using Speckle Sensing and Deep Learning

    Laser cutter users face difficulties distinguishing between visually similar materials. This can lead to problems, such as using the wrong power/speed settings or accidentally cutting hazardous materials. To support users, we present SensiCut, an integrated material sensing platform for laser cutters. SensiCut enables material awareness beyond what users are able to see and reliably differentiates among similar-looking material types. It achieves this by detecting materials' surface structures using speckle sensing and deep learning. SensiCut consists of a compact hardware add-on for laser cutters and a user interface that integrates material sensing into the laser cutting workflow. In addition to improving the traditional workflow and its safety, SensiCut enables new applications, such as automatically partitioning designs when engraving on multi-material objects or adjusting their geometry based on the kerf of the identified material. We evaluate SensiCut's accuracy for different types of materials under different sheet orientations and illumination conditions.
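
    For illustration, a minimal classification sketch follows, assuming PyTorch; the backbone, input size, and number of material classes are placeholders rather than SensiCut's actual network.

```python
# Minimal sketch: classify a laser-speckle image into a material type with a small CNN.
import torch
import torch.nn as nn

NUM_MATERIALS = 30                         # hypothetical number of material classes

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_MATERIALS),
)

speckle = torch.rand(1, 1, 224, 224)       # one grayscale speckle image from the add-on camera
logits = model(speckle)
print(logits.argmax(dim=1))                # index of the most likely material
```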

    Investigation of heavy-heavy pseudoscalar mesons in thermal QCD Sum Rules

    We investigate the mass and decay constant of the heavy-heavy pseudoscalar $B_c$, $\eta_c$, and $\eta_b$ mesons in the framework of finite-temperature QCD sum rules. The annihilation and scattering parts of the spectral density are calculated in the lowest order of perturbation theory. Taking into account the additional operators arising at finite temperature, the nonperturbative corrections are also evaluated. The masses and decay constants remain unchanged below $T \cong 100~\mathrm{MeV}$, but beyond this point they start to diminish with increasing temperature. At the critical (deconfinement) temperature, the decay constants fall to approximately 35% of their vacuum values, while the masses decrease by about 7%, 12%, and 2% for the $B_c$, $\eta_c$, and $\eta_b$ states, respectively. The results at zero temperature are in good agreement with the existing experimental values as well as with predictions of other nonperturbative approaches.
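
    For orientation only, the schematic form of such a Borel-transformed pseudoscalar sum rule at finite temperature is sketched below; the notation ($M^2$ the Borel parameter, $s_0(T)$ the continuum threshold, $\rho(s,T)$ the perturbative spectral density, $\widehat{B}\,\Pi^{\mathrm{nonpert}}$ the nonperturbative part) follows standard sum-rule conventions, and this generic expression is not reproduced from the paper.

```latex
% Schematic hadronic (left) and OPE (right) sides of a Borel-transformed
% pseudoscalar sum rule at finite temperature T; masses, decay constants,
% and the continuum threshold acquire a temperature dependence.
\begin{equation}
  \frac{f_P^2(T)\, m_P^4(T)}{(m_{Q_1}+m_{Q_2})^2}\, e^{-m_P^2(T)/M^2}
  \;=\;
  \int_{s_{\min}}^{s_0(T)} \mathrm{d}s\; \rho(s,T)\, e^{-s/M^2}
  \;+\; \widehat{B}\,\Pi^{\mathrm{nonpert}}(T)
\end{equation}
```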

    Perceptually guided Computer-Generated Holography

    Computer-Generated Holography (CGH) promises to deliver genuine, high-quality visuals at any depth. We argue that combining CGH and perceptually guided graphics can soon lead to practical holographic display systems that deliver perceptually realistic images. We propose a new CGH method called metameric varifocal holograms. Our CGH method generates images only at a user’s focus plane while the displayed images remain statistically correct and indistinguishable from actual targets across peripheral vision (metamers). Thus, a user observing our holograms perceives a high-quality visual at their gaze location, while the image in the remaining peripheral parts follows a statistically correct trend. We demonstrate our differentiable CGH optimization pipeline on modern GPUs, and we support our findings with a display prototype. Our method will pave the way towards realistic visuals free from classical CGH problems, such as speckle noise or poor visual quality.
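
    The differentiable optimization can be pictured with the minimal sketch below, assuming PyTorch, a toy single-Fourier-transform propagation, and a plain pixel-wise loss in place of the paper's metameric loss.

```python
# Minimal sketch of a differentiable CGH loop: optimize a phase-only hologram so
# its propagated image matches a target at the focus plane.
import torch

phase = (torch.rand(512, 512) * 2 * torch.pi).requires_grad_(True)   # phase-only hologram
target = torch.rand(512, 512)                                         # hypothetical target image
optimizer = torch.optim.Adam([phase], lr=0.1)

def propagate_to_focus(phase):
    """Toy propagation: a single Fourier transform standing in for a full model."""
    field = torch.exp(1j * phase)
    return torch.fft.fft2(field).abs() ** 2

for _ in range(100):
    optimizer.zero_grad()
    image = propagate_to_focus(phase)
    image = image / image.mean()                     # crude brightness normalization
    # The paper replaces this pixel-wise loss with a metameric (peripheral statistics)
    # loss so that only the fovea needs to match the target exactly.
    loss = torch.nn.functional.mse_loss(image, target)
    loss.backward()
    optimizer.step()
```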

    A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution

    Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on spatial resolution or temporal resolution (refresh rate). A common practice is to keep either the resolution or the refresh rate constant and dynamically control the other variable. But this strategy is suboptimal when the velocity of displayed content varies. To find the best trade-off between spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object velocity and the predictability of motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion, and blur. Blur is modeled as a combined effect of eye motion, finite refresh rate, and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm that adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
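
    The control loop can be pictured with the short sketch below (Python); the quality function is a deliberately crude placeholder rather than the paper's fitted perceptual model, and the supported rates and budget are assumptions.

```python
# Minimal sketch of motion-adaptive refresh-rate selection: among the rates the
# rendering budget allows, pick the one with the highest predicted motion quality.
SUPPORTED_RATES_HZ = [50, 60, 100, 120, 165]

def predicted_quality(velocity_deg_per_s: float, refresh_hz: float) -> float:
    """Placeholder trade-off: faster motion suffers more judder and hold-type blur
    at low refresh rates, so predicted quality drops with velocity / refresh rate."""
    return 1.0 / (1.0 + velocity_deg_per_s / refresh_hz)

def choose_refresh_rate(velocity_deg_per_s: float, budget_hz: float) -> int:
    affordable = [r for r in SUPPORTED_RATES_HZ if r <= budget_hz]
    return max(affordable, key=lambda r: predicted_quality(velocity_deg_per_s, r))

print(choose_refresh_rate(velocity_deg_per_s=30.0, budget_hz=120.0))   # -> 120
```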

    Beyond blur: real-time ventral metamers for foveated rendering

    To peripheral vision, a pair of physically different images can look the same. Such pairs are metamers relative to each other, just as physically different spectra of light are perceived as the same color. We propose a real-time method to compute such ventral metamers for foveated rendering where, in particular for near-eye displays, the largest part of the framebuffer maps to the periphery. This improves in quality over state-of-the-art foveation methods, which blur the periphery. Work in Vision Science has established that peripheral stimuli are ventral metamers if their statistics are similar. Existing methods, however, require a costly optimization process to find such metamers. To this end, we propose a novel type of statistics particularly well-suited for practical real-time rendering: smooth moments of steerable filter responses. These can be extracted from images in time constant in the number of pixels and in parallel over all pixels using a GPU. Further, we show that they can be compressed effectively and transmitted at low bandwidth. Finally, computing realizations of those statistics can again be performed in constant time and in parallel. This enables a new level of quality for foveated applications such as remote rendering, level-of-detail, and Monte-Carlo denoising. In a user study, we show how human task performance increases and foveation artifacts are less suspicious when using our method compared to common blurring.
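
    A minimal sketch of the kind of statistics involved follows, assuming NumPy/SciPy, with a Sobel-of-Gaussian response standing in for one steerable filter band and a fixed Gaussian window standing in for eccentricity-dependent pooling.

```python
# Minimal sketch: pool smooth local moments (mean and variance) of an oriented
# band-pass response under a Gaussian window.
import numpy as np
from scipy import ndimage

image = np.random.rand(512, 512).astype(np.float32)    # stand-in for a rendered frame

# Crude oriented band-pass response standing in for one steerable filter band.
response = ndimage.sobel(ndimage.gaussian_filter(image, sigma=2.0), axis=1)

# Smooth moments: local mean and variance of the response under a Gaussian pooling window.
pooled_mean = ndimage.gaussian_filter(response, sigma=8.0)
pooled_var = ndimage.gaussian_filter(response ** 2, sigma=8.0) - pooled_mean ** 2

# Any image reproducing these pooled statistics in the periphery is (approximately)
# a ventral metamer of the original; the paper synthesizes such realizations on the GPU.
```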