
    Importance driven environment map sampling

    In this paper we present an automatic and efficient method for supporting Image Based Lighting (IBL) in bidirectional methods, which improves both the sampling of the environment and the detection and sampling of important regions of the scene, such as windows and doors. These often have a small area relative to that of the entire scene, so paths which pass through them are generated with low probability. The method proposed in this paper improves on this by taking view importance into account and modifying the lighting distribution to use light transport information. This also automatically constructs a sampling distribution in locations which are relevant to the camera position, thereby improving sampling. Results are presented when our method is applied to bidirectional rendering techniques; in particular, we show results for Bidirectional Path Tracing, Metropolis Light Transport and Progressive Photon Mapping. Efficiency results demonstrate speed-ups of orders of magnitude (depending on the rendering method used) when compared to other methods.
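    The mechanism described above amounts to drawing light samples from a distribution that weights each environment-map texel by its radiance and by an estimate of how relevant it is to the camera. The Python sketch below illustrates that general idea only; it is not the paper's algorithm, and the `importance` map, the luminance weights and the helper names are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's implementation): luminance-weighted
# environment-map sampling, optionally reweighted by a per-texel "importance"
# map, e.g. one accumulated from camera-traced paths.
import numpy as np

def build_env_cdf(env_radiance, importance=None):
    """env_radiance: (H, W, 3) HDR environment map.
    importance: optional (H, W) weights, e.g. view-importance estimates (assumed)."""
    lum = env_radiance @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
    weights = lum * importance if importance is not None else lum
    pdf = weights / weights.sum()
    cdf = np.cumsum(pdf.ravel())
    return pdf, cdf

def sample_env(pdf, cdf, shape, u):
    """Draw one texel index proportional to the weights; u is uniform in [0, 1)."""
    idx = np.searchsorted(cdf, u)
    y, x = np.unravel_index(idx, shape)
    return (y, x), pdf[y, x]          # sampled texel and its probability

# usage with a stand-in environment map
env = np.random.rand(64, 128, 3).astype(np.float32)
pdf, cdf = build_env_cdf(env)
texel, p = sample_env(pdf, cdf, env.shape[:2], np.random.rand())
```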

    Deep Dynamic Cloud Lighting

    Sky illumination is a core source of lighting in rendering, and a substantial body of work has been developed to simulate lighting from clear skies. However, in reality, clouds substantially alter the appearance of the sky and subsequently change the scene's illumination. While there have been recent advances in developing sky models which include clouds, these all neglect cloud movement, which is a crucial component of cloudy sky appearance. In any sort of video or interactive environment, it can be expected that clouds will move, sometimes quite substantially in a short period of time. Our work proposes a solution to this which enables whole-sky dynamic cloud synthesis for the first time. We achieve this by proposing a multi-timescale sky appearance model which learns to predict the sky illumination over various timescales, and which can be used to add dynamism to previous static, cloudy sky lighting approaches. Project page: https://pinarsatilmis.github.io/DDC

    A quantum algorithm for ray casting using an orthographic camera

    Quantum computing has the potential to provide solutions to many problems which are challenging or out of reach for classical computers. There are several problems in rendering which are amenable to being solved on quantum computers, but these have yet to be demonstrated in practice. This work takes a first step in applying quantum computing to one of the most fundamental operations in rendering: ray casting. This technique computes visibility between two points in a 3D model of the world which is described by a collection of geometric primitives. The algorithm returns, for a given ray, which primitive it intersects closest to its origin. Without a spatial acceleration structure, the classical complexity for this operation is O(N). In this paper, we propose an implementation of Grover's Algorithm (a quantum search algorithm) for ray casting. This provides a quadratic speed-up, allowing visibility evaluation for unstructured primitives in O(√N). However, due to technological limitations associated with current quantum computers, in this work the geometrical setup is limited to rectangles and parallel rays (orthographic projection). This work was partially financed by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia – within project UID/EEA/50014/2019. This work was partially funded by SmartEGOV/NORTE-01-0145-FEDER-000037, supported by Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF).
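    For intuition about where the quadratic speed-up comes from, the plain-Python sketch below (not a quantum implementation, and not the paper's circuit) phrases orthographic-ray/rectangle visibility as an oracle query and contrasts the O(N) classical scan with the ~(π/4)√N oracle calls Grover's algorithm needs. The rectangle encoding and helper names are assumptions for illustration.

```python
# With parallel (orthographic) rays along +z and axis-aligned rectangles,
# visibility is a search problem over N primitives: classically O(N) oracle
# queries, O(sqrt(N)) with Grover's algorithm.
import math

def hits_rectangle(ray_xy, rect):
    """Oracle: does the ray at (x, y) hit rect = (xmin, xmax, ymin, ymax, z)?"""
    x, y = ray_xy
    xmin, xmax, ymin, ymax, _ = rect
    return xmin <= x <= xmax and ymin <= y <= ymax

def classical_closest_hit(ray_xy, rects):
    """O(N) scan: return the intersected rectangle closest to the ray origin."""
    hits = [r for r in rects if hits_rectangle(ray_xy, r)]
    return min(hits, key=lambda r: r[4]) if hits else None

def grover_iterations(n_items, n_marked=1):
    """Near-optimal number of Grover iterations, ~ (pi/4) * sqrt(N / M)."""
    return max(1, round(math.pi / 4 * math.sqrt(n_items / n_marked)))

# e.g. 1024 primitives with a single hit: ~25 oracle queries instead of 1024
print(grover_iterations(1024))
```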

    An evaluation of power transfer functions for HDR video compression

    High dynamic range (HDR) imaging enables the full range of light in a scene to be captured, transmitted and displayed. However, uncompressed 32-bit HDR is four times larger than traditional low dynamic range (LDR) imagery. If HDR is to fulfil its potential for use in live broadcasts and interactive remote gaming, fast, efficient compression is necessary for HDR video to be manageable on existing communications infrastructure. A number of methods have been put forward for HDR video compression. However, these can be relatively complex and frequently require the use of multiple video streams. In this paper, we propose the use of a straightforward Power Transfer Function (PTF) as a practical, computationally fast HDR video compression solution. The use of PTF is presented and evaluated against four other HDR video compression methods. An objective evaluation shows that PTF exhibits improved quality at a range of bit rates and, due to its straightforward nature, is highly suited to real-time HDR video applications.
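    To make the idea of a power transfer function concrete, the sketch below shows a generic power-curve encode/decode pair that squeezes scene-referred values into a fixed bit depth. The exponent, bit depth and normalisation are illustrative assumptions, not the parameters evaluated in the paper.

```python
# Hedged sketch of a power transfer function (PTF) for HDR compression:
# normalise scene-referred values, apply a power curve so a limited bit depth
# spends more codes on dark values, then quantise.  Settings are illustrative.
import numpy as np

def ptf_encode(hdr, peak, gamma=4.0, bits=10):
    """Map HDR values in [0, peak] to integer codes in [0, 2**bits - 1]."""
    v = np.clip(hdr / peak, 0.0, 1.0) ** (1.0 / gamma)
    return np.round(v * (2**bits - 1)).astype(np.uint16)

def ptf_decode(codes, peak, gamma=4.0, bits=10):
    """Invert the power curve to recover approximate HDR values."""
    v = codes.astype(np.float64) / (2**bits - 1)
    return (v ** gamma) * peak

hdr = np.array([0.01, 1.0, 100.0, 4000.0])           # stand-in luminances (nits)
codes = ptf_encode(hdr, peak=4000.0)
print(codes, ptf_decode(codes, peak=4000.0))
```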

    A study on user preference of high dynamic range over low dynamic range video

    The increased interest in High Dynamic Range (HDR) video over existing Low Dynamic Range (LDR) video during the last decade or so was primarily due to its inherent capability to capture, store and display the full range of real-world lighting visible to the human eye with increased precision. This has led to an inherent assumption that HDR video would be preferred by the end-user over LDR video due to the more immersive and realistic visual experience provided by HDR. This assumption has led to a considerable body of research into efficient capture, processing, storage and display of HDR video. Although this is beneficial for scientific research and industrial purposes, very little research has been conducted to test the veracity of this assumption. In this paper, we conduct two subjective studies, by means of a ranking-based and a rating-based experiment, in which 60 participants in total, 30 in each experiment, were tasked to rank and rate several reference HDR video scenes, along with three mapped LDR versions of each scene, on an HDR display, in order of their viewing preference. Results suggest that, given the option, end-users prefer the HDR representation of a scene over its LDR counterpart.

    Optimal exposure compression for high dynamic range content

    High dynamic range (HDR) imaging has become one of the foremost imaging methods, capable of capturing and displaying the full range of lighting perceived by the human visual system in the real world. A number of HDR compression methods for both images and video have been developed to handle HDR data, but none of them has yet been adopted as the method of choice. In particular, the backwards-compatible methods, which always maintain a stream/image that allows part of the content to be viewed on conventional displays, make use of tone mapping operators that were developed to view HDR images on traditional displays. There are a large number of tone mappers, none of which is considered the best, as judgements of the images produced are subjective. This work presents an alternative to tone mapping-based HDR content compression by identifying the single exposure that reproduces the most information from the original HDR image. This single exposure can be adapted to fit within the bit depth of any traditional encoder. Any additional information that may be lost is stored as a residual. Results demonstrate that quality is maintained as well as, and better than, with other traditional methods. Furthermore, the presented method is backwards-compatible, straightforward to implement, fast, and does not require choosing tone mappers or settings.
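    The sketch below illustrates the shape of such a single-exposure-plus-residual scheme only; the exposure-selection criterion (counting well-exposed pixels), the gamma curve and the residual definition are assumptions for illustration, not the paper's optimisation.

```python
# Illustrative sketch: pick one exposure of an HDR image that keeps the
# largest fraction of pixels inside the displayable range, quantise it to
# 8 bits as the backwards-compatible layer, and keep what is lost as a
# residual.  Criterion and residual definition are assumed, not the paper's.
import numpy as np

def expose(hdr, exposure, gamma=2.2):
    """Simple virtual exposure: scale, gamma-encode, clip to [0, 1]."""
    return np.clip((hdr * exposure) ** (1.0 / gamma), 0.0, 1.0)

def best_exposure(hdr, candidates):
    """Choose the exposure that maximises the count of well-exposed pixels."""
    def score(e):
        ldr = expose(hdr, e)
        return np.count_nonzero((ldr > 0.05) & (ldr < 0.95))
    return max(candidates, key=score)

hdr = np.random.rand(64, 64, 3) * 100.0                     # stand-in HDR image
e = best_exposure(hdr, candidates=2.0 ** np.arange(-8, 9))
ldr8 = np.round(expose(hdr, e) * 255).astype(np.uint8)      # backwards-compatible layer
residual = hdr - (ldr8 / 255.0) ** 2.2 / e                  # information lost by the single exposure
```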

    Uniform Color Space-Based High Dynamic Range Video Compression

    Recently, there has been significant progress in the research and development of high dynamic range (HDR) video technology, and state-of-the-art video pipelines are able to offer higher bit-depth support to capture, store, encode, and display HDR video content. In this paper, we introduce a novel HDR video compression algorithm which uses a perceptually uniform color opponent space, a novel perceptual transfer function to encode the dynamic range of the scene, and a novel error minimization scheme for accurate chroma reproduction. The proposed algorithm was objectively and subjectively evaluated against four state-of-the-art algorithms. The objective evaluation was conducted across a set of 39 HDR video sequences, using the latest x265 10-bit video codec along with several perceptual and structural quality assessment metrics at 11 different quality levels. Furthermore, a rating-based subjective evaluation (n = 40) was conducted with six sequences at two different output bitrates. Results suggest that the proposed algorithm exhibits the lowest coding error amongst the five algorithms evaluated. Additionally, the rate-distortion characteristics suggest that the proposed algorithm outperforms the existing state-of-the-art at bitrates ≥ 0.4 bits/pixel.
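    The sketch below shows only the general shape of such a pipeline: luminance plus two colour-opponent channels, a perceptual transfer function on the luminance, and per-channel quantisation to the codec's bit depth. The log curve stands in for the paper's transfer function, and the opponent axes and constants are illustrative assumptions.

```python
# Hedged sketch of a luminance + colour-opponent encoding for HDR video,
# not the paper's algorithm or transfer function.
import numpy as np

def encode_frame(rgb, peak_nits=10000.0, bits=10):
    """rgb: (H, W, 3) linear scene-referred frame in nits (assumed)."""
    y = rgb @ np.array([0.2126, 0.7152, 0.0722])     # linear luminance
    y_enc = np.log1p(y) / np.log1p(peak_nits)        # stand-in perceptual transfer function
    n = rgb / peak_nits                              # normalised RGB for the chroma axes
    o1 = 0.5 + 0.5 * (n[..., 0] - n[..., 1])                         # red-green opponent (illustrative)
    o2 = 0.5 + 0.5 * (0.5 * (n[..., 0] + n[..., 1]) - n[..., 2])     # yellow-blue opponent (illustrative)
    scale = 2**bits - 1
    return [np.round(np.clip(c, 0.0, 1.0) * scale).astype(np.uint16)
            for c in (y_enc, o1, o2)]

frame = np.random.rand(8, 8, 3) * 1000.0             # stand-in HDR frame
Y, O1, O2 = encode_frame(frame)                      # three 10-bit planes for the codec
```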

    Anomaly detection using pattern-of-life visual metaphors

    Complex dependencies exist across the technology estate, users and purposes of machines. This can make it difficult to detect attacks efficiently. Visualization to date is mainly used to communicate patterns in raw logs, or to visualize the output of detection systems. In this paper we explore a novel approach to presenting cybersecurity-related information to analysts. Specifically, we investigate the feasibility of using visualizations to turn analysts into anomaly detectors using Pattern-of-Life Visual Metaphors. Unlike glyph metaphors, the visualizations themselves (rather than any single visual variable on screen) transform complex systems into simpler ones using different mapping strategies. We postulate that such mapping strategies can yield new, meaningful ways of showing anomalies in a manner that can be easily identified by analysts. We present a classification system to describe machine and human activities on a host machine, and a strategy to map machine dependencies and activities to a metaphor. We then present two examples, each with three attack scenarios, using data generated from attacks that affect the confidentiality, integrity and availability of machines. Finally, we present three in-depth use-case studies to assess the feasibility (i.e. can this general approach be used to detect anomalies in systems?), usability and detection abilities of our approach. Our findings suggest that our general approach is easy to use for detecting anomalies in complex systems, but that the type of metaphor has an impact on users' ability to detect anomalies. As with other anomaly-detection techniques, false positives exist in our general approach as well. Future work will need to investigate optimal mapping strategies and other metaphors, and examine how our approach compares to and can complement existing techniques.

    Selective BRDFs for High Fidelity Rendering

    High fidelity rendering systems rely on accurate material representations to produce a realistic visual appearance. However, these accurate models can be slow to evaluate. This work presents an approach for approximating such high-accuracy reflectance models with faster, less complicated functions in regions of an image which possess low visual importance. A subjective rating experiment was conducted in which thirty participants were asked to assess the similarity of scenes rendered with low quality reflectance models, a high quality data-driven model and saliency-based hybrids of those images. In two of the three scenes evaluated, no significant differences were found between the hybrid and reference images. This implies that, in the less visually salient regions of an image, computational gains can be achieved by approximating computationally expensive materials with simpler analytic models.
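    The per-pixel selection idea can be sketched as below; this is an illustration of the general saliency-driven hybrid, not the paper's renderer, and the saliency threshold, the placeholder "measured" model and the Lambertian fallback are all assumptions.

```python
# Illustrative sketch: evaluate an expensive, accurate BRDF only where a
# saliency map marks the region as visually important, and fall back to a
# cheap analytic model elsewhere.
import numpy as np

def lambertian(n_dot_l, albedo=0.8):
    return albedo / np.pi * np.maximum(n_dot_l, 0.0)          # cheap analytic BRDF

def measured_brdf(n_dot_l):
    # Placeholder for a slow data-driven model (e.g. table lookup + interpolation).
    return lambertian(n_dot_l) * (1.0 + 0.1 * np.sin(7.0 * n_dot_l))

def shade(n_dot_l, saliency, threshold=0.5):
    """Per-pixel hybrid: evaluate the expensive model only for salient pixels."""
    out = lambertian(n_dot_l)                       # cheap model everywhere
    mask = saliency > threshold
    out[mask] = measured_brdf(n_dot_l[mask])        # overwrite only salient pixels
    return out

n_dot_l  = np.random.rand(4, 4)        # stand-in per-pixel geometry term
saliency = np.random.rand(4, 4)        # stand-in saliency map
img = shade(n_dot_l, saliency)
```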

    Deep HDR hallucination for inverse tone mapping

    Inverse Tone Mapping (ITM) methods attempt to reconstruct High Dynamic Range (HDR) information from Low Dynamic Range (LDR) image content. The dynamic range of well-exposed areas must be expanded and any missing information due to over/under-exposure must be recovered (hallucinated). The majority of methods focus on the former and are relatively successful, while most attempts at the latter are not of sufficient quality, even ones based on Convolutional Neural Networks (CNNs). A major factor in the reduced inpainting quality of some works is the choice of loss function. Work based on Generative Adversarial Networks (GANs) shows promising results for image synthesis and LDR inpainting, suggesting that GAN losses can improve inverse tone mapping results. This work presents a GAN-based method that hallucinates missing information from badly exposed areas in LDR images and compares its efficacy with alternative variations. The proposed method is quantitatively competitive with state-of-the-art inverse tone mapping methods, providing good dynamic range expansion for well-exposed areas and plausible hallucinations for saturated and under-exposed areas. A density-based normalisation method, targeted at HDR content, is also proposed, as well as an HDR data augmentation method targeted at HDR hallucination.
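    The sketch below illustrates the kind of loss such a GAN-based hallucination method might combine: an adversarial term that pushes the generator towards plausible content in badly exposed regions, plus a masked reconstruction term that keeps well-exposed regions faithful. The mask thresholds, weights and log-domain comparison are assumptions for illustration, not the paper's objective.

```python
# Hedged sketch of a masked-reconstruction + adversarial generator loss for
# HDR hallucination; the exact objective in the paper may differ.
import numpy as np

def well_exposed_mask(ldr, low=0.05, high=0.95):
    """1 where the LDR input is reliable, 0 where it is under/over-exposed."""
    return ((ldr > low) & (ldr < high)).astype(np.float32)

def generator_loss(pred_hdr, ref_hdr, ldr, disc_logit, lambda_rec=10.0):
    mask = well_exposed_mask(ldr)
    # Reconstruction in the log domain, only where the LDR content is trustworthy.
    rec = np.mean(mask * np.abs(np.log1p(pred_hdr) - np.log1p(ref_hdr)))
    # Non-saturating adversarial term: -log(sigmoid(D(G(x)))) = softplus(-logit).
    adv = np.mean(np.logaddexp(0.0, -disc_logit))
    return lambda_rec * rec + adv
```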