
    Quantifying spatial, temporal, angular and spectral structure of effective daylight in perceptually meaningful ways

    We present a method to capture the seven-dimensional light field structure and translate it into perceptually relevant information. Our spectral cubic illumination method quantifies objective correlates of perceptually relevant diffuse and directed light components, including their variations over time, space, color, and direction, and the environment's response to sky and sunlight. We applied it 'in the wild', capturing how light on a sunny day differs between light and shadow, and how light varies over sunny and cloudy days. We discuss the added value of our method for capturing nuanced lighting effects on scene and object appearance, such as chromatic gradients.
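
    The cubic decomposition underlying this approach splits the illuminances measured on the six faces of a cube into a directed (vector) component and a diffuse (symmetric) component. Below is a minimal sketch of that split for a single band, in the spirit of Cuttle's cubic illumination formalism; the function name and sample readings are illustrative, and the paper's spectral method would additionally resolve this per spectral band.

    ```python
    import numpy as np

    def cubic_decomposition(faces):
        """Split six cubic illuminance readings [lx] into a light vector
        and a symmetric (diffuse-like) component, in the spirit of
        Cuttle's cubic illumination formalism."""
        # Vector component per axis: difference of opposite faces.
        vector = np.array([faces['+x'] - faces['-x'],
                           faces['+y'] - faces['-y'],
                           faces['+z'] - faces['-z']], dtype=float)
        # Symmetric component per axis: the part both faces share.
        symmetric = np.array([min(faces['+x'], faces['-x']),
                              min(faces['+y'], faces['-y']),
                              min(faces['+z'], faces['-z'])], dtype=float)
        return np.linalg.norm(vector), vector, symmetric.mean()

    # Example: strong light from above (+z), weaker bounce from the sides.
    E_dir, E_vec, E_diff = cubic_decomposition(
        {'+x': 300, '-x': 250, '+y': 280, '-y': 260, '+z': 900, '-z': 150})
    print(f"|E_vector| = {E_dir:.0f} lx, E_symmetric = {E_diff:.0f} lx")
    ```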

    Physically Based Rendering of Synthetic Objects in Real Environments


    A New Control Framework For The Visual Environment Based On Low-Cost HDR Luminance Acquisition

    This study introduces a new control framework based on a low-cost programmable high dynamic range (HDR) luminance acquisition sensor placed on the interior surface of the window. The sensor, photometrically and geometrically calibrated, captures the luminance and geometry of potential glare sources across its entire visual span in real time, while also providing feedback about transmitted illuminance. Real-time processing of the sensor data enables an alternative, low-cost glare sensing system that can be used directly in daylighting controls and building automation systems. This framework is the first proposed solution to address direct and reflected glare in a straightforward and efficient way, and is therefore a significant step towards improving the visual environment in perimeter building zones.
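
    As one illustration of how a calibrated luminance map from such a sensor could drive a glare metric, here is a hedged sketch of the daylight glare probability (DGP) of Wienold and Christoffersen. The fixed position index, the glare-source threshold, and the omission of cosine weighting are simplifying assumptions, not the paper's actual pipeline.

    ```python
    import numpy as np

    def daylight_glare_probability(L, omega, threshold=2000.0, P=1.0):
        """Daylight glare probability (Wienold & Christoffersen) from a
        photometrically calibrated luminance map.

        L         : per-pixel luminance [cd/m^2]
        omega     : per-pixel solid angle [sr], from geometric calibration
        threshold : luminance above which a pixel counts as a glare source
        P         : Guth position index, fixed at 1 here for simplicity
        """
        # Vertical illuminance at the sensor: luminance integrated over
        # solid angle (cosine weighting omitted in this simplified sketch).
        Ev = float(np.sum(L * omega))
        # Glare sources: pixels far brighter than the adaptation level.
        glare = L > threshold
        glare_term = np.sum(L[glare] ** 2 * omega[glare] / (Ev ** 1.87 * P ** 2))
        return 5.87e-5 * Ev + 9.18e-2 * np.log10(1.0 + glare_term) + 0.16

    # Example on a synthetic 4x4 luminance map with one bright patch.
    L = np.full((4, 4), 500.0); L[1, 2] = 50_000.0   # cd/m^2
    omega = np.full((4, 4), 1e-3)                    # sr per pixel
    print(f"DGP ≈ {daylight_glare_probability(L, omega):.2f}")
    ```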

    Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance and Color Prediction

    Light plays an important role in human well-being. However, most computer vision tasks treat pixels without considering their relationship to physical luminance. To address this shortcoming, we introduce the Laval Photometric Indoor HDR Dataset, the first large-scale photometrically calibrated dataset of high dynamic range 360° panoramas. Our key contribution is the calibration of an existing, uncalibrated HDR dataset. We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (chroma meter) for multiple scenes across a variety of lighting conditions. Using the resulting measurements, we establish the calibration coefficients to be applied to the HDR images. The resulting dataset is a rich representation of indoor scenes that displays a wide range of illuminance and color, and varied types of light sources. We exploit the dataset to introduce three novel tasks, in which per-pixel luminance, per-pixel color, and planar illuminance can be predicted from a single input image. Finally, we also capture another, smaller photometric dataset with a commercial 360° camera, to experiment on generalization across cameras. We are optimistic that the release of our datasets and associated code will spark interest in physically accurate light estimation within the community. Dataset and code are available at https://lvsn.github.io/beyondthepixel/
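
    The abstract does not spell out the calibration model, but a minimal form of the per-pixel luminance task might look as follows. The single linear coefficient on Rec. 709 relative luminance and all sample values are assumptions made for illustration.

    ```python
    import numpy as np

    # Rec. 709 weights for relative luminance from linear RGB.
    RGB_TO_Y = np.array([0.2126, 0.7152, 0.0722])

    def fit_calibration(hdr_patches, meter_luminance):
        """Fit a single coefficient k so that k * relative_luminance
        approximates absolute luminance [cd/m^2].

        hdr_patches     : (N, 3) mean linear RGB of measured patches
        meter_luminance : (N,) chroma-meter readings for the same patches
        """
        y_rel = hdr_patches @ RGB_TO_Y
        # Closed-form least-squares scale factor.
        return float(y_rel @ meter_luminance / (y_rel @ y_rel))

    def to_absolute_luminance(hdr_image, k):
        """Map a linear HDR image (H, W, 3) to per-pixel luminance."""
        return k * (hdr_image @ RGB_TO_Y)

    # Example with synthetic patch measurements.
    patches = np.array([[0.2, 0.2, 0.2], [1.0, 0.9, 0.8], [0.05, 0.05, 0.06]])
    meter = np.array([30.0, 140.0, 8.0])   # cd/m^2
    k = fit_calibration(patches, meter)
    print(f"k ≈ {k:.1f} cd/m^2 per unit of relative luminance")
    ```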

    Perceptual video quality assessment: the journey continues!

    Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we describe advances in algorithm design, from traditional hand-crafted feature-based methods to the current deep-learning models powering accurate VQA algorithms. We also discuss the evolution of subjective video quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
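
    For context on the hand-crafted end of that trajectory, the sketch below shows PSNR with naive temporal pooling, the classic full-reference baseline that perceptual VQA models are designed to outperform; the pooling scheme and example data are illustrative.

    ```python
    import numpy as np

    def psnr(reference, distorted, peak=255.0):
        """Peak signal-to-noise ratio: the classic hand-crafted,
        full-reference baseline that perceptual VQA models improve on."""
        ref = np.asarray(reference, dtype=np.float64)
        dis = np.asarray(distorted, dtype=np.float64)
        mse = np.mean((ref - dis) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def video_psnr(ref_frames, dist_frames):
        """Naive temporal pooling: average frame-level PSNR over a clip."""
        return np.mean([psnr(r, d) for r, d in zip(ref_frames, dist_frames)])

    # Example: a 10-frame clip with mild additive noise.
    rng = np.random.default_rng(0)
    ref = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
    dist = [np.clip(f + rng.normal(0, 3, f.shape), 0, 255) for f in ref]
    print(f"clip PSNR ≈ {video_psnr(ref, dist):.1f} dB")
    ```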

    Spectral skyline separation: Extended landmark databases and panoramic imaging

    Differt D, Möller R. Spectral skyline separation: Extended landmark databases and panoramic imaging. Sensors. 2016;16(10):1614.

    Light-Field Imaging and Heterogeneous Light Fields

    In traditional light-field analysis, images have matched spectral content, which leads to constant intensity on epipolar plane image (EPI) manifolds; this kind of light field is termed a homogeneous light field. Heterogeneous light fields differ in that contributing images may have varying properties, such as the exposure selected or the color filter applied. To process heterogeneous light fields, it is necessary to develop a computational method able to estimate orientations in heterogeneous EPIs. One alternative method to estimate orientation is the singular value decomposition. This analysis has resulted in new concepts for improving the structure tensor approach and has yielded increased accuracy and greater applicability through the exploitation of heterogeneous light fields. While the current structure tensor only estimates orientation under constant pixel intensity along the direction of orientation, the newly designed structure tensor is able to estimate orientations under changing intensity. Additionally, this improved structure tensor makes it possible to process acquired light fields with higher reliability due to its robustness against illumination changes. To use this improved structure tensor approach, it is important to design the light-field camera setup such that the target scene covers the ±45° orientation range perfectly. This requirement leads directly to a relationship between the camera setup for light-field capture and the frustum-shaped volume of interest. We show that higher-precision depth maps are achievable, which has a positive impact on the reliability of subsequent processing methods, especially for sRGB color reconstruction in color-filtered light fields. Beyond this, a global shifting process is designed to overcome the basic range limitation of ±45°, to estimate larger distances, and to further increase the achievable precision in light-field processing. This enables research on spherical light fields, whose orientation range typically exceeds the ±45° limit. Research on spherically acquired light fields has been conducted in collaboration with the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern.
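
    For reference, the classic structure tensor orientation estimation on an EPI, the homogeneous-light-field baseline that this work extends, can be sketched as follows; the axis conventions, smoothing scale, and synthetic example are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def epi_disparity(epi, sigma=2.0):
        """Structure-tensor disparity estimation on an epipolar plane
        image (EPI), assuming constant intensity along orientations,
        i.e. a homogeneous light field. Axis 0 is the view coordinate u,
        axis 1 the spatial coordinate s."""
        I = epi.astype(np.float64)
        Ix = sobel(I, axis=1)   # derivative along the spatial axis s
        Iy = sobel(I, axis=0)   # derivative along the view axis u
        # Structure tensor components, smoothed over a local neighbourhood.
        Jxx = gaussian_filter(Ix * Ix, sigma)
        Jxy = gaussian_filter(Ix * Iy, sigma)
        Jyy = gaussian_filter(Iy * Iy, sigma)
        # Gradient orientation from the tensor's double-angle form; the
        # slope ds/du of the iso-intensity lines (the disparity) is
        # -tan(angle).
        angle = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
        return -np.tan(angle)

    # Example: a synthetic EPI whose iso-intensity lines have slope 1,
    # i.e. disparity 1, at the edge of the ±45° range discussed above.
    u, s = np.mgrid[0:32, 0:64]
    epi = np.sin(0.4 * (s - u))          # intensity constant along s - u
    d = epi_disparity(epi)
    print(f"median disparity ≈ {np.median(d[8:-8, 8:-8]):.2f}")
    ```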