1,603 research outputs found

    OutCast: Outdoor Single-image Relighting with Cast Shadows

    Full text link
    We propose a relighting method for outdoor images. Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image, while also accounting for shading and global effects such as the sunlight color and clouds. Previous solutions for this problem rely on reconstructing occluder geometry, e.g. using multi-view stereo, which requires many images of the scene. Instead, in this work we make use of a noisy off-the-shelf single-image depth map estimation as a source of geometry. Whilst this can be a good guide for some lighting effects, the resulting depth map quality is insufficient for directly ray-tracing the shadows. Addressing this, we propose a learned image-space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input. For supplementary material visit our project page at: https://dgriffiths.uk/outcast. Comment: Eurographics 2022 - Accepted
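
    For intuition, the sketch below shows a naive, non-learned version of the occlusion query described in this abstract: screen-space ray marching of a sun ray against an estimated depth map. The camera intrinsics, step count, and depth bias are illustrative assumptions; the paper replaces this brittle hard test with a learned traversal over a deep 3D representation.

```python
# Minimal screen-space ray-marched hard shadows from a single depth map.
# All parameters (fx, fy, cx, cy, step count, bias) are illustrative assumptions.
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) to camera-space points (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

def raymarch_shadows(depth, sun_dir, fx, fy, cx, cy,
                     n_steps=64, step_len=0.05, bias=0.02):
    """Return a boolean (H, W) mask: True where a pixel lies in cast shadow.
    sun_dir is the camera-space direction pointing towards the sun."""
    h, w = depth.shape
    pts = unproject(depth, fx, fy, cx, cy)
    sun = np.asarray(sun_dir, float)
    sun /= np.linalg.norm(sun)
    shadow = np.zeros((h, w), bool)
    for i in range(1, n_steps + 1):
        sample = pts + sun * (i * step_len)          # march towards the sun
        z = sample[..., 2]
        valid = z > 1e-6
        zs = np.where(valid, z, 1.0)                 # avoid division by zero
        u = np.round(sample[..., 0] / zs * fx + cx).astype(int)
        v = np.round(sample[..., 1] / zs * fy + cy).astype(int)
        inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui, vi = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
        occluded = inside & (depth[vi, ui] < z - bias)   # something closer blocks the ray
        shadow |= occluded
    return shadow
```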

    Shadow detection in still road images using chrominance properties of shadows and spectral power distribution of the illumination

    Get PDF
    A well-known challenge in vision-based driver assistance systems is cast shadows on the road, which make fundamental tasks such as road and lane detection difficult. Inasmuch as shadow detection relies on shadow features, in this paper we propose a set of new chrominance properties of shadows based on the skylight and sunlight contributions to the road surface chromaticity. Six constraints on shadowed and non-shadowed regions are derived from these properties. The chrominance properties and the associated constraints are used as shadow features in an effective shadow detection method intended to be integrated into an onboard road detection system, where the identification of cast shadows on the road is a determinant stage. Onboard systems deal with still outdoor images; thus, the approach focuses on distinguishing shadow boundaries from material changes by considering two illumination sources: sky and sun. A non-shadowed road region is illuminated by both skylight and sunlight, whereas a shadowed one is illuminated by skylight only; thus, their chromaticity varies. The shadow edge detection strategy consists of identifying image edges separating shadowed and non-shadowed road regions. The classification is achieved by verifying whether the pixel chrominance values of regions on both sides of the image edges satisfy the six constraints. Experiments on real traffic scenes demonstrated the effectiveness of our shadow detection system in detecting shadow edges and material-change edges on the road, outperforming previous shadow detection methods based on physical features and showing the high potential of the new chrominance properties.
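
    The paper's six constraints are not reproduced in this abstract, so the sketch below only illustrates the underlying intuition: across a shadow edge the illumination changes from sun+sky to sky only, so the darker side should be bluer while keeping a similar red:green balance, whereas a material edge changes chromaticity more freely. The thresholds are arbitrary assumptions for the sketch, not the paper's constraints.

```python
# Hedged illustration of a chromaticity-based shadow-edge test.
import numpy as np

def chromaticity(rgb_region):
    """Mean normalized chromaticity (r, g, b summing to 1) of a pixel region."""
    mean = np.asarray(rgb_region, float).reshape(-1, 3).mean(axis=0)
    return mean / (mean.sum() + 1e-8)

def is_shadow_edge(region_a, region_b, blue_gain=0.02, rg_tol=0.03):
    """Classify the edge between two adjacent regions as shadow vs material change."""
    ca, cb = chromaticity(region_a), chromaticity(region_b)
    la = np.asarray(region_a, float).mean()
    lb = np.asarray(region_b, float).mean()
    lit, shad = (ca, cb) if la > lb else (cb, ca)
    bluer_in_shadow = shad[2] - lit[2] > blue_gain              # skylight is bluer than sunlight
    similar_rg = abs((shad[0] - shad[1]) - (lit[0] - lit[1])) < rg_tol
    return bluer_in_shadow and similar_rg
```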

    Synthetic image generation and the use of virtual environments for image enhancement tasks

    Get PDF
    Deep learning networks are often difficult to train if there are insufficient image samples. Gathering real-world images tailored for a specific task requires considerable effort. This dissertation explores techniques for synthetic image generation and virtual environments for various image enhancement/correction/restoration tasks, specifically distortion correction, dehazing, shadow removal, and intrinsic image decomposition. First, given various image formation equations, such as those used in distortion correction and dehazing, synthetic image samples can be produced, provided that the equation is well-posed. Second, using virtual environments to train various image models is applicable for simulating real-world effects that are otherwise difficult to gather or replicate, such as those needed for dehazing and shadow removal. Given synthetic images, one cannot train a network directly on them, as there is a possible gap between the synthetic and real domains. We have devised several techniques for generating synthetic images and formulated domain adaptation methods where our trained deep-learning networks perform competitively in distortion correction, dehazing, and shadow removal. Additional studies and directions are provided for the intrinsic image decomposition problem and the exploration of procedural content generation, where a virtual Philippine city was created as an initial prototype. Keywords: image generation, image correction, image dehazing, shadow removal, intrinsic image decomposition, computer graphics, rendering, machine learning, neural networks, domain adaptation, procedural content generation
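
    As an example of generating samples from a well-posed image formation equation, the sketch below uses the standard atmospheric scattering model employed in dehazing, I(x) = J(x) t(x) + A (1 - t(x)) with t(x) = exp(-beta d(x)). The parameter ranges are illustrative assumptions, not values from the dissertation.

```python
# Synthesize a hazy training image from a clear image and a depth map.
import numpy as np

def synthesize_hazy(clear_rgb, depth, beta=None, airlight=None, rng=None):
    """clear_rgb: (H, W, 3) floats in [0, 1]; depth: (H, W) in arbitrary units.
    Returns the hazy image and the transmission map t."""
    rng = rng or np.random.default_rng()
    beta = beta if beta is not None else rng.uniform(0.5, 2.0)        # haze density
    airlight = airlight if airlight is not None else rng.uniform(0.7, 1.0, size=3)
    t = np.exp(-beta * depth)[..., None]             # transmission from depth
    hazy = clear_rgb * t + airlight * (1.0 - t)      # scattering model
    return np.clip(hazy, 0.0, 1.0), t.squeeze(-1)
```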

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    Get PDF
    The dissertation develops a framework for the visualization of dynamic terrain for use in interactive, real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism due to their rigid nature. Disregarding the accurate representation of terrain surface interaction is rationalized because of the inherent difficulties associated with providing runtime dynamism. Nonetheless, dynamic terrain systems are a more correct solution because they allow the terrain database to be modified at run-time for the purpose of deforming the surface. Many established techniques in terrain visualization rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain. Moreover, many existing techniques do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems. In addition, we present a novel method for deforming the terrain that can be used in real-time, interactive systems. The development of a component framework unifies disparate works under a single architecture. The high-level nature of the framework makes it flexible and adaptable for developing a variety of systems, independent of the static or dynamic nature of the solution. Currently, there are only a handful of documented deformation techniques and, in particular, none make explicit use of graphics hardware. The approach developed by this research offloads extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain. Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. In order to realistically simulate terrain surface interactivity with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interactivity in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutes can make use of the research findings to achieve the real-time display of interactive terrain surfaces.
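
    The dissertation performs its deformation on the GPU; the sketch below is only a CPU stand-in for the general idea of run-time heightfield deformation under a vehicle footprint: sink samples inside a circular contact patch and pile a fraction of the displaced material on the rim. Grid resolution, footprint shape, and the displaced-volume ratio are illustrative assumptions.

```python
# CPU illustration of run-time heightmap deformation under a circular footprint.
import numpy as np

def deform_heightmap(height, cx, cy, radius, sink_depth, rim_ratio=0.4):
    """height: float (H, W) heightfield, modified in place and returned.
    (cx, cy): footprint centre in grid coordinates; radius in grid cells."""
    h, w = height.shape
    y, x = np.mgrid[0:h, 0:w]
    dist = np.hypot(x - cx, y - cy)
    footprint = dist < radius
    rim = (dist >= radius) & (dist < 1.5 * radius)
    sink = sink_depth * (1.0 - dist / radius)        # deeper towards the centre
    removed = np.where(footprint, sink, 0.0)
    height -= removed
    if rim.any():                                    # redeposit part of the displaced volume
        height[rim] += rim_ratio * removed.sum() / rim.sum()
    return height
```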

    Shadow detection in videos acquired by stationary and moving cameras

    Get PDF
    Shadow detection has become a key issue in object detection, tracking, and recognition problems. Object appearances might be completely changed by the effects of shading and shadows. Finding good algorithms for shadow detection and reducing shading effects in order to segment objects from video sequences will enhance the performance of our detection, tracking, and recognition algorithms. In this thesis, we present data-, physics-, and model-driven approaches for detecting shadows and correcting shading effects. The effectiveness of these algorithms is illustrated on video sequences acquired by stationary surveillance cameras and airborne platforms.
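
    The thesis's specific algorithms are not detailed in this abstract; the sketch below is only a common baseline for the stationary-camera case: a cast-shadow pixel darkens the background while keeping roughly the same hue and saturation. The HSV thresholds are illustrative assumptions.

```python
# Baseline chromaticity test for moving cast shadows against a background model.
import numpy as np
import cv2  # OpenCV, used here only for the BGR -> HSV conversion

def shadow_mask(frame_bgr, background_bgr, fg_mask,
                v_lo=0.4, v_hi=0.95, s_tol=40, h_tol=10):
    """Return a boolean mask of foreground pixels likely to be cast shadow."""
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(float)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(float)
    ratio = (f[..., 2] + 1e-6) / (b[..., 2] + 1e-6)   # brightness (value) ratio
    darker = (ratio > v_lo) & (ratio < v_hi)          # dimmed, but not occluded
    same_sat = np.abs(f[..., 1] - b[..., 1]) < s_tol
    same_hue = np.abs(f[..., 0] - b[..., 0]) < h_tol
    return fg_mask.astype(bool) & darker & same_sat & same_hue
```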

    Shadow Removal and Interpolation for Computer Vision and Graphics (コンピュータビジョン・グラフィックスのための影の消去と補間)

    Get PDF
    University of Tokyo (東京大学)

    Doctor of Philosophy

    Get PDF
    Three-dimensional (3D) models of industrial plant primitives are used extensively in modern asset design, management, and visualization systems. Such systems allow users to efficiently perform tasks in Computer Aided Design (CAD), life-cycle management, construction progress monitoring, virtual reality training, marketing walk-throughs, or other visualization. Thus, capturing industrial plant models has correspondingly become a rapidly growing industry. The purpose of this research was to demonstrate an efficient way to ascertain physical model parameters of reflectance properties of industrial plant primitives for use in CAD and 3D modeling visualization systems. The first part of this research outlines the sources of error corresponding to 3D models created from Light Detection and Ranging (LiDAR) point clouds. Fourier analysis exposes the error due to a LiDAR system's finite sampling rate. Taylor expansion illustrates the errors associated with linearization due to flat polygonal surfaces. Finally, a statistical analysis of the error associated with LiDAR scanner hardware is presented. The second part of this research demonstrates a method for determining Phong specular and Oren-Nayar diffuse reflectance parameters for modeling and rendering pipes, the most ubiquitous form of industrial plant primitives. For specular reflectance, the Phong model is used. Estimates of specular and diffuse parameters of two ideal cylinders and one measured cylinder using brightness data acquired from a LiDAR scanner are presented. The estimated reflectance model of the measured cylinder has a mean relative error of 2.88% and a standard deviation of relative error of 4.0%. The final part of this research describes a method for determining specular, diffuse, and color material properties and applies the method to seven pipes from an industrial plant. The colorless specular and diffuse properties were estimated by numerically inverting LiDAR brightness data. The color ambient and diffuse properties were estimated using k-means clustering. The colorless properties yielded estimated brightness values that are within an RMS error of 3.4%, with a maximum of 7.0% and a minimum of 1.6%. The estimated color properties yielded an RMS residual of 13.2%, with a maximum of 20.3% and a minimum of 9.1%.
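
    A hedged sketch of the fitting step follows: model LiDAR return brightness as a function of incidence angle with an Oren-Nayar diffuse term plus a Phong specular lobe (for a scanner, source and receiver coincide, so the Phong reflection term reduces to cos(2*theta)^n), then estimate the parameters by nonlinear least squares. The initial guesses and bounds are illustrative assumptions, not the dissertation's values.

```python
# Fit Phong specular + Oren-Nayar diffuse parameters to brightness-vs-angle data.
import numpy as np
from scipy.optimize import curve_fit

def brightness_model(theta, kd, ks, n, sigma):
    """theta: incidence angle in radians (monostatic geometry); modeled brightness."""
    s2 = sigma ** 2
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    # Oren-Nayar with equal incident and viewing angles (coincident source/receiver).
    diffuse = np.cos(theta) * (a + b * np.sin(theta) * np.tan(theta))
    specular = np.clip(np.cos(2.0 * theta), 0.0, None) ** n
    return kd * diffuse + ks * specular

def fit_reflectance(theta, brightness):
    """Estimate (kd, ks, n, sigma) from measured LiDAR brightness samples."""
    p0 = [0.5, 0.3, 10.0, 0.3]
    bounds = ([0, 0, 1, 0], [2, 2, 200, np.pi / 2])
    params, _ = curve_fit(brightness_model, theta, brightness, p0=p0, bounds=bounds)
    return params
```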

    Road detection using intrinsic colors in a stereo vision system

    Get PDF
    Master's (Master of Engineering)