
    Moving cast shadows detection methods for video surveillance applications

    Moving cast shadows are a major concern for a broad range of vision-based surveillance applications because they severely complicate the object classification task. Several shadow detection methods have been reported in the literature in recent years. They fall mainly into two domains: one works with static images, whereas the other uses image sequences, namely video content. Although both cases can be analyzed analogously, they differ in their application field. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows') as well as the localization of the light source. In the second, the main purpose is usually change detection, scene matching or surveillance (typically in a background subtraction context). Shadows can in fact negatively modify the shape and color of the target object and therefore degrade the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods, as well as their taxonomies, related to the second case, thus aiming at shadows associated with moving objects (moving shadows). Peer Reviewed
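As background for the methods surveyed, a common baseline in the background-subtraction setting exploits the fact that a shadowed background pixel keeps roughly the chromaticity of the background model while its luminance drops. A minimal sketch of such a test (the thresholds and normalized-rgb heuristic are illustrative assumptions, not a specific method from the chapter):

```python
import numpy as np

def shadow_mask(frame, background, alpha=0.4, beta=0.9, tau_c=0.05):
    """Classify pixels as cast shadow when luminance is attenuated
    but chromaticity stays close to the background model.
    alpha/beta bound the luminance ratio; tau_c bounds chromaticity drift.
    All three thresholds are assumed values for illustration."""
    f = frame.astype(np.float64) + 1e-6
    b = background.astype(np.float64) + 1e-6
    lum_f, lum_b = f.sum(axis=2), b.sum(axis=2)
    ratio = lum_f / lum_b                      # luminance attenuation
    chroma_f = f / lum_f[..., None]            # normalized rgb (chromaticity)
    chroma_b = b / lum_b[..., None]
    chroma_dist = np.abs(chroma_f - chroma_b).sum(axis=2)
    return (ratio > alpha) & (ratio < beta) & (chroma_dist < tau_c)
```

A darkened but color-consistent region passes the test, while an object of a different color fails the chromaticity check even when its luminance ratio falls in range.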

    Probeless Illumination Estimation for Outdoor Augmented Reality


    Static scene illumination estimation from video with applications

    We present a system that automatically recovers scene geometry and illumination from a video, providing a basis for various applications. Previous image-based illumination estimation methods require either user interaction or external information in the form of a database. We adopt structure-from-motion and multi-view stereo for initial scene reconstruction, and then estimate an environment map represented by spherical harmonics (as these perform better than other bases). We also demonstrate several video editing applications that exploit the recovered geometry and illumination, including object insertion (e.g., for augmented reality), shadow detection, and video relighting
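An environment map can be projected onto a low-order spherical-harmonic basis as the abstract describes; a generic sketch of that projection for a scalar lat-long map (the 9-coefficient real SH basis and the map resolution are conventional choices, not details taken from the paper):

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical-harmonic basis values for unit direction(s) n."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),               # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x, # l = 1
        1.092548 * x * y, 1.092548 * y * z,       # l = 2
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=-1)

def project_envmap(env):
    """Project a lat-long environment map (H x W, scalar radiance) onto
    9 SH coefficients, weighting each texel by its solid angle."""
    h, w = env.shape
    theta = (np.arange(h) + 0.5) / h * np.pi       # polar angle, texel centers
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)
    d_omega = np.sin(t) * (np.pi / h) * (2 * np.pi / w)  # texel solid angle
    return (sh_basis(dirs) * (env * d_omega)[..., None]).sum(axis=(0, 1))

# a uniform environment projects almost entirely onto the l = 0 band
coeffs = project_envmap(np.ones((64, 128)))
```

For a constant map the l = 0 coefficient approaches 0.282095 · 4π while the higher bands vanish, which is a quick sanity check on the solid-angle weighting.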

    Recovering refined surface normals for relighting clothing in dynamic scenes

    In this paper we present a method to relight captured 3D video sequences of non-rigid, dynamic scenes, such as clothing of real actors, reconstructed from multiple view video. A view-dependent approach is introduced to refine an initial coarse surface reconstruction using shape-from-shading to estimate detailed surface normals. The prior surface approximation is used to constrain the simultaneous estimation of surface normals and scene illumination, under the assumption of Lambertian surface reflectance. This approach enables detailed surface normals of a moving non-rigid object to be estimated from a single image frame. Refined normal estimates from multiple views are integrated into a single surface normal map. This approach allows highly non-rigid surfaces, such as creases in clothing, to be relit whilst preserving the detailed dynamics observed in video
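The Lambertian assumption reduces, per pixel, to I = ρ max(0, n·l); with coarse normals from the prior surface, the illumination half of the joint estimate can be posed as a least-squares problem. A toy sketch for a single distant light (the constant albedo and the synthetic data are assumptions for illustration, not the paper's pipeline):

```python
import numpy as np

def estimate_light(intensity, normals, albedo=1.0):
    """Least-squares distant-light estimate from per-pixel intensities and
    coarse unit normals, assuming Lambertian shading I = albedo * (n . l)."""
    n = normals.reshape(-1, 3)
    i = intensity.reshape(-1) / albedo
    light, *_ = np.linalg.lstsq(n, i, rcond=None)
    return light

# synthetic check: shade known normals with a known light, then recover it
rng = np.random.default_rng(0)
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.3, 0.5, 0.81])
shading = np.clip(normals @ light, 0.0, None)
lit = shading > 0                     # keep only pixels facing the light
l_est = estimate_light(shading[lit], normals[lit])
```

Restricting the fit to lit pixels avoids the clipped (self-shadowed) samples where the linear model does not hold.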

    The human visual system's representation of light sources and the objects they illuminate

    The light sources in a scene can drastically affect the pattern of intensities falling on the retina. However, it is unclear how the visual system represents the light sources in a scene. One possibility is that a light source is treated as a scene component: an entity that exists within a scene and interacts with other scene components (object shape and object reflectance) to produce the retinal image. The aim of this thesis was to test two key predictions arising from a perceptual framework in which light sources and the objects they illuminate are considered to be scene components by the visual system. We begin examining the first prediction in Chapter 3, focusing on the role of a dynamic shape cue in the interaction between shape, reflectance, and lighting. In two psychophysics experiments, we show that the visual system can "explain away" alternative interpretations of luminance gradients using the information provided by a dynamic shape cue (kinetic depth). In subsequent chapters, the research focus shifts to the second prediction, investigating whether multiple objects in a scene are integrated to estimate light source direction. In Chapter 4, participants were presented with scenes that contained 1, 9, and 25 objects and asked to judge whether the scenes were illuminated from the left or right, relative to their viewpoint. We found that increasing the number of objects in a scene, if anything, worsened discrimination sensitivity. To further understand this result, we conducted an equivalent noise experiment in Chapter 5 to examine the contributions of internal noise and integration to estimates of light source direction. Our results indicate that participants used only 1 or 2 objects to judge light source direction for scenes with 9 and 25 objects. Chapter 6 presents a shape discrimination experiment that required participants to make an implicit, rather than explicit, judgement of light source direction.
Consistent with the results reported in Chapters 4 and 5, we find that shape discrimination sensitivity was comparable for scenes containing 1, 9, and 25 objects. Taken together, the findings presented here suggest that while object shape and reflectance may be represented as scene components, lighting seems to be associated with individual objects rather than having a scene-level representation
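The equivalent noise experiment in Chapter 5 presumably follows the standard model in which squared thresholds grow linearly with squared external noise, with slope 1/n_eff and intercept σ_int²/n_eff. A generic fitting sketch on data generated from the model itself (the exact parameterization used in the thesis may differ):

```python
import numpy as np

def fit_equivalent_noise(sigma_ext, thresholds):
    """Fit threshold^2 = (sigma_int^2 + sigma_ext^2) / n_eff by linear
    regression of squared thresholds on squared external noise."""
    slope, intercept = np.polyfit(sigma_ext**2, thresholds**2, 1)
    n_eff = 1.0 / slope                    # slope = 1 / n_eff
    sigma_int = np.sqrt(intercept * n_eff) # intercept = sigma_int^2 / n_eff
    return sigma_int, n_eff

# noiseless demo data: sigma_int = 3 (internal noise), n_eff = 2 samples
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.sqrt((3.0**2 + sigma_ext**2) / 2.0)
sigma_int, n_eff = fit_equivalent_noise(sigma_ext, thresholds)
```

An n_eff of 1-2 recovered from such a fit is what underlies the thesis's conclusion that only one or two objects were integrated.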

    Feature-based image patch classification for moving shadow detection

    Moving object detection is a first step towards many computer vision applications, such as human interaction and tracking, video surveillance, and traffic monitoring systems. Accurate estimation of the target object's size and shape is often required before higher-level tasks (e.g., object tracking or recognition) can be performed. However, these properties can be derived only when the foreground object is detected precisely. Background subtraction is a common technique to extract foreground objects from image sequences. The purpose of background subtraction is to detect changes in pixel values within a given frame. The main problem with background subtraction and other related object detection techniques is that cast shadows tend to be misclassified as either parts of the foreground objects (if objects and their cast shadows are bonded together) or independent foreground objects (if objects and shadows are separated). The reason for this phenomenon is the presence of similar characteristics between the target object and its cast shadow, i.e., shadows have similar motion, attitude, and intensity changes as the moving objects that cast them. Detecting shadows of moving objects is challenging because of problematic situations related to shadows, for example, chromatic shadows, shadow color blending, foreground-background camouflage, nontextured surfaces and dark surfaces. Various methods for shadow detection have been proposed in the literature to address these problems. Many of these methods use general-purpose image feature descriptors to detect shadows. These feature descriptors may be effective in distinguishing shadow points from the foreground object in a specific problematic situation; however, such methods often fail to distinguish shadow points from the foreground object in other situations.
In addition, many of these moving shadow detection methods require prior knowledge of the scene conditions and/or impose strong assumptions, which make them excessively restrictive in practice. The aim of this research is to develop an efficient method capable of addressing possible environmental problems associated with shadow detection while simultaneously improving the overall accuracy and detection stability. In this research study, possible problematic situations for dynamic shadows are addressed and discussed in detail. On the basis of the analysis, a robust method, including change detection and shadow detection, is proposed to address these environmental problems. A new set of two local feature descriptors, namely, binary patterns of local color constancy (BPLCC) and light-based gradient orientation (LGO), is introduced to address the identified problematic situations by incorporating intensity, color, texture, and gradient information. The feature vectors are concatenated in a column-by-column manner to construct one dictionary for the objects and another dictionary for the shadows. A new sparse representation framework is then applied to find the nearest neighbor of the test image segment by computing a weighted linear combination of the reference dictionary. Image segment classification is then performed based on the similarity between the test image and the sparse representations of the two classes. The performance of the proposed framework on common shadow detection datasets is evaluated, and the method shows improved performance compared with state-of-the-art methods in terms of the shadow detection rate, discrimination rate, accuracy, and stability. By achieving these significant improvements, the proposed method demonstrates its ability to handle various problems associated with image processing and accomplishes the aim of this thesis
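The dictionary-based classification step can be illustrated with a per-class reconstruction residual; here plain least squares stands in for the sparse coding used in the thesis, and the dictionaries and test vector are synthetic:

```python
import numpy as np

def classify_segment(x, dict_object, dict_shadow):
    """Assign a test feature vector to the class whose dictionary
    reconstructs it with the smallest residual (least-squares stand-in
    for the sparse coding step)."""
    def residual(d):
        coeff, *_ = np.linalg.lstsq(d, x, rcond=None)
        return np.linalg.norm(d @ coeff - x)
    return "shadow" if residual(dict_shadow) < residual(dict_object) else "object"

rng = np.random.default_rng(1)
d_obj = rng.normal(size=(64, 8))         # columns: object exemplar features
d_sha = rng.normal(size=(64, 8)) + 2.0   # columns: shadow exemplar features
x = d_sha @ rng.normal(size=8)           # test segment built from shadow atoms
label = classify_segment(x, d_obj, d_sha)
```

A vector lying in the span of one class's dictionary reconstructs with near-zero residual under that class and a large residual under the other, which is the decision rule's whole content.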

    Dynamic Mesh-Aware Radiance Fields

    Embedding polygonal mesh assets within photorealistic Neural Radiance Fields (NeRF) volumes, such that they can be rendered and their dynamics simulated in a physically consistent manner with the NeRF, is under-explored from the system perspective of integrating NeRF into the traditional graphics pipeline. This paper designs a two-way coupling between mesh and NeRF during rendering and simulation. We first review the light transport equations for both mesh and NeRF, then distill them into an efficient algorithm for updating radiance and throughput along a cast ray with an arbitrary number of bounces. To resolve the discrepancy between the linear color space that the path tracer assumes and the sRGB color space that standard NeRF uses, we train NeRF with High Dynamic Range (HDR) images. We also present a strategy to estimate light sources and cast shadows on the NeRF. Finally, we consider how the hybrid surface-volumetric formulation can be efficiently integrated with a high-performance physics simulator that supports cloth, rigid and soft bodies. The full rendering and simulation system can be run on a GPU at interactive rates. We show that a hybrid system approach outperforms alternatives in visual realism for mesh insertion, because it allows realistic light transport from volumetric NeRF media onto surfaces, which affects the appearance of reflective/refractive surfaces and illumination of diffuse surfaces informed by the dynamic scene. Comment: ICCV 202
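The radiance/throughput update described above can be sketched for a single ray: march through volume samples, compositing emitted radiance while attenuating throughput, and terminate at the mesh surface if the ray reaches one. The densities, colors, and hit distance below are toy stand-ins, not the paper's renderer:

```python
import numpy as np

def march_ray(sigmas, colors, dt, t_hit=None, surface_color=None):
    """Accumulate radiance C and throughput T along a ray through a volume
    (NeRF-style samples at spacing dt), terminating at an optional opaque
    surface hit at distance t_hit."""
    radiance, throughput = 0.0, 1.0
    for i, (sigma, c) in enumerate(zip(sigmas, colors)):
        if t_hit is not None and i * dt >= t_hit:     # mesh surface reached
            return radiance + throughput * surface_color, 0.0
        alpha = 1.0 - np.exp(-sigma * dt)             # sample opacity
        radiance += throughput * alpha * c            # composite emission
        throughput *= 1.0 - alpha                     # attenuate behind it
    return radiance, throughput
```

In the two-way coupling, the throughput carried past volume samples is what attenuates the surface contribution, so fog in front of an inserted mesh correctly dims it.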

    Robust object-based algorithms for direct shadow simulation

    Direct shadow algorithms generate shadows by simulating the direct lighting interaction in a virtual environment.
The main challenge of accurate direct shadow computation is its cost. In this dissertation, we develop a new robust object-based shadow framework that provides realistic shadows at interactive frame rates on dynamic scenes. Our contributions include new robust object-based soft shadow algorithms and efficient interactive implementations. We start by formalizing the direct shadow problem: following the light transport formulation, we first define what robust direct shadows are. We then study existing interactive direct shadow techniques and show that real-time direct shadow simulation remains an open problem, since even the so-called physically plausible soft shadow algorithms still rely on approximations. Nevertheless, despite their geometric constraints, object-based approaches seem well suited when targeting accurate solutions. Starting from this analysis, we investigate the existing object-based shadow framework and discuss its robustness issues. We propose a new technique that drastically improves the resulting shadow quality by extending this framework with a penumbra blending stage, and we present a practical implementation of this approach. From the obtained results, we observe that, despite desirable properties, inherent theoretical and implementation limitations reduce the overall quality and performance of the proposed algorithm. We then present a new object-based soft shadow algorithm. It merges the efficiency of real-time object-based shadows with the accuracy of their offline generalization. The proposed algorithm relies on a new local evaluation of the number of occluders between two points (i.e., the depth complexity). We describe how we use this algorithm to sample the depth complexity between any visible receiver and the light source.
From this information, we compute shadows by either modulating the direct lighting or numerically solving the direct illumination, with an accuracy depending on the light sampling strategy. We then propose an extension of our algorithm to handle shadows cast by semi-opaque occluders. Finally, we present an efficient implementation of this framework that demonstrates that object-based shadows can be efficiently used on complex dynamic environments. In real-time rendering, it is common to represent highly detailed objects with few triangles and transmittance textures that encode their binary opacity. Object-based techniques do not handle such perforated triangles: by nature, they can only evaluate the shadows cast by models whose shape is explicitly defined by geometric primitives. We describe a new robust object-based algorithm that addresses this main limitation. We outline that this method can be efficiently combined with object-based frameworks in order to evaluate approximate shadows or simulate the direct illumination for both common meshes and perforated triangles. The proposed implementation shows that such a combination provides a very strong and efficient direct lighting framework, well suited to many domains ranging from quality-sensitive to performance-critical applications
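The depth-complexity formulation can be illustrated on a toy scene: for each sample on an area light, count how many occluders the receiver-to-sample segment crosses; the fraction of samples with zero depth complexity modulates the direct lighting. Spherical occluders stand in here for the triangle meshes used in the thesis:

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 intersects the sphere (quadratic test)."""
    d, f = p1 - p0, p0 - center
    a, b, c = d @ d, 2 * (f @ d), f @ f - radius**2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - np.sqrt(disc)) / (2 * a)
    t2 = (-b + np.sqrt(disc)) / (2 * a)
    return (0 <= t1 <= 1) or (0 <= t2 <= 1) or (t1 < 0 < t2)

def soft_shadow(receiver, light_samples, occluders):
    """Visibility = fraction of light samples whose depth complexity
    (occluder count along the shadow segment) is zero."""
    visible = 0
    for s in light_samples:
        depth_complexity = sum(
            segment_hits_sphere(receiver, s, c, r) for c, r in occluders)
        visible += depth_complexity == 0
    return visible / len(light_samples)
```

A receiver in the penumbra sees only part of the light's samples unoccluded, so the returned fraction lands strictly between 0 and 1.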

    Soft Textured Shadow Volume

    Efficiently computing robust soft shadows is a challenging and time-consuming task. On the one hand, the quality of image-based shadows is inherently limited by the discrete property of their framework. On the other hand, object-based algorithms do not exhibit such discretization issues but they can only efficiently deal with triangles having a constant transmittance factor. This paper addresses this limitation. We propose a general algorithm for the computation of robust and accurate soft shadows for triangles with a spatially varying transmittance. We then show how this technique can be efficiently included into object-based soft shadow algorithms. This results in unified object-based frameworks for computing robust direct shadows for both standard and perforated triangles in fully animated scenes
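One way to read the spatially varying transmittance: instead of a binary blocked/visible test, light along a shadow ray is attenuated by the product of transmittance values sampled where the ray crosses each textured triangle. A minimal sketch with a nearest-texel alpha lookup (the lookup scheme and checkerboard map are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def ray_transmittance(alpha_maps, hit_uvs):
    """Attenuation along a shadow ray crossing several textured occluders:
    the product of (1 - alpha) sampled at each crossing's UV coordinate."""
    t = 1.0
    for alpha, (u, v) in zip(alpha_maps, hit_uvs):
        h, w = alpha.shape
        texel = alpha[min(int(v * h), h - 1), min(int(u * w), w - 1)]
        t *= 1.0 - texel                  # fraction transmitted at this hit
    return t

# checkerboard opacity: alpha alternates between 0 (hole) and 1 (opaque)
checker = np.indices((8, 8)).sum(axis=0) % 2
```

A ray through a hole texel is fully transmitted, while one crossing an opaque texel is fully blocked; intermediate alpha values give fractional soft shadows.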