104 research outputs found

    Environment matting by sparse recovery


    Deep Image Matting: A Comprehensive Survey

    Image matting refers to extracting a precise alpha matte from natural images, and it plays a critical role in various downstream applications, such as image editing. Although the problem is ill-posed, traditional methods have attempted to solve it for decades. The emergence of deep learning has revolutionized the field of image matting and given birth to multiple new techniques, including automatic, interactive, and referring image matting. This paper presents a comprehensive review of recent advancements in image matting in the era of deep learning. We focus on two fundamental sub-tasks: auxiliary input-based image matting, which involves user-defined input to predict the alpha matte, and automatic image matting, which generates results without any manual intervention. We systematically review the existing methods for these two tasks according to their task settings and network structures, and summarize their advantages and disadvantages. Furthermore, we introduce the commonly used image matting datasets and evaluate the performance of representative matting methods both quantitatively and qualitatively. Finally, we discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research. We also maintain a public repository to track the rapid development of deep image matting at https://github.com/JizhiziLi/matting-survey.
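
    The quantity at the heart of this survey is the alpha matte in the standard compositing equation I = alpha * F + (1 - alpha) * B, where the foreground F and background B are unknown; recovering alpha from I alone is what makes the problem ill-posed. The minimal sketch below is purely illustrative (it is not any surveyed method): it implements the compositing equation and the per-pixel alpha one could recover if F and B were somehow known.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite foreground over background with the matting equation
    I = alpha * F + (1 - alpha) * B.
    fg, bg: float arrays of shape (H, W, 3); alpha: (H, W) in [0, 1]."""
    a = alpha[..., None]                      # broadcast alpha over channels
    return a * fg + (1.0 - a) * bg

def solve_alpha(img, fg, bg, eps=1e-6):
    """Recover alpha per pixel when F and B are known, by projecting
    (I - B) onto (F - B).  In practice F and B are unknown, which is
    exactly why matting is ill-posed."""
    num = ((img - bg) * (fg - bg)).sum(axis=-1)
    den = ((fg - bg) ** 2).sum(axis=-1) + eps
    return np.clip(num / den, 0.0, 1.0)
```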

    The 8th Conference of PhD Students in Computer Science


    VITRAIL: Acquisition, Modelling and Rendering of Stained Glass

    Stained glass windows are designed to reveal their powerful artistry under diverse and time-varying lighting conditions; virtual relighting of stained glass therefore represents an exceptional tool for appreciating this age-old art form. However, as opposed to most other artifacts, stained glass windows are extremely difficult, if not impossible, to analyze using controlled illumination because of their size and position. In this paper, we present novel methods built upon image-based priors to perform virtual relighting of stained glass artwork by acquiring the actual light transport properties of a given artifact. In a preprocessing step, we build a material-dependent dictionary for light transport by studying the scattering properties of glass samples in a laboratory setup. We can then use the dictionary to recover a light transport matrix in two ways: under controlled illumination, the dictionary constitutes a sparsifying basis for a compressive sensing acquisition, while under uncontrolled illumination it is used to perform sparse regularization. The proposed basis preserves volume impurities, and we show that the retrieved light transport matrix is heterogeneous, as in the case of real-world objects. We present rendering results for several stained glass artifacts, including the Rose Window of the cathedral of Lausanne, digitized using the presented methods.
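
    As a rough illustration of the dictionary-based sparse recovery described above, the sketch below assumes a learned dictionary D in which each row of the light transport matrix is sparse, together with measurements taken through illumination patterns Phi; it then solves an l1-regularized least-squares problem with scikit-learn's Lasso. All variable names and the synthetic data are assumptions made for illustration, not the paper's actual acquisition setup or code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def recover_transport_row(y, Phi, D, lam=0.01):
    """Recover one row t of the light transport matrix from measurements
    y = Phi @ t, assuming t is sparse in the dictionary D (t ~ D @ x)."""
    A = Phi @ D                                   # effective sensing matrix
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(A, y)                               # min ||y - A x||^2 + lam ||x||_1
    x_hat = lasso.coef_                           # sparse code in the dictionary
    return D @ x_hat                              # reconstructed transport row

# Tiny synthetic example: m measurements of an n-dimensional transport row.
rng = np.random.default_rng(0)
n, k, m = 256, 64, 80
D = rng.standard_normal((n, k)) / np.sqrt(n)      # stand-in for the learned dictionary
x_true = np.zeros(k)
x_true[rng.choice(k, 5, replace=False)] = rng.standard_normal(5)
t_true = D @ x_true                               # ground-truth transport row
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # illumination patterns
t_hat = recover_transport_row(Phi @ t_true, Phi, D)
```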

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows capturing richer visual information about our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
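
    One of the post-capture capabilities mentioned above, refocusing, reduces to the classic shift-and-add principle: each sub-aperture view is shifted in proportion to its angular offset and the shifted views are averaged. The sketch below assumes the light field is stored as a (U, V, H, W) array of grayscale sub-aperture views and uses integer shifts for simplicity; it illustrates the general principle only, not a specific method from the overview.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing for a 4D light field of sub-aperture views.
    light_field: float array of shape (U, V, H, W); `slope` selects the
    virtual focal plane (0 keeps the original focus)."""
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0          # central view
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - u0)))       # shift proportional to
            dx = int(round(slope * (v - v0)))       # the angular offset
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)                            # average of shifted views
```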

    Framework to Create Cloud-Free Remote Sensing Data Using Passenger Aircraft as the Platform

    Cloud removal in optical remote sensing imagery is essential for many Earth observation applications. Due to the inherent imaging geometry of satellite remote sensing, it is impossible to observe the ground under the clouds directly; therefore, cloud removal algorithms are never perfect, owing to the absence of ground truth. Passenger aircraft offer the advantages of short revisit intervals and low cost. Additionally, because passenger aircraft fly at lower altitudes than satellites, they can observe the ground under the clouds at an oblique viewing angle. In this study, we examine the possibility of creating cloud-free remote sensing data by stacking multi-angle images captured by passenger aircraft. To accomplish this, a processing framework is proposed, which includes four main steps: 1) multi-angle image acquisition from passenger aircraft, 2) cloud detection based on deep learning semantic segmentation models, 3) cloud removal by image stacking, and 4) image quality enhancement via haze removal. This method is intended to remove cloud contamination without requiring reference images or prior determination of cloud types. The proposed method was tested in multiple case studies, wherein the resulting cloud- and haze-free orthophotos were visualized and quantitatively analyzed across various land cover types. The results of the case studies demonstrate that the proposed method can generate high-quality, cloud-free orthophotos. Therefore, we conclude that this framework has great potential for creating cloud-free remote sensing images when the cloud removal of satellite imagery is difficult or inaccurate.
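
    Step 3 of the framework, cloud removal by image stacking, can be pictured as a per-pixel reduction over cloud-free observations. The sketch below is a simplified stand-in, assuming the multi-angle frames are already co-registered and that binary cloud masks come from the segmentation step; it takes a per-pixel median over the non-cloudy samples and leaves pixels that are cloudy in every frame for later gap filling. It is not the authors' implementation.

```python
import numpy as np

def stack_cloud_free(images, cloud_masks):
    """Cloud removal by stacking (illustrative sketch, not the paper's code).
    images:      (N, H, W, C) float array of co-registered multi-angle frames.
    cloud_masks: (N, H, W) bool array from the cloud-segmentation step
                 (True = cloudy pixel).
    Returns a per-pixel median over the cloud-free observations."""
    imgs = np.asarray(images, dtype=np.float64)
    masks = np.asarray(cloud_masks, dtype=bool)
    masked = np.where(masks[..., None], np.nan, imgs)   # discard cloudy samples
    mosaic = np.nanmedian(masked, axis=0)                # median over frames
    # Pixels that are cloudy in every frame remain NaN and need gap filling.
    return mosaic
```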