Deep Image Matting: A Comprehensive Survey
Image matting refers to extracting a precise alpha matte from natural images,
and it plays a critical role in various downstream applications, such as image
editing. Despite being an ill-posed problem, traditional methods have been
trying to solve it for decades. The emergence of deep learning has
revolutionized the field of image matting and given birth to multiple new
techniques, including automatic, interactive, and referring image matting. This
paper presents a comprehensive review of recent advancements in image matting
in the era of deep learning. We focus on two fundamental sub-tasks: auxiliary
input-based image matting, which involves user-defined input to predict the
alpha matte, and automatic image matting, which generates results without any
manual intervention. We systematically review the existing methods for these
two tasks according to their task settings and network structures and provide a
summary of their advantages and disadvantages. Furthermore, we introduce the
commonly used image matting datasets and evaluate the performance of
representative matting methods both quantitatively and qualitatively. Finally,
we discuss relevant applications of image matting and highlight existing
challenges and potential opportunities for future research. We also maintain a
public repository to track the rapid development of deep image matting at
https://github.com/JizhiziLi/matting-survey
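The alpha matte discussed throughout the survey is defined by the compositing equation I = αF + (1 − α)B, where F and B are the per-pixel foreground and background colors. A minimal sketch of that equation (the function name and toy data below are illustrative, not from the survey):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground over a background with a per-pixel alpha matte.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha:  float array of shape (H, W) with values in [0, 1]
    Implements the matting equation I = alpha * F + (1 - alpha) * B.
    """
    a = alpha[..., None]  # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg

# A 1x2 toy image: left pixel fully foreground, right pixel an even mix.
fg = np.ones((1, 2, 3))          # white foreground
bg = np.zeros((1, 2, 3))         # black background
alpha = np.array([[1.0, 0.5]])
print(composite(fg, bg, alpha))  # [[[1. 1. 1.]] with [0.5 0.5 0.5]] mix on the right
```

Matting methods estimate alpha (and often F) from I; compositing in the other direction, as here, is how matting datasets and evaluations construct ground-truth images.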
User-assisted intrinsic images
For many computational photography applications, the lighting and
materials in the scene are critical pieces of information. We seek
to obtain intrinsic images, which decompose a photo into the product
of an illumination component that represents lighting effects
and a reflectance component that is the color of the observed material.
This is an under-constrained problem and automatic methods
are challenged by complex natural images. We describe a new
approach that enables users to guide an optimization with simple
indications such as regions of constant reflectance or illumination.
Based on a simple assumption on local reflectance distributions, we
derive a new propagation energy that enables a closed form solution
using linear least-squares. We achieve fast performance by introducing
a novel downsampling that preserves local color distributions.
We demonstrate intrinsic image decomposition on a variety of images and show applications.

Funding: National Science Foundation (U.S.) (NSF CAREER award 0447561); Institut national de recherche en informatique et en automatique (France) (Associate Research Team "Flexible Rendering"); Microsoft Research (New Faculty Fellowship); Alfred P. Sloan Foundation (Research Fellowship); Quanta Computer, Inc. (MIT-Quanta T Party)
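The decomposition above follows the intrinsic-image model I = R · S, a per-pixel product of reflectance and illumination (shading). A minimal sketch of that relation, not the paper's least-squares propagation; the function name and toy data are illustrative:

```python
import numpy as np

def reflectance_from_shading(image, shading):
    """Recover reflectance from an image given an illumination (shading) estimate.

    Uses the intrinsic-image model I = R * S per pixel, so R = I / S.
    image, shading: float arrays of the same shape, with shading > 0.
    """
    return image / np.maximum(shading, 1e-6)  # guard against division by zero

# Toy example: a surface of constant reflectance 0.8 under varying lighting.
shading = np.array([0.2, 0.5, 1.0])
image = 0.8 * shading
print(reflectance_from_shading(image, shading))  # [0.8 0.8 0.8]
```

The hard part, of course, is estimating S from I alone; the user indications described above constrain exactly that ambiguity.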
ImageSpirit: Verbal Guided Image Parsing
Humans describe images in terms of nouns and adjectives while algorithms
operate on images represented as sets of pixels. Bridging this gap between how
humans would like to access images versus their typical representation is the
goal of image parsing, which involves assigning object and attribute labels to
each pixel. In this paper we propose treating nouns as object labels and adjectives
as visual attribute labels. This allows us to formulate the image parsing
problem as one of jointly estimating per-pixel object and attribute labels from
a set of training images. We propose an efficient (interactive time) solution.
Using the extracted labels as handles, our system empowers a user to verbally
refine the results. This enables hands-free parsing of an image into pixel-wise
object/attribute labels that correspond to human semantics. Verbally selecting
objects of interest enables a novel and natural interaction modality that can
possibly be used to interact with new generation devices (e.g. smart phones,
Google Glass, living room devices). We demonstrate our system on a large number
of real-world images with varying complexity. To help understand the tradeoffs
compared to traditional mouse based interactions, results are reported for both
a large-scale quantitative evaluation and a user study.

Comment: http://mmcheng.net/imagespirit
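The joint labeling described above assigns each pixel one object label (mutually exclusive) plus any number of attribute labels (not exclusive). This toy sketch uses independent argmax/thresholding in place of the paper's joint CRF; all names and scores are hypothetical:

```python
import numpy as np

def parse_pixels(object_scores, attribute_scores, attr_threshold=0.5):
    """Assign one object label and any number of attribute labels per pixel.

    object_scores:    (H, W, num_objects) per-pixel object scores
    attribute_scores: (H, W, num_attrs) per-pixel attribute scores in [0, 1]
    Objects are mutually exclusive, so we take the argmax; attributes are
    independent, so each is kept when its score passes the threshold.
    """
    objects = object_scores.argmax(axis=-1)
    attributes = attribute_scores > attr_threshold
    return objects, attributes

obj = np.array([[[0.2, 0.8], [0.7, 0.3]]])   # 1x2 image, 2 object classes
attr = np.array([[[0.9, 0.1], [0.4, 0.6]]])  # 2 attributes, e.g. "wooden", "shiny"
o, a = parse_pixels(obj, attr)
print(o)  # [[1 0]]
print(a)  # [[[ True False] [False  True]]]
```

The per-pixel labels produced this way are the "handles" that verbal refinement can then act on ("make the wooden region a table").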
Perceptually inspired image estimation and enhancement
Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (p. 137-144).

In this thesis, we present three image estimation and enhancement algorithms inspired by human vision.

In the first part of the thesis, we propose an algorithm for mapping one image to another based on the statistics of a training set. Many vision problems can be cast as image mapping problems, such as estimating reflectance from luminance, estimating shape from shading, and separating signal from noise. Such problems are typically under-constrained, and yet humans are remarkably good at solving them. Classic computational theories about the ability of the human visual system to solve such under-constrained problems attribute this feat to the use of some intuitive regularities of the world, e.g., surfaces tend to be piecewise constant. In recent years, there has been considerable interest in deriving more sophisticated statistical constraints from natural images, but because of the high-dimensional nature of images, representing and utilizing the learned models remains a challenge. Our techniques produce models that are very easy to store and to query. We show these techniques to be effective for a number of applications: removing noise from images, estimating a sharp image from a blurry one, decomposing an image into reflectance and illumination, and interpreting lightness illusions.

In the second part of the thesis, we present an algorithm for compressing the dynamic range of an image while retaining important visual detail. The human visual system confronts a serious challenge with dynamic range, in that the physical world has an extremely high dynamic range, while neurons have low dynamic ranges. The human visual system performs dynamic range compression by applying automatic gain control, in both the retina and the visual cortex. Taking inspiration from that, we designed techniques that involve multi-scale subband transforms and smooth gain control on subband coefficients, and resemble the contrast gain control mechanism in the visual cortex. We show our techniques to be successful in producing dynamic-range-compressed images without compromising the visibility of detail or introducing artifacts. We also show that the techniques can be adapted for the related problem of "companding", in which a high dynamic range image is converted to a low dynamic range image and saved using fewer bits, and later expanded back to high dynamic range with minimal loss of visual quality.

In the third part of the thesis, we propose a technique that enables a user to easily localize image and video editing by drawing a small number of rough scribbles. Image segmentation, usually treated as an unsupervised clustering problem, is extremely difficult to solve. With a minimal degree of user supervision, however, we are able to generate selection masks of good quality. Our technique learns a classifier using the user-scribbled pixels as training examples, and uses the classifier to classify the rest of the pixels into distinct classes. It then uses the classification results as per-pixel data terms, combines them with a smoothness term that respects color discontinuities, and generates better results than state-of-the-art algorithms for interactive segmentation.

by Yuanzhen Li. Ph.D.
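The scribble-based selection in the third part, train a classifier on user-scribbled pixels and classify the rest, can be sketched with a deliberately simple stand-in: a nearest-mean color model rather than the thesis's learned classifier, and without the smoothness term:

```python
import numpy as np

def classify_from_scribbles(pixels, scribble_idx, scribble_labels):
    """Label every pixel by its nearest scribble-class mean color.

    pixels:          (N, 3) float array of pixel colors
    scribble_idx:    indices of user-scribbled pixels (training examples)
    scribble_labels: class label for each scribbled pixel
    Returns a label per pixel; a full system would feed these predictions
    into per-pixel data terms combined with a smoothness term.
    """
    labels = np.unique(scribble_labels)
    # Mean color of each scribbled class acts as a trivial "classifier".
    means = np.stack([pixels[scribble_idx][scribble_labels == c].mean(axis=0)
                      for c in labels])
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return labels[np.argmin(dists, axis=1)]

# Toy image: two dark pixels, two bright pixels, one scribble in each region.
pixels = np.array([[0.1, 0.1, 0.1], [0.15, 0.1, 0.1],
                   [0.9, 0.9, 0.9], [0.85, 0.9, 0.95]])
seeds = np.array([0, 2])          # user scribbled pixel 0 (bg) and pixel 2 (fg)
seed_labels = np.array([0, 1])
print(classify_from_scribbles(pixels, seeds, seed_labels))  # [0 0 1 1]
```

Even this crude color model illustrates why a handful of scribbles suffices: the classifier generalizes the user's sparse labels to every pixel.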
A Study on Seed Information Enrichment Techniques for Personalized Interactive Image Segmentation Algorithms

Thesis (Ph.D.) -- Seoul National University, College of Engineering, Dept. of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee.

Segmentation of an area corresponding to a desired object in an image is essential
to computer vision problems. This is because most algorithms are performed in
semantic units when interpreting or analyzing images. However, segmenting the desired object from a given image is an ambiguous problem: the target object varies depending on the user and the purpose. To solve this problem, interactive segmentation techniques have been proposed, in which segmentation proceeds in the desired direction according to interaction with the user. In this setting, the seed information provided by the user plays an important role. If the seed contains abundant information, the accuracy of segmentation increases; however, providing rich seed information places a heavy burden on the user. Therefore, the main goal of the present study was to obtain satisfactory segmentation results using simple seed information.
We primarily focused on converting the provided sparse seed information into a rich state so that accurate segmentation results can be derived. To this end, minimal user input was taken and enriched through various seed enrichment techniques. A total of three interactive segmentation techniques were proposed, based on (1) seed expansion, (2) seed generation, and (3) seed attention. These enrichment types comprise expanding the area around a seed, generating a new seed at a new position, and attending to semantic information.
First, in seed expansion, we expanded the scope of the seed. We integrated reliable pixels around the initial seed into the seed set through a two-stage expansion step. Because the extended seed covers a wider area than the initial seed, its scarcity and imbalance problems were resolved. Next, in seed generation, we created a seed at a new point rather than around the existing seed. We trained the system to imitate user behavior by providing a new seed point in the erroneous region. By learning the user's intention, our model could efficiently create a new seed point. The generated seed helped segmentation and could also serve as additional information for weakly supervised learning. Finally, through seed attention, we put semantic information into the seed. Unlike the previous models, we integrated the segmentation and seed enrichment processes. We reinforced the seed by adding semantic information instead of spatial expansion; the seed information was enriched through mutual attention with feature maps generated during the segmentation process.
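The seed expansion idea can be sketched as a simple region growing that absorbs reliable neighbors of the initial seed. This single-stage, color-threshold version is only a rough stand-in for the two-stage scheme described above; the function name and toy data are illustrative:

```python
import numpy as np
from collections import deque

def expand_seed(image, seed_mask, tol=0.1):
    """Grow a sparse seed by absorbing similar-valued neighboring pixels.

    image:     (H, W) grayscale float array
    seed_mask: (H, W) boolean array marking the initial seed pixels
    tol:       maximum intensity difference to the seed mean
    Breadth-first region growing over 4-connected neighbors.
    """
    h, w = image.shape
    mean = image[seed_mask].mean()
    expanded = seed_mask.copy()
    frontier = deque(zip(*np.nonzero(seed_mask)))
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not expanded[ny, nx] \
                    and abs(image[ny, nx] - mean) <= tol:
                expanded[ny, nx] = True
                frontier.append((ny, nx))
    return expanded

# Toy image: a bright 2x2 object (1.0) on a dark background, one seed pixel.
img = np.zeros((4, 4)); img[1:3, 1:3] = 1.0
seed = np.zeros((4, 4), bool); seed[1, 1] = True
print(expand_seed(img, seed).sum())  # 4: all object pixels joined the seed
```

The expanded mask covers the whole object from a single click, which is exactly the sparsity/imbalance relief the first technique targets.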
The proposed models show superiority over existing techniques in various experiments. Notably, even with sparse seed information, the proposed seed enrichment techniques gave far more accurate segmentation results than the other existing methods.
1 Introduction
1.1 Previous Works
1.2 Proposed Methods
2 Interactive Segmentation with Seed Expansion
2.1 Introduction
2.2 Proposed Method
2.2.1 Background
2.2.2 Pyramidal RWR
2.2.3 Seed Expansion
2.2.4 Refinement with Global Information
2.3 Experiments
2.3.1 Dataset
2.3.2 Implementation Details
2.3.3 Performance
2.3.4 Contribution of Each Part
2.3.5 Seed Consistency
2.3.6 Running Time
2.4 Summary
3 Interactive Segmentation with Seed Generation
3.1 Introduction
3.2 Related Works
3.3 Proposed Method
3.3.1 System Overview
3.3.2 Markov Decision Process
3.3.3 Deep Q-Network
3.3.4 Model Architecture
3.4 Experiments
3.4.1 Implementation Details
3.4.2 Performance
3.4.3 Ablation Study
3.4.4 Other Datasets
3.5 Summary
4 Interactive Segmentation with Seed Attention
4.1 Introduction
4.2 Related Works
4.3 Proposed Method
4.3.1 Interactive Segmentation Network
4.3.2 Bi-directional Seed Attention Module
4.4 Experiments
4.4.1 Datasets
4.4.2 Metrics
4.4.3 Implementation Details
4.4.4 Performance
4.4.5 Ablation Study
4.4.6 Seed Enrichment Methods
4.5 Summary
5 Conclusions
5.1 Summary
Bibliography
Abstract (in Korean)
ROAM: a Rich Object Appearance Model with Application to Rotoscoping
Rotoscoping, the detailed delineation of scene elements through a video shot,
is a painstaking task of tremendous importance in professional post-production
pipelines. While pixel-wise segmentation techniques can help for this task,
professional rotoscoping tools rely on parametric curves that offer the artists
a much better interactive control on the definition, editing and manipulation
of the segments of interest. Sticking to this prevalent rotoscoping paradigm,
we propose a novel framework to capture and track the visual aspect of an
arbitrary object in a scene, given a first closed outline of this object. This
model combines a collection of local foreground/background appearance models
spread along the outline, a global appearance model of the enclosed object and
a set of distinctive foreground landmarks. The structure of this rich
appearance model allows simple initialization, efficient iterative optimization
with exact minimization at each step, and on-line adaptation in videos. We
demonstrate qualitatively and quantitatively the merit of this framework
through comparisons with tools based on either dynamic segmentation with a
closed curve or pixel-wise binary labelling.
Towards Generalizable Deep Image Matting: Decomposition, Interaction, and Merging
Image matting refers to extracting the precise alpha mattes from images, playing a critical role in many downstream applications. Despite extensive attention, key challenges persist and motivate the research presented in this thesis.
One major challenge is the reliance on auxiliary inputs in previous methods, hindering real-time practicality. To address this, we introduce fully automatic image matting by decomposing the task into high-level semantic segmentation and low-level details matting. We then incorporate plug-in modules to enhance the interaction between the sub-tasks through feature integration. Furthermore, we propose an attention-based mechanism to guide the matting process through collaboration merging.
Another challenge lies in limited matting datasets, resulting in reliance on composite images and inferior performance on images in the wild. In response, our research proposes a composition route to mitigate the discrepancies and result in remarkable generalization ability. Additionally, we construct numerous large datasets of high-quality real-world images with manually labeled alpha mattes, providing a solid foundation for training and evaluation.
Moreover, our research uncovers new observations that warrant further investigation. Firstly, we systematically analyze and address privacy issues that have been neglected in previous portrait matting research. Secondly, we explore the adaptation of automatic matting methods to non-salient or transparent categories beyond salient ones. Furthermore, we collaborate with language modality to achieve a more controllable matting process, enabling specific target selection at a low cost. To validate our studies, we conduct extensive experiments and provide all codes and datasets through the link (https://github.com/JizhiziLi/).
We believe that the analyses, methods, and datasets presented in this thesis will offer valuable insights for future research endeavors in the field of image matting.
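The decomposition-and-merging idea, a high-level semantic mask for "where", a low-level detail matte for "how much", can be sketched as a toy fusion rule; this hand-written rule is only a stand-in for the learned attention-based merging described above, and all names are illustrative:

```python
import numpy as np

def merge_matte(coarse_seg, detail_matte, transition):
    """Merge a coarse semantic mask with a fine detail matte.

    coarse_seg:   (H, W) float array, high-level foreground probability
    detail_matte: (H, W) float array, low-level alpha prediction
    transition:   (H, W) boolean mask of uncertain (boundary) pixels
    Inside the transition band the detail matte wins; elsewhere the
    coarse segmentation is trusted and snapped to 0 or 1.
    """
    return np.where(transition, detail_matte, np.round(coarse_seg))

coarse = np.array([[0.9, 0.6, 0.1]])     # confident fg / uncertain / bg
detail = np.array([[1.0, 0.45, 0.0]])
band = np.array([[False, True, False]])  # only the middle pixel is uncertain
print(merge_matte(coarse, detail, band))  # [[1.   0.45 0.  ]]
```

Splitting the task this way lets each sub-network specialize: segmentation need not resolve hair-level detail, and the detail branch need not reason about global semantics.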