23 research outputs found
Development of colorization of grayscale images using CNN-SVM
Nowadays, there is growing interest in colorizing grayscale or black-and-white images dating back to before the color camera, for historical and aesthetic reasons. Image and video colorization can be applied to historical images, natural images, and astronomical photography. This paper proposes a fully automated image colorization method based on a deep learning algorithm. First, an image dataset was selected for training and testing. A convolutional neural network (CNN) was designed with several convolutional and max-pooling layers, and Support Vector Machine (SVM) regression was used at the final stage. The proposed algorithm was implemented in Python with the Keras and TensorFlow libraries in Google Colab. Results showed that the proposed system could predict colored images from the knowledge learned during training. A survey was then conducted to validate our findings.
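The abstract describes a pipeline of convolutional and max-pooling layers feeding an SVM regression head. A minimal NumPy sketch of that data flow, assuming a toy single-filter forward pass; the patch size, filter, and the linear head standing in for the SVR stage are all illustrative, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feat, size=2):
    """Non-overlapping max pooling."""
    h, w = feat.shape
    h -= h % size
    w -= w % size
    return feat[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 8x8 grayscale patch and a single untrained 3x3 filter.
gray = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))

# conv -> ReLU -> max-pool, then flatten into a feature vector.
features = max_pool(np.maximum(conv2d(gray, kernel), 0.0))
x = features.ravel()

# A linear head stands in here for the SVM regression stage; in practice
# scikit-learn's sklearn.svm.SVR is a common choice for that final step.
w = rng.standard_normal(x.size) * 0.01
predicted_chroma = float(x @ w)  # toy scalar color target for the patch
```

The point is only the hand-off: the CNN part reduces the image to a fixed-length feature vector, and a separate regressor maps that vector to a color prediction.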
Pixelated Semantic Colorization
While many image colorization algorithms have recently shown the capability
of producing plausible color versions from gray-scale photographs, they still
suffer from limited semantic understanding. To address this shortcoming, we
propose to exploit pixelated object semantics to guide image colorization. The
rationale is that human beings perceive and distinguish colors based on the
semantic categories of objects. Starting from an autoregressive model, we
generate image color distributions, from which diverse colored results are
sampled. We propose two ways to incorporate object semantics into the
colorization model: through a pixelated semantic embedding and a pixelated
semantic generator. Specifically, the proposed convolutional neural network
includes two branches. One branch learns what the object is, while the other
branch learns the object colors. The network jointly optimizes a color
embedding loss, a semantic segmentation loss and a color generation loss, in an
end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that
our network, when trained with semantic segmentation labels, produces more
realistic and finer results compared to the colorization state-of-the-art.
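The three jointly optimized losses can be sketched as one weighted objective. A minimal NumPy illustration, assuming an L2 color-embedding term, unit loss weights, and tiny array shapes purely for demonstration; none of these reflect the paper's exact formulation:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy; probs is (H, W, C), labels is (H, W) ints."""
    h, w = labels.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.log(picked + 1e-12).mean()

def joint_loss(emb_pred, emb_tgt, seg_probs, seg_labels, gen_probs, bin_labels,
               w_emb=1.0, w_seg=1.0, w_gen=1.0):
    """Weighted sum of color-embedding, segmentation and color-generation losses."""
    l_emb = np.mean((emb_pred - emb_tgt) ** 2)     # color embedding (L2 stand-in)
    l_seg = cross_entropy(seg_probs, seg_labels)   # semantic segmentation
    l_gen = cross_entropy(gen_probs, bin_labels)   # color generation over quantized bins
    return w_emb * l_emb + w_seg * l_seg + w_gen * l_gen

# Tiny random example: a 4x4 image, 3 semantic classes, 5 color bins.
rng = np.random.default_rng(1)
H, W, n_cls, n_bins = 4, 4, 3, 5
seg_probs = rng.random((H, W, n_cls)); seg_probs /= seg_probs.sum(-1, keepdims=True)
gen_probs = rng.random((H, W, n_bins)); gen_probs /= gen_probs.sum(-1, keepdims=True)
loss = joint_loss(rng.random((H, W, 8)), rng.random((H, W, 8)),
                  seg_probs, rng.integers(0, n_cls, (H, W)),
                  gen_probs, rng.integers(0, n_bins, (H, W)))
```

Because the three terms share one backward pass, gradients from the segmentation branch also shape the features used for color prediction, which is the mechanism behind "end-to-end" joint training.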
Natural Colorization of Grayscale Images Using a Modified FusionNet
Thesis (M.S.) -- Seoul National University Graduate School: Interdisciplinary Program in Computational Science, College of Natural Sciences, February 2021. Advisor: Myungjoo Kang.
In this paper, we propose a grayscale image colorization technique. Colorization methods can be divided into three main categories: the scribble-based method, the exemplar-based method, and the fully automatic method; our proposed method belongs to the third. We use a deep learning model of the kind widely used in the colorization field recently: an encoder-decoder model built from convolutional neural networks. In particular, we modify FusionNet, which has shown good performance in image segmentation, to suit the colorization task.
Also, to obtain better results, we do not use the MSE loss function; instead, we use a loss function suited to the colorization task. We use a subset of the ImageNet dataset for training, validation and testing. We compare our model against several existing fully automatic deep learning methods. Our algorithm is evaluated with the quantitative metric PSNR (Peak Signal-to-Noise Ratio); to evaluate the results qualitatively, our model was applied to the test dataset and compared with various other models. Our model performs better both quantitatively and qualitatively than the other models. Finally, we apply our model to old black-and-white photographs.
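PSNR itself is a standard, easily reproduced metric: the log-scaled ratio of the maximum possible pixel value to the mean squared error. A small NumPy implementation (the test images below are synthetic, not from the thesis):

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Higher is better: a lightly perturbed image scores above a heavily noisy one.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (32, 32, 3)).astype(np.float64)
light = np.clip(clean + rng.normal(0, 2, clean.shape), 0, 255)
heavy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
```

Here `psnr(clean, light)` comes out well above `psnr(clean, heavy)`, since lower residual noise means lower MSE and hence higher PSNR.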
Abstract
1 Introduction
2 Related Works
2.1 Scribble-based method
2.2 Exemplar-based method
2.3 Fully automatic method
3 Proposed Method
3.1 Method Overview
3.2 Loss Function
3.3 Architecture detail
3.3.1 Encoder
3.3.2 Decoder
3.3.3 Bridge
4 Experiments
4.1 CIE Lab Color Space
4.2 Dataset
4.3 Qualitative Evaluation
4.4 Quantitative Evaluation
4.5 Legacy Old Image Colorization
5 Conclusion
Bibliography
Abstract (in Korean)
ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution
The colorization of grayscale images is an ill-posed problem, with multiple
correct solutions. In this paper, we propose an adversarial learning
colorization approach coupled with semantic information. A generative network
is used to infer the chromaticity of a given grayscale image conditioned to
semantic clues. This network is framed in an adversarial model that learns to
colorize by incorporating perceptual and semantic understanding of color and
class distributions. The model is trained via a fully self-supervised strategy.
Qualitative and quantitative results show the capacity of the proposed method
to colorize images in a realistic way, achieving state-of-the-art results.
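The "fully self-supervised strategy" means every color photograph supplies its own training pair: the luminance channel is the input and the chromatic content is the target, so no manual labels are needed. A simplified NumPy sketch, using Rec. 601 luma as a stand-in for the luminance/chrominance decomposition ChromaGAN actually operates in:

```python
import numpy as np

def make_training_pair(rgb_uint8):
    """Build a self-supervised (input, target) pair from one color image."""
    rgb = rgb_uint8.astype(np.float64) / 255.0
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # (H, W) grayscale network input
    return luma, rgb                              # target: the original colors

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (16, 16, 3), dtype=np.uint8)
x, y = make_training_pair(image)  # x: (16, 16) input, y: (16, 16, 3) target
```

At training time the generator sees only `x`; the adversarial and perceptual losses then compare its colorized output against `y`, which is why any collection of color images can serve as training data.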