18 research outputs found
Color Cerberus
A simple convolutional neural network was able to win the ISISPA color constancy
competition. A partial reimplementation of the neural architecture of (Bianco, 2017)
would have shown even better results in this setup.
Artificial Color Constancy via GoogLeNet with Angular Loss Function
Color constancy is the ability of the human visual system to perceive colors
as unchanged regardless of the illumination. Giving a machine this ability would
be beneficial in many fields where chromatic information is used; in particular,
it significantly improves scene understanding and object recognition. In this
paper, we propose a transfer learning-based algorithm with two main
features: accuracy higher than many state-of-the-art algorithms and simplicity
of implementation. Although GoogLeNet was used in the experiments, the
approach may be applied to any CNN. Additionally, we discuss the design of a
new loss function oriented specifically to this problem and propose a few of the
most suitable options.
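The angular loss mentioned above is, in its standard form, the angle between the predicted and ground-truth illuminant RGB vectors. A minimal NumPy sketch of that standard formula (the exact loss variant used in the paper may differ):

```python
import numpy as np

def angular_loss(pred, gt, eps=1e-8):
    """Angular error in degrees between a predicted and a ground-truth
    illuminant RGB vector; scale-invariant, a common color-constancy metric."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because only the direction of the illuminant vector matters, `angular_loss([1, 1, 1], [2, 2, 2])` is (numerically) zero, while orthogonal vectors give 90 degrees.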
Fully Point-wise Convolutional Neural Network for Modeling Statistical Regularities in Natural Images
Modeling statistical regularity plays an essential role in ill-posed image
processing problems. Recently, deep learning based methods have been presented
to implicitly learn statistical representation of pixel distributions in
natural images and leverage it as a constraint to facilitate subsequent tasks,
such as color constancy and image dehazing. However, the existing CNN
architecture is prone to variability and diversity of pixel intensity within
and between local regions, which may result in inaccurate statistical
representation. To address this problem, this paper presents a novel fully
point-wise CNN architecture for modeling statistical regularities in natural
images. Specifically, we propose to randomly shuffle the pixels in the original
images and use the shuffled image as input, making the CNN focus on
the statistical properties. Moreover, since the pixels in the shuffled image
are independent and identically distributed, we can replace all the large
convolution kernels in the CNN with point-wise (1x1) convolution kernels while
maintaining the representation ability. Experimental results on two
applications, color constancy and image dehazing, demonstrate the superiority
of our proposed network over the existing architectures, i.e., using
1/10 to 1/100 of the network parameters and computational cost while achieving
comparable performance. Comment: 9 pages, 7 figures. To appear in ACM MM 201
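The two core operations, shuffling pixel positions and applying only point-wise convolutions, can be sketched in NumPy (a toy illustration, not the authors' implementation; the image and layer sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": H x W x 3.
img = rng.random((4, 4, 3))

# Randomly shuffle pixel positions: spatial structure is destroyed,
# only the distribution of pixel values remains.
flat = img.reshape(-1, 3)
shuffled = flat[rng.permutation(flat.shape[0])].reshape(img.shape)

# A point-wise (1x1) convolution is a linear map applied to each pixel
# independently: (H*W, C_in) @ (C_in, C_out), plus a bias.
def pointwise_conv(x, weight, bias):
    h, w, c_in = x.shape
    return (x.reshape(-1, c_in) @ weight + bias).reshape(h, w, -1)

weight = rng.standard_normal((3, 8))  # C_in=3 -> C_out=8 (illustrative sizes)
bias = np.zeros(8)
out = pointwise_conv(shuffled, weight, bias)
print(out.shape)  # (4, 4, 8)
```

Since every pixel in the shuffled image is drawn from the same distribution, a per-pixel linear map like this loses no spatial information that was still available, which is the intuition behind replacing large kernels with 1x1 kernels.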
Underwater Image Enhancement for Instance Segmentation using Deep Learning Models
Underwater instance segmentation depends heavily on color-blended underwater images. In this work, a combination of Generalized Color Fourier Descriptor (GCFD), Convolutional Neural Network (CNN), and Mask Region-based Convolutional Neural Network (Mask R-CNN) models was employed to generate a mask for each bounding-boxed Region of Interest (ROI), so that individual underwater instances are accurately segmented from their complex background. The Patch-based Contrast Quality Index (PCQI) evaluation of our proposed image enhancement method (GCFD) on the employed datasets yields a score of 1.1336, higher than the 1.1126 achieved by the Contrast-enhancement Algorithm (CA).
Monte Carlo Dropout Ensembles for Robust Illumination Estimation
Computational color constancy is a preprocessing step used in many camera
systems. The main aim is to discount the effect of the illumination on the
colors in the scene and restore the original colors of the objects. Recently,
several deep learning-based approaches have been proposed to solve this problem
and they often led to state-of-the-art performance in terms of average errors.
However, for extreme samples, these methods fail and lead to high errors. In
this paper, we address this limitation by proposing to aggregate different deep
learning methods according to their output uncertainty. We estimate the
relative uncertainty of each approach using Monte Carlo dropout and the final
illumination estimate is obtained as the sum of the different model estimates
weighted by the log-inverse of their corresponding uncertainties. The proposed
framework leads to state-of-the-art performance on the INTEL-TAU dataset. Comment: 7 pages, 6 figures
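The aggregation step described in the abstract can be sketched as follows. This is a hedged NumPy illustration, not the authors' code: I read "log-inverse of the uncertainties" as weights proportional to log(1/u), where u would come from the variance of Monte Carlo dropout forward passes; the clipping and normalization below are my own assumptions.

```python
import numpy as np

def aggregate_estimates(estimates, uncertainties, eps=1e-8):
    """Combine per-model illuminant estimates, weighting each by the
    log-inverse of its uncertainty (assumed form: w_i ~ log(1 / u_i)).
    In the paper, u_i would be the Monte Carlo dropout uncertainty of
    model i; here it is just a positive number per model."""
    estimates = np.asarray(estimates, dtype=float)  # shape (n_models, 3)
    u = np.asarray(uncertainties, dtype=float)
    w = np.log(1.0 / (u + eps))
    w = np.clip(w, 0.0, None)        # assumption: ignore models with u >= 1
    w = w / (w.sum() + eps)          # assumption: normalize to a convex sum
    return (w[:, None] * estimates).sum(axis=0)

# Two hypothetical models: the low-uncertainty one dominates the combination.
ests = [[0.9, 1.0, 0.8], [0.5, 1.0, 1.5]]
out = aggregate_estimates(ests, [0.01, 0.5])
print(out)
```

With these numbers, the combined estimate lies much closer to the first model's output, which is the intended behavior: confident models pull the ensemble toward their prediction.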