4 research outputs found
Semi-supervised Skin Detection by Network with Mutual Guidance
In this paper we present a new data-driven method for robust skin detection
from a single human portrait image. Unlike previous methods, we incorporate
the human body as weak semantic guidance into this task, since acquiring
large-scale human-labeled skin data is commonly expensive and time-consuming.
Specifically, we propose a dual-task neural network for joint detection of
skin and body via a semi-supervised learning strategy. The dual-task network
contains a shared encoder but two separate decoders for skin and body. The
output of each decoder also serves as guidance for its counterpart, making
the two decoders mutually guided. Extensive experiments demonstrate the
effectiveness of our network with mutual guidance, and the results show that
it outperforms the state-of-the-art in skin detection.
Comment: Accepted by ICCV 201
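The mutual-guidance idea above (a shared encoder feeding two decoders, each decoder's output guiding the other) can be sketched in plain Python. This is a hypothetical toy illustration, not the authors' network: the "encoder" is a per-pixel brightness stand-in, and `weight` is an assumed blending parameter.

```python
def encoder(image):
    # Shared feature extractor (toy stand-in: mean RGB brightness per pixel).
    return [[sum(px) / 3.0 for px in row] for row in image]

def decode(features, guidance, weight=0.5):
    # A decoder head that blends its own evidence with the counterpart's map.
    return [
        [f * (1.0 - weight) + g * weight for f, g in zip(frow, grow)]
        for frow, grow in zip(features, guidance)
    ]

def mutual_guidance_step(image, skin_map, body_map):
    feats = encoder(image)
    # Each decoder's previous output guides its counterpart's refinement.
    new_skin = decode(feats, body_map)
    new_body = decode(feats, skin_map)
    return new_skin, new_body

# Toy 2x2 "portrait": RGB triples in [0, 1].
img = [[(0.9, 0.6, 0.5), (0.1, 0.1, 0.1)],
       [(0.8, 0.5, 0.4), (0.2, 0.2, 0.2)]]
skin = [[0.0, 0.0], [0.0, 0.0]]   # initial skin map
body = [[1.0, 1.0], [1.0, 1.0]]   # initial body map
skin, body = mutual_guidance_step(img, skin, body)
```

In the actual method the two decoders are convolutional and trained semi-supervised; the sketch only shows how each task's prediction enters the other's computation.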
A study of the effect of the illumination model on the generation of synthetic training datasets
The use of computer-generated images to train deep neural networks is a
viable alternative to real images when the latter are scarce or expensive. In
this paper, we study how the illumination model used by the rendering software
affects the quality of the generated images. We created eight training sets,
each one with a different illumination model, and tested them on three
different network architectures: ResNet, U-Net, and a combined architecture
developed by us. The test set consisted of photos of 3D-printed objects
produced from the same CAD models used to generate the training set. The other
parameters of the rendering process, such as textures and camera position,
were randomized.
Our results show that the effect of the illumination model is important,
comparable in significance to the choice of network architecture. We also show
that both light probes capturing natural environmental light and modelled
lighting environments can give good results. For light probes, we identified
two significant factors affecting performance: the similarity between the
light probe and the test environment, and the light probe's resolution. For
modelled lighting environments, similarity with the test environment was again
a significant factor.
Comment: 8 page
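The dataset-generation protocol described above (illumination model held fixed per training set, all other rendering parameters randomized) can be sketched as a config generator. This is a hypothetical illustration; the model names, parameter names, and ranges are placeholders, not the paper's actual rendering setup.

```python
import random

# Placeholder illumination models: light probes and modelled environments.
ILLUMINATION_MODELS = [
    "light_probe_indoor", "light_probe_outdoor",
    "modelled_three_point", "modelled_ambient",
]

def render_config(illumination_model, rng):
    # One synthetic image's parameters: the illumination model is fixed
    # for the whole training set; everything else is randomized.
    return {
        "illumination": illumination_model,
        "texture_id": rng.randrange(100),
        "camera_azimuth_deg": rng.uniform(0.0, 360.0),
        "camera_elevation_deg": rng.uniform(10.0, 80.0),
    }

def make_training_set(illumination_model, n_images, seed=0):
    rng = random.Random(seed)
    return [render_config(illumination_model, rng) for _ in range(n_images)]

# One training set per illumination model, as in the study's design.
sets = {m: make_training_set(m, n_images=1000) for m in ILLUMINATION_MODELS}
```

Randomizing the nuisance parameters while varying only the illumination model is what lets the study attribute performance differences to the illumination model itself.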
Unsupervised Domain Adaptation for Semantic Segmentation of NIR Images through Generative Latent Search
Segmentation of the pixels corresponding to human skin is an essential first
step in multiple applications ranging from surveillance to heart-rate
estimation from remote photoplethysmography. However, the existing literature
considers the problem only in the visible range of the EM spectrum, which
limits its utility in low-light or no-light settings, where the criticality of
the application is higher. To alleviate this problem, we consider the problem
of skin segmentation from near-infrared (NIR) images. However, state-of-the-art
deep-learning-based segmentation techniques demand large amounts of labelled
data, which is unavailable for this problem. We therefore cast skin
segmentation as target-independent Unsupervised Domain Adaptation (UDA), where
we use data from the red channel of the visible range to develop a skin
segmentation algorithm for NIR images. We propose a method for
target-independent segmentation in which the 'nearest-clone' of a target image
in the source domain is searched for and used as a proxy in a segmentation
network trained only on the source domain. We prove the existence of the
'nearest-clone' and propose a method to find it through an optimization
algorithm over the latent space of a deep generative model based on
variational inference. We demonstrate the efficacy of the proposed method for
NIR skin segmentation over state-of-the-art UDA segmentation methods on two
newly created NIR skin segmentation datasets, despite not having access to the
target NIR data. Additionally, we report state-of-the-art results for
adaptation from SYNTHIA to Cityscapes, a popular setting in unsupervised
domain adaptation for semantic segmentation. The code and datasets are
available at https://github.com/ambekarsameer96/GLSS.
Comment: ECCV 2020 [Spotlight
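The 'nearest-clone' search above amounts to optimizing over a generator's latent space for the code whose output best matches the target image. The sketch below is a deliberately simplified stand-in: the paper uses a deep generative model trained with variational inference, which is replaced here by a toy linear generator so the optimization is transparent.

```python
# Toy linear "generator": G(z) = A @ z, mapping a 2-D latent to a 3-D image.
A = [[1.0, 0.0],
     [0.0, 2.0],
     [1.0, 1.0]]

def G(z):
    return [sum(a * v for a, v in zip(row, z)) for row in A]

def nearest_clone(target, steps=500, lr=0.05):
    # Gradient descent on ||G(z) - target||^2 over the latent space.
    z = [0.0, 0.0]
    for _ in range(steps):
        r = [g - t for g, t in zip(G(z), target)]  # residual G(z) - x
        # grad of ||G(z) - x||^2 w.r.t. z is 2 * A^T r (linear case).
        grad = [2.0 * sum(A[i][j] * r[i] for i in range(3)) for j in range(2)]
        z = [zj - lr * gj for zj, gj in zip(z, grad)]
    return z

target = [1.0, 2.0, 2.0]       # stand-in for a target-domain (NIR) image
z_star = nearest_clone(target)
proxy = G(z_star)              # the 'nearest-clone', fed to the source-only
                               # segmentation network as a proxy input
```

The key property is that the proxy lies on the source generator's manifold by construction, so a segmenter trained only on the source domain can consume it without ever seeing target-domain data.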
Skin disease diagnosis with deep learning: a review
Skin cancer is one of the most threatening diseases worldwide, yet diagnosing
it correctly is challenging. Recently, deep learning algorithms have achieved
excellent performance on various tasks, and in particular they have been
applied to skin disease diagnosis. In this paper, we present a review of deep
learning methods and their applications in skin disease diagnosis. We first
give a brief introduction to skin diseases and image acquisition methods in
dermatology, and list several publicly available skin datasets for training
and testing algorithms. Then, we introduce the concept of deep learning and
review popular deep learning architectures. Thereafter, we present popular
deep learning frameworks that facilitate the implementation of deep learning
algorithms, along with common performance evaluation metrics. As an important
part of this article, we then review the literature on deep learning methods
for skin disease diagnosis from several aspects, organized by specific task.
Additionally, we discuss the challenges faced in the area and suggest possible
future research directions. The major purpose of this article is to provide a
conceptual and systematic review of recent work on skin disease diagnosis with
deep learning. Given the popularity of deep learning, there remain great
challenges in the area, as well as opportunities to explore in the future.