Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting the
generalizability of networks that are trained on synthetic data and tested in
real environments. This paper proposes an efficient, automatic,
physically-based augmentation pipeline that varies sensor effects (chromatic
aberration, blur, exposure, noise, and color cast) in synthetic imagery. In
particular, this paper shows that augmenting synthetic training datasets with
the proposed pipeline reduces the domain gap between the synthetic and real
domains for the task of object detection in urban driving scenes.
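As a rough illustration of such a pipeline, the sketch below applies randomized chromatic aberration, blur, exposure, noise, and color cast to an image with NumPy and SciPy. The parameter ranges and the lateral channel-shift model of chromatic aberration are assumptions for illustration, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def augment_sensor_effects(img, rng=None):
    """Apply randomized sensor effects to an HxWx3 float image in [0, 1]."""
    rng = rng or np.random.default_rng()
    out = img.astype(np.float64).copy()

    # Chromatic aberration: shift the red and blue channels a few pixels
    # in opposite horizontal directions (a crude lateral-shift model).
    dx = int(rng.integers(-2, 3))
    out[..., 0] = np.roll(out[..., 0], dx, axis=1)
    out[..., 2] = np.roll(out[..., 2], -dx, axis=1)

    # Blur: Gaussian with a random sigma.
    sigma = rng.uniform(0.0, 1.5)
    for c in range(3):
        out[..., c] = gaussian_filter(out[..., c], sigma)

    # Exposure: random gamma adjustment.
    out = np.clip(out, 0.0, 1.0) ** rng.uniform(0.7, 1.4)

    # Noise: additive Gaussian with a random strength.
    out += rng.normal(0.0, rng.uniform(0.0, 0.02), out.shape)

    # Color cast: random per-channel gain.
    out *= rng.uniform(0.9, 1.1, size=3)
    return np.clip(out, 0.0, 1.0)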
Ultrasound segmentation using U-Net: learning from simulated data and testing on real data
Segmentation of ultrasound images is an essential task in both diagnosis and
image-guided interventions given the ease-of-use and low cost of this imaging
modality. As manual segmentation is tedious and time-consuming, a growing body
of research has focused on the development of automatic segmentation
algorithms. Deep learning algorithms have shown remarkable achievements in this
regard; however, they need large training datasets. Unfortunately, preparing
large labeled datasets of ultrasound images is prohibitively difficult.
Therefore, in this study, we propose the use of simulated ultrasound (US)
images for training the U-Net deep learning segmentation architecture, and we
test on tissue-mimicking phantom data collected by an ultrasound machine. We
demonstrate that the architecture trained on the simulated data is
transferable to real data, and therefore simulated data can be considered as
an alternative training dataset when real datasets are not available. The
second contribution of this paper is that we train our U-Net network on
envelope and B-mode images of the simulated dataset, and test the trained
network on real envelope and B-mode images of the phantom, respectively. We
show that test results are superior for the envelope data compared to the
B-mode images.
Comment: Accepted in EMBC 201
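To make the sim-to-real training setup concrete, here is a minimal PyTorch sketch: a tiny encoder-decoder stands in for the full U-Net, random tensors stand in for the simulated and phantom datasets, and training on simulated images is followed by direct evaluation on real ones. All names and sizes are placeholders, not the authors' configuration.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with one skip connection (U-Net stand-in)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.head = nn.Conv2d(32, 1, 1)  # 32 = 16 skip + 16 upsampled channels

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.head(torch.cat([e, m], dim=1))

# Dummy stand-ins for simulated (train) and real phantom (test) data.
simulated_loader = [(torch.rand(4, 1, 64, 64),
                     (torch.rand(4, 1, 64, 64) > 0.5).float()) for _ in range(8)]
phantom_images = torch.rand(4, 1, 64, 64)

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Train only on the simulated images ...
for sim_img, sim_mask in simulated_loader:
    opt.zero_grad()
    loss = loss_fn(model(sim_img), sim_mask)
    loss.backward()
    opt.step()

# ... then segment real phantom images with no further fine-tuning.
with torch.no_grad():
    pred_mask = torch.sigmoid(model(phantom_images)) > 0.5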
A Multiple Radar Approach for Automatic Target Recognition of Aircraft using Inverse Synthetic Aperture Radar
Along with the improvement of radar technologies, Automatic Target
Recognition (ATR) using Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR)
has become an active research area. SAR/ISAR are radar techniques that
generate a two-dimensional high-resolution image of a target. Unlike other
similar experiments using Convolutional Neural Networks (CNN) to solve this
problem, we utilize an unusual approach that leads to better performance and
faster training times. We train our CNN on complex values generated by a
simulation; additionally, we utilize a multi-radar approach to increase the
accuracy of the training and testing processes, resulting in higher
accuracies than other published work on SAR/ISAR ATR. We generated our
dataset of 7 different aircraft models with a radar simulator we developed
called RadarPixel, a Windows GUI program implemented in Matlab and Java; the
simulator is capable of accurately replicating real SAR/ISAR configurations.
Our objective is to apply our multi-radar technique and determine the optimal
number of radars needed to detect and classify targets.
Comment: 8 pages, 9 figures, International Conference for Data Intelligence
and Security (ICDIS)
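One plausible way to feed complex-valued, multi-radar ISAR data to a CNN is to split each radar's complex image into real and imaginary channels and stack them, as in the PyTorch sketch below. The channel layout, radar count, and network depth are assumptions for illustration, not the paper's exact design.

import torch
import torch.nn as nn

NUM_RADARS = 3    # assumed; the paper searches for the optimal count
NUM_CLASSES = 7   # seven aircraft models

def complex_to_channels(x):
    """(B, R, H, W) complex tensor -> (B, 2R, H, W) real tensor."""
    return torch.cat([x.real, x.imag], dim=1)

net = nn.Sequential(
    nn.Conv2d(2 * NUM_RADARS, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),
)

# Example: a batch of 4 complex ISAR images from each of 3 radars.
isar = torch.randn(4, NUM_RADARS, 64, 64, dtype=torch.complex64)
logits = net(complex_to_channels(isar))
print(logits.shape)  # torch.Size([4, 7])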
A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for
the full body. We sidestep the commonly used complex graphics rendering
pipeline and the need for high-quality 3D scans of dressed people. Instead, we
learn generative models from a large image database. The main challenge is to
cope with the high variance in human pose, shape and appearance. For this
reason, pure image-based approaches have not been considered so far. We show
that this challenge can be overcome by splitting the generation process into two
parts. First, we learn to generate a semantic segmentation of the body and
clothing. Second, we learn a conditional model on the resulting segments that
creates realistic images. The full model is differentiable and can be
conditioned on pose, shape, or color. The results are samples of people in
different clothing items and styles. The proposed model can generate entirely
new people with realistic clothing. In several experiments we present
encouraging results suggesting that an entirely data-driven approach to
people generation is possible.
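The two-stage sampling process can be sketched schematically as below, assuming PyTorch: stage one maps a latent code to a semantic segmentation, stage two translates the segments into an RGB image, and a soft segmentation keeps the composition differentiable end to end. Both networks are toy placeholders showing only the data flow, not the paper's actual architectures.

import torch
import torch.nn as nn

N_PARTS = 12   # assumed number of body/clothing segment classes
Z_DIM = 64     # assumed latent size

class SegmentGenerator(nn.Module):
    """Stage 1: latent code -> per-pixel segment logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z_DIM, 16 * 16 * N_PARTS)

    def forward(self, z):
        return self.net(z).view(-1, N_PARTS, 16, 16)

class ImageGenerator(nn.Module):
    """Stage 2: soft segmentation -> RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_PARTS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, seg):
        return self.net(seg)

stage1, stage2 = SegmentGenerator(), ImageGenerator()
z = torch.randn(1, Z_DIM)
seg = torch.softmax(stage1(z), dim=1)  # soft segments keep it differentiable
img = stage2(seg)                      # (1, 3, 16, 16) RGB sample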