Meta-simulation for the Automated Design of Synthetic Overhead Imagery
The use of synthetic (or simulated) data for training machine learning models
has grown rapidly in recent years. Synthetic data can often be generated much
faster and more cheaply than its real-world counterpart. One challenge of using
synthetic imagery, however, is scene design: e.g., the choice of content and its
features and spatial arrangement. To be effective, this design must not only be
realistic, but also appropriate for the target domain, which (by assumption) is
unlabeled. In this work, we propose an approach to automatically choose the
design of synthetic imagery based upon unlabeled real-world imagery. Our
approach, termed Neural-Adjoint Meta-Simulation (NAMS), builds upon recent
seminal meta-simulation approaches. In contrast to the current state-of-the-art
methods, our approach can be pre-trained once offline, and then provides fast
design inference for new target imagery. Using both synthetic and real-world
problems, we show that NAMS infers synthetic designs that match both the
in-domain and out-of-domain target imagery, and that training segmentation
models with NAMS-designed imagery yields superior results compared to naïve
randomized designs and state-of-the-art meta-simulation methods.
Meta-Learning for Color-to-Infrared Cross-Modal Style Transfer
Recent object detection models for infrared (IR) imagery are based upon deep
neural networks (DNNs) and require large amounts of labeled training imagery.
However, publicly-available datasets that can be used for such training are
limited in their size and diversity. To address this problem, we explore
cross-modal style transfer (CMST) to leverage large and diverse color imagery
datasets so that they can be used to train DNN-based object detectors for IR
imagery. We evaluate six contemporary stylization methods on four
publicly-available IR datasets - the first comparison of its kind - and find
that CMST is highly effective for DNN-based detectors. Surprisingly, we find
that existing data-driven methods are outperformed by a simple grayscale
stylization (an average of the color channels). Our analysis reveals that
existing data-driven methods are either too simplistic or introduce significant
artifacts into the imagery. To overcome these limitations, we propose
meta-learning style transfer (MLST), which learns a stylization by composing
and tuning well-behaved analytic functions. We find that MLST leads to more
complex stylizations without introducing significant image artifacts and
achieves the best overall detector performance on our benchmark datasets.
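The grayscale baseline described above is simple enough to sketch directly. The snippet below is a minimal illustration (function names are our own, not from the paper): pseudo-IR stylization as a per-pixel average of the color channels, with a helper to replicate the result back to three channels for detectors that expect RGB input.

```python
import numpy as np

def grayscale_stylize(rgb):
    """Approximate color-to-IR stylization by averaging the color channels.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    Returns a single-channel (H, W) pseudo-IR image.
    """
    return rgb.mean(axis=-1)

def to_three_channel(gray):
    """Replicate a (H, W) image to (H, W, 3) for RGB-input detectors."""
    return np.repeat(gray[..., None], 3, axis=-1)
```

In a training pipeline, this transform would be applied to each color image before it is fed to the IR detector being trained.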
Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling
We propose and show the efficacy of a new method to address generic inverse
problems. Inverse modeling is the task whereby one seeks to determine the
control parameters of a natural system that produce a given set of observed
measurements. Recent work has shown impressive results using deep learning, but
we note that there is a trade-off between model performance and computational
time. For some applications, the computational time at inference for the best
performing inverse modeling method may be overly prohibitive to its use. We
present a new method that leverages multiple manifolds as a mixture of backward
(i.e., inverse) models in a forward-backward model architecture. These backward
models all share a common forward model, and their training is facilitated by
generating training examples from the forward model. The proposed method thus
has two innovations: 1) the Mixture Manifold Network (MMN) architecture, and
2) the training procedure involving augmenting backward model
training data using the forward model. We demonstrate the advantages of our
method by comparing to several baselines on four benchmark inverse problems,
and we furthermore provide analysis to motivate its design.
Comment: This paper has been accepted to AAAI 2023; this is not the final
version.
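The data-augmentation idea in the abstract above, generating backward-model training pairs by running sampled parameters through the shared forward model, can be sketched on a toy problem. This is only an illustrative reduction under assumed names (a linear toy forward model, a single least-squares backward model standing in for one member of the mixture), not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    """Toy forward model: control parameters x (N, 2) -> observations y (N, 2)."""
    return np.stack([x[:, 0] + x[:, 1], x[:, 0] - x[:, 1]], axis=1)

# Augment backward-model training data by sampling parameters and
# pushing them through the shared forward model.
x_train = rng.uniform(-1.0, 1.0, size=(1000, 2))
y_train = forward(x_train)

# Fit one linear backward model y -> x by least squares; a full mixture
# would train several such models and select among their candidate
# solutions using the forward model's re-simulation error.
W, *_ = np.linalg.lstsq(y_train, x_train, rcond=None)

def backward(y):
    """Candidate inverse solution for observations y."""
    return y @ W

# Re-simulation check: a good candidate x should reproduce the observation.
y_query = forward(np.array([[0.25, -0.5]]))
x_hat = backward(y_query)
err = np.linalg.norm(forward(x_hat) - y_query)
```

The re-simulation error at the end is the same cheap selection signal a forward-backward architecture relies on: candidate inversions can be ranked without ever needing ground-truth parameters for the query.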