VIGAN: Missing View Imputation with Generative Adversarial Networks
In an era when big data are becoming the norm, concern has shifted from the
quantity of data to its quality and completeness. In many
disciplines, data are collected from heterogeneous sources, resulting in
multi-view or multi-modal datasets. Missing data have been challenging to
address in multi-view analysis; in particular, when certain samples lack an
entire view, the result is the missing-view problem. Classic
multiple-imputation or matrix-completion methods are largely ineffective here,
because there is no information in the missing view on which to base the
imputation for such samples. The common workaround of removing samples with a
missing view can dramatically reduce sample size, diminishing the statistical
power of any subsequent analysis. In this paper, we propose a novel
approach for view imputation via generative adversarial networks (GANs), which
we name VIGAN. This approach first treats each view as a separate domain and
identifies domain-to-domain mappings via a GAN using randomly-sampled data from
each view, and then employs a multi-modal denoising autoencoder (DAE) to
reconstruct the missing view from the GAN outputs based on paired data across
the views. By then optimizing the GAN and the DAE jointly, our model
integrates knowledge of domain mappings and view correspondences to recover
the missing view effectively. Empirical results on benchmark datasets
validate the VIGAN approach by comparing against the state of the art. The
evaluation of VIGAN in a genetic study of substance use disorders further
demonstrates the effectiveness and usability of this approach in the life sciences.
Comment: 10 pages, 8 figures, conference
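The two-stage idea of the abstract can be sketched in miniature. The toy below replaces the adversarially trained domain-to-domain mapping with a plain least-squares mapping and omits the DAE refinement; all variable names, data shapes, and the noise model are illustrative assumptions, not the paper's setup. It only illustrates why learning a cross-view mapping from paired samples recovers a missing view far better than naive per-feature mean imputation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: view B is a noisy linear transform of view A.
n, d_a, d_b = 200, 8, 6
A = rng.normal(size=(n, d_a))
W_true = rng.normal(size=(d_a, d_b))
B = A @ W_true + 0.05 * rng.normal(size=(n, d_b))

# Paired training samples; the remaining samples are "missing view B".
n_pair = 150
A_tr, B_tr = A[:n_pair], B[:n_pair]
A_miss, B_hidden = A[n_pair:], B[n_pair:]

# Stand-in for stage 1: learn a cross-view mapping by least squares
# (VIGAN instead learns this mapping adversarially with a GAN).
W_hat, *_ = np.linalg.lstsq(A_tr, B_tr, rcond=None)
B_imputed = A_miss @ W_hat

# Baseline: impute the per-feature mean of the observed view-B samples.
B_mean = np.tile(B_tr.mean(axis=0), (len(A_miss), 1))

err_map = np.mean((B_imputed - B_hidden) ** 2)
err_mean = np.mean((B_mean - B_hidden) ** 2)
print(f"mapping MSE {err_map:.4f} vs mean-imputation MSE {err_mean:.4f}")
```

The gap between the two errors is the point: mean imputation ignores the observed view entirely, while a learned cross-view mapping exploits it.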
Multi-modal Image Processing based on Coupled Dictionary Learning
In real-world scenarios, many data processing problems often involve
heterogeneous images associated with different imaging modalities. Since these
multimodal images originate from the same phenomenon, it is realistic to assume
that they share common attributes or characteristics. In this paper, we propose
a multi-modal image processing framework based on coupled dictionary learning
to capture similarities and disparities between different image modalities. In
particular, our framework captures structural similarities across image
modalities, such as edges, corners, and other elementary primitives, in a
learned sparse transform domain rather than the original pixel domain; these
shared structures can then be exploited to improve image processing tasks such
as denoising, inpainting, and super-resolution. Practical experiments demonstrate
that incorporating multimodal information using our framework brings notable
benefits.
Comment: SPAWC 2018, 19th IEEE International Workshop on Signal Processing Advances in Wireless Communication
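The shared-structure idea can be sketched with a minimal coupled sparse model: both modalities are synthesized from one sparse code, so recovering that code from one modality lets us reconstruct the other. The dictionaries, sizes, sparsity level, and the use of ISTA with a least-squares refit are illustrative assumptions; this is not the paper's learning algorithm, only the coupled-code principle it builds on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Coupled model: both modalities share one sparse code z.
#   x = D1 @ z  (modality 1, e.g. a visible-light patch)
#   y = D2 @ z  (modality 2, e.g. an infrared patch)
d1, d2, k = 20, 16, 40  # signal dims, number of dictionary atoms
D1 = rng.normal(size=(d1, k)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.normal(size=(d2, k)); D2 /= np.linalg.norm(D2, axis=0)

# Ground-truth sparse code with 3 active atoms, bounded away from zero.
z = np.zeros(k)
support = rng.choice(k, size=3, replace=False)
z[support] = rng.uniform(1.0, 2.0, size=3) * rng.choice([-1, 1], size=3)
x, y_true = D1 @ z, D2 @ z

def ista(D, x, lam=0.05, steps=500):
    """Iterative soft-thresholding for the lasso: min 0.5||Dz-x||^2 + lam||z||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z_hat = np.zeros(D.shape[1])
    for _ in range(steps):
        z_hat = z_hat - D.T @ (D @ z_hat - x) / L
        z_hat = np.sign(z_hat) * np.maximum(np.abs(z_hat) - lam / L, 0)
    return z_hat

# Estimate the shared code from modality 1 alone, then debias by a
# least-squares refit on the detected support.
z_hat = ista(D1, x)
S = np.abs(z_hat) > 0.1
z_deb = np.zeros(k)
z_deb[S] = np.linalg.lstsq(D1[:, S], x, rcond=None)[0]

# Cross-modal reconstruction: reuse the recovered code in dictionary D2.
y_hat = D2 @ z_deb
rel_err = np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true)
print(f"relative error of cross-modal reconstruction: {rel_err:.2e}")
```

Because the code is shared, a good estimate from one modality transfers directly to the other; coupled dictionary learning extends this by learning the pair of dictionaries from data rather than fixing them.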