How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because
collecting eye-movement data is very time-consuming and expensive. Most current
studies on human attention and saliency modeling have used high-quality,
stereotyped stimuli. In the real world, however, captured images undergo various
types of transformations. Can we use these transformations to augment existing
saliency datasets? Here, we first create a novel saliency dataset including
fixations of 10 observers over 1900 images degraded by 19 types of
transformations. Second, by analyzing eye movements, we find that observers
look at different locations over transformed versus original images. Third, we
use the new data over transformed images, treating each transformation as a data
augmentation transformation (DAT), to train deep saliency models. We find that
label-preserving DATs with negligible impact on human gaze boost saliency
prediction, whereas DATs that severely alter human gaze degrade performance.
These valid, label-preserving augmentation transformations thus offer a way to
enlarge existing saliency datasets. Finally, we introduce a novel
saliency model based on a generative adversarial network, dubbed GazeGAN. A
modified U-Net serves as the generator of GazeGAN; it combines classic skip
connections with a novel center-surround connection (CSC) in order to leverage
multi-level features. We also propose a histogram loss based on the Alternative
Chi-Square Distance (ACS HistLoss) to refine the saliency map in
terms of luminance distribution. Extensive experiments and comparisons over 3
datasets indicate that GazeGAN achieves the best performance in terms of
popular saliency evaluation metrics, and is more robust to various
perturbations. Our code and data are available at:
https://github.com/CZHQuality/Sal-CFS-GAN
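
The abstract does not spell out the ACS HistLoss, but the underlying idea, comparing the luminance histogram of the predicted saliency map with that of the ground truth under an alternative chi-square distance, can be sketched in a few lines. The PyTorch sketch below is a minimal illustration under two assumptions: soft Gaussian binning to keep the histogram differentiable, and the common alternative chi-square form 2 * sum_i (h1_i - h2_i)^2 / (h1_i + h2_i); the function names, bin count, and kernel width are illustrative rather than taken from the released code.

import torch

def soft_histogram(x, bins=256, sigma=0.01):
    # Differentiable histogram of values in [0, 1], built with narrow Gaussian
    # kernels so gradients can flow back into the predicted saliency map.
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    weights = torch.exp(-0.5 * ((x.reshape(-1, 1) - centers) / sigma) ** 2)  # (N, bins)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)  # normalize to a probability distribution

def acs_hist_loss(pred, target, bins=256, eps=1e-8):
    # Alternative chi-square distance between luminance histograms:
    #   D(h1, h2) = 2 * sum_i (h1_i - h2_i)^2 / (h1_i + h2_i)
    h_pred = soft_histogram(pred, bins)
    h_true = soft_histogram(target, bins)
    return 2.0 * torch.sum((h_pred - h_true) ** 2 / (h_pred + h_true + eps))

# Usage: penalize mismatch between predicted and ground-truth luminance histograms.
pred = torch.rand(1, 1, 224, 224, requires_grad=True)
target = torch.rand(1, 1, 224, 224)
loss = acs_hist_loss(pred, target)
loss.backward()

In practice such a term would typically be added, with a small weight, to the adversarial and pixel-wise losses that train the generator.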
Reducing model bias in a deep learning classifier using domain adversarial neural networks in the MINERvA experiment
We present a simulation-based study using deep convolutional neural networks
(DCNNs) to identify neutrino interaction vertices in the MINERvA passive
targets region, and illustrate the application of domain adversarial neural
networks (DANNs) in this context. DANNs are designed to be trained in one
domain (simulated data) but tested in a second domain (physics data) and
utilize unlabeled data from the second domain so that during training only
features which are unable to discriminate between the domains are promoted.
MINERvA is a neutrino-nucleus scattering experiment using the NuMI beamline at
Fermilab. A-dependent cross sections are an important part of the physics
program, and these measurements require vertex finding in complicated events.
To illustrate the impact of the DANN, we used a modified simulation set in
place of physics data during training and then used the labels of the modified
simulation during evaluation. We find that deep-learning-based methods offer
significant advantages over our prior track-based
reconstruction for the task of vertex finding, and that DANNs are able to
improve the performance of deep networks by leveraging available unlabeled data
and by mitigating network performance degradation rooted in biases in the
physics models used for training.

Comment: 41 pages
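
As a concrete illustration of the DANN mechanism described above, the sketch below shows the standard gradient-reversal construction of Ganin and Lempitsky in PyTorch: a shared feature extractor feeds a task head trained only on labeled simulation and a domain head trained on both domains through a layer that flips gradients, so that features which discriminate between the domains are suppressed. This is a minimal sketch, not the MINERvA implementation; the layer sizes, class count, and the fully connected feature extractor (standing in for the paper's DCNN) are assumptions.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; reverses (and scales) the gradient on the way back.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    # Shared features feed a task head (trained on labeled simulation) and a
    # domain head (simulation vs. data) reached through the gradient reversal.
    def __init__(self, in_dim=64, n_classes=10, lam=1.0):  # illustrative sizes
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.task_head = nn.Linear(64, n_classes)
        self.domain_head = nn.Linear(64, 2)

    def forward(self, x):
        f = self.features(x)
        return self.task_head(f), self.domain_head(GradReverse.apply(f, self.lam))

# Schematic training step: task loss on labeled simulation only, domain loss on
# simulation and unlabeled "data" together.
model = DANN()
ce = nn.CrossEntropyLoss()
sim_x, sim_y = torch.randn(32, 64), torch.randint(0, 10, (32,))
data_x = torch.randn(32, 64)  # unlabeled second domain
task_logits, dom_sim = model(sim_x)
_, dom_data = model(data_x)
dom_logits = torch.cat([dom_sim, dom_data])
dom_labels = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
loss = ce(task_logits, sim_y) + ce(dom_logits, dom_labels)
loss.backward()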