Geometric robustness of deep networks: analysis and improvement
Deep convolutional neural networks have been shown to be vulnerable to
arbitrary geometric transformations. However, there is no systematic method to
measure the invariance properties of deep networks to such transformations. We
propose ManiFool, a simple yet scalable algorithm to measure the invariance
of deep networks. In particular, our algorithm measures the robustness of deep
networks to geometric transformations in a worst-case regime, since such
transformations can be problematic for sensitive applications. Our extensive
experimental results show that ManiFool can measure the invariance of fairly
complex networks on high-dimensional datasets, and that these measurements
help analyze the sources of a network's invariance or lack thereof.
Furthermore, we build on ManiFool to propose a new adversarial training
scheme, and we show its effectiveness in improving the invariance properties
of deep neural networks.
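The worst-case regime described above can be illustrated with a minimal sketch: search over a family of geometric transformations for the smallest one that flips a classifier's decision. This is only a grid-search stand-in, not the ManiFool algorithm itself (which optimizes over a transformation manifold); the toy classifier, the image, and the restriction to horizontal translations are all assumptions for illustration.

```python
import numpy as np

def classify(img):
    # Toy classifier: predicts 1 if the left half of the image is
    # brighter than the right half, else 0 (hypothetical stand-in
    # for a deep network).
    h = img.shape[1] // 2
    return int(img[:, :h].sum() > img[:, h:].sum())

def min_flipping_shift(img, max_shift):
    """Smallest horizontal translation (in pixels) that changes the
    classifier's label, or None if the label is invariant up to
    max_shift. Searching for the smallest label-changing transformation
    is the worst-case robustness measure sketched in the abstract."""
    base = classify(img)
    for s in range(1, max_shift + 1):
        for shift in (s, -s):  # try both directions, smallest magnitude first
            if classify(np.roll(img, shift, axis=1)) != base:
                return s
    return None

img = np.zeros((8, 8))
img[:, :3] = 1.0  # bright patch on the left, so the base label is 1
robustness = min_flipping_shift(img, max_shift=4)
```

A smaller returned shift means the network (here, the toy classifier) is less invariant; averaging this quantity over a dataset gives a scalar invariance score in the spirit of the paper.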
Investigating ultrafast quantum magnetism with machine learning
We investigate the efficiency of the recently proposed Restricted Boltzmann
Machine (RBM) representation of quantum many-body states to study both the
static properties and quantum spin dynamics in the two-dimensional Heisenberg
model on a square lattice. For static properties we find close agreement with
numerically exact Quantum Monte Carlo results in the thermodynamic limit. For
the dynamics of small systems, we find excellent agreement with exact
diagonalization, while for systems of up to N=256 spins, close consistency with
interacting spin-wave theory is obtained. In all cases the accuracy converges
fast with the number of network parameters, giving access to much bigger
systems than feasible before. This suggests great potential to investigate the
quantum many-body dynamics of large scale spin systems relevant for the
description of magnetic materials strongly out of equilibrium.

Comment: 18 pages, 5 figures, data up to N=256 spins added, minor change
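The RBM representation referred to above parameterizes a quantum state's amplitudes with a network of visible spins coupled to hidden units; tracing out the hidden units gives a closed-form wavefunction. A minimal sketch of evaluating such an amplitude follows, assuming real parameters and spins in {-1, +1} for simplicity (the parameter shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM wavefunction amplitude for a spin configuration
    s in {-1, +1}^N, with the hidden units summed out analytically:

        psi(s) = exp(a . s) * prod_j 2 * cosh(b_j + (W s)_j)

    a has shape (N,), b has shape (M,), W has shape (M, N), where M is
    the number of hidden units controlling the ansatz's expressiveness."""
    theta = b + W @ s          # effective field on each hidden unit
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# With all parameters zero, every configuration gets the same amplitude
# 2**M, i.e. the ansatz reduces to the uniform superposition.
N, M = 4, 2
s = np.ones(N)
amp = rbm_amplitude(s, np.zeros(N), np.zeros(M), np.zeros((M, N)))
```

The abstract's observation that accuracy "converges fast with the number of network parameters" corresponds here to increasing M (and hence the sizes of b and W) until physical observables stop changing.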