Equivariance In Higher Geometry
We study (pre-)sheaves in bicategories on geometric categories: smooth
manifolds, manifolds with a Lie group action and Lie groupoids. We present
three main results: we describe equivariant descent, we generalize the plus
construction to our setting and show that the plus construction yields a
2-stackification for 2-prestacks. Finally, we show that, for a 2-stack, the
pullback functor along a Morita-equivalence of Lie groupoids is an equivalence
of bicategories. Our results have direct applications to gerbes and 2-vector
bundles. For instance, they allow one to construct equivariant gerbes from local
data and can be used to simplify the description of the local data. We
illustrate the usefulness of our results in a systematic discussion of
holonomies for unoriented surfaces.
Comment: 42 pages, minor correction
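In display form, the third result reads as follows (the symbols below are notation chosen here for illustration; the abstract itself fixes only the statement):

% Notation chosen for this illustration: X a 2-stack on Lie groupoids,
% F : Gamma -> Lambda a Morita equivalence of Lie groupoids.
\[
  F^{*} \colon \mathcal{X}(\Lambda) \longrightarrow \mathcal{X}(\Gamma)
  \quad \text{is an equivalence of bicategories.}
\]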
Equivariance, BRST and Superspace
The structure of equivariant cohomology in non-abelian localization formulas
and topological field theories is discussed. Equivariance is formulated in
terms of a nilpotent BRST symmetry together with another nilpotent operator
that restricts the BRST cohomology to the equivariant, or basic, sector. A
superfield formulation is presented and connections to reducible (BFV)
quantization of topological Yang-Mills theory are discussed.
Comment: 24 pages, reports UU-ITP and HU-TFT-93-65
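For orientation, the structure the abstract alludes to is standard in the Cartan model of equivariant cohomology (stated here as background; the paper's BRST and superspace conventions may differ):

% Standard Cartan-model facts, for background only.
% The equivariant differential squares to a Lie derivative:
\[
  D = d + \iota_V, \qquad D^2 = d\,\iota_V + \iota_V\, d = \mathcal{L}_V ,
\]
% so D is nilpotent precisely on the basic (invariant, horizontal) sector:
\[
  \Omega_{\mathrm{basic}} = \{\, \omega : \iota_V \omega = 0,\ \mathcal{L}_V \omega = 0 \,\},
\]
on which D restricts to an honest nilpotent differential of BRST type.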
What Affects Learned Equivariance in Deep Image Recognition Models?
Equivariance w.r.t. geometric transformations in neural networks improves
data efficiency, parameter efficiency and robustness to out-of-domain
perspective shifts. When equivariance is not designed into a neural network,
the network can still learn equivariant functions from the data. We quantify
this learned equivariance by proposing an improved measure of equivariance.
We find evidence for a correlation between learned translation equivariance and
validation accuracy on ImageNet. We therefore investigate what can increase the
learned equivariance in neural networks, and find that data augmentation,
reduced model capacity and inductive bias in the form of convolutions induce
higher learned equivariance in neural networks.
Comment: Accepted at CVPR workshop L3D-IVU 2023
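As a concrete illustration of what measuring learned translation equivariance can look like (a minimal sketch; the paper's improved measure is more refined, and the function names here are chosen for the example, not taken from the paper):

import torch

def translation_equivariance_error(conv_net, x, shift=4):
    """Compare features of a shifted input against shifted features of
    the original input. A perfectly translation-equivariant network
    gives zero error. `conv_net` maps (N,C,H,W) -> (N,C',H,W)."""
    # Features of the translated image: f(T(x))
    feats_of_shifted = conv_net(torch.roll(x, shifts=(shift, shift), dims=(2, 3)))
    # Translated features of the original image: T(f(x))
    shifted_feats = torch.roll(conv_net(x), shifts=(shift, shift), dims=(2, 3))
    # Normalized L2 discrepancy between the two
    return (feats_of_shifted - shifted_feats).norm() / shifted_feats.norm()

# Usage: a plain convolution with circular padding is exactly
# equivariant to circular shifts, so the error is ~0 up to float noise.
net = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='circular')
x = torch.randn(1, 3, 32, 32)
print(translation_equivariance_error(net, x).item())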
Breakdown and Groups II
The notion of breakdown point was introduced by Hampel (1968, 1971) and has since played an important role in the theory and practice of robust statistics. In Davies and Gather (2004) it was argued that the success of the concept is connected to the existence of a group of transformations on the sample space and the linking of breakdown and equivariance. For example, the highest breakdown point of any translation equivariant functional on the real line is 1/2, whereas without equivariance considerations the highest breakdown point is the trivial upper bound of 1.
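To make the 1/2 bound tangible (an illustrative sketch, not taken from the paper): the sample median is translation equivariant and attains the bound, while the sample mean breaks down under a single contaminated observation:

import numpy as np

def corrupt(x, k, value=1e12):
    """Replace k observations of x with an arbitrarily large value."""
    y = x.copy()
    y[:k] = value
    return y

x = np.random.default_rng(0).normal(size=101)

# The mean is ruined by one outlier: finite-sample breakdown ~ 1/n.
print(np.mean(corrupt(x, 1)))     # explodes to ~1e10

# The median withstands just under half the sample being corrupted
# (breakdown point 1/2), but fails once a majority is replaced.
print(np.median(corrupt(x, 50)))  # still a sane value
print(np.median(corrupt(x, 51)))  # breaks down: ~1e12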