Revisiting Complex Moments For 2D Shape Representation and Image Normalization
When comparing 2D shapes, a key issue is their normalization. Translation and
scale are easily taken care of by removing the mean and normalizing the energy.
However, defining and computing the orientation of a 2D shape is not so simple.
In fact, although for elongated shapes the principal axis can be used to define
one of two possible orientations, there is no such tool for general shapes. As
we show in the paper, previous approaches fail to compute the orientation of
even noiseless observations of simple shapes. We address this problem. In the
paper, we show how to uniquely define the orientation of an arbitrary 2D shape,
in terms of what we call its Principal Moments. We show that a small subset of
these moments suffices to represent the underlying 2D shape and propose a new
method to efficiently compute the shape orientation: Principal Moment Analysis.
Finally, we discuss how this method can further be applied to normalize
grey-level images. Besides the theoretical proof of correctness, we describe
experiments demonstrating robustness to noise and illustrating the method with
real images.
Comment: 69 pages, 20 figures
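As an illustration of the normalization step described above, a minimal NumPy sketch follows. It implements the generic recipe from the abstract (subtract the mean for translation, divide by the energy for scale) together with the classical second-moment principal-axis orientation that the abstract says breaks down for non-elongated shapes; it is not the paper's Principal Moment Analysis, whose construction is given in the paper itself.

    import numpy as np

    def normalize_shape(points):
        # Translation: subtract the centroid; scale: divide by the energy
        # (root-mean-square radius) so the normalized shape has unit energy.
        centered = points - points.mean(axis=0)
        energy = np.sqrt((centered ** 2).sum(axis=1).mean())
        return centered / energy

    def principal_axis_angle(points):
        # Classical orientation from second-order central moments; defined
        # only up to a 180-degree ambiguity, and undefined when the
        # second-moment ellipse degenerates to a circle (mu20 == mu02 and
        # mu11 == 0), which is exactly the failure mode discussed above.
        c = points - points.mean(axis=0)
        mu20 = (c[:, 0] ** 2).mean()
        mu02 = (c[:, 1] ** 2).mean()
        mu11 = (c[:, 0] * c[:, 1]).mean()
        return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

For an elongated point cloud, rotating the normalized points by -principal_axis_angle(points) aligns the principal axis with the x-axis; for a square or a circle the returned angle is arbitrary, which is what motivates the paper's moment-based definition of orientation.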
Symmetry Regularization
The properties of a representation, such as smoothness, adaptability, generality, and equivariance/invariance, depend on restrictions imposed during learning. In this paper, we propose using data symmetries, in the sense of equivalences under transformations, as a means for learning symmetry-adapted representations, i.e., representations that are equivariant to transformations in the original space. We provide a sufficient condition for enforcing that the representation, for example the weights of a neural network layer or the atoms of a dictionary, has a group structure, specifically the group structure present in an unlabeled training set. By reducing the analysis of generic group symmetries to permutation symmetries, we devise an analytic expression for a regularization scheme and a permutation-invariant metric on the representation space. Our work provides a proof of concept of why and how to learn equivariant representations without explicit knowledge of the underlying symmetries in the data.
This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
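The regularization idea can be made concrete with a hedged sketch: an equivariance penalty for an assumed, explicitly known group (cyclic shifts), plus a simple permutation-invariant comparison of weight atoms via power sums. Both are illustrative stand-ins rather than the paper's method; the paper derives an analytic regularizer and metric from unlabeled data without assuming the group is known, and the names below (equivariance_penalty, atom_distance) are hypothetical.

    import numpy as np

    def equivariance_penalty(f, x, n_shifts):
        # Penalize deviation from equivariance to cyclic shifts:
        # sum_k || f(shift_k(x)) - shift_k(f(x)) ||^2.
        # Zero exactly when f commutes with every shift in the assumed group.
        return sum(
            np.sum((f(np.roll(x, k)) - np.roll(f(x), k)) ** 2)
            for k in range(1, n_shifts)
        )

    def atom_distance(W1, W2, max_power=3):
        # Permutation-invariant (pseudo-)distance between two sets of atoms,
        # stored as rows of W1 and W2: per-coordinate power sums are
        # unchanged by any reordering of the rows.
        def power_sums(W):
            return np.stack([(W ** p).sum(axis=0)
                             for p in range(1, max_power + 1)])
        return np.linalg.norm(power_sums(W1) - power_sums(W2))

A linear map f(x) = W @ x with circulant W (i.e., a convolution) makes the shift penalty vanish exactly, reflecting the familiar fact that convolutions are the shift-equivariant linear maps.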