We study a model of unsupervised learning in which the real-valued data vectors
are isotropically distributed, except for a single symmetry-breaking binary
direction B ∈ {−1,+1}^N, onto which the projections have a Gaussian
distribution. We show that a candidate vector J undergoing Gibbs
learning in this discrete space approaches the perfect match J = B
exponentially fast. Besides the second-order ``retarded learning'' phase transition
for unbiased distributions, we show that first-order transitions can also
occur. Extending the known result that the center of mass of the Gibbs ensemble
has Bayes-optimal performance, we show that taking the sign of the components
of this vector leads to the vector with optimal performance in the binary
space. These upper bounds are shown not to be saturated by the technique of
transforming the components of a special continuous vector, except in
asymptotic limits and in a special linear case. Simulations are presented that
are in excellent agreement with the theoretical results.
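
As a concrete illustration of the setting (a minimal sketch, not code from the paper), the following Python snippet samples data whose projections onto a hidden binary direction B are Gaussian while the orthogonal part stays isotropic, and then applies the component-wise sign ("clipping") step to a continuous estimate. The values of N, P, bias, and sigma are illustrative assumptions, and the empirical mean is used only as a crude stand-in for the Gibbs-ensemble center of mass.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 2000                  # dimension and sample size (illustrative)
B = rng.choice([-1, 1], size=N)   # hidden symmetry-breaking binary direction
u = B / np.sqrt(N)                # unit vector along B

# Isotropic background, then force the projection onto u to be Gaussian
# with an assumed bias and width (a biased distribution in general).
bias, sigma = 1.0, 0.5
xi = rng.standard_normal((P, N))
proj = xi @ u
xi += np.outer(rng.normal(bias, sigma, size=P) - proj, u)

# Clipping: take the sign of the components of a continuous estimate.
# Here the empirical mean stands in for the Gibbs-ensemble center of mass.
J_cm = xi.mean(axis=0)
J_binary = np.sign(J_cm).astype(int)
print(f"overlap with B: {J_binary @ B / N:.3f}")  # close to 1 for these values
```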