Statistical mechanics of lossy compression using multilayer perceptrons
Statistical mechanics is applied to lossy compression of unbiased Boolean messages using multilayer perceptrons. We utilize a tree-like committee machine (committee tree) and a tree-like parity machine (parity tree) whose transfer functions are monotonic. For compression using a committee tree, the lower bound of the achievable distortion decreases as the number of hidden units K increases; however, it cannot reach the Shannon bound even as K -> infinity. For compression using a parity tree with K >= 2 hidden units, the rate-distortion function, known as the theoretical limit for compression, is achieved in the limit of infinite code length.
Comment: 12 pages, 5 figures
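The Shannon bound referenced in this abstract is the rate-distortion function of an unbiased Boolean source, R(D) = 1 - H2(D), where H2 is the binary entropy. A minimal sketch (not from the paper; the function names are illustrative) that evaluates this limit:

```python
import math

def h2(p):
    """Binary entropy in bits, with the convention H2(0) = H2(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def rate_distortion_unbiased(d):
    """R(D) = 1 - H2(D) for an unbiased Boolean source, valid for 0 <= D <= 1/2."""
    assert 0.0 <= d <= 0.5
    return 1.0 - h2(d)
```

At D = 0 the full rate R = 1 is required (lossless, one bit per symbol); at D = 1/2 no bits are needed, since blind guessing already attains that distortion. No compression scheme with rate R can achieve distortion below the D solving R = 1 - H2(D), which is the bound the committee tree fails to reach and the parity tree attains.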
Statistical mechanics of lossy compression for non-monotonic multilayer perceptrons
A lossy data compression scheme for uniformly biased Boolean messages is
investigated via statistical-mechanics techniques. We utilize a tree-like committee machine (committee tree) and a tree-like parity machine (parity tree) whose transfer functions are non-monotonic. The performance of the scheme in the infinite-code-length limit is analyzed using the replica method. Both the committee tree and the parity tree are shown to saturate the Shannon bound. The AT stability of the replica-symmetric solution is analyzed, and the tuning of the non-monotonic transfer function is also discussed.
Comment: 29 pages, 7 figures
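The abstract does not specify the non-monotonic transfer function; in this literature it is often of the reversed-wedge type, f_k(x) = sign(x(x - k)(x + k)), with k the tuning parameter. A sketch under that assumption:

```python
def reversed_wedge(x, k):
    """Assumed reversed-wedge transfer function: sign(x (x - k)(x + k)).

    Non-monotonic in the local field x: the output flips sign at -k, 0, and k,
    so increasing x does not monotonically increase the output. The wedge
    width k is the tunable parameter discussed in the abstract.
    """
    v = x * (x - k) * (x + k)
    return 1 if v > 0 else -1
```

For k = 0 this degenerates to the ordinary monotonic sign function, which connects this paper to the monotonic case above; the papers' point is that a suitably tuned k > 0 lets the network saturate the Shannon bound.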
Bounding Embeddings of VC Classes into Maximum Classes
One of the earliest conjectures in computational learning theory, the Sample Compression conjecture, asserts that concept classes (equivalently, set systems) admit compression schemes of size linear in their VC dimension. To date this statement is known to be true for maximum classes, those that possess maximum cardinality for their VC dimension. The most promising approach to positively resolving the conjecture is to embed general VC classes into maximum classes without a super-linear increase in their VC dimensions, as such embeddings would extend the known compression schemes to all VC classes. We show that maximum classes can be characterised by a local-connectivity property of the graph obtained by viewing the class as a cubical complex. This geometric characterisation of maximum VC classes is applied to prove a negative embedding result: there are VC-d classes that cannot be embedded in any maximum class of VC dimension lower than 2d. On the other hand, we show that every VC-d class C embeds in a VC-(d+D) maximum class, where D is the deficiency of C, i.e., the difference between the cardinalities of a maximum VC-d class and of C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best-possible results on embedding into maximum classes. For some special classes of Boolean functions, relationships with maximum classes are investigated. Finally, we give a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum classes for the smallest k.
Comment: 22 pages, 2 figures
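For intuition on the central quantity in this abstract, the VC dimension of a small finite class can be checked by brute force: a set of points is shattered when the class realises all 2^d labelings of it. A toy sketch (not from the paper; names are illustrative):

```python
from itertools import combinations

def vc_dimension(concepts, points):
    """Largest d such that some d-subset of `points` is shattered.

    Each concept is the set of points it labels positive. Relies on the
    fact that shattering is downward-hereditary: if no d-subset is
    shattered, no larger subset can be either.
    """
    best = 0
    for d in range(1, len(points) + 1):
        for subset in combinations(points, d):
            patterns = {tuple(p in c for p in subset) for c in concepts}
            if len(patterns) == 2 ** d:
                best = d
                break  # some d-subset is shattered; try d + 1
        else:
            break  # nothing of size d is shattered, so stop
    return best
```

A maximum class in the abstract's sense is one whose cardinality meets the Sauer-Shelah bound exactly for its VC dimension; threshold classes on a line (below) have VC dimension 1, while the full power set of d points has VC dimension d.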
Statistical mechanics of lossy data compression using a non-monotonic perceptron
The performance of a lossy data compression scheme for uniformly biased
Boolean messages is investigated via methods of statistical mechanics. Inspired by a formal similarity to the storage-capacity problem in neural-network research, we utilize a perceptron whose transfer function is appropriately designed to compress and decode the messages. Employing the replica method, we analytically show that our scheme can achieve the optimal performance known in the framework of lossy compression in most cases as the code length tends to infinity. The validity of the obtained results is numerically confirmed.
Comment: 9 pages, 5 figures, Physical Review
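The scheme this abstract describes can be made concrete with a toy brute-force version: fix random input patterns, decode each message bit as a perceptron output, and pick the weight vector minimizing Hamming distortion. Everything below is an illustrative assumption (the paper's actual construction uses a tuned non-monotonic transfer function and a thermodynamic limit; here a plain sign transfer function and tiny sizes are used so exhaustive search is feasible):

```python
import itertools
import random

def compress(message, patterns):
    """Toy perceptron compression by exhaustive search.

    The codeword is the binary weight vector x minimizing the Hamming
    distortion between the message bits and the decodings sign(x . s_i),
    where the s_i are fixed random patterns. Returns (codeword, distortion).
    """
    n = len(patterns[0])
    best_x, best_dist = None, len(message) + 1
    for x in itertools.product((-1, 1), repeat=n):
        decoded = [1 if sum(xi * si for xi, si in zip(x, s)) >= 0 else -1
                   for s in patterns]
        dist = sum(m != d for m, d in zip(message, decoded))
        if dist < best_dist:
            best_x, best_dist = x, dist
    return best_x, best_dist / len(message)

random.seed(0)
M, N = 12, 6  # message length M, code length N, so the rate is N/M = 1/2
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]
message = [random.choice((-1, 1)) for _ in range(M)]
codeword, distortion = compress(message, patterns)
```

The decoder is public (the patterns and the transfer function), so only the N-bit codeword must be transmitted; the replica analysis in the paper asks how small the expected distortion can be at a fixed rate N/M as both lengths grow.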