Improved Auto-Encoding using Deterministic Projected Belief Networks
In this paper, we exploit the unique properties of a deterministic projected
belief network (D-PBN) to take full advantage of trainable compound activation
functions (TCAs). A D-PBN is a type of auto-encoder that operates by "backing
up" through a feed-forward neural network. TCAs are activation functions with
complex monotonic-increasing shapes that change the distribution of the data so
that the linear transformation that follows is more effective. Because a D-PBN
operates by "backing up", the TCAs are inverted in the reconstruction process,
restoring the original distribution of the data, thus taking advantage of a
given TCA in both analysis and reconstruction. In this paper, we show that a
D-PBN auto-encoder with TCAs can significantly outperform standard
auto-encoders, including variational auto-encoders.
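For intuition, here is a minimal sketch (not the authors' code; the mixture
form, parameter values, and names are hypothetical) of the two properties the
abstract relies on: a compound activation built as a strictly increasing sum
of shifted sigmoids, which is therefore invertible, and a numerical inverse
of the kind needed when "backing up" through the network to reconstruct.

import numpy as np

class TCA:
    """Toy trainable compound activation: a strictly increasing sum of
    shifted sigmoids plus a small linear term, hence invertible."""
    def __init__(self, n_components=3, seed=0):
        rng = np.random.default_rng(seed)
        self.a = np.abs(rng.normal(1.0, 0.1, n_components))  # gains > 0
        self.w = np.abs(rng.normal(1.0, 0.1, n_components))  # slopes > 0
        self.b = rng.normal(0.0, 1.0, n_components)          # offsets
        self.c = 0.05  # linear term keeps the function strictly increasing

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        s = 1.0 / (1.0 + np.exp(-(self.w * x[..., None] + self.b)))
        return self.c * x + (self.a * s).sum(axis=-1)

    def inverse(self, y, lo=-50.0, hi=50.0, iters=60):
        # Monotonicity allows inversion by bisection; this is the step that
        # restores the original data distribution during reconstruction.
        y = np.asarray(y, dtype=float)
        lo = np.full_like(y, lo)
        hi = np.full_like(y, hi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            go_up = self.forward(mid) < y
            lo = np.where(go_up, mid, lo)
            hi = np.where(go_up, hi, mid)
        return 0.5 * (lo + hi)

tca = TCA()
x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(tca.inverse(tca.forward(x)), x)  # round trip recovers x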
EEF: Exponentially Embedded Families with Class-Specific Features for Classification
In this letter, we present a novel classification method based on
exponentially embedded families (EEF), in which the probability density
function (PDF) on the raw data is estimated from the PDF on features. With
this PDF construction, we show that class-specific features can be used in
the proposed classification method, instead of a common feature subset for
all classes as in conventional approaches. We apply the proposed EEF
classifier to text categorization as a case study and derive an optimal
Bayesian classification
rule with class-specific feature selection based on the Information Gain (IG)
score. The promising performance on real-life data sets demonstrates the
effectiveness of the proposed approach and indicates its wide potential
applications.
Comment: 9 pages, 3 figures, to be published in IEEE Signal Processing Letters.
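To make the role of class-specific features concrete, here is a minimal
sketch (an illustration under stated assumptions, not the paper's
implementation): each class c has its own feature map T_c, the raw-data PDF
for each class is built from its feature PDF through a common reference
hypothesis H0, and the shared raw-data term cancels in the decision rule,
leaving a per-class feature likelihood ratio. The feature maps and Gaussian
parameters below are hypothetical; in practice the feature PDFs are estimated
from training data.

import numpy as np

def gauss_logpdf(z, mu, sigma):
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - (z - mu)**2 / (2.0 * sigma**2)

# Class-specific scalar feature maps (hypothetical choices).
feature_maps = {
    0: lambda x: x.mean(),        # class 0 scored on the sample mean
    1: lambda x: (x**2).mean(),   # class 1 scored on the sample energy
}

# Feature PDF parameters under each class and under the common reference H0
# (fixed here for illustration; normally estimated from data).
params = {
    0: dict(mu_c=1.0, s_c=0.3, mu_0=0.0, s_0=0.3),
    1: dict(mu_c=4.0, s_c=1.0, mu_0=1.0, s_0=0.5),
}

def classify(x):
    scores = {}
    for c, T in feature_maps.items():
        z = T(x)
        p = params[c]
        # log p(z_c | c) - log p(z_c | H0); the common log p(x | H0) term
        # is identical for every class and drops out of the argmax.
        scores[c] = gauss_logpdf(z, p["mu_c"], p["s_c"]) \
                  - gauss_logpdf(z, p["mu_0"], p["s_0"])
    return max(scores, key=scores.get)

x = np.random.default_rng(1).normal(1.0, 1.0, size=100)
print(classify(x))  # a sample with mean near 1 scores highest for class 0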
A Comparison of PDF Projection with Normalizing Flows and SurVAE
Normalizing flows (NFs) have recently gained attention as a way to construct
generative networks with exact likelihood calculation out of composable
layers. However, NFs are restricted to dimension-preserving transformations.
Surjection VAE (SurVAE) has been proposed to extend NFs to dimension-altering
transformations. Such networks are desirable because they are expressive and
can be precisely trained. We show that both approaches are a re-invention of
PDF projection, which appeared over twenty years earlier and is much further
developed.
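For reference, the construction being compared is easy to state. PDF
projection builds a PDF on the raw data x from a PDF on a feature z = T(x) by
means of a reference hypothesis H0; the following is a paraphrase of the
published construction, with g the PDF of x under H0 and g_z the induced PDF
of z under H0:

p_p(x) \;=\; g(x)\,\frac{p_z\!\left(T(x)\right)}{g_z\!\left(T(x)\right)}, \qquad z = T(x).

Since \int p_p(x)\,dx = \mathbb{E}_{g}\!\left[p_z(z)/g_z(z)\right] = \int p_z(z)\,dz = 1,
the projected PDF is properly normalized even when T reduces dimension, which
is exactly the regime SurVAE targets.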
Beyond Moments: Extending the Maximum Entropy Principle to Feature Distribution Constraints
The maximum entropy principle introduced by Jaynes proposes that a data distribution should maximize the entropy subject to constraints imposed by the available knowledge. Jaynes provided a solution for the case in which constraints are imposed on the expected values of a set of scalar functions of the data; these expected values are typically moments of the distribution. This paper describes how the method of maximum entropy PDF projection can be used to generalize the maximum entropy principle to constraints on the joint distribution of this set of functions.
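For orientation, Jaynes' classical result and the generalized constraint can
be written side by side; the closing sentence paraphrases the paper's
construction rather than quoting it. Under moment constraints, the entropy
maximizer is the exponential family:

\max_{p}\; -\!\int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \mathbb{E}_p\!\left[f_i(x)\right] = c_i,\; i = 1,\dots,K
\qquad\Longrightarrow\qquad
p^{*}(x) \;=\; \frac{1}{Z(\boldsymbol{\lambda})}\,\exp\!\Big(\sum_{i=1}^{K}\lambda_i f_i(x)\Big).

The generalization replaces the K scalar moment constraints with the
requirement that z = (f_1(x), \dots, f_K(x)) have a prescribed joint PDF
p_z(z); fixing only \mathbb{E}[z] is recovered as a special case. The
entropy-maximizing distribution consistent with such a feature-distribution
constraint then takes the PDF-projection form p(x) = g(x)\,p_z(z)/g_z(z)
given above, with the reference PDF g chosen to have maximum entropy.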