Visualizing probabilistic models: Intensive Principal Component Analysis
Unsupervised learning makes manifest the underlying structure of data without
curated training and specific problem definitions. However, the inference of
relationships between data points is frustrated by the "curse of
dimensionality" in high dimensions. Inspired by replica theory from statistical
mechanics, we consider replicas of the system to tune the dimensionality and
take the limit as the number of replicas goes to zero. The result is the
intensive embedding, which is not only isometric (preserving local distances)
but also allows global structure to be visualized more transparently. We develop the
Intensive Principal Component Analysis (InPCA) and demonstrate clear
improvements in visualizations of the Ising model of magnetic spins, a neural
network, and the dark energy cold dark matter (ΛCDM) model as applied
to the Cosmic Microwave Background.
Comment: 6 pages, 5 figures
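A minimal sketch of the procedure this abstract describes, assuming the intensive embedding reduces to double-centering a matrix of pairwise Bhattacharyya distances between the models' probability distributions (the function and variable names here are illustrative, not the authors' code):

```python
import numpy as np

def inpca(P):
    """Intensive PCA sketch.

    P: (n_models, n_outcomes) array; each row is a discrete
    probability distribution predicted by one model/parameter set.
    Returns coordinates of each model in the intensive embedding.
    """
    # Pairwise Bhattacharyya distances: D_ij = -log sum_x sqrt(p_i p_j).
    overlap = np.sqrt(P) @ np.sqrt(P).T          # Bhattacharyya coefficients
    D = -np.log(np.clip(overlap, 1e-300, None))  # squared intensive distances

    # Classical MDS-style double centering: W = -0.5 * J D J.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    W = -0.5 * J @ D @ J

    # Eigendecompose; keep components by |eigenvalue| since the
    # intensive embedding is non-Euclidean and eigenvalues can be negative.
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(-np.abs(vals))
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.abs(vals))          # embedding coordinates

# Toy example: distributions from a one-parameter coin-flip model.
p = np.linspace(0.05, 0.95, 20)
P = np.stack([p, 1 - p], axis=1)
coords = inpca(P)
print(coords[:, :2])  # first two InPCA components
```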
Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a new method for embedding graphs while preserving directed edge
information. Learning such continuous-space vector representations (or
embeddings) of nodes in a graph is an important first step for using network
information (from social networks, user-item graphs, knowledge bases, etc.) in
many machine learning tasks.
Unlike previous work, we (1) explicitly model an edge as a function of node
embeddings, and we (2) propose a novel objective, the "graph likelihood", which
contrasts information from sampled random walks with non-existent edges.
Individually, both of these contributions improve the learned representations,
especially when there are memory constraints on the total size of the
embeddings. When combined, our contributions enable us to significantly improve
the state-of-the-art by learning more concise representations that better
preserve the graph structure.
We evaluate our method on a variety of link-prediction tasks, including social
networks, collaboration networks, and protein interactions, showing that our
proposed method learns representations with error reductions of up to 76% and
55% on directed and undirected graphs, respectively. In addition, we show that
the representations learned by our method are quite space-efficient, producing
embeddings that have higher structure-preserving accuracy while being 10 times
smaller.
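A hedged PyTorch sketch of the edge model described above: each directed edge is scored through a low-rank asymmetric projection of the two node embeddings, and a contrastive, graph-likelihood-style loss pushes sampled random-walk pairs above sampled non-edges. All names and hyperparameters are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class AsymmetricEdgeModel(nn.Module):
    """Score a directed edge (u, v) as y_u^T L^T R y_v,
    where L and R are low-rank projection matrices."""

    def __init__(self, num_nodes, embed_dim=64, rank=16):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, embed_dim)
        self.left = nn.Linear(embed_dim, rank, bias=False)   # L
        self.right = nn.Linear(embed_dim, rank, bias=False)  # R

    def forward(self, src, dst):
        # Asymmetric: swapping src and dst generally changes the score.
        h_src = self.left(self.embed(src))
        h_dst = self.right(self.embed(dst))
        return (h_src * h_dst).sum(dim=-1)  # edge logits

def contrastive_loss(model, walk_src, walk_dst, neg_src, neg_dst):
    """Contrast random-walk co-occurrences against non-existent edges."""
    pos = model(walk_src, walk_dst)
    neg = model(neg_src, neg_dst)
    return -(torch.log(torch.sigmoid(pos)).mean()
             + torch.log(1 - torch.sigmoid(neg)).mean())

# Toy usage: 100 nodes, random positive walk pairs and random negatives.
model = AsymmetricEdgeModel(num_nodes=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
pos_s, pos_d = torch.randint(0, 100, (32,)), torch.randint(0, 100, (32,))
neg_s, neg_d = torch.randint(0, 100, (32,)), torch.randint(0, 100, (32,))
loss = contrastive_loss(model, pos_s, pos_d, neg_s, neg_d)
loss.backward(); opt.step()
print(float(loss))
```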
Bitwise Source Separation on Hashed Spectra: An Efficient Posterior Estimation Scheme Using Partial Rank Order Metrics
This paper proposes an efficient bitwise solution to the single-channel
source separation task. Most dictionary-based source separation algorithms rely
on iterative update rules at run time, which becomes computationally
costly, especially when we employ an overcomplete dictionary and sparse encoding,
which tend to give better separation results. To avoid this cost, we propose a
bitwise scheme on hashed spectra that leads to an efficient posterior
probability calculation. For each source, the algorithm uses a partial rank
order metric to extract robust features that form a binarized dictionary of
hashed spectra. Then, for a mixture spectrum, its hash code is compared with
each source's hashed dictionary in one pass. This simple voting-based
dictionary search allows fast, iteration-free estimation of the ratio mask
at each bin of a signal spectrogram. We verify that the proposed BitWise Source
Separation (BWSS) algorithm produces sensible source separation results for the
single-channel speech denoising task, with 6-8 dB mean SDR. To our knowledge,
this is the first dictionary-based algorithm for this task that is completely
iteration-free in both training and testing.
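A rough numpy sketch of the pipeline described above, assuming the partial rank order metric is a winner-take-all style hash (for each hash code, record which of a few randomly chosen frequency bins is largest); the voting step and per-frame mask estimate are simplified for illustration, and all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def wta_hash(spectra, perms):
    """Partial rank-order (winner-take-all) hash.

    spectra: (n_frames, n_bins) magnitude spectra.
    perms:   (n_codes, m) indices of the bins compared per hash code.
    Returns one integer code per frame and permutation.
    """
    # For each permutation, the code is the argmax over the m chosen bins.
    return np.stack([spectra[:, p].argmax(axis=1) for p in perms], axis=1)

# Build binarized dictionaries of hashed spectra for each source.
n_bins, n_codes, m = 257, 100, 4
perms = rng.integers(0, n_bins, size=(n_codes, m))

speech_dict = np.abs(rng.standard_normal((500, n_bins)))  # stand-in training spectra
noise_dict = np.abs(rng.standard_normal((500, n_bins)))
H_speech = wta_hash(speech_dict, perms)
H_noise = wta_hash(noise_dict, perms)

def separate(mix_spectra):
    """One-pass, iteration-free voting over hash-code matches."""
    H_mix = wta_hash(mix_spectra, perms)
    # Votes: per frame, count matching codes against each dictionary entry,
    # then take the best match as that source's similarity score.
    sim_s = (H_mix[:, None, :] == H_speech[None, :, :]).sum(-1).max(1)
    sim_n = (H_mix[:, None, :] == H_noise[None, :, :]).sum(-1).max(1)
    # Ratio-mask-like weight from the vote counts.
    return sim_s / (sim_s + sim_n + 1e-9)

mix = np.abs(rng.standard_normal((10, n_bins)))
print(separate(mix))  # speech weight per frame (toy data)
```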