Conditional network embeddings
Network Embeddings (NEs) map the nodes of a given network into d-dimensional Euclidean space R^d. Ideally, this mapping is such that 'similar' nodes are mapped onto nearby points, so that the NE can be used for purposes such as link prediction (if 'similar' means 'more likely to be connected') or classification (if 'similar' means 'more likely to have the same label'). In recent years various methods for NE have been introduced, all following a similar strategy: they define a notion of similarity between nodes (typically some distance measure within the network), a distance measure in the embedding space, and a loss function that penalizes large embedding distances for similar nodes and small embedding distances for dissimilar nodes.
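To make this generic recipe concrete, here is a minimal sketch in Python, assuming a sigmoid link probability on squared Euclidean distance; the function names and the sampling of non-edges are illustrative, not taken from any particular NE method.

```python
import numpy as np

def ne_loss(X, edges, non_edges):
    """Generic NE objective: penalize large embedding distances for
    'similar' (linked) pairs and small distances for 'dissimilar'
    (non-linked) pairs, via a sigmoid link probability."""
    def p_link(i, j):
        d2 = np.sum((X[i] - X[j]) ** 2)        # squared Euclidean distance
        return 1.0 / (1.0 + np.exp(d2 - 1.0))  # nearby points => p_link near 1

    loss = 0.0
    for i, j in edges:                          # similar pairs: want p_link high
        loss -= np.log(p_link(i, j) + 1e-12)
    for i, j in non_edges:                      # dissimilar pairs: want p_link low
        loss -= np.log(1.0 - p_link(i, j) + 1e-12)
    return loss
```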
A difficulty faced by existing methods is that certain networks are fundamentally hard to embed due to their structural properties: (approximate) multipartiteness, certain degree distributions, assortativity, etc. To overcome this, we introduce a conceptual innovation to the NE literature and propose to create \emph{Conditional Network Embeddings} (CNEs); embeddings that maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm for fitting it efficiently.
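Schematically, the Bayesian combination can be written as below; the notation is assumed here for illustration, and the paper's specific densities are not reproduced.

```latex
% The embedding X is scored by the posterior probability of the observed
% graph G given X: the prior P(G) encodes the known structural properties
% (degrees, block densities), and P(X | G) is a simple density over
% embeddings given the graph.
P(G \mid X) = \frac{P(X \mid G)\, P(G)}{P(X)}
```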
We demonstrate that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, without adding significant mathematical or computational complexity. Finally, we illustrate the potential of CNE for network visualization.
Maximum entropy models capture melodic styles
We introduce a Maximum Entropy model able to capture the statistics of
melodies in music. The model can be used to generate new melodies that emulate
the style of the musical corpus which was used to train it. Instead of using
the n-body interactions of (n-1)-order Markov models, traditionally used in
automatic music generation, we use a k-nearest-neighbour model with pairwise
interactions only. In that way, we keep the number of parameters low and avoid
over-fitting problems typical of Markov models. We show that long-range musical
phrases don't need to be explicitly enforced using high-order Markov
interactions, but can instead emerge from multiple, competing, pairwise
interactions. We validate our Maximum Entropy model by assessing how well the
generated sequences capture the style of the original corpus without
plagiarizing it. To this end we use a data-compression approach to discriminate
the levels of borrowing and innovation featured by the artificial sequences.
The results show that our modelling scheme outperforms both fixed-order and
variable-order Markov models. This shows that, despite being based only on
pairwise interactions, this Maximum Entropy scheme makes it possible to
generate musically sensible alterations of the original phrases, providing a
way to generate innovation.
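As a rough illustration of such a pairwise scheme, the sketch below defines a Potts-like energy with couplings up to k notes apart and samples melodies with Metropolis moves. The alphabet size, interaction range, and random parameters are placeholders; the paper's fitting procedure is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

A = 12          # pitch alphabet size (assumed, for illustration)
k = 4           # pairwise couplings up to k notes apart
L = 64          # length of the generated melody

# Placeholder parameters; in practice h and J would be fitted to a corpus
# (e.g. by maximum likelihood or pseudo-likelihood).
h = rng.normal(0, 0.1, size=A)             # local fields (note frequencies)
J = rng.normal(0, 0.1, size=(k, A, A))     # couplings at distances 1..k

def energy(s):
    """Potts-style energy with pairwise interactions only (no high-order terms)."""
    e = -h[s].sum()
    for r in range(1, k + 1):
        e -= J[r - 1, s[:-r], s[r:]].sum()
    return e

# Metropolis sampling: propose single-note changes, accept with Boltzmann rule.
s = rng.integers(0, A, size=L)
for step in range(20000):
    t, new = rng.integers(L), rng.integers(A)
    proposal = s.copy()
    proposal[t] = new
    if rng.random() < np.exp(energy(s) - energy(proposal)):
        s = proposal
print(s)  # a melody sampled from the pairwise maximum entropy distribution
```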
What Can We Learn Privately?
Learning problems form an important category of computational tasks that
generalizes many of the computations researchers apply to large real-life data
sets. We ask: what concept classes can be learned privately, namely, by an
algorithm whose output does not depend too heavily on any one input or specific
training example? More precisely, we investigate learning algorithms that
satisfy differential privacy, a notion that provides strong confidentiality
guarantees in contexts where aggregate information is released about a database
containing sensitive information about individuals. We demonstrate that,
ignoring computational constraints, it is possible to privately agnostically
learn any concept class using a sample size approximately logarithmic in the
cardinality of the concept class. Therefore, almost anything learnable is
learnable privately: specifically, if a concept class is learnable by a
(non-private) algorithm with polynomial sample complexity and output size, then
it can be learned privately using a polynomial number of samples. We also
present a computationally efficient private PAC learner for the class of parity
functions. Local (or randomized response) algorithms are a practical class of
private algorithms that have received extensive investigation. We provide a
precise characterization of local private learning algorithms. We show that a
concept class is learnable by a local algorithm if and only if it is learnable
in the statistical query (SQ) model. Finally, we present a separation between
the power of interactive and noninteractive local learning algorithms.
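Randomized response, the canonical local algorithm mentioned above, is easy to state; here is a minimal sketch for one-bit inputs (function and parameter names are ours). Each report is epsilon-differentially private on its own, and the aggregator debiases the noisy reports to recover the population mean.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (1 + e^eps),
    the flipped bit otherwise (local differential privacy)."""
    p_true = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy reports: E[report] = (2p - 1) * mu + (1 - p),
    so solve for mu, the mean of the true bits."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
true_bits = rng.integers(0, 2, size=100_000)             # sensitive inputs
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
print(np.mean(true_bits), estimate_mean(reports, 1.0))   # close for large n
```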
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent,
convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000,
and JPEG as measured by MS-SSIM. We introduce three improvements over previous
research that lead to this state-of-the-art result. First, we show that
training with a pixel-wise loss weighted by SSIM increases reconstruction
quality according to several metrics. Second, we modify the recurrent
architecture to improve spatial diffusion, which allows the network to more
effectively capture and propagate image information through the network's
hidden state. Finally, in addition to lossless entropy coding, we use a
spatially adaptive bit allocation algorithm to more efficiently use the limited
number of bits to encode visually complex image regions. We evaluate our method
on the Kodak and Tecnick image sets and compare against standard codecs as well
as recently published methods based on deep neural networks.
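A rough sketch of the first ingredient, a pixel-wise loss weighted by local structural similarity, is shown below. The window size and constants are the usual SSIM defaults; this is an illustration of the idea, not the paper's training code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, c1=0.01**2, c2=0.03**2, win=8):
    """Local SSIM map for two single-channel images with values in [0, 1]."""
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx        # local variances
    vy = uniform_filter(y * y, win) - my * my
    cov = uniform_filter(x * y, win) - mx * my       # local covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def weighted_pixel_loss(x, y):
    """Pixel-wise L1 loss weighted by local structural dissimilarity:
    regions reconstructed poorly (low SSIM) receive more weight."""
    w = 1.0 - ssim_map(x, y)
    return np.mean(w * np.abs(x - y))
```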
Similarity-Based Models of Word Cooccurrence Probabilities
In many applications of natural language processing (NLP) it is necessary to
determine the likelihood of a given word combination. For example, a speech
recognizer may need to determine which of the two word combinations ``eat a
peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine
the likelihood of a word combination from its frequency in a training corpus.
However, the nature of language is such that many word combinations are
infrequent and do not occur in any given corpus. In this work we propose a
method for estimating the probability of such previously unseen word
combinations using available information on ``most similar'' words.
We describe probabilistic word association models based on distributional
word similarity, and apply them to two tasks, language modeling and pseudo-word
disambiguation. In the language modeling task, a similarity-based model is used
to improve probability estimates for unseen bigrams in a back-off language
model. The similarity-based method yields a 20% perplexity improvement in the
prediction of unseen bigrams and statistically significant reductions in
speech-recognition error.
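The core estimator can be sketched as follows: the probability of an unseen bigram (w1, w2) is a similarity-weighted average of the conditional probabilities given the words distributionally most similar to w1. Function and argument names here are assumed for illustration; in the back-off model this kind of estimate stands in for the usual back-off distribution when the bigram is unseen.

```python
def similarity_bigram_prob(w1, w2, neighbors, sim, cond_prob):
    """Estimate P(w2 | w1) for an unseen bigram from the conditional
    distributions of words most similar to w1.

    neighbors : dict mapping a word to its list of most similar words
    sim       : sim(w1, w) -> similarity weight between two words
    cond_prob : cond_prob(w2, w) -> MLE estimate of P(w2 | w) from the corpus
    """
    norm = sum(sim(w1, w) for w in neighbors[w1])
    return sum(sim(w1, w) / norm * cond_prob(w2, w) for w in neighbors[w1])
```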
We also compare four similarity-based estimation methods against back-off and
maximum-likelihood estimation methods on a pseudo-word sense disambiguation
task in which we controlled for both unigram and bigram frequency to avoid
giving too much weight to easy-to-disambiguate high-frequency configurations.
The similarity-based methods perform up to 40% better on this particular task.