Efficient Learning of Sparse Conditional Random Fields for Supervised Sequence Labelling
Conditional Random Fields (CRFs) constitute a popular and efficient approach
for supervised sequence labelling. CRFs can cope with large description spaces
and can integrate some form of structural dependency between labels. In this
contribution, we address the issue of efficient feature selection for CRFs
based on imposing sparsity through an L1 penalty. We first show how sparsity of
the parameter set can be exploited to significantly speed up training and
labelling. We then introduce coordinate descent parameter update schemes for
CRFs with L1 regularization. We finally provide some empirical comparisons of
the proposed approach with state-of-the-art CRF training strategies. In
particular, it is shown that the proposed approach is able to exploit
sparsity to speed up processing and hence potentially handle
higher-dimensional models.
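The coordinate descent updates with an L1 penalty can be illustrated on a simpler problem. The sketch below applies soft-thresholding coordinate descent to a lasso regression rather than a full CRF (the CRF case replaces the least-squares loss with the conditional log-likelihood, but the per-coordinate soft-thresholding step is the same idea); the data and the penalty value `lam` are made-up illustrations, not from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: closed-form solution of the
    # one-dimensional L1-penalized least-squares subproblem.
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precomputed per-feature squared norms
    for _ in range(n_iters):
        for j in range(d):
            # Partial residual with feature j's current contribution removed.
            r_j = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 of 10 features matter
y = X @ true_w + 0.01 * rng.normal(size=200)
w = lasso_coordinate_descent(X, y, lam=5.0)
print(np.round(w, 2))                   # irrelevant coordinates end up exactly zero
```

The exact zeros produced by soft thresholding are what the abstract exploits: features with zero weight can be skipped entirely during training and labelling.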
Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression
Although fully generative models have been successfully used to model the
contents of text documents, they are often awkward to apply to combinations of
text data and document metadata. In this paper we propose a
Dirichlet-multinomial regression (DMR) topic model that includes a log-linear
prior on document-topic distributions that is a function of observed features
of the document, such as author, publication venue, references, and dates. We
show that by selecting appropriate features, DMR topic models can meet or
exceed the performance of several previously published topic models designed
for specific data.

Comment: Appears in Proceedings of the Twenty-Fourth Conference on
Uncertainty in Artificial Intelligence (UAI2008).
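The log-linear prior described above can be sketched in a few lines: each document's Dirichlet hyperparameters over topics are an exponentiated linear function of its observed metadata features. The feature matrix, topic count, and weight matrix `Lambda` below are hypothetical stand-ins, not fitted DMR parameters.

```python
import numpy as np

# Metadata features per document (e.g. one-hot author / venue indicators).
X = np.array([[1.0, 0.0, 1.0],   # doc 0: author A, shared venue
              [0.0, 1.0, 1.0]])  # doc 1: author B, shared venue

# Per-topic regression weights lambda_k; random stand-ins for learned values.
Lambda = np.random.default_rng(0).normal(scale=0.5, size=(4, 3))  # 4 topics

def dmr_alpha(x_d, Lambda):
    """Document-specific Dirichlet hyperparameters: alpha_dk = exp(x_d . lambda_k)."""
    return np.exp(Lambda @ x_d)

alpha = dmr_alpha(X[0], Lambda)
print(alpha)  # one strictly positive concentration parameter per topic
```

Because the exponential keeps every hyperparameter positive, any real-valued metadata features can be plugged in without constraining the regression weights.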
Maximum entropy models capture melodic styles
We introduce a Maximum Entropy model able to capture the statistics of
melodies in music. The model can be used to generate new melodies that emulate
the style of the musical corpus which was used to train it. Instead of using
the n-body interactions of (n-1)-order Markov models, traditionally used in
automatic music generation, we use a k-nearest neighbour model with pairwise
interactions only. In that way, we keep the number of parameters low and avoid
over-fitting problems typical of Markov models. We show that long-range musical
phrases don't need to be explicitly enforced using high-order Markov
interactions, but can instead emerge from multiple, competing, pairwise
interactions. We validate our Maximum Entropy model by contrasting how much the
generated sequences capture the style of the original corpus without
plagiarizing it. To this end we use a data-compression approach to discriminate
the levels of borrowing and innovation featured by the artificial sequences.
The results show that our modelling scheme outperforms both fixed-order and
variable-order Markov models. This shows that, despite being based only on
pairwise interactions, this Maximum Entropy scheme opens the possibility to
generate musically sensible alterations of the original phrases, providing a
way to generate innovation.
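A minimal sketch of the pairwise maximum-entropy idea: the energy of a sequence is built only from pairwise couplings between notes at bounded distance, with no higher-order terms, and new sequences are drawn with a Metropolis chain. The alphabet size, sequence length, and couplings `J` here are random placeholders, not parameters fitted to a musical corpus.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pitches, length, max_dist = 8, 32, 4   # toy pitch alphabet and coupling range

# Pairwise couplings J[d, a, b]: interaction between pitches a and b at
# distance d+1 in the sequence (random stand-ins for fitted values).
J = rng.normal(scale=0.3, size=(max_dist, n_pitches, n_pitches))

def energy(seq):
    # Sum of pairwise interactions up to max_dist; no higher-order terms.
    e = 0.0
    for d in range(1, max_dist + 1):
        for i in range(len(seq) - d):
            e -= J[d - 1, seq[i], seq[i + d]]
    return e

def metropolis(seq, n_steps=2000, beta=1.0):
    seq = seq.copy()
    e = energy(seq)
    for _ in range(n_steps):
        # Propose changing one note, accept with the Metropolis rule.
        i, p = rng.integers(length), rng.integers(n_pitches)
        prop = seq.copy()
        prop[i] = p
        e_new = energy(prop)
        if rng.random() < np.exp(-beta * (e_new - e)):
            seq, e = prop, e_new
    return seq

melody = metropolis(rng.integers(n_pitches, size=length))
print(melody)
```

Long-range structure in the sampled sequences is not imposed by any single term; it can only arise from the competition among the many overlapping pairwise couplings, which is the paper's central claim.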