Multimodal Sparse Coding for Event Detection
Unsupervised feature learning methods have proven effective for
classification tasks based on a single modality. We present multimodal sparse
coding for learning feature representations shared across multiple modalities.
The shared representations are applied to multimedia event detection (MED) and
evaluated in comparison to unimodal counterparts, as well as other feature
learning methods such as GMM supervectors and sparse RBM. We report the
cross-validated classification accuracy and mean average precision of the MED
system trained on features learned from our unimodal and multimodal settings
for a subset of the TRECVID MED 2014 dataset. Comment: Multimodal Machine Learning Workshop at NIPS 201
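One common way to realize shared representations of this kind is to concatenate per-clip features from each modality and learn a joint dictionary, so that each dictionary atom spans both modalities and the sparse codes serve as the shared feature. The sketch below illustrates that idea with scikit-learn's `DictionaryLearning`; the feature dimensions, sample counts, and random data are hypothetical, and this is an illustration of joint sparse coding in general, not the paper's pipeline.

```python
# Hedged sketch: joint sparse coding over concatenated audio + visual
# features. The learned dictionary atoms span both modalities, so the
# sparse codes act as a representation shared across modalities.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
audio  = rng.standard_normal((200, 40))   # hypothetical per-clip audio features
visual = rng.standard_normal((200, 60))   # hypothetical per-clip visual features
X = np.hstack([audio, visual])            # one joint feature vector per clip

model = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                           transform_algorithm="lasso_lars", random_state=0)
codes = model.fit_transform(X)            # shared sparse codes, shape (200, 32)
```

The resulting `codes` matrix (one sparse code per clip) is what would be fed to a downstream event classifier in place of the raw unimodal features.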
Methodological considerations concerning manual annotation of musical audio in function of algorithm development
In research on musical audio-mining, annotated music databases are needed that allow the development of computational tools which extract from the musical audio stream the kind of high-level content that users can deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, both in the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (mainly linguistic-symbolic oriented), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in function of a computational approach to musical audio-mining that is based on algorithms that learn from annotated data.
A Sub-Character Architecture for Korean Language Processing
We introduce a novel sub-character architecture that exploits a unique
compositional structure of the Korean language. Our method decomposes each
character into a small set of primitive phonetic units called jamo letters from
which character- and word-level representations are induced. The jamo letters
divulge syntactic and semantic information that is difficult to access with
conventional character-level units. They greatly alleviate the data sparsity
problem, reducing the observation space to 1.6% of the original while
increasing accuracy in our experiments. We apply our architecture to dependency
parsing and achieve dramatic improvement over strong lexical baselines. Comment: EMNLP 201
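The jamo decomposition the abstract relies on is deterministic Unicode arithmetic: every precomposed Hangul syllable (U+AC00–U+D7A3) encodes a (lead, vowel, tail) triple. The sketch below shows that decomposition in plain Python; it illustrates the general Unicode mechanism, not the paper's own code, and uses compatibility jamo characters for readability.

```python
# Sketch: decompose a precomposed Hangul syllable into its jamo letters
# using the standard Unicode formula. Syllables occupy U+AC00..U+D7A3 and
# encode lead * 588 + vowel * 28 + tail.
LEADS  = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 initial consonants
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")        # 21 medial vowels
TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + empty

def to_jamo(ch):
    """Return the (lead, vowel, tail) jamo of one Hangul syllable,
    or pass the character through unchanged if it is not a syllable."""
    code = ord(ch) - 0xAC00
    if not 0 <= code <= 0xD7A3 - 0xAC00:
        return (ch,)
    return (LEADS[code // 588], VOWELS[(code % 588) // 28], TAILS[code % 28])

print(to_jamo("한"))   # ('ㅎ', 'ㅏ', 'ㄴ')
```

Counting the symbols makes the sparsity claim concrete: 19 + 21 + 28 jamo units cover all ~11,000 possible syllables, which is why inducing character representations from jamo shrinks the observation space so dramatically.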