Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the WASABI Song Corpus with Lyrics Annotations
We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the methods we propose to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song, and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above-mentioned methods. These labels and the accompanying methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, enabling intelligent browsing, categorization, and song segmentation recommendations.
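To make the shape of one such annotation layer concrete, here is a minimal sketch of an explicitness tagger. The corpus work uses trained classifiers; this naive wordlist matcher only illustrates what a per-song annotation record might look like, and EXPLICIT_TERMS and annotate_explicitness are hypothetical names, not part of the WASABI tooling.

```python
# Minimal sketch of one annotation layer: flagging explicit lyrics.
# A toy wordlist stands in for a trained classifier; the point is the
# annotation record attached to each song, not the detection quality.

import re

EXPLICIT_TERMS = {"damn", "hell"}  # placeholder list, not a real lexicon

def annotate_explicitness(lyrics: str, threshold: int = 1) -> dict:
    """Return an annotation record of the kind attached to each song."""
    tokens = re.findall(r"[a-z']+", lyrics.lower())
    hits = [t for t in tokens if t in EXPLICIT_TERMS]
    return {
        "explicit": len(hits) >= threshold,
        "evidence": hits,          # tokens backing the label
        "n_tokens": len(tokens),
    }

print(annotate_explicitness("Oh hell, love me, say that you love me"))
```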
Natural language processing for similar languages, varieties, and dialects: A survey
There has been considerable recent interest in the natural language processing (NLP) community in the computational processing of language varieties and dialects, with the aim of improving the performance of applications such as machine translation, speech recognition, and dialogue systems. Here, we attempt to survey this growing field of research, with a focus on computational methods for processing similar languages, varieties, and dialects. In particular, we discuss the most important challenges when dealing with diatopic language variation, and we present some of the available datasets and the most common data collection strategies used to compile them. We further present a number of studies on computational methods developed and/or adapted for preprocessing, normalization, part-of-speech tagging, and parsing of similar languages, language varieties, and dialects. Finally, we discuss relevant applications such as language and dialect identification and machine translation for closely related languages, language varieties, and dialects.
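A common baseline in the dialect identification literature the survey covers is character n-gram classification, since closely related varieties often differ mainly in orthography. The sketch below, with toy sentences and illustrative labels of my own choosing, shows that baseline end to end; it is not drawn from any specific surveyed system.

```python
# Minimal dialect identification baseline: character n-gram features
# plus a linear classifier. Training sentences and labels are toy data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "colour of the neighbourhood centre",   # en-GB-like spelling
    "color of the neighborhood center",     # en-US-like spelling
    "the theatre programme was cancelled",
    "the theater program was canceled",
]
train_labels = ["en-GB", "en-US", "en-GB", "en-US"]

# Character 1-4-grams capture the orthographic cues that often
# separate closely related varieties.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
    LogisticRegression(),
)
clf.fit(train_texts, train_labels)

print(clf.predict(["my favourite colour"]))  # expected: ['en-GB']
```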
Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain
Previously, a machine speech chain, based on sequence-to-sequence deep learning, was proposed to mimic speech perception and production behavior. Such a chain processes listening and speaking separately, through automatic speech recognition (ASR) and text-to-speech synthesis (TTS), while enabling the two components to teach each other in semi-supervised learning when they receive unpaired data. Unfortunately, that speech chain study was limited to the speech and textual modalities, whereas natural communication is multimodal and involves both the auditory and visual sensory systems. Moreover, although the speech chain reduces the need for fully paired data, it still requires a large amount of unpaired data. In this research, we take a further step and construct a multimodal chain, designing a closely knit architecture that combines ASR, TTS, image captioning, and image production models into a single framework. The framework allows each component to be trained without requiring a large amount of parallel multimodal data. Our experimental results also show that an ASR can be further trained even without speech and text data, and that cross-modal data augmentation remains possible through our proposed chain, which improves ASR performance.
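The closed-loop idea in this abstract can be made concrete with a schematic sketch. Everything below is a hypothetical stub standing in for a trained sequence-to-sequence model; only the data flow between components follows the abstract, not the paper's actual implementation or losses.

```python
# Schematic sketch of the multimodal chain's closed-loop training.
# Each function is a stub for a trained model; the loop shows how
# unpaired images can yield a training signal for ASR.

def asr(speech):            # speech -> text (stub)
    return f"text({speech})"

def tts(text):              # text -> speech (stub)
    return f"speech({text})"

def caption(image):         # image -> text (stub)
    return f"text({image})"

def update(model, pred, target):   # placeholder for one gradient step
    print(f"update {model}: pred={pred!r} vs target={target!r}")

# Unpaired image data: the chain manufactures speech/text supervision
# for ASR without any ground-truth transcript or audio.
for image in ["img_001", "img_002"]:
    pseudo_text = caption(image)        # image -> caption
    pseudo_speech = tts(pseudo_text)    # caption -> synthetic speech
    hypothesis = asr(pseudo_speech)     # synthetic speech -> text
    update("ASR", hypothesis, pseudo_text)
```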