
    Constrained speaker linking

    In this paper we study speaker linking (a.k.a. partitioning) given constraints on the distribution of speaker identities over speech recordings. Specifically, we show that the intractable partitioning problem becomes tractable when the constraints pre-partition the data into smaller cliques with non-overlapping speakers. The surprisingly common case where the speakers in telephone conversations are known, but the assignment of channels to identities is unspecified, is treated in a Bayesian way. We show that for the Dutch CGN database, where this channel assignment task is at hand, a lightweight speaker recognition system can solve the channel assignment problem quite effectively, with 93% of the cliques solved. We further show that the posterior distribution over channel assignment configurations is well calibrated. Comment: Submitted to Interspeech 2014, some typos fixed.
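
    The clique-wise assignment problem admits a compact illustration. The sketch below is not the authors' system: it assumes a dictionary llr of hypothetical same-speaker log-likelihood ratios for cross-call channel pairs, enumerates the 2^N channel-to-identity configurations of an N-call clique, and normalises the summed scores into a posterior under a uniform prior.

```python
# Minimal sketch, assuming hypothetical LLR scores from a lightweight
# speaker recognition front-end; not the paper's implementation.
import itertools
import math

def clique_posterior(calls, llr):
    """calls: list of (channel_a, channel_b, speaker_x, speaker_y) tuples for
    one clique; llr[frozenset({c1, c2})]: same-speaker log-likelihood ratio
    for a cross-call channel pair. Returns (assignment, posterior) pairs over
    the 2**len(calls) channel-to-identity configurations, uniform prior."""
    scored = []
    for flips in itertools.product([False, True], repeat=len(calls)):
        assignment = {}
        for (ca, cb, sx, sy), flip in zip(calls, flips):
            assignment[ca], assignment[cb] = (sy, sx) if flip else (sx, sy)
        # Sum LLRs over channel pairs that this configuration labels with
        # the same speaker identity.
        score = sum(
            llr[frozenset(pair)]
            for pair in itertools.combinations(assignment, 2)
            if frozenset(pair) in llr
            and assignment[pair[0]] == assignment[pair[1]]
        )
        scored.append((assignment, score))
    # Log-sum-exp normalisation gives the posterior over configurations.
    m = max(s for _, s in scored)
    z = m + math.log(sum(math.exp(s - m) for _, s in scored))
    return [(a, math.exp(s - z)) for a, s in scored]
```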

    Creating a Dutch testbed to evaluate the retrieval from textual databases

    This paper describes the first large-scale evaluation of information retrieval systems using Dutch documents and queries. We describe in detail the characteristics of the Dutch test data, which is part of the official CLEF multilingual textual database, and give an overview of the experimental results of the companies and research institutions that participated in the first official Dutch CLEF experiments. Judging from these experiments, the handling of language-specific issues of Dutch, such as simple morphology and compound nouns, significantly improves the performance of information retrieval systems in many cases. Careful examination of the test collection shows that it can serve as a reliable tool for the evaluation of information retrieval systems in the future.
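
    For readers unfamiliar with the evaluation methodology, the snippet below shows how a standard rank-based measure, average precision, is computed for a single query. It is an illustrative helper with hypothetical document identifiers, not part of the official CLEF tooling.

```python
def average_precision(ranked_doc_ids, relevant_ids):
    """Average precision for one query: the mean of the precision values at
    the ranks where relevant documents are retrieved, divided by the total
    number of relevant documents (0.0 if there are none)."""
    hits = 0
    precisions = []
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

# Hypothetical example: two of the three relevant documents are retrieved.
print(average_precision(["d7", "d2", "d9", "d4"], {"d2", "d4", "d5"}))
```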

    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and examining how differences between these two types of ratings affect the development and performance of automatic emotion recognizers. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained via self-report and observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings, which are also reflected in the performance of the emotion recognizers developed. Using Support Vector Regression in combination with acoustic and textual features, recognizers of arousal and valence are developed that can predict points in a two-dimensional arousal-valence space. The results of these recognizers show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
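
    As an illustration of the dimensional prediction step (not the paper's actual feature set or configuration), the sketch below trains one Support Vector Regressor per dimension on hypothetical acoustic feature vectors and maps test utterances to points in the arousal-valence plane; it assumes scikit-learn and NumPy.

```python
# Minimal sketch, assuming hypothetical feature matrices and ratings in [-1, 1];
# feature extraction is taken to have happened elsewhere.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(200, 40)), rng.normal(size=(50, 40))
arousal_train = rng.uniform(-1, 1, size=200)
valence_train = rng.uniform(-1, 1, size=200)

# One regressor per affect dimension; an RBF kernel is a common default.
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
arousal_model.fit(X_train, arousal_train)
valence_model.fit(X_train, valence_train)

# Each test utterance is mapped to a predicted (arousal, valence) point.
predicted_points = np.column_stack(
    [arousal_model.predict(X_test), valence_model.predict(X_test)]
)
```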

    Arousal and Valence Prediction in Spontaneous Emotional Speech: Felt versus Perceived Emotion

    In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech with the aim of comparing the added value of annotating felt emotion versus annotating perceived emotion. Using speech material available in the TNO-GAMING corpus (a corpus containing audiovisual recordings of people playing videogames), speech-based affect recognizers were developed that can predict Arousal and Valence scalar values. Two types of recognizers were developed in parallel: one trained with felt emotion annotations (generated by the gamers themselves) and one trained with perceived/observed emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
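
    Purely as an illustration of the comparison being made (the arrays are synthetic and the paper's actual measures are not reproduced here), the snippet below scores one set of recognizer predictions against felt and perceived annotations using Pearson's r; it assumes NumPy and SciPy.

```python
# Illustrative evaluation only: all arrays below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
predictions = rng.uniform(-1, 1, size=100)                  # recognizer output
perceived = predictions + rng.normal(scale=0.3, size=100)   # observer ratings (mean over raters)
felt = predictions + rng.normal(scale=0.6, size=100)        # self-reported ratings

# Agreement of the same predictions with each annotation type.
for name, ratings in [("perceived", perceived), ("felt", felt)]:
    r, _ = pearsonr(ratings, predictions)
    print(f"agreement with {name} annotations: r = {r:.2f}")
```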

    Impact of basic angle variations on the parallax zero point for a scanning astrometric satellite

    Determination of absolute parallaxes by means of a scanning astrometric satellite such as Hipparcos or Gaia relies on the short-term stability of the so-called basic angle between the two viewing directions. Uncalibrated variations of the basic angle may produce systematic errors in the computed parallaxes. We examine the coupling between a global parallax shift and specific variations of the basic angle, namely those related to the satellite attitude with respect to the Sun. The changes in observables produced by small perturbations of the basic angle, attitude, and parallaxes are calculated analytically. We then look for a combination of perturbations that has no net effect on the observables. In the approximation of infinitely small fields of view, it is shown that certain perturbations of the basic angle are observationally indistinguishable from a global shift of the parallaxes. If such perturbations exist, they cannot be calibrated from the astrometric observations but will produce a global parallax bias. Numerical simulations of the astrometric solution, using both direct and iterative methods, confirm this theoretical result. For a given amplitude of the basic angle perturbation, the parallax bias is smaller for a larger basic angle and a larger solar aspect angle. In both these respects Gaia has a more favourable geometry than Hipparcos. In the case of Gaia, internal metrology is used to monitor basic angle variations. Additionally, Gaia has the advantage of detecting numerous quasars, which can be used to verify the parallax zero point. Comment: 8 pages, 2 figures; accepted for publication in Astronomy & Astrophysics.
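
    The degeneracy can be summarised in a single relation. The form below is an assumed illustration under the small-field approximation, with Gamma the basic angle, xi the solar aspect angle, Omega the spin phase with respect to the Sun, and a the perturbation amplitude; the exact coefficient is the paper's result and should be taken from there.

```latex
% Assumed illustrative form of the degeneracy (not quoted from the paper):
% a basic angle perturbation locked to the spin phase with respect to the
% Sun is indistinguishable from a global parallax shift.
\[
  \Delta\Gamma(t) = a \cos\Omega(t)
  \quad\Longrightarrow\quad
  \delta\varpi \;\simeq\; \frac{a}{2\,\sin(\Gamma/2)\,\sin\xi},
\]
% where $\Gamma$ is the basic angle and $\xi$ the solar aspect angle; the
% bias decreases as both grow, consistent with the abstract's remark that
% Gaia's geometry is more favourable than that of Hipparcos.
```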

    Dealing with Phrase Level Co-Articulation (PLC) in speech recognition: A first approach

    Whereas nowadays within-word co-articulation effects are usually dealt with sufficiently in automatic speech recognition, this is not always the case for phrase level co-articulation (PLC) effects. This paper describes a first approach to dealing with phrase level co-articulation by applying PLC rules to the reference transcripts used for training our recogniser and by adding a set of temporary PLC phones that are later mapped back onto the original phones. In effect, we temporarily break down acoustic context into a general context and a PLC context. With this method, more robust models could be trained because phones that are confused due to PLC effects, such as /v/-/f/ and /z/-/s/, receive their own models. A first attempt to apply this method is described.
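
    To make the procedure concrete, the sketch below applies a hypothetical PLC rule (progressive devoicing of a word-initial voiced fricative after a voiceless obstruent) to a phone-level transcript, introducing temporary PLC phones that are mapped back onto the original phones afterwards; the rule set and phone symbols are illustrative, not the paper's.

```python
# Minimal sketch with an illustrative rule set; not the paper's inventory.
VOICELESS = {"p", "t", "k", "f", "s", "x"}
PLC_VARIANT = {"v": "v_plc", "z": "z_plc"}        # temporary PLC phones
PLC_TO_BASE = {v: k for k, v in PLC_VARIANT.items()}

def apply_plc(words):
    """words: list of words, each a list of phone symbols. Relabels a
    word-initial /v/ or /z/ as a temporary PLC phone when the preceding
    word ends in a voiceless obstruent, so it gets its own model."""
    out = [list(w) for w in words]
    for prev, cur in zip(out, out[1:]):
        if prev[-1] in VOICELESS and cur[0] in PLC_VARIANT:
            cur[0] = PLC_VARIANT[cur[0]]          # e.g. word-initial /v/ -> /v_plc/
    return out

def map_back(words):
    """Undo the temporary relabelling once the models have been trained."""
    return [[PLC_TO_BASE.get(p, p) for p in w] for w in words]

# Hypothetical usage: /v/ after a word-final /t/ receives its own PLC phone.
print(apply_plc([["d", "A", "t"], ["v", "I", "n", "t"]]))
```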