57 research outputs found

    Science on television : how? Like that!

    This study explores the presence of science programs on the Flemish public broadcaster between 1997 and 2002 in terms of length, science domains, target groups, production mode, and type of broadcast. Our data show that for nearly all variables, 2000 marked the year in which the downward spiral for science on television was reversed. These results serve as a case study for discussing the influence of public policy and other possible motives for changes in science programming, so as to gain a clearer insight into the factors that determine whether and how science programs are broadcast on television. Three factors proved crucial in this respect: 1) a public service philosophy, 2) a strong governmental science policy providing structural government support, and 3) the reflection of a social discourse that articulates a need for more hard sciences.

    UGENT-LT3 SCATE system for machine translation quality estimation

    This paper describes the submission of the UGENT-LT3 SCATE system to the WMT15 Shared Task on Quality Estimation (QE), viz. English-Spanish word- and sentence-level QE. We conceived QE as a supervised Machine Learning (ML) problem, designed additional features, and combined these with the baseline feature set to estimate quality. The sentence-level QE system re-uses the predictions of the word-level QE system. We experimented with different learning methods and observed improvements over the baseline system for word-level QE through the use of the new features and by combining learning methods into ensembles. For sentence-level QE, we show that using a single feature based on word-level predictions can outperform the baseline system, and that combining it with additional features leads to further improvements in performance.
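    The abstract's core idea, deriving a single sentence-level feature from word-level QE predictions, can be sketched roughly as follows. This is a minimal illustration only; the function name, the dict-free interface, and the 0.5 decision threshold are our assumptions, not details from the paper.

```python
def sentence_feature(word_bad_probs):
    """Aggregate word-level QE output into one sentence-level feature:
    the fraction of words predicted 'BAD' (probability > 0.5).

    Hypothetical sketch -- the actual SCATE system's aggregation and
    threshold are not specified in the abstract."""
    if not word_bad_probs:
        return 0.0
    bad = sum(1 for p in word_bad_probs if p > 0.5)
    return bad / len(word_bad_probs)

# Example: word-level BAD probabilities for one MT output sentence.
probs = [0.1, 0.8, 0.3, 0.9, 0.2]
print(sentence_feature(probs))  # -> 0.4
```

    Such a feature could then be fed, alone or alongside the baseline features, into any sentence-level regressor, which matches the abstract's finding that even this single feature can outperform the baseline.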

    LeTs Preprocess: The multilingual LT3 linguistic preprocessing toolkit

    This paper presents the LeTs Preprocess Toolkit, a suite of robust, high-performance preprocessing modules comprising Part-of-Speech Taggers, Lemmatizers, and Named Entity Recognizers. The currently supported languages are Dutch, English, French, and German. We give a detailed description of the architecture of the LeTs Preprocess pipeline and describe the data and methods used to train each component. Ten-fold cross-validation results are also presented. To assess the performance of each module on different domains, we collected real-world textual data from companies covering various domains (among others automotive, dredging, and human resources) for all four supported languages. For this multi-domain corpus, a manually verified gold standard was created for each of the three preprocessing steps. We present the performance of our preprocessing components on this corpus and compare it to the performance of other existing tools.
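    A preprocessing pipeline of this kind can be sketched as a chain of annotation stages, each enriching a shared token representation. This is a toy illustration under our own assumptions: the stage functions, the dict-based token format, and the deliberately trivial tagging rule are ours, not the toolkit's actual trained models.

```python
def tokenize(text):
    """Split raw text into token dicts (toy whitespace tokenizer)."""
    return [{"form": tok} for tok in text.split()]

def pos_tag(tokens):
    """Attach a PoS label to each token (toy rule: capitalized -> NOUN)."""
    for t in tokens:
        t["pos"] = "NOUN" if t["form"][0].isupper() else "X"
    return tokens

def lemmatize(tokens):
    """Attach a lemma to each token (toy rule: lowercase the form)."""
    for t in tokens:
        t["lemma"] = t["form"].lower()
    return tokens

def run_pipeline(text, stages):
    """Pass the data through each preprocessing stage in order."""
    result = text
    for stage in stages:
        result = stage(result)
    return result

tokens = run_pipeline("Brussels is dredging", [tokenize, pos_tag, lemmatize])
print(tokens[0])  # -> {'form': 'Brussels', 'pos': 'NOUN', 'lemma': 'brussels'}
```

    Chaining stages over one shared token structure is what lets later modules (e.g. a named entity recognizer) consume the annotations produced earlier in the pipeline.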