
    Why We Should Report the Details in Subjective Evaluation of TTS More Rigorously

    This paper emphasizes the importance of reporting experiment details in subjective evaluations and demonstrates how such details can significantly impact evaluation results in the field of speech synthesis. Through an analysis of 80 papers presented at INTERSPEECH 2022, we find a lack of thorough reporting on critical details such as evaluator recruitment and filtering, instructions and payments, and the geographic and linguistic backgrounds of evaluators. To illustrate the effect of these details on evaluation outcomes, we conducted mean opinion score (MOS) tests on three well-known TTS systems under different evaluation settings and obtained at least three distinct rankings of the TTS models. We urge the community to report experiment details in subjective evaluations to improve the reliability and interpretability of experimental results. Comment: Interspeech 2023 camera-ready version
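    A MOS test aggregates per-listener ratings (typically on a 1–5 scale) into a mean per system, usually with a confidence interval. The sketch below is not from the paper: the ratings are made up and the `mos_with_ci` helper is a generic illustration of how the same system can score differently under different evaluator pools.

```python
import numpy as np

def mos_with_ci(ratings, confidence_z=1.96):
    """Mean opinion score with a normal-approximation 95% confidence interval."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    half_width = confidence_z * ratings.std(ddof=1) / np.sqrt(len(ratings))
    return mean, (mean - half_width, mean + half_width)

# Hypothetical ratings of the same TTS system from two different evaluator pools.
pool_a = [5, 4, 4, 5, 3, 4, 5, 4]   # e.g. paid, filtered native speakers
pool_b = [3, 4, 3, 2, 4, 3, 3, 4]   # e.g. unfiltered crowd workers

for name, ratings in [("pool A", pool_a), ("pool B", pool_b)]:
    mos, ci = mos_with_ci(ratings)
    print(f"{name}: MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```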

    Patent Analysis for the Formulation of Technology Policy: Evidence from 4G LTE Industry

    Policy-makers seek a more rigorous method of selecting potentially successful technologies that fulfil the requirements of different stakeholders. Patent analysis can assist policy-makers in (1) understanding the development trajectory of technologies and monitoring the status of technological development to gain a dynamic view of the current competitive situation; (2) applying the concept of relative patent advantage (RPA) to grasp the comparative advantages or disadvantages of specific technology domains in each nation; and (3) combining patent data with multivariate methods of analysis to clarify the current state of an industry’s leading technologies. Combining patent data analysis and multivariate analysis, we assess 4G LTE technologies and explore the comparative technological advantages of the ten countries holding the most patents. This study aims to provide suggestions that serve as an important reference for each nation in formulating its future technology policies.
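    The abstract does not give the RPA formula. A common formulation of such revealed-advantage indices (an assumption here, not taken from the paper) divides a country's share of world patents in one technology domain by its share of world patents across all domains, so values above 1 indicate relative specialization. The sketch below illustrates that calculation on made-up patent counts.

```python
# Minimal sketch of a revealed-advantage index on hypothetical patent counts.
# Assumed formulation (not confirmed by the paper):
#   index = (country's share of patents in a domain) / (country's share of all patents).
patent_counts = {
    # country -> {technology domain -> number of patents} (made-up numbers)
    "Country A": {"modulation": 120, "handover": 30, "scheduling": 50},
    "Country B": {"modulation": 40, "handover": 90, "scheduling": 70},
}

domain_totals = {}
grand_total = 0
for domains in patent_counts.values():
    for domain, count in domains.items():
        domain_totals[domain] = domain_totals.get(domain, 0) + count
        grand_total += count

for country, domains in patent_counts.items():
    country_total = sum(domains.values())
    for domain, count in domains.items():
        share_in_domain = count / domain_totals[domain]   # share of this domain's patents
        share_overall = country_total / grand_total        # share of all patents
        index = share_in_domain / share_overall
        print(f"{country} / {domain}: index = {index:.2f}")
```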

    Knowledge Sharing in Virtual Community: The Comparison between Contributors and Lurkers

    Internet-based virtual communities are growing at an unprecedented rate and have been viewed as platforms for sharing knowledge. The present study proposed an integrated model investigating the social capital and motivational factors that influence members’ knowledge sharing attitude. Data were collected from 207 professional virtual community users (53 contributors and 154 lurkers). The results showed that trust and pro-sharing norms mediate the relationship between shared understanding and knowledge sharing attitude. Enjoyment in helping, commitment, and community-related outcome expectations enhance contributors’ attitudes toward knowledge sharing. When lurkers perceive more reciprocity in their communities and expect more community-related outcomes, they are more inclined to share knowledge with others. The implications of these results are discussed.
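    The mediation claim (trust and pro-sharing norms mediating the link between shared understanding and sharing attitude) rests on a statistical test whose exact procedure is not given in the abstract. The sketch below only illustrates one common approach, a bootstrapped indirect effect (a*b) for a single hypothetical mediator, on simulated data rather than the study's own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated survey-style data: shared understanding (X) -> trust (M) -> sharing attitude (Y).
n = 207
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)          # mediator partially driven by X
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)

def slopes(predictors, outcome):
    """OLS coefficients (intercept included); returns the predictor coefficients only."""
    design = np.column_stack([np.ones(len(outcome))] + list(predictors))
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs[1:]

# Bootstrap the indirect effect a*b (X->M path times M->Y path controlling for X).
indirect = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = slopes([x[idx]], m[idx])[0]
    b = slopes([m[idx], x[idx]], y[idx])[0]
    indirect.append(a * b)

low, high = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrapped indirect effect 95% CI: ({low:.3f}, {high:.3f})")
```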

    Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS

    Existing sentence textual similarity benchmark datasets only use a single number to summarize how similar a sentence encoder's decisions are to humans'. However, it is unclear what kind of sentence pairs a sentence encoder (SE) would consider similar. Moreover, existing SE benchmarks mainly consider sentence pairs with low lexical overlap, so it is unclear how SEs behave when two sentences have high lexical overlap. We introduce a high-quality SE diagnostic dataset, HEROS. HEROS is constructed by transforming an original sentence into a new sentence based on certain rules to form a \textit{minimal pair}, and the minimal pair has high lexical overlap. The rules include replacing a word with a synonym, an antonym, a typo, or a random word, and converting the original sentence into its negation. Different rules yield different subsets of HEROS. By systematically comparing the performance of over 60 supervised and unsupervised SEs on HEROS, we reveal that most unsupervised sentence encoders are insensitive to negation. We find that the datasets used to train an SE are the main determinant of what kind of sentence pairs it considers similar. We also show that even if two SEs have similar performance on STS benchmarks, they can behave very differently on HEROS. Our results reveal the blind spot of traditional STS benchmarks when evaluating SEs. Comment: ACL 2023 repl4nlp (representation learning for NLP) workshop poster paper. Dataset at https://huggingface.co/datasets/dcml0714/Hero
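    To show what probing an SE on minimal pairs looks like in practice, here is a minimal sketch using the sentence-transformers library; the model name and example pairs are illustrative choices, not drawn from the paper or from HEROS itself. It compares the cosine similarity a model assigns to a synonym substitution versus a negation of the same sentence.

```python
# Minimal sketch: probing a sentence encoder with high-lexical-overlap minimal pairs.
# Assumes the sentence-transformers package; the model and sentence pairs are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

minimal_pairs = {
    "synonym":  ("The movie was fantastic.", "The movie was great."),
    "negation": ("The movie was fantastic.", "The movie was not fantastic."),
}

for rule, (original, transformed) in minimal_pairs.items():
    emb = model.encode([original, transformed], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    print(f"{rule}: cosine similarity = {score:.3f}")
```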