5 research outputs found

    The Researcher's Role in Creating a Corpus of Conversational Narratives

    This contribution uses an anthropological-linguistic analysis of examples excerpted from the conversations that make up the corpus compiled for the study Π‘Ρ‚Π΅Ρ€Π΅ΠΎΡ‚ΠΈΠΏ Π²Ρ€Π΅ΠΌΠ΅Π½Π° Ρƒ дискурсу расСљСних Π»ΠΈΡ†Π° са Косова ΠΈ ΠœΠ΅Ρ‚ΠΎΡ…ΠΈΡ˜Π΅ (The Stereotype of Time in the Discourse of Displaced Persons from Kosovo and Metohija) to highlight the researcher's role in conducting field interviews with displaced persons. The analysis focuses on the researcher's interventions in the conversations, which, on the one hand, played an important part in shaping the complete corpus and, on the other, reveal some of the differences between the researcher's and the interlocutors' conceptualizations of the world, differences that could not have been anticipated in advance and were noticed only after the transcripts were analyzed.

    Extracting Multilingual Topics from Unaligned Comparable Corpora

    Topic models have been studied extensively in the context of monolingual corpora. Though there are some attempts to mine topical structure from cross-lingual corpora, they require clues about document alignments. In this paper we present a generative model called JointLDA, which uses a bilingual dictionary to mine multilingual topics from an unaligned corpus. Experiments conducted on different data sets confirm our conjecture that jointly modeling the cross-lingual corpora offers several advantages compared to individual monolingual models. Since the JointLDA model merges related topics in different languages into a single multilingual topic: (a) it can fit the data with relatively fewer topics; (b) it can predict related words from a language different from that of the given document. In fact, it has better predictive power than the bag-of-words-based translation model, making JointLDA a candidate to be preferred over the bag-of-words model for cross-lingual IR applications. We also found that the monolingual models learnt while optimizing the cross-lingual corpora are more effective than the corresponding LDA models.
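The core idea behind dictionary-based joint topic modeling can be illustrated without the full JointLDA machinery: if translation pairs from a bilingual dictionary are mapped to shared "concept" tokens, then even plain collapsed-Gibbs LDA run over the merged vocabulary assigns documents in different languages to the same multilingual topics. The sketch below is illustrative only (the dictionary, documents, and hyperparameters are invented, and this is standard LDA, not the paper's generative model):

```python
import random

# Hypothetical bilingual dictionary: translation pairs share one concept token.
BILINGUAL = {"dog": "dog", "perro": "dog", "cat": "cat", "gato": "cat",
             "bank": "bank", "banco": "bank", "money": "money", "dinero": "money"}

# Toy unaligned corpus: two English and two Spanish documents.
docs = [["dog", "cat", "dog"], ["perro", "gato", "perro"],
        ["bank", "money", "bank"], ["banco", "dinero", "dinero"]]

def gibbs_lda(docs, n_topics=2, iters=200, alpha=0.1, beta=0.1, seed=0):
    """Collapsed Gibbs sampling for LDA over the shared-concept vocabulary."""
    rng = random.Random(seed)
    mapped = [[BILINGUAL.get(w, w) for w in d] for d in docs]
    vocab = sorted({w for d in mapped for w in d})
    vid = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * n_topics for _ in mapped]    # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                       # topic totals
    z = []
    for di, d in enumerate(mapped):           # random initial assignments
        zs = []
        for w in d:
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[di][t] += 1; nkw[t][vid[w]] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):                    # resample each token's topic
        for di, d in enumerate(mapped):
            for wi, w in enumerate(d):
                t = z[di][wi]
                ndk[di][t] -= 1; nkw[t][vid[w]] -= 1; nk[t] -= 1
                weights = [(ndk[di][k] + alpha) * (nkw[k][vid[w]] + beta)
                           / (nk[k] + V * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][vid[w]] += 1; nk[t] += 1
    # Posterior mean document-topic distributions.
    theta = [[(ndk[di][k] + alpha) / (len(d) + n_topics * alpha)
              for k in range(n_topics)] for di, d in enumerate(mapped)]
    return theta, vocab

theta, vocab = gibbs_lda(docs)
# Because "dog"/"perro" etc. collapse to shared tokens, the English and
# Spanish documents about the same subject get similar topic mixtures.
```

Note how the merged vocabulary has four concepts rather than eight surface forms, which mirrors the paper's observation that joint modeling fits the data with fewer topics.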

    Classifying Bias in Large Multilingual Corpora via Crowdsourcing and Topic Modeling

    Our project extends previous algorithmic approaches to finding bias in large text corpora. We used multilingual topic modeling to examine language-specific bias in the English, Spanish, and Russian versions of Wikipedia. In particular, we placed Spanish articles discussing the Cold War on a Russian-English viewpoint spectrum based on similarity in topic distribution. We then crowdsourced human annotations of Spanish Wikipedia articles for comparison to the topic model. Our hypothesis was that human annotators and topic modeling algorithms would provide correlated results for bias. However, that was not the case. Our annotations indicated that humans were more perceptive of sentiment in article text than of topic distribution, which suggests that our classifier provides a different perspective on a text's bias.
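Placing an article on a viewpoint spectrum by topic-distribution similarity can be sketched with a standard divergence measure. The snippet below is a hedged illustration, not the paper's method: it assumes centroid topic distributions for the English and Russian article sets exist, uses Jensen-Shannon divergence as the similarity measure, and the three-topic distributions are invented:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two topic distributions (base 2, in [0, 1])."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log2(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def viewpoint_score(article, english_centroid, russian_centroid):
    """0.0 = identical to the English centroid, 1.0 = identical to the Russian one."""
    d_en = js_divergence(article, english_centroid)
    d_ru = js_divergence(article, russian_centroid)
    return d_en / (d_en + d_ru) if (d_en + d_ru) > 0 else 0.5

# Hypothetical 3-topic distributions (illustrative, not the paper's data).
en_centroid = [0.7, 0.2, 0.1]
ru_centroid = [0.1, 0.2, 0.7]
es_article = [0.5, 0.3, 0.2]

score = viewpoint_score(es_article, en_centroid, ru_centroid)
# A score below 0.5 means this Spanish article's topic mixture sits
# closer to the English viewpoint than to the Russian one.
```

The normalized-divergence score gives a single number per article, so a whole set of Spanish articles can be sorted along the Russian-English spectrum for comparison against human annotations.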