
    Similar Text Fragments Extraction for Identifying Common Wikipedia Communities

    Similar text fragment extraction from weakly formalized data is a task of natural language processing and intelligent data analysis, used to solve the problem of automatically identifying connected knowledge fields. To find such common communities in Wikipedia, we propose adding a logical-algebraic model for similar collocation extraction as an additional stage. With the Stanford Part-Of-Speech tagger and the Stanford Universal Dependencies parser, we identify the grammatical characteristics of collocation words; with WordNet synsets, we choose their synonyms. Our dataset includes Wikipedia articles from different portals and projects. The experimental results show the frequencies of synonymous text fragments in Wikipedia articles that form common information spaces. The number of highly frequent synonymous collocations can serve as an indicator of key, up-to-date Wikipedia communities.
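The matching step described above (words aligned by position, compared through shared synonym sets) can be sketched as follows. This is only an illustration: the paper uses WordNet synsets and Stanford tools, whereas the toy `SYNONYMS` table below is an invented stand-in.

```python
# Hedged sketch: deciding whether two collocations are "similar"
# because each pair of aligned words is identical or synonymous.
# SYNONYMS is a toy stand-in for WordNet synsets.

SYNONYMS = {
    "big": {"big", "large", "huge"},
    "data": {"data", "dataset"},
}

def are_similar(colloc_a, colloc_b):
    """Two collocations are similar if they have the same length and
    every aligned word pair is identical or shares a synonym set."""
    if len(colloc_a) != len(colloc_b):
        return False
    for wa, wb in zip(colloc_a, colloc_b):
        if wa == wb:
            continue
        if wb not in SYNONYMS.get(wa, {wa}):
            return False
    return True

print(are_similar(("big", "data"), ("large", "dataset")))  # True
```

Counting how often such similar pairs recur across articles would then give the frequency data the abstract refers to.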

    The logic and linguistic model for automatic extraction of collocation similarity

    The article discusses the process of automatic identification of collocation similarity. Semantic analysis is one of the most advanced as well as one of the most difficult NLP tasks. The main problem of semantic processing is the determination of polysemy and synonymy of linguistic units, and the task becomes more complicated in the case of word collocations. The paper suggests a logical and linguistic model for automatically determining semantic similarity between collocations in the Ukrainian and English languages. The proposed model formalizes semantic equivalence of collocations by means of the semantic and grammatical characteristics of collocates. The basic idea of this approach is that the morphological, syntactic, and semantic characteristics of lexical units must be taken into account when identifying collocation similarity. The basic mathematical means of our model are logical-algebraic equations of the algebra of finite predicates. The model examines verb-noun and noun-adjective collocations in Ukrainian and English, which consist of words belonging to the main parts of speech, and allows extracting semantically equivalent collocations from semi-structured and non-structured texts. Implementations of the model will make it possible to automatically recognize semantically equivalent collocations. Using the model increases the effectiveness of natural language processing tasks such as information extraction, ontology generation, sentiment analysis, and others.
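As a sketch of the candidate-selection step this kind of model presupposes, verb-noun and adjective-noun pairs can be picked out of POS-tagged text by pattern matching. The tag names and the sample sentence below are illustrative assumptions, not the paper's predicate-algebra formalism.

```python
# Hedged sketch: selecting verb-noun and adjective-noun collocation
# candidates (the two patterns the model covers) from POS-tagged text.

def candidates(tagged):
    """Yield adjacent word pairs whose POS bigram is verb-noun
    or adjective-noun."""
    patterns = {("VERB", "NOUN"), ("ADJ", "NOUN")}
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if (t1, t2) in patterns:
            yield (w1, w2)

tagged = [("make", "VERB"), ("quick", "ADJ"), ("decisions", "NOUN")]
print(list(candidates(tagged)))  # [('quick', 'decisions')]
```

Each candidate pair would then be tested for semantic equivalence against pairs extracted from other texts.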

    Knowledge Representation and WordNets

    Knowledge itself is a representation of “real facts”: a logical model that presents facts from the real world which can be expressed in a formal language. Representation means the construction of a model of some part of reality. Knowledge representation is relevant to both cognitive science and artificial intelligence. In cognitive science it expresses the way people store and process information. In the AI field, the goal is to store knowledge in such a way that intelligent programs can represent information as closely as possible to human intelligence. Knowledge representation refers to the formal representation of knowledge intended to be processed and stored by computers, and to drawing conclusions from this knowledge. Examples of applications are expert systems, machine translation systems, computer-aided maintenance systems, and information retrieval systems (including database front-ends). Keywords: knowledge, representation, AI models, databases, CAMs

    A Hybrid Extraction Model for Chinese Noun/Verb Synonymous bi-gram Collocations


    A lexicon of verb and -mente adverb collocations in Portuguese: extraction from corpora and classification

    Collocations became a target of research in the twentieth century after FIRTH (1957) coined the term and called attention to the fact that the way we combine words in natural language is far from unconstrained. In Firth's sense, a pair or group of words can be considered a collocation if the probability of their co-occurrence exceeds chance levels. For a long time this concept prevailed in the literature as the rationale behind the task of collocation extraction. However, the more recent formulation of MEL’ČUK (2003) provides a more semantics-based view of the phenomenon that does not necessarily coincide with an attested high frequency of the word combinations. According to Mel’čuk, the meaning of certain words dictates the adjacent use of others, forming groups or pairs of base words and collocates.

    Concerning the linguistic pattern investigated in this study, namely pairs of verbs and -mente (‘-ly’)-ending adverbs, the verb is the base of the combination, while the adverb is the collocate. The extraction strategy adopted here profits from both Firth's and Mel’čuk’s formulations, since, at different stages, it relies both on frequency of distribution and on meaning-oriented human annotations.

    The corpus used for the extraction of verb-adverb bigrams was the CETEMPúblico (SANTOS & ROCHA, 2001) corpus of European Portuguese, consisting of 192M words of journalistic texts. This is, to the best of our knowledge, the largest freely distributed corpus of Portuguese. Albeit constituting just over 10% of all simple adverb occurrences in the corpus, adverbs ending in -mente, henceforth Adv-mente, represent in fact the majority of the simple-word lemmas of this grammatical class, based on data from the CETEMPúblico.

    While a number of initiatives in collocation extraction have relied substantially on a search for adjacent words, such as CHOUEKA (1988), newer studies suggest that methods involving some level of syntactic parsing may prove more precise for certain linguistic patterns (SERETAN, 2011). In view of this, we experiment with a syntax-based approach for the extraction of verb-adverb pairs from the corpus. This pattern can be deemed challenging with respect to the extraction task, since adverbs can occupy different positions in the sentence, being commonly associated with rather loose mobility in speech (BECHARA, 2003).

    In the remainder of this paper, we describe the syntax-based approach adopted to extract collocations from the corpus, explain the linguistically motivated classification of collocation candidates, and present an empirical evaluation of statistical association measures in identifying {V, Adv-mente} collocations. We conclude by discussing the appropriateness of the methods experimented with in view of {V, Adv-mente} pairs and by proposing future work.
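The statistical association measures evaluated in studies like this can be illustrated with pointwise mutual information (PMI), one commonly used measure. The counts and the example bigram below are invented purely for illustration.

```python
import math

# Hedged sketch: PMI for a word pair, computed from corpus counts.
# PMI = log2( P(w1, w2) / (P(w1) * P(w2)) )

def pmi(pair_count, w1_count, w2_count, total):
    """Pointwise mutual information of a bigram, given its joint
    count, the marginal counts of each word, and corpus size."""
    p_pair = pair_count / total
    p1 = w1_count / total
    p2 = w2_count / total
    return math.log2(p_pair / (p1 * p2))

# e.g. a verb + Adv-mente pair seen 50 times in a 1M-word corpus
score = pmi(pair_count=50, w1_count=1000, w2_count=2000, total=1_000_000)
print(round(score, 2))  # 4.64
```

A positive score means the pair co-occurs more often than its parts' frequencies alone would predict, which is the Firthian intuition behind such measures.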

    Mining Large-scale Event Knowledge from Web Text

    This paper addresses the problem of automatic acquisition of semantic relations between events. While previous work on the automatic acquisition of semantic relations relied on annotated text corpora, it is still unclear how to develop more generic methods to meet the needs of identifying related event pairs and extracting event arguments (especially the predicate, subject, and object). Motivated by this limitation, we develop a three-phase approach that acquires causality from Web text. First, we use explicit connective markers (such as “because”) as linguistic cues to discover causally related events. Next, we extract the event arguments based on local dependency parse trees of event expressions. In the last step, we propose a statistical model to measure the potential causal relations. The results of our empirical evaluations on a large-scale Web text corpus show that (a) the use of local dependency trees substantially improves both the accuracy and recall of the event-argument extraction task, and (b) our measure improves on the traditional PMI method.
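The first phase (connective markers as cues) can be sketched with a minimal example. The real system operates on dependency parses; the plain string split below is only an illustrative stand-in for locating the two event spans around "because".

```python
# Hedged sketch: using an explicit connective marker as a cue to
# split a sentence into a candidate (effect, cause) event pair.

def causal_candidates(sentence, marker=" because "):
    """Return (effect, cause) if the marker occurs, else None."""
    if marker in sentence:
        effect, cause = sentence.split(marker, 1)
        return effect.strip(), cause.strip()
    return None

print(causal_candidates("the flight was delayed because a storm hit"))
# ('the flight was delayed', 'a storm hit')
```

The later phases would then parse each span to extract predicate, subject, and object, and score the pair statistically.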

    A GENRE AND COLLOCATIONAL ANALYSIS OF THE NEAR-SYNONYMS TEACH, EDUCATE AND INSTRUCT: A CORPUS-BASED APPROACH

    This study investigated three synonymous verbs, teach, educate, and instruct, in terms of their collocational patterns and distribution across genres. Data were drawn from the Longman Dictionary of Contemporary English (2021) and the Corpus of Contemporary American English (COCA). The findings revealed that, in terms of distribution across text types, teach is far more widely and commonly used than educate and instruct, with the highest frequency across eight genres. The frequency data also revealed that all three synonyms are preferred in formal genres, as represented by academic writing, over spoken conversation. The results further showed that grouping noun collocates into categories offers more insightful information about each target synonym’s authentic co-occurring use. This can be more helpful to EFL students than simply looking up a term in a dictionary or relying on native speakers’ intuition, which may not be dependable or trustworthy.

    A Corpus-based Language Network Analysis of Near-synonyms in a Specialized Corpus

    The importance of English, the international medium of communication for seafarers throughout the world, has long been recognized in the maritime industry. Many studies have been conducted on Maritime English teaching and learning; nevertheless, although the language contains many near-synonyms, few studies have examined the near-synonyms used in the maritime industry. The objective of this study is to answer the following three questions. First, what are the differences and similarities between different near-synonyms in English? Second, can collocation network analysis provide a new perspective for explaining the distinctions between near-synonyms at the microscopic level? Third, is semantic domain network analysis useful for distinguishing one near-synonym from another at the macroscopic level? In pursuit of these research questions, I first illustrated how the idea of incorporating collocates has been studied in corpus linguistics, Maritime English, near-synonyms, semantic domains, and language networks. Then important concepts such as Maritime English, English for Specific Purposes, corpus linguistics, synonymy, collocation, semantic domains, and language network analysis were introduced. Third, I compiled a 2.5-million-word specialized Maritime English Corpus and proposed a new method of tagging English multi-word compounds, discussing the comparison with and without multi-word compounds with regard to tokens, types, STTR, and mean word length. Fourth, I examined collocates of five groups of near-synonyms, i.e., ship vs. vessel, maritime vs. marine, ocean vs. sea, safety vs. security, and harbor vs. port, drawing data through WordSmith 6.0, tagging semantic domains in Wmatrix 3.0, and conducting network analyses using NetMiner 4.0. In the final stage, from the results and discussions, I was able to answer the research questions. First, maritime near-synonyms generally show a clear preference for specific collocates.
Due to the specialized nature of Maritime English, general definitions are not helpful for distinguishing near-synonyms; a new perspective is therefore needed to view the behavior of maritime words. Second, as a special visualization method, collocation network analysis can give learners a direct view of the relationships between words. Compared with traditional collocation tables, learners can more quickly identify the collocates and find the relationships between several node words. In addition, it is much easier for learners to find the collocates exclusive to a specific word, thereby helping them understand the meaning specific to that word. Third, while the collocation network shows learners the relationships between words, the semantic domain network offers cognitive guidance: given a specific word, how one can process it mentally and thereby find the more appropriate synonym to collocate with. Main semantic domain network analysis shows us the domains exclusive to a certain near-synonym, and therefore defines the concepts exclusive to that near-synonym; furthermore, main semantic domain network analysis and sub-semantic domain network analysis together can tell us how near-synonyms show a preference or tendency for one synonym rather than another, even when they have shared semantic domains. The options for identifying relationships between near-synonyms can be presented through the classic metaphor of "the forest and the trees." Generally speaking, we see only the veins of a leaf through traditional sentence-level analysis. We see the full leaf through collocation network analysis. We see the tree, and even the whole forest, through semantic domain network analysis.

    Contents:
    Chapter 1. Introduction
        1.1 Focus of Inquiry
        1.2 Outline of the Thesis
    Chapter 2. Literature Review
        2.1 A Brief Synopsis
        2.2 Maritime English as an English for Specific Purposes (ESP)
            2.2.1 What is ESP?
            2.2.2 Maritime English as ESP
            2.2.3 ESP and Corpus Linguistics
        2.3 Synonymy
            2.3.1 Definition of Synonymy
            2.3.2 Synonymy as a Matter of Degree
            2.3.3 Criteria for Synonymy Differentiation
            2.3.4 Near-synonyms in Corpus Linguistics
        2.4 Collocation
            2.4.1 Definition of Collocation
            2.4.2 Collocation in Corpus Linguistics
                2.4.2.1 Definition of Collocation in Corpus Linguistics
                2.4.2.2 Collocation vs. Colligation
            2.4.3 Lexical Priming of Collocation in Psychology
        2.5 Language Network Analysis
            2.5.1 Definition
            2.5.2 Classification
            2.5.3 Basic Concepts
            2.5.4 Previous Studies
        2.6 Semantic Domain Analysis
            2.6.1 Concepts of Semantic Domains
            2.6.2 Previous Studies on Semantic Domain Analysis
    Chapter 3. Data and Methodology
        3.1 Maritime English Corpus
            3.1.1 What is a Corpus?
            3.1.2 Characteristics of a Corpus
                3.1.2.1 Corpus-driven vs. Corpus-based Research
                3.1.2.2 Specialized Corpora for Specialized Discourse
            3.1.3 Maritime English Corpus (MEC)
                3.1.3.1 Sampling of the MEC
                3.1.3.2 Size, Balance, and Representativeness
                3.1.3.3 Multi-word Compounds in the MEC
                3.1.3.4 Basic Information of the MEC
        3.2 Methodology for Collocates Extraction
        3.3 Methodology for Networks Visualization
        3.4 Methodology for Semantic Tagging
        3.5 Process of Data Analysis
    Chapter 4. Collocation Network Analysis of Near-synonyms
        4.1 Meaning Differences
            4.1.1 Ship vs. Vessel
            4.1.2 Maritime vs. Marine
            4.1.3 Sea vs. Ocean
            4.1.4 Safety vs. Security
            4.1.5 Port vs. Harbor
        4.2 Similarity Degree of Groups of Near-synonyms
            4.2.1 Similarity Degree Based on Number of Shared Collocates
            4.2.2 Similarity Degree Based on MI3 Cosine Similarity
        4.3 Collocation Network Analysis
            4.3.1 Ship vs. Vessel
            4.3.2 Maritime vs. Marine
            4.3.3 Sea vs. Ocean
            4.3.4 Safety vs. Security
            4.3.5 Port vs. Harbor
        4.4 Advantages and Limitations of Collocation Network Analysis
    Chapter 5. Semantic Domain Network Analysis of Near-synonyms
        5.1 Comparison between Collocation and Semantic Domain Analysis
        5.2 Semantic Domain Network Analysis of Exclusiveness
            5.2.1 Ship vs. Vessel
            5.2.2 Maritime vs. Marine
            5.2.3 Sea vs. Ocean
            5.2.4 Safety vs. Security
            5.2.5 Port vs. Harbor
        5.3 Analysis of Shared Semantic Domains
        5.4 Advantages and Limitations of Semantic Domain Network Analysis
    Chapter 6. Conclusion
        6.1 Summary
        6.2 Limitations and Implications
    References
    Appendix: Collocates of Near-synonyms
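The idea of exclusive versus shared collocates in a collocation network can be sketched as an adjacency map. The (node word, collocate) pairs below are invented for illustration; the thesis derives them from corpus statistics and visualizes them in NetMiner.

```python
from collections import defaultdict

# Hedged sketch: a collocation network as a collocate -> node-words map.
# Collocates linked to only one node word are "exclusive" to it, the
# property the thesis uses to separate near-synonyms like ship/vessel.

pairs = [("ship", "cargo"), ("ship", "crew"),
         ("vessel", "fishing"), ("vessel", "crew")]

network = defaultdict(set)
for node, collocate in pairs:
    network[collocate].add(node)

exclusive = {c: ns for c, ns in network.items() if len(ns) == 1}
shared = {c: ns for c, ns in network.items() if len(ns) > 1}
print(sorted(exclusive))  # ['cargo', 'fishing']
print(sorted(shared))     # ['crew']
```

Grouping collocates by semantic tag instead of surface form would turn the same structure into the semantic domain network described above.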