15 research outputs found

    Arabic nested noun compound extraction based on linguistic features and statistical measures

    The extraction of Arabic nested noun compounds is significant for several research areas such as sentiment analysis, text summarization, word categorization, grammar checking, and machine translation. Much research has studied the extraction of Arabic noun compounds using linguistic approaches, statistical methods, or a hybrid of both. Most existing approaches concentrate on extracting bi-gram or tri-gram noun compounds; extracting 4-gram or 5-gram nested noun compounds remains challenging due to morphological, orthographic, syntactic, and semantic variation. Several features have an important effect on the quality of extraction, such as unit-hood, contextual information, and term-hood. Hence, there is a need to improve the effectiveness of Arabic nested noun compound extraction. This paper therefore proposes a hybrid of a linguistic approach and statistical methods to enhance the extraction of Arabic nested noun compounds. A number of pre-processing phases are presented, including transformation, tokenization, and normalisation. The linguistic component consists of part-of-speech tagging and named-entity patterns, whereas the statistical component consists of the NC-value, NTC-value, NLC-value, and a combination of these association measures. The experiments demonstrate that the combined association measures outperform the NLC-value, NTC-value, and NC-value, achieving 90%, 88%, 87%, and 81% for bigram, trigram, 4-gram, and 5-gram nested noun compound extraction, respectively.
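
    The NC-value family of measures builds on the classic C-value term-hood score. As a rough illustration, here is a minimal Python sketch of C-value scoring with an NC-value-style context adjustment; the candidate data, the 0.8/0.2 weighting, and the helper names are assumptions drawn from the general term-extraction literature, not from this paper.

    ```python
    # Minimal sketch of the C-value / NC-value association measures commonly
    # used for multi-word term extraction; for n-gram candidates of length >= 2.
    import math
    from collections import defaultdict

    def c_value(candidates):
        """candidates: dict mapping an n-gram (tuple of words) to its corpus frequency."""
        nests = defaultdict(list)  # for each candidate, the longer candidates containing it
        for a in candidates:
            for b in candidates:
                if len(b) > len(a) and any(b[i:i + len(a)] == a
                                           for i in range(len(b) - len(a) + 1)):
                    nests[a].append(b)
        scores = {}
        for a, freq in candidates.items():
            longer = nests[a]
            if not longer:   # candidate never appears nested in a longer term
                scores[a] = math.log2(len(a)) * freq
            else:            # discount occurrences inside longer terms
                scores[a] = math.log2(len(a)) * (
                    freq - sum(candidates[b] for b in longer) / len(longer))
        return scores

    def nc_value(cvals, context_weight, alpha=0.8):
        """Blend C-value with a per-candidate context-word factor (0.8/0.2 as in the literature)."""
        return {a: alpha * cv + (1 - alpha) * context_weight.get(a, 0.0)
                for a, cv in cvals.items()}

    cands = {("noun", "compound"): 40, ("nested", "noun", "compound"): 15}
    print(nc_value(c_value(cands), context_weight={}))
    ```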

    The role of language technologies in monitoring the evolution of writing skills: first results

    Over the last decade, the use of language technologies for the study of learning processes has become established internationally. This contribution reports the first, promising results of an interdisciplinary study that combined methods and analysis techniques from computational linguistics, linguistics, and experimental pedagogy. The study, aimed at monitoring the evolution of the process of learning the Italian language, was conducted on the written productions of lower secondary school students using automatic linguistic annotation and knowledge extraction tools, and led to the identification of a set of features that characterize the language learning process.
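
    As a purely illustrative sketch of the kind of shallow features such monitoring pipelines can compute from student texts, the following snippet derives sentence-length and lexical-variety measures; the feature set and tokenization are assumptions, not the study's actual annotation pipeline.

    ```python
    # Toy feature extractor for monitoring writing development; the features
    # (sentence length, type/token ratio) are common proxies, not the paper's set.
    import re

    def writing_features(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        tokens = re.findall(r"\w+", text.lower(), flags=re.UNICODE)
        return {
            "n_sentences": len(sentences),
            "avg_sentence_len": len(tokens) / max(len(sentences), 1),   # tokens per sentence
            "type_token_ratio": len(set(tokens)) / max(len(tokens), 1), # lexical variety proxy
        }

    print(writing_features("Il gatto dorme. Il cane corre nel parco."))
    ```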

    Corpus exploration for the extraction and description of specialized lexicon: towards a solid and sustained methodology

    The use of corpora for specialized lexicon extraction is a common and consensual method for building lexical resources. However, the methodologies used to achieve this are rarely discussed openly, making it difficult to compare approaches and determine which are robust. To fill this gap, this paper presents and discusses a detailed methodology for extracting specialized lexicon from corpora, combining linguistic and statistical approaches. The proposed method uses specialized and monitor corpora and comprises i) frequency information analyses; ii) concordance and collocation extraction; and iii) textual organization information; covering the extraction of core single- and multi-word expressions and salient semantic relations. The goal is to determine a solid and accurate list of candidate specialized lexical units that allows for swifter final validation and maximizes the informational value of the interaction with domain experts.
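
    Steps i) and ii) can be illustrated with standard corpus tooling. The sketch below uses NLTK for frequency counts and log-likelihood-ranked collocations; the corpus file name and the frequency cut-off are placeholders, and NLTK's punkt tokenizer data must be installed first.

    ```python
    # Sketch of frequency analysis and collocation extraction over a specialized
    # corpus; run nltk.download('punkt') once before first use.
    import nltk
    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

    with open("specialized_corpus.txt", encoding="utf-8") as f:   # placeholder path
        tokens = nltk.word_tokenize(f.read().lower())

    # i) frequency information: counts to be contrasted with a monitor corpus
    freq = nltk.FreqDist(tokens)
    print(freq.most_common(20))

    # ii) collocations: rank adjacent word pairs by log-likelihood ratio
    finder = BigramCollocationFinder.from_words(tokens)
    finder.apply_freq_filter(3)   # illustrative cut-off for rare pairs
    print(finder.nbest(BigramAssocMeasures.likelihood_ratio, 20))
    ```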

    An ontology-based recommender system using scholar's background knowledge

    Scholarly recommender systems recommend scientific articles based on the similarity of articles to scholars' profiles, which are collections of keywords that scholars are interested in. Recent profiling approaches extract keywords from scholars' information, such as publications, search keywords, and homepages, and use a reference ontology, often a general-purpose one, to profile the scholars' interests. However, such approaches do not consider the scholars' knowledge: the recommender system only recommends articles that are syntactically similar to articles the scholars have already visited, while scholars are interested in articles that contain comparatively new knowledge. In addition, these systems do not support the multi-area nature of scholars' knowledge, as researchers usually work on multiple topics simultaneously and expect focused-topic articles in each recommendation. To address these problems, this study develops a domain-specific reference ontology by merging six Web taxonomies and exploits Wikipedia to resolve conflicts between the ontologies. The knowledge items from the scholars' information are then extracted, transformed via DBpedia, and clustered into relevant topics in order to model the multi-area property of scholars' knowledge. Finally, the clustered knowledge items are mapped to the reference ontology using DBpedia to create clustered profiles. In addition, a semantic similarity algorithm is adapted to the clustered profiles, enabling the recommendation of focused-topic articles that contain new knowledge. To evaluate the proposed approach, three data sets were created from scholars' information in the Computer Science domain, and precision was measured in different settings. Compared with the baseline methods, the proposed method improves average precision by 6% when the new reference ontology with the full scholars' knowledge is used, by a further 7.2% when the scholars' knowledge is transformed by DBpedia, and by a further 8.9% when clustered profiles are applied. The experimental results confirm that using knowledge items instead of keywords for profiling, as well as transforming the knowledge items via DBpedia, can significantly improve recommendation performance. Moreover, the domain-specific reference ontology can effectively capture the full scholars' knowledge, which results in more accurate profiling.
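
    To make the clustered-profile idea concrete, here is a minimal sketch in which an article is scored against the best-matching topic cluster of a profile rather than against the profile as a whole; the toy vectors and cosine similarity are simplified stand-ins for the paper's ontology-based semantic similarity.

    ```python
    # Toy matching of an article against a clustered scholar profile: scoring
    # against the closest single topic cluster keeps recommendations focused.
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

    def score_article(article_vec, profile_clusters):
        """Best similarity to any single topic-cluster centroid."""
        return max(cosine(article_vec, np.mean(cluster, axis=0))
                   for cluster in profile_clusters)

    # two toy topic clusters, e.g. "recommender systems" vs "ontologies"
    profile = [
        [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0, 0.0])],
        [np.array([0.0, 0.0, 1.0, 0.0]), np.array([0.0, 0.1, 0.9, 0.0])],
    ]
    print(score_article(np.array([0.95, 0.05, 0.0, 0.0]), profile))
    ```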

    D7.4 Third evaluation report. Evaluation of PANACEA v3 and produced resources

    D7.4 reports on the evaluation of the different components integrated in the third PANACEA development cycle, as well as the final validation of the platform itself. All validation and evaluation experiments follow the evaluation criteria described in D7.1. The main goal of the WP7 tasks was to test the (technical) functionalities and capabilities of the middleware that allows the various resource-creation components to be integrated into an interoperable distributed environment (WP3), and to evaluate the quality of the components developed in WP5 and WP6. The content of this deliverable is thus complementary to D8.2 and D8.3, which tackle advantages and usability in industrial scenarios. Note that the third development cycle addressed many components that are still under research; the main goal of this evaluation cycle is therefore to assess the methods experimented with and their potential to become actual production tools exploitable outside research labs. For most of the technologies, an attempt was made to re-interpret standard evaluation measures (usually accuracy, precision, and recall) as measures of cost reduction (time and human resources) relative to current practices based on the manual production of resources. To do so, the different tools were tuned and adapted to maximize precision, and for some tools confidence measures were attempted that allow separating out the resources that still need manual revision. Furthermore, the extension to languages other than English, also a PANACEA objective, was evaluated. The main facts about the evaluation results are then summarized.
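
    The cost-reduction reading of precision and recall hinges on confidence thresholding: high-confidence outputs can be accepted automatically while the rest go to manual revision. A minimal sketch follows, with an illustrative threshold value not taken from the deliverable.

    ```python
    # Precision/recall plus a confidence split between auto-accepted resources
    # and those routed to manual revision; the 0.9 threshold is illustrative.
    def precision_recall(predicted, gold):
        tp = len(predicted & gold)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

    def split_by_confidence(scored_items, threshold=0.9):
        """scored_items: (resource, confidence) pairs."""
        auto = [r for r, c in scored_items if c >= threshold]
        manual = [r for r, c in scored_items if c < threshold]
        return auto, manual

    print(precision_recall({"a", "b", "c"}, {"b", "c", "d"}))
    print(split_by_confidence([("lex1", 0.95), ("lex2", 0.6)]))
    ```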

    Terminology Integration in Statistical Machine Translation

    (The electronic version does not contain the appendices.) The doctoral thesis describes methods and tools researched and developed by the author for integrating bilingual terminology into statistical machine translation systems. The author presents novel methods for terminology integration in SMT systems during training (through static integration) and during translation (through dynamic integration). The work focuses not only on the SMT integration techniques but also on methods for acquiring the linguistic resources that are necessary for the different tasks involved in workflows for terminology integration in SMT systems. The proposed methods have been evaluated in automatic and manual evaluation experiments. The results show that both the static and the dynamic integration methods significantly improve translation quality. The thesis also describes the research projects in which the results have been validated and the practical solutions in which they have been deployed. Keywords: statistical machine translation, terminology, cross-lingual information extraction.
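
    Dynamic integration typically means constraining the decoder at translation time. As a rough illustration (not the thesis's actual implementation), the sketch below annotates source text with term translations in the XML-markup style that decoders such as Moses accept via the -xml-input option; the term table is a toy example.

    ```python
    # Toy dynamic terminology integration: wrap known source terms in XML markup
    # so an XML-input-aware SMT decoder can force the specified translations.
    term_table = {
        "mašīntulkošana": "machine translation",   # toy bilingual term entry
    }

    def annotate(sentence, terms):
        for src, tgt in terms.items():
            if src in sentence:
                sentence = sentence.replace(
                    src, f'<term translation="{tgt}">{src}</term>')
        return sentence

    print(annotate("statistiskā mašīntulkošana ir populāra", term_table))
    ```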

    Ontology Learning from the Arabic Text of the Qur’an: Concepts Identification and Hierarchical Relationships Extraction

    Recent developments in ontology learning have highlighted the growing role ontologies play in linguistic and computational research areas such as language teaching and natural language processing. The ever-growing availability of annotations for the Qur’an text has made the acquisition of ontological knowledge promising. However, the availability of resources and tools for Arabic ontology development is not comparable with that for other languages, and manual ontology development is labour-intensive and time-consuming and requires the knowledge and skills of domain experts. This thesis aims to develop new methods for ontology learning from the Arabic text of the Qur’an, including concept identification and hierarchical relationship extraction. The thesis presents a methodology for reducing human intervention in building an ontology from the Classical Arabic text of the Qur’an. The set of concepts, whose identification is a crucial step in ontology learning, was generated from a set of patterns built on lexical and inflectional information. The concepts were identified using an adapted weighting scheme that combines several sources of knowledge to learn the relevance degree of a term: statistical measures, domain-specific knowledge, and the internal structure of multi-word terms (MWTs). This methodology, which represents the major contribution of the thesis, was experimentally investigated using different term generation methods. As a result, we provide the Arabic Qur’anic Terms (AQT) resource for training machine-learning-based term extraction. The thesis also introduces a new approach for extracting hierarchical relations from the Arabic text of the Qur’an: a set of hierarchical relations occurring between identified concepts is extracted using hybrid methods that include head-modifier analysis, a set of markers for the copula construct in Arabic text, and referents. We also compare a number of ontology alignment methods for matching bilingual Qur’anic ontological resources. In addition, a multi-dimensional resource named the Arabic Qur’anic Database (AQD) is provided for Arabic computational researchers, allowing regular-expression query search over the included annotations. The search tool was successfully applied to find instances of a given complex rule made of different combined resources.
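
    Head-modifier analysis exploits the fact that in an Arabic idafa (construct-state) compound the first noun is the head, so a multi-word term can be attached under its head term as a hierarchy candidate. A toy sketch with transliterated examples follows; the data and function names are illustrative, not the thesis's code.

    ```python
    # Toy head-modifier hierarchy extraction: each multi-word term becomes a
    # child of its head (the first noun in an idafa-style construct).
    def head_modifier_pairs(multiword_terms):
        """Return (term, head) pairs as hierarchical-relation candidates."""
        return [(" ".join(term), term[0]) for term in multiword_terms if len(term) > 1]

    terms = [("kitab", "allah"), ("ahl", "al-kitab")]   # 'book of God', 'people of the book'
    print(head_modifier_pairs(terms))
    # [('kitab allah', 'kitab'), ('ahl al-kitab', 'ahl')]
    ```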