36 research outputs found

    Multilingual collocation extraction with a syntactic parser

    An impressive amount of work has been devoted over the past few decades to collocation extraction. The state of the art shows a sustained interest in the morphosyntactic preprocessing of texts to better identify candidate expressions; however, the treatment performed is in most cases limited (lemmatization, POS-tagging, or shallow parsing). This article presents a collocation extraction system based on the full parsing of source corpora, which supports four languages: English, French, Spanish, and Italian. The performance of the system is compared against that of the standard mobile-window method. The evaluation experiment investigates several levels of the significance lists, uses a fine-grained annotation schema, and covers all the languages supported. Consistent results were obtained across these languages: parsing, even if imperfect, leads to a significant improvement in the quality of results, in terms of collocational precision (between 16.4 and 29.7%, depending on the language; 20.1% overall), MWE precision (between 19.9 and 35.8%; 26.1% overall), and grammatical precision (between 47.3 and 67.4%; 55.6% overall). This positive result is particularly important with a view to the subsequent integration of extraction results into other NLP applications.
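The mobile-window baseline the article compares against can be sketched as a sliding-window co-occurrence count scored with an association measure. The sketch below uses pointwise mutual information; the function name, window size, and frequency threshold are illustrative assumptions, not the article's actual configuration.

```python
import math
from collections import Counter

def window_collocations(tokens, window=3, min_freq=2):
    """Score word pairs that co-occur within a sliding window using PMI.

    A rough sketch of a mobile-window collocation extractor; `window`
    and `min_freq` are illustrative parameters, not the paper's setup.
    """
    unigrams = Counter(tokens)
    pairs = Counter()
    for i, w in enumerate(tokens):
        # Pair the current token with the next (window - 1) tokens.
        for v in tokens[i + 1 : i + window]:
            pairs[tuple(sorted((w, v)))] += 1
    n = len(tokens)
    scores = {}
    for (a, b), f in pairs.items():
        if f < min_freq or a == b:
            continue
        # Pointwise mutual information: log2 of P(a,b) / (P(a) * P(b)).
        scores[(a, b)] = math.log2((f / n) / ((unigrams[a] / n) * (unigrams[b] / n)))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A parser-based system would replace the window with syntactic dependencies (verb-object, adjective-noun), which is exactly where the article reports its precision gains.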

    Extraction et représentation des constructions à verbe support en espagnol [Extraction and representation of support verb constructions in Spanish]

    No full text
    The computational treatment of support verb constructions (take a picture, make a presentation) is a challenging task in NLP. This is also true in Spanish, where these constructions are frequent in texts but not often included in machine-readable lexicons. Our goal is to extract support verb constructions from a very large corpus of Spanish. We fine-tune a set of morpho-syntactic patterns based on a large set of possible support verbs, then filter the resulting list using thresholds and association measures. While quite standard, this methodology allows the extraction of many good-quality expressions. As future work, we would like to investigate semantic representations for these constructions in multilingual lexicons.
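The pattern-plus-filter pipeline described above can be illustrated roughly. The pattern (support verb, optional determiner, noun), the toy verb list, and the frequency threshold below are all illustrative assumptions; the paper's actual patterns and association-measure filters are richer.

```python
from collections import Counter

# Hypothetical candidate support verbs; the paper's actual list is much larger.
SUPPORT_VERBS = {"tomar", "hacer", "dar"}

def svc_candidates(tagged_sentences, min_freq=2):
    """Match the pattern VERB (DET)? NOUN and keep frequent verb+noun pairs.

    Input: sentences as lists of (lemma, POS) pairs. A simplified
    stand-in for the morpho-syntactic patterns the abstract mentions.
    """
    counts = Counter()
    for sent in tagged_sentences:
        for i, (lemma, pos) in enumerate(sent):
            if pos == "VERB" and lemma in SUPPORT_VERBS:
                j = i + 1
                # Skip one optional determiner.
                if j < len(sent) and sent[j][1] == "DET":
                    j += 1
                if j < len(sent) and sent[j][1] == "NOUN":
                    counts[(lemma, sent[j][0])] += 1
    # Frequency threshold stands in for the thresholds/association measures.
    return {pair: f for pair, f in counts.items() if f >= min_freq}
```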

    Translating English verbal collocations into Spanish: On distribution and other relevant differences related to diatopic variation

    Language varieties should be taken into account in order to enhance the fluency and naturalness of translated texts. In this paper we examine the collocational verbal range for prima-facie translation equivalents of words like decision and dilemma, which in both languages denote the act or process of reaching a resolution after consideration, resolving a question or deciding something. We are mainly concerned with diatopic variation in Spanish. To this end, we set out to develop a giga-token corpus-based protocol which includes a detailed and reproducible methodology sufficient to detect collocational peculiarities of transnational languages. To our knowledge, this is one of the first observational studies of this kind. The paper is organised as follows. Section 1 introduces some basic issues about the translation of collocations against the background of languages' anisomorphism. Section 2 provides a feature characterisation of collocations. Section 3 deals with the choice of corpora, corpus tools, nodes and patterns. Section 4 covers the automatic retrieval of the selected verb + noun (object) collocations in general Spanish and the co-existing national varieties. Special attention is paid to comparative results in terms of similarities and mismatches. Section 5 presents conclusions and outlines avenues of further research.

    Measuring the Stability of Query Term Collocations and Using it in Document Ranking

    Delivering the right information to the user is fundamental in an information retrieval system. Many traditional information retrieval models assume word independence and view a document as a bag of words; however, getting the right information requires a deep understanding of the content of the document and the relationships that exist between words in the text. This study focuses on developing two new document ranking techniques based on the lexical cohesive relationship of collocation. Collocation is a semantic relationship that exists between words that co-occur in the same lexical environment. Two types of collocation relationship have been considered: collocation in the same grammatical structure (such as a sentence), and collocation in the same semantic structure, where query terms occur in different sentences but co-occur with the same words. The first technique considers only the first type of collocation to calculate the document score: the positional frequency of query-term co-occurrence is used to identify collocation relationships between query terms and to calculate each query term's weight. The second technique considers both types: the co-occurrence frequency distribution within a predefined window is used to determine query-term collocations and to compute each query term's weight. Evaluation of the proposed techniques shows a performance gain for some of the collocations over the chosen baseline runs.
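The first technique, where same-sentence co-occurrence of query terms boosts their weights, might look in spirit like the sketch below. The weighting formula, window size, and function name are illustrative, not the thesis's actual scoring functions.

```python
import math
from collections import Counter

def collocation_score(doc_sentences, query_terms, window=5):
    """Rank a document: tf-based term weights, boosted when query terms
    co-occur within `window` positions inside the same sentence.

    A minimal sketch of collocation-aware ranking; the real formulas differ.
    """
    tokens = [t for s in doc_sentences for t in s]
    tf = Counter(tokens)
    score = 0.0
    for term in query_terms:
        if tf[term] == 0:
            continue
        weight = 1.0 + math.log(tf[term])  # simple sublinear tf weight
        boost = 0
        for sent in doc_sentences:
            pos = {t: i for i, t in enumerate(sent)}
            if term in pos:
                for other in query_terms:
                    # Same sentence and close together: count a collocation.
                    if other != term and other in pos and abs(pos[term] - pos[other]) <= window:
                        boost += 1
        score += weight * (1.0 + boost)
    return score
```

With equal term frequencies, a document whose query terms appear together in the same sentences outranks one where they are scattered, which is the intended behaviour.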

    Text mining techniques for patent analysis.

    Patent documents contain important research results. However, they are lengthy and rich in technical terminology, so their analysis takes a great deal of human effort. Automatic tools for assisting patent engineers or decision makers in patent analysis are in great demand. This paper describes a series of text mining techniques that conform to the analytical process used by patent analysts. These techniques include text segmentation, summary extraction, feature selection, term association, cluster generation, topic identification, and information mapping. The issues of efficiency and effectiveness are considered in the design of these techniques. Some important features of the proposed methodology include a rigorous approach to verify the usefulness of segment extracts as document surrogates, a corpus- and dictionary-free algorithm for keyphrase extraction, an efficient co-word analysis method that can be applied to a large volume of patents, and an automatic procedure to create generic cluster titles for ease of result interpretation. Evaluation of these techniques was conducted. The results confirm that the machine-generated summaries preserve more important content words than some other sections for classification. To demonstrate feasibility, the proposed methodology was applied to a real-world patent set for domain analysis and mapping, which shows that our approach is more effective than existing classification systems. The attempt in this paper to automate the whole process not only helps create final patent maps for topic analyses, but also facilitates or improves other patent analysis tasks such as patent classification, organization, knowledge sharing, and prior art searches.
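A corpus- and dictionary-free keyphrase extractor could, in spirit, promote word n-grams that repeat within a single document, preferring the longest repeated match. The sketch below is a toy under that assumption; the paper's actual algorithm is more elaborate.

```python
from collections import Counter

def keyphrase_candidates(tokens, max_len=3, min_freq=2):
    """Keyphrase candidates from one document, with no external corpus
    or dictionary: n-grams that repeat become candidates, and an n-gram
    is dropped if a longer repeated n-gram contains it. Illustrative only.
    """
    grams = Counter()
    for n in range(2, max_len + 1):
        for i in range(len(tokens) - n + 1):
            grams[tuple(tokens[i : i + n])] += 1
    repeated = {g: f for g, f in grams.items() if f >= min_freq}

    def contained(g):
        # True if some longer repeated n-gram includes g as a sub-sequence.
        return any(
            len(h) > len(g)
            and any(h[k : k + len(g)] == g for k in range(len(h) - len(g) + 1))
            for h in repeated
        )

    return {g: f for g, f in repeated.items() if not contained(g)}
```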

    Semantic vector representations of senses, concepts and entities and their applications in natural language processing

    Representation learning lies at the core of Artificial Intelligence (AI) and Natural Language Processing (NLP). Most recent research has focused on developing representations at the word level. In particular, the representation of words in a vector space has been viewed as one of the most important successes of lexical semantics and NLP in recent years. The generalization power and flexibility of these representations have enabled their integration into a wide variety of text-based applications, where they have proved extremely beneficial. However, these representations are hampered by an important limitation: they are unable to model different meanings of the same word. In order to deal with this issue, in this thesis we analyze and develop flexible semantic representations of meanings, i.e. senses, concepts and entities. This finer distinction enables us to model semantic information at a deeper level, which in turn is essential for dealing with ambiguity. In addition, we view these (vector) representations as a connecting bridge between lexical resources and textual data, encoding knowledge from both sources. We argue that these sense-level representations, much like word embeddings, constitute a first step toward seamlessly integrating explicit knowledge into NLP applications, while focusing on the deeper sense level. Their use not only aims at solving the inherent lexical ambiguity of language, but also represents a first step toward the integration of background knowledge into NLP applications. Multilinguality is another key feature of these representations, as we explore the construction of language-independent and multilingual techniques that can be applied to arbitrary languages, and also across languages.
We propose simple unsupervised and supervised frameworks that make use of these vector representations for word sense disambiguation, a key application in natural language understanding, and for other downstream applications such as text categorization and sentiment analysis. Given the nature of the vectors, we also investigate their effectiveness for improving and enriching knowledge bases, by reducing the sense granularity of their sense inventories and by extending them with domain labels, hypernyms and collocations.
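The use of sense-level vectors for word sense disambiguation can be illustrated with a similarity-based sketch: average the context word vectors and pick the sense whose vector is nearest. The toy vectors and function names below are illustrative stand-ins, not the thesis's trained representations.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(context_words, word_vectors, sense_vectors):
    """Pick the sense whose vector is closest to the averaged context.

    A minimal sketch of similarity-based WSD with sense embeddings.
    """
    vecs = [word_vectors[w] for w in context_words if w in word_vectors]
    dim = len(next(iter(sense_vectors.values())))
    centroid = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return max(sense_vectors, key=lambda s: cosine(centroid, sense_vectors[s]))
```

With sense vectors for, say, the financial and river senses of "bank", a money-related context selects the financial sense and a river-related context the other.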

    Exploratory Search on Mobile Devices

    The goal of this thesis is to provide a general framework (MobEx) for exploratory search, especially on mobile devices. The central part is the design, implementation, and evaluation of several core modules for on-demand unsupervised information extraction well suited to exploratory search on mobile devices, which together form the MobEx framework. These core processing elements, combined with a multitouch-enabled user interface specially designed for two families of mobile devices, i.e. smartphones and tablets, have been implemented in a research prototype. The initial information request, in the form of a query topic description, is issued online by a user to the system. The system then retrieves web snippets using standard search engines. These snippets are passed through a chain of NLP components which perform on-demand, ad-hoc interactive query disambiguation, named entity recognition, and relation extraction. By on-demand or ad-hoc we mean that the components are capable of performing their operations on an unrestricted open domain within strict time constraints. The result of the whole process is a topic graph containing the detected associated topics as nodes and the extracted relationships as labelled edges between the nodes. The topic graph is presented to the user in different ways depending on the size of the device she is using. Various evaluations have been conducted that help us understand the potential and limitations of the framework and the prototype.
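The topic graph described above (detected topics as nodes, extracted relations as labelled edges) can be sketched as a simple data structure, with an "expand a node" helper standing in for the interactive exploration step. The relation triples and function names are illustrative, not MobEx's actual API.

```python
def build_topic_graph(relations):
    """Assemble a topic graph from (subject, label, object) relation
    triples, as if produced by the NER and relation-extraction chain."""
    nodes, edges = set(), []
    for subj, label, obj in relations:
        nodes.update((subj, obj))
        edges.append((subj, label, obj))
    return {"nodes": sorted(nodes), "edges": edges}

def neighbours(graph, topic):
    """Topics adjacent to `topic`: a sketch of expanding a node during
    exploratory search, regardless of edge direction."""
    out = []
    for s, label, o in graph["edges"]:
        if s == topic:
            out.append((label, o))
        elif o == topic:
            out.append((label, s))
    return out
```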

    Um sistema de manutenção semiautomática de ontologias a partir do reconhecimento de entidades. [A system for semiautomatic ontology maintenance based on entity recognition]

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Campus Araranguá, Curso de Tecnologias da Informação e Comunicação. An increasing amount of information is available in textual, electronic format. This information contains textual patterns, such as concepts, relationships and rules, which can be valuable when integrated with other systems or used to support decision-making processes. However, there is great concern about how to retrieve, organize, store and share these patterns with a suitable formalization. In this sense, the Information Extraction area provides support through techniques that analyze text and extract patterns regarded as relevant. After the extraction phase, the patterns must be correctly assigned to classes in a particular domain, at which point they are called entities. This process is accomplished through the subarea called Named Entity Recognition. Additionally, to allow a specific knowledge domain to be shared and maintained, entities should be stored in a way that makes these goals achievable. This is where the Ontology area comes in. To demonstrate the feasibility of the proposed work, we developed a prototype covering the pattern extraction and entity recognition phases, as well as the addition of the recognized entities to an ontology for subsequent maintenance. The maintenance process involves a domain expert responsible for validating the concepts and moving entities to the proper classes when needed; maintenance can therefore be understood as semiautomatic. In general, applying the prototype to several scenarios showed that the proposed system, although at an initial stage, can obtain satisfactory results even without prior knowledge of a particular domain.