6 research outputs found

    Parallel and Distributed Statistical-based Extraction of Relevant Multiwords from Large Corpora

    The amount of information available through the Internet has grown significantly over the last decade. This information comes from diverse sources, such as scientific experiments in particle accelerators, flight data recorded by commercial aircraft, or document collections from a given domain such as medical articles, newspaper headlines, or social network content. Given the volume of data to be analyzed, search engines must be equipped with new tools that allow users to obtain the desired information in a timely and accurate manner. One approach is to annotate documents with their relevant expressions. The extraction of relevant expressions from natural language documents can be accomplished with semantic, syntactic, or statistical techniques. Although the latter tend to be less accurate, they have the advantage of being language-independent. This investigation was performed in the context of LocalMaxs, a statistical and therefore language-independent method for extracting relevant expressions from natural language corpora. However, due to the large volume of data involved, sequential implementations of these techniques face severe limitations in both execution time and memory space. In this thesis we propose a distributed architecture and strategies for parallel implementations of statistical extraction of relevant expressions from large corpora. A methodology was developed for modeling and evaluating those strategies, based on empirical and theoretical approaches to estimating the statistical distribution of n-grams in natural language corpora. These approaches guided the design and evaluation of the parallel and distributed LocalMaxs implementations on cluster and cloud computing platforms. The implementation alternatives were compared with respect to precision and recall, and to performance metrics, namely execution time, parallel speedup, and sizeup. The performance results indicate almost linear speedup and sizeup over the range of large corpus sizes evaluated.
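
    As a rough illustration only (not code from the thesis), the sketch below captures the kind of local-maximum test that LocalMaxs applies to each n-gram: an n-gram is kept as a relevant expression when its cohesion is a local maximum relative to the (n-1)-grams it contains and the (n+1)-grams that contain it. The cohesion function (glue, e.g. SCP) and the exact comparison rules are assumptions here and vary across published formulations of the algorithm.

        # Minimal sketch of a LocalMaxs-style local-maximum test (illustrative only).
        from typing import Callable, Dict, List, Tuple

        Ngram = Tuple[str, ...]

        def is_relevant(w: Ngram,
                        glue: Callable[[Ngram], float],
                        supergrams: Dict[Ngram, List[Ngram]]) -> bool:
            """True if the glue of w is a local maximum relative to its
            contiguous (n-1)-gram subparts and the (n+1)-grams containing it."""
            g_w = glue(w)
            supers = supergrams.get(w, [])        # (n+1)-grams that contain w
            if len(w) == 2:
                return all(g_w > glue(z) for z in supers)
            subs = [w[:-1], w[1:]]                # contiguous (n-1)-grams of w
            return (all(g_w >= glue(y) for y in subs) and
                    all(g_w > glue(z) for z in supers))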

    Towards improving WEBSOM with multi-word expressions

    Dissertation for obtaining the Master's degree in Informatics Engineering. Large quantities of free-text documents are usually rich in information and cover several topics. However, since their volume is very large, searching and filtering the data is an exhausting task. A large text collection covers a set of topics, and each topic is associated with a group of documents. This thesis presents a method for building a document map of the core contents covered in a collection. WEBSOM is an approach that combines document encoding methods and Self-Organising Maps (SOM) to generate a document map. However, this methodology has a weakness in the document encoding method, because it uses single words to characterise documents. Single words tend to be ambiguous and semantically vague, so some documents can be incorrectly related. This thesis proposes a new document encoding method that improves the WEBSOM approach by using multi-word expressions (MWEs) to describe documents. Previous research and ongoing experiments encourage the use of MWEs to characterise documents, because they are semantically more accurate and more descriptive than single words.
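
    As a hypothetical illustration of the proposed change in document encoding (not code from the thesis), the sketch below replaces a bag of single words with a bag of MWE occurrences, which could then feed a SOM-based document map such as WEBSOM builds. The tokenisation and the MWE list are invented for the example.

        # Hypothetical sketch: encode a document by counting known MWEs in it.
        from collections import Counter
        from typing import List, Tuple

        def encode(doc_tokens: List[str], mwes: List[Tuple[str, ...]]) -> Counter:
            """Count occurrences of each known MWE in a tokenised document."""
            counts = Counter()
            for mwe in mwes:
                n = len(mwe)
                for i in range(len(doc_tokens) - n + 1):
                    if tuple(doc_tokens[i:i + n]) == mwe:
                        counts[" ".join(mwe)] += 1
            return counts

        doc = "the european central bank cut interest rates".split()
        mwes = [("european", "central", "bank"), ("interest", "rates")]
        print(encode(doc, mwes))  # Counter({'european central bank': 1, 'interest rates': 1})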

    A theoretical model for n-gram distribution in big data corpora

    There is a wide diversity of applications relying on the identification of sequences of n consecutive words (n-grams) occurring in corpora. Many studies follow an empirical approach to determine the statistical distribution of n-grams, but they are usually constrained by corpus sizes, which for practical reasons stay far from Big Data scale. However, Big Data sizes expose behaviors, hidden at smaller scales, to applications such as the extraction of relevant information from Web-scale sources. In this paper we propose a theoretical approach for estimating the number of distinct n-grams in a corpus. It is based on the Zipf-Mandelbrot Law and the Poisson distribution, and it allows an efficient estimation of the number of distinct 1-grams, 2-grams, ..., 6-grams for any corpus size. The proposed model was validated for English and French corpora. We illustrate a practical application of this approach to the extraction of relevant expressions from natural language corpora and predict its asymptotic behaviour for increasingly large sizes.
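
    A hedged reconstruction of the kind of estimate described above (the notation is illustrative, not the paper's): if the expected frequency of the rank-r n-gram in a corpus with T n-gram tokens follows a Zipf-Mandelbrot law and individual counts are modelled as Poisson, the expected number of distinct n-grams is the sum, over ranks, of the probability that each n-gram occurs at least once:

        f(r) \approx \frac{c\,T}{(r+\beta)^{\alpha}}, \qquad
        \mathbb{E}[D(T)] \approx \sum_{r=1}^{V}\bigl(1 - e^{-f(r)}\bigr)

    where V is the size of the underlying n-gram vocabulary, and c, \alpha, \beta and V would be fitted per language and per n.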

    A generic and open environment for the treatment of multiword expressions

    The treatment of multiword expressions (MWEs), like take off, bus stop and big deal, is a challenge for NLP applications. This kind of linguistic construction is not only arbitrary but also much more frequent than one would initially guess. This thesis investigates the behaviour of MWEs across different languages, domains and construction types, proposing and evaluating an integrated methodological framework for their acquisition. There have been many theoretical proposals to define, characterise and classify MWEs. We adopt a generic definition stating that MWEs are word combinations which must be treated as a unit at some level of linguistic processing. They present a variable degree of institutionalisation, arbitrariness, heterogeneity and limited syntactic and semantic variability. There has been much research on automatic MWE acquisition in recent decades, and the state of the art covers a large number of techniques and languages. Other tasks involving MWEs, namely disambiguation, interpretation, representation and applications, have received less emphasis in the field. The first main contribution of this thesis is the proposal of an original methodological framework for automatic MWE acquisition from monolingual corpora. This framework is generic, language-independent, integrated and comes with a freely available implementation, the mwetoolkit. It is composed of independent modules which may themselves use multiple techniques to solve a specific sub-task in MWE acquisition. The evaluation of MWE acquisition is modelled using four independent axes. We underline that the evaluation results depend on parameters of the acquisition context, e.g., the nature and size of the corpora, the language and type of MWE, the analysis depth, and existing resources. The second main contribution of this thesis is the application-oriented evaluation of our methodology in two applications: computer-assisted lexicography and statistical machine translation. For the former, we evaluate the usefulness of automatic MWE acquisition with the mwetoolkit for creating three lexicons: Greek nominal expressions, Portuguese complex predicates and Portuguese sentiment expressions. For the latter, we test several integration strategies in order to improve the treatment given to English phrasal verbs when translated by a standard statistical MT system into Portuguese. Both applications can benefit from automatic MWE acquisition, as the expressions acquired automatically from corpora can both speed up the work and improve the quality of the results. The promising results of previous and ongoing experiments encourage further investigation of the optimal way to integrate MWE treatment into other applications. Thus, we conclude the thesis with an overview of past, ongoing and future work.
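
    As a minimal illustration of one acquisition sub-task (association-measure filtering of candidates), and explicitly not the mwetoolkit API, the sketch below scores adjacent word pairs with pointwise mutual information over an assumed toy corpus; real acquisition pipelines add candidate patterns, lemmatisation and further filters.

        # Generic sketch: rank bigram candidates by pointwise mutual information.
        import math
        from collections import Counter
        from typing import Dict, List, Tuple

        def pmi_scores(tokens: List[str]) -> Dict[Tuple[str, str], float]:
            """Score each adjacent word pair by pointwise mutual information."""
            unigrams = Counter(tokens)
            bigrams = Counter(zip(tokens, tokens[1:]))
            n_uni = sum(unigrams.values())
            n_bi = sum(bigrams.values())
            return {
                (w1, w2): math.log2((c / n_bi) /
                                    ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))
                for (w1, w2), c in bigrams.items()
            }

        tokens = "the bus stop is near the old bus stop sign".split()
        for pair, score in sorted(pmi_scores(tokens).items(), key=lambda kv: -kv[1]):
            print(pair, round(score, 2))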

    A study of efficient implementations of statistical correlations of relevant expressions in natural language documents

    It has been estimated that 90% of the data on the Internet was created in the last two years. Given this growth, the number of patterns and relations contained in the data is also very large. In order to obtain metadata describing linguistic phenomena in natural language, sets of documents are gathered into a linguistic corpus so as to achieve statistical robustness. A corpus contains many n-grams that may or may not be strongly connected to one another. The most informative n-grams strongly reflect the core content of the documents in which they occur and therefore form relevant expressions (multi-word expressions). Since relevant expressions (REs) can be extracted directly from the corpus, it is possible to measure how semantically close they are to one another. Taking the REs "financial crisis" and "unemployment in the Euro Zone" as an example, a strong semantic proximity between them is to be expected. This proximity can be computed through statistical correlation metrics. Moreover, the core content of a document may be semantically linked to a set of REs even when they do not occur in the document; for example, a short text document about environmental issues may contain the RE "global warming" but not the RE "ice melting", to which it is clearly semantically close. It would be useful if a search engine could retrieve this document for a query on "ice melting", even though the document does not explicitly contain that RE. To build such document descriptors automatically, it is necessary to be able to compute the correlation between pairs of REs. Since the number of pairs grows with the square of the number of REs in the corpora, this processing requires a parallel and distributed environment, with Hadoop and Spark being approaches to consider. The challenge of this dissertation includes implementing a prototype that can automatically build document descriptors from linguistic corpora within a useful time. Such a prototype may prove useful in several areas, such as query expansion, among others.
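
    To make the pairwise correlation idea concrete, the sketch below correlates two REs through their per-document occurrence counts using Pearson correlation; the metric, the example counts and the names are illustrative assumptions, not the dissertation's implementation. Because all k(k-1)/2 pairs of k REs must be scored, this is the step that motivates a distributed environment such as Hadoop or Spark.

        # Illustrative sketch: correlate two REs via per-document occurrence counts.
        import math
        from typing import List

        def pearson(x: List[float], y: List[float]) -> float:
            """Pearson correlation of two equally long count vectors."""
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = math.sqrt(sum((a - mx) ** 2 for a in x))
            sy = math.sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy) if sx and sy else 0.0

        # occurrences of each RE in the same five documents (made-up numbers)
        financial_crisis = [3, 0, 2, 1, 0]
        eurozone_unemployment = [2, 0, 3, 1, 0]
        print(pearson(financial_crisis, eurozone_unemployment))  # high positive value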

    An n-gram cache for large-scale parallel extraction of multiword relevant expressions with LocalMaxs

    LocalMaxs extracts relevant multiword terms based on their cohesion but is computationally intensive, a critical issue for very large natural language corpora. The corpus properties concerning n-gram distribution determine the algorithm complexity and were empirically analyzed for corpora up to 982 million words. A parallel LocalMaxs implementation exhibits almost linear relative efficiency, speedup, and sizeup when executed with up to 48 cloud virtual machines and a distributed key-value store. To reduce the remote data communication, we present a novel n-gram cache with cooperative-based warm-up, leading to reduced miss ratio and time penalty. A cache analytical model is used to estimate the performance of cohesion calculation of n-gram expressions, based on corpus empirical data. The model estimates agree with the real execution results.
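
    A minimal sketch of the general idea of an n-gram cache placed in front of a remote key-value store, with a warm-up phase that preloads n-grams expected to be requested often. The store client, the capacity and the LRU eviction are assumptions for illustration; the cooperative warm-up protocol and the analytical cache model from the paper are not reproduced here.

        # Illustrative n-gram cache with warm-up in front of a remote key-value store.
        from collections import OrderedDict

        class NgramCache:
            def __init__(self, store, capacity=100_000):
                self.store = store                 # assumed client: store.get(ngram) -> count
                self.capacity = capacity
                self.data = OrderedDict()
                self.hits = self.misses = 0

            def warm_up(self, frequent_ngrams):
                """Preload counts for n-grams expected to be requested often."""
                for ng in frequent_ngrams[: self.capacity]:
                    self.data[ng] = self.store.get(ng)

            def get(self, ngram):
                if ngram in self.data:
                    self.hits += 1
                    self.data.move_to_end(ngram)   # LRU bookkeeping
                    return self.data[ngram]
                self.misses += 1
                value = self.store.get(ngram)      # remote round-trip on a miss
                self.data[ngram] = value
                if len(self.data) > self.capacity:
                    self.data.popitem(last=False)  # evict least recently used entry
                return value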