
    Developing conceptual glossaries for the Latin Vulgate Bible.

    A conceptual glossary is a textual reference work that combines the features of a thesaurus and an index verborum. In it, the word occurrences within a given text are classified, disambiguated, and indexed according to their membership in a set of conceptual (i.e. semantic) fields. Since 1994, we have been working towards building a set of conceptual glossaries for the Latin Vulgate Bible. So far, we have published a conceptual glossary to the Gospel according to John and are at present completing the analysis of the Gospel according to Mark and the minor epistles. This paper describes the background to our project and outlines the steps by which the glossaries are developed within a relational database framework.
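
    As a rough illustration of such a framework, the sketch below stores disambiguated word occurrences against conceptual fields in a small relational schema; the table layout and the sample entry are hypothetical and only meant to show the idea, not the project's actual database design.

        import sqlite3

        # Hypothetical schema for illustration only (not the project's actual design).
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE semantic_field (id INTEGER PRIMARY KEY, label TEXT);
            CREATE TABLE lemma          (id INTEGER PRIMARY KEY, form  TEXT);
            CREATE TABLE occurrence (
                id       INTEGER PRIMARY KEY,
                lemma_id INTEGER REFERENCES lemma(id),
                field_id INTEGER REFERENCES semantic_field(id),  -- disambiguated sense
                book TEXT, chapter INTEGER, verse INTEGER
            );
        """)

        # One disambiguated occurrence: 'verbum' in John 1:1, filed under a sample field.
        conn.execute("INSERT INTO semantic_field VALUES (1, 'Speech and language')")
        conn.execute("INSERT INTO lemma VALUES (1, 'verbum')")
        conn.execute("INSERT INTO occurrence VALUES (1, 1, 1, 'John', 1, 1)")

        # The glossary view: occurrences grouped by conceptual field, then by lemma.
        rows = conn.execute("""
            SELECT sf.label, l.form, o.book, o.chapter, o.verse
            FROM occurrence o
            JOIN lemma l ON l.id = o.lemma_id
            JOIN semantic_field sf ON sf.id = o.field_id
            ORDER BY sf.label, l.form""")
        for row in rows:
            print(row)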

    Morphological resources for precise information retrieval

    Question answering (QA) systems aim at providing a precise answer to a given user question. Their major difficulty lies in the lexical gap between questions and answer passages. We present the different types of morphological phenomena involved in question answering, the resources available for French, and in particular a resource that we built containing deverbal agent nouns. We then evaluate the results of a particular QA system according to the morphological knowledge used.
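
    A minimal sketch of how such a resource can help bridge the lexical gap through query expansion; the lexicon entries and the expansion function below are illustrative assumptions, not the actual French resource described in the paper.

        # Hypothetical derivational lexicon: verb -> deverbal agent nouns (tiny sample).
        DEVERBAL_AGENT_NOUNS = {
            "inventer": ["inventeur"],    # to invent    -> inventor
            "traduire": ["traducteur"],   # to translate -> translator
            "fonder":   ["fondateur"],    # to found     -> founder
        }

        def expand_query(terms):
            """Add morphologically related agent nouns so a question phrased with a
            verb can match an answer passage phrased with the corresponding noun."""
            expanded = set(terms)
            for term in terms:
                expanded.update(DEVERBAL_AGENT_NOUNS.get(term, []))
            return expanded

        # A question lemmatised to 'inventer' can now match a passage
        # containing the noun 'inventeur'.
        print(expand_query(["inventer", "telephone"]))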

    Webaffix : un outil d'acquisition morphologique dérivationnelle à partir du Web

    This paper presents Webaffix, a tool for finding pairs of morphologically related words on the Web. The method used is inductive and language-independent. Using the Web as a corpus, Webaffix detects occurrences of new derived lexemes based on a given graphemic suffix, proposes a base lexeme, and then performs a compatibility test on the word pairs produced, using the Web again, this time as a source of co-occurrences. The resulting pairs of words are used to enrich the Verbaction lexical database, which links French verbs to their related nominals. The results are described and evaluated.
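
    The sketch below walks through the three steps on a toy in-memory "corpus" standing in for Web search results; the suffix rule, the base-prediction heuristic, and the example pages are simplified assumptions rather than Webaffix's actual implementation.

        import re

        # Toy stand-in for Web search results.
        pages = [
            "le zippage des fichiers est automatique, il suffit de zipper le dossier",
            "un zippage rapide du projet",
            "le clonage de la branche, puis cloner le depot distant",
        ]
        SUFFIX = "age"

        # Step 1: collect candidate derived forms ending in the graphemic suffix.
        candidates = {w for page in pages for w in re.findall(r"\w+" + SUFFIX + r"\b", page)}

        # Step 2: predict a base lexeme for each candidate (here: -age -> -er infinitive).
        predicted = {cand: cand[:-len(SUFFIX)] + "er" for cand in candidates}

        # Step 3: compatibility test -- keep only pairs whose candidate and predicted
        # base actually co-occur in at least one page.
        validated = [(cand, base) for cand, base in predicted.items()
                     if any(cand in page and base in page for page in pages)]
        print(validated)   # [('zippage', 'zipper'), ('clonage', 'cloner')] in some order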

    What do Indonesians talk when they talk about COVID-19 Vaccine: A Topic Modeling Approach with LDA

    To end the COVID-19 pandemic, the Indonesian government has attempted to accelerate vaccination through various programs and collaborations. Unfortunately, the number of vaccinated people is still relatively small compared to the population of Indonesia. Several factors contribute to this challenge, one of them being the reluctance of citizens to accept the COVID-19 vaccine. If these factors were known, public compliance could be increased and the vaccination program could be sped up. Traditionally, knowledge about COVID-19 vaccine rejection is acquired by conducting surveys or interviews on vaccine acceptance, which can be inefficient in terms of cost and resources. To address this problem, we propose a novel method for analyzing Indonesians' opinions about the COVID-19 vaccine on Twitter by applying the topic modeling algorithm Latent Dirichlet Allocation (LDA). We gathered more than 22,000 tweets related to the COVID-19 vaccine. By applying the algorithm to the collected dataset, we can capture the general opinions and topics that arise when people discuss the COVID-19 vaccine. The result was validated against a labeled dataset gathered in previous research. Once the important terms are identified, a strategy based on them can be determined by the medical professionals responsible for administering the COVID-19 vaccine.
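
    A minimal sketch of the LDA step, assuming the tweets have already been crawled and preprocessed (tokenised, stop words removed); it uses the gensim library, and the toy token lists below stand in for the real 22,000-tweet dataset.

        from gensim import corpora, models

        # Toy token lists standing in for the preprocessed tweet corpus.
        tweets = [
            ["vaksin", "covid", "efek", "samping"],
            ["vaksin", "gratis", "program", "pemerintah"],
            ["takut", "vaksin", "efek", "samping"],
            ["program", "vaksin", "gratis", "lansia"],
        ]

        dictionary = corpora.Dictionary(tweets)                 # term <-> id mapping
        corpus = [dictionary.doc2bow(doc) for doc in tweets]    # bag-of-words vectors

        # Fit LDA and inspect the dominant terms of each discovered topic.
        lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                              passes=10, random_state=42)
        for topic_id, terms in lda.print_topics(num_words=4):
            print(topic_id, terms)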

    Searching strategies for the Bulgarian language

    This paper reports on the underlying IR problems encountered when indexing and searching with the Bulgarian language. For this language we propose a general light stemmer and demonstrate that it can be quite effective, producing significantly better MAP (around +34%) than an approach that does not apply stemming. We implement the GL2 model derived from the Divergence from Randomness paradigm and find its retrieval effectiveness better than that of other probabilistic, vector-space and language models. The resulting MAP is about 50% better than with the classical tf-idf approach. Moreover, increasing the query size enhances MAP by around 10% (from T to TD). In order to compare the retrieval effectiveness of our suggested stopword list and the light stemmer developed for the Bulgarian language, we conduct a set of experiments with another stopword list and with a more complex and aggressive stemmer. Results tend to indicate that there is no statistically significant difference between these variants and our suggested approach. This paper also evaluates other indexing strategies, such as 4-gram indexing and indexing based on the automatic decompounding of compound words. Finally, we analyze certain queries to discover why we obtained poor results when indexing Bulgarian documents using the suggested word-based approach.
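
    For illustration, a light stemmer of this kind can be as simple as longest-match suffix stripping; the suffix list below covers only a few common Bulgarian definite-article and plural endings and is an assumption for the sketch, not the stemmer evaluated in the paper.

        # Illustrative suffix list: a few Bulgarian definite-article and plural endings.
        SUFFIXES = sorted(["ите", "ове", "ът", "та", "то", "те", "и", "а", "я"],
                          key=len, reverse=True)

        def light_stem(word, min_stem_len=3):
            """Strip the longest matching inflectional suffix, keeping a minimal stem."""
            for suffix in SUFFIXES:
                if word.endswith(suffix) and len(word) - len(suffix) >= min_stem_len:
                    return word[:-len(suffix)]
            return word

        # 'gradut' (the city), 'gradove' (cities) and 'grada' all conflate to 'grad'.
        for w in ["градът", "градове", "града", "град"]:
            print(w, "->", light_stem(w))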

    WEBAFFIX : une boîte à outils d'acquisition lexicale à partir du Web

    This paper deals with the design and use of Webaffix, a tool for semi-automatically detecting new word forms on the World Wide Web. We focus mainly on new derived words, i.e. words coined from other lexemes through suffixation and/or prefixation. We describe the techniques and methods used in Webaffix, along with a sample of results obtained in several studies on French. Resources such as those created with Webaffix are useful not only for natural language processing and information retrieval tasks, but also for the linguistic study of word creation.

    Artificial Neural Network methods applied to sentiment analysis

    Sentiment Analysis (SA) is the study of opinions and emotions conveyed by text. This field of study has commercial applications, for example in market research (e.g., "What do customers like and dislike about a product?") and consumer behavior (e.g., "Which book will a customer buy next after writing a positive review about book X?"). A private person can benefit from SA through automatic movie or restaurant recommendations, or through applications on the computer or smartphone that adapt to the user's current mood. In this thesis we put forward research on artificial Neural Network (NN) methods applied to SA. Many challenges arise, such as sarcasm, domain dependency, and data scarcity, that a successful system needs to address. In the first part of this thesis we perform a linguistic analysis of the word "hard" in the light of SA. We show that sentiment-specific word sense disambiguation is necessary to distinguish fine nuances of polarity, and that commonly available resources are not sufficient for this. The introduced Contextually Enhanced Sentiment Lexicon (CESL) is used to label occurrences of "hard" in a real dataset with their senses. That allows us to train a Support Vector Machine (SVM) with deep learning features that predicts the polarity of a single occurrence of the word given only its context words. We show that the features we propose improve the results compared to existing standard features. Since the labeling effort is not negligible, we propose a clustering approach that reduces the manual effort to a minimum.
    The deep learning features that help predict fine-grained, context-dependent polarity are computed by a Neural Network Language Model (NNLM), namely a variant of the Log-Bilinear Language model (LBL). Improving this model might also improve the performance of polarity classification. Thus, we propose non-linear versions of the LBL and the vectorized Log-Bilinear Language model (vLBL), because non-linear models are generally considered more powerful. In a parameter study on a language modeling task, we show that the non-linear versions indeed perform better than their linear counterparts. However, the difference is small, except for settings where the model has only few parameters, which might be the case when little training data is available and the model therefore needs to be smaller in order to avoid overfitting. An alternative approach to the fine-grained polarity classification used above is to train classifiers that make the distinction automatically. Due to the complexity of the task, the challenges of SA in general, and certain domain-specific issues (e.g., when using Twitter text), existing systems leave much room for improvement. Often statistical classifiers are used with simple Bag-of-Words (BOW) features or count features derived from sentiment lexicons. We introduce a linguistically-informed Convolutional Neural Network (lingCNN) that builds upon the fact that there has been much research on language in general and on sentiment lexicons in particular. lingCNN makes use of two types of linguistic features: word-based and sentence-based. Word-based features comprise features derived from sentiment lexicons, such as polarity or valence, and general knowledge about language, such as a negation-based feature. Sentence-based features are likewise based on lexicon counts and valences. The combination of both types of features is superior to the original model without these features, especially when little training data is available (as can be the case for under-resourced languages), where lingCNN performs significantly better (by up to 12 macro-F1 points).
    Although linguistic features in the form of sentiment lexicons are beneficial, their usage gives rise to a new set of problems: most lexicons, especially those for low-resource languages, contain only base forms of words, whereas the text that needs to be classified is unnormalized. Hence, we want to answer the question whether morphological information is necessary for SA, or whether a system that neglects this information and can therefore make better use of the lexicons actually has an advantage. Our approach is to first stem or lemmatize a dataset and then perform polarity classification on it. On Czech and English datasets we show that better results can be achieved with normalization. As a positive side effect, we can compute better word embeddings by first normalizing the training corpus; this works especially well for languages with rich morphology. We show on word similarity datasets for English, German, and Spanish that our embeddings improve performance. On a new WordNet-based evaluation we confirm these results on five different languages (Czech, English, German, Hungarian, and Spanish). A further benefit of this new evaluation is that it can be used for many other languages, as the only required resource is a WordNet. In the last part of the thesis, we use a recently introduced method to create an ultradense sentiment space out of generic word embeddings. This method allows us to compress 400-dimensional word embeddings down to 40 or even just 4 dimensions and still obtain similar results on a polarity classification task. While the training speed increases by a factor of 44, the difference in classification performance is not significant.
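
    A minimal sketch of the "normalize first, then classify" experiment described above, using a Porter stemmer and a bag-of-words classifier; the toy sentences and the choice of stemmer and classifier are illustrative assumptions, not the thesis's actual setup or datasets.

        from nltk.stem import PorterStemmer
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        stemmer = PorterStemmer()

        def normalise(text):
            # Reduce every token to its stem so inflected forms share one vocabulary entry.
            return " ".join(stemmer.stem(tok) for tok in text.lower().split())

        train_texts = ["loved the acting and the songs",
                       "the jokes were hilarious",
                       "hated every boring minute",
                       "the plots were terrible"]
        train_labels = [1, 1, 0, 0]   # 1 = positive, 0 = negative

        clf = make_pipeline(CountVectorizer(), LogisticRegression())
        clf.fit([normalise(t) for t in train_texts], train_labels)

        print(clf.predict([normalise("I loved the plot")]))   # stems match despite inflection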

    Machine Learning Approach for Cancer Entities Association and Classification

    According to the World Health Organization (WHO), cancer is the second leading cause of death globally. Scientific research on different types of cancer grows at an ever-increasing rate, producing large volumes of research articles every year. Insight into and knowledge of the drugs, diagnostics, risks, symptoms, treatments, etc. related to genes are significant factors that help explore and advance cancer research. Manually screening such a large volume of articles to formulate any hypothesis is very laborious and time-consuming. This study uses two non-trivial Natural Language Processing (NLP) functions, named entity recognition and text classification, to discover knowledge from biomedical literature. Named Entity Recognition (NER) recognizes and extracts predefined entities related to cancer from unstructured text with the support of a user-friendly interface and built-in dictionaries. Text classification helps to explore insights into the text and simplifies data categorization, querying, and article screening. Machine learning classifiers are used to build the classification model, and Structured Query Language (SQL) is used to identify hidden relations that may lead to significant predictions.
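
    A minimal sketch of the two steps described above: dictionary-based entity tagging over abstracts, followed by a SQL query that surfaces gene-drug co-mentions; the dictionaries, sample abstracts, and table layout are hypothetical examples, not the study's actual resources.

        import sqlite3

        # Hypothetical dictionaries and sample abstracts for illustration only.
        GENE_DICT = {"brca1", "tp53", "egfr"}
        DRUG_DICT = {"olaparib", "gefitinib"}
        abstracts = [
            (1, "Olaparib showed benefit in BRCA1 mutated tumours."),
            (2, "EGFR mutations predict response to gefitinib."),
        ]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE mention (doc_id INTEGER, entity TEXT, etype TEXT)")

        # Dictionary-based NER: tag every known gene or drug mention per abstract.
        for doc_id, text in abstracts:
            for tok in text.lower().replace(".", " ").replace(",", " ").split():
                if tok in GENE_DICT:
                    conn.execute("INSERT INTO mention VALUES (?, ?, 'gene')", (doc_id, tok))
                elif tok in DRUG_DICT:
                    conn.execute("INSERT INTO mention VALUES (?, ?, 'drug')", (doc_id, tok))

        # SQL association step: genes and drugs mentioned in the same article.
        query = """
            SELECT g.entity AS gene, d.entity AS drug, COUNT(*) AS n_docs
            FROM mention g JOIN mention d
              ON g.doc_id = d.doc_id AND g.etype = 'gene' AND d.etype = 'drug'
            GROUP BY g.entity, d.entity"""
        for row in conn.execute(query):
            print(row)   # e.g. ('brca1', 'olaparib', 1)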
    • 

    corecore