15 research outputs found

    Email Classification Using Adaptive Ontologies Learning

    Get PDF
    Email is a primary means of communication in today's internet world; private companies, government and the public sector all use email to communicate with their clients, and can freely send any number of messages without disturbing them. Email is now also a channel for advertising, and inboxes fill with spam and large volumes of social notifications. Categorising and handling this volume of email is therefore an important research task, commonly approached with natural language processing and ontology extraction. Users become frustrated when they must read through many messages to find the important ones, and sometimes delete mail in bulk without reading it, losing messages that contain important information such as meeting or seminar details. To avoid this, the author proposes a procedure that automatically updates a schedule calendar. Concepts are extracted and clustered using fuzzy logic: emails with similar patterns are grouped into the same cluster, and when the similarity falls below a threshold value a new cluster is created. From the extracted concepts the author establishes relationships between them and generates the result. Computational overhead is also measured for different sets of mails, and the method is found to take very little time even on a large email data set
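    A minimal sketch (in Python, not the author's implementation) of the threshold-based grouping described above: each email's extracted concept vector joins the most similar existing cluster, or starts a new cluster when the best similarity falls below a threshold. The names `concept_vectors` and `sim_threshold` are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two concept vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def threshold_cluster(concept_vectors, sim_threshold=0.6):
    # group concept vectors: join the closest cluster, or open a new one
    # when the best similarity is below sim_threshold
    clusters, centroids = [], []          # member indices and running centroids
    for i, vec in enumerate(concept_vectors):
        vec = np.asarray(vec, dtype=float)
        sims = [cosine(vec, c) for c in centroids]
        best = int(np.argmax(sims)) if sims else -1
        if sims and sims[best] >= sim_threshold:
            clusters[best].append(i)
            members = np.array([concept_vectors[j] for j in clusters[best]], dtype=float)
            centroids[best] = members.mean(axis=0)
        else:
            clusters.append([i])          # similarity below threshold: new cluster
            centroids.append(vec)
    return clusters
```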

    Development of multi-dialectal lexical resources for Quechua

    Get PDF
    Low-resource languages such as Quechua lack lexical resources, even though such resources are important for supporting research and for developing many Natural Language Processing (NLP) tools that benefit from or require them, and thereby for helping preserve the language. The objective of this research is to build a WordNet (lexical database) for the Southern, Central, Amazonian and Northern Quechua varieties, and a part-of-speech (POS) tagger for the Southern Quechua variety. For this work, information was gathered from dictionaries and a Quechua-Spanish parallel corpus was created; a classification algorithm was implemented to align word senses with the synset of the Spanish meaning for each Quechua variety; and finally a POS tagging model based on BERT was trained. The score obtained for POS tagging was 0.85 for Southern Quechua and 0.8 for Central Quechua
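    A hedged sketch of a BERT-based POS tagging component like the one described above, using the Hugging Face transformers library. The multilingual checkpoint, the tag set, and the example Quechua tokens are illustrative assumptions rather than the authors' setup, and the classification head would still need fine-tuning on the annotated corpus before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NOUN", "VERB", "ADJ", "PRON", "ADP", "PUNCT"]   # assumed tag set
checkpoint = "bert-base-multilingual-cased"                # assumed base model

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))

# example Southern Quechua tokens (already word-segmented)
sentence = ["Ñuqa", "wasiyta", "rini"]
enc = tokenizer(sentence, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits            # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# map subword predictions back to whole words (last subword wins)
word_to_tag = {w: labels[p] for w, p in zip(enc.word_ids(), pred_ids) if w is not None}
print([(tok, word_to_tag[i]) for i, tok in enumerate(sentence)])
```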

    Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs

    Full text link
    In comparative linguistics, colexification refers to the phenomenon of a lexical form conveying two or more distinct meanings. Existing work on colexification patterns relies on annotated word lists, limiting scalability and usefulness in NLP. In contrast, we identify colexification patterns of more than 2,000 concepts across 1,335 languages directly from an unannotated parallel corpus. We then propose simple and effective methods to build multilingual graphs from the colexification patterns: ColexNet and ColexNet+. ColexNet's nodes are concepts and its edges are colexifications. In ColexNet+, concept nodes are additionally linked through intermediate nodes, each representing an ngram in one of 1,334 languages. We use ColexNet+ to train ColexNet+ embeddings, high-quality multilingual embeddings that are well-suited for transfer learning. In our experiments, we first show that ColexNet achieves high recall on CLICS, a dataset of crosslingual colexifications. We then evaluate the ColexNet+ embeddings on roundtrip translation, sentence retrieval and sentence classification, and show that they surpass several transfer learning baselines. This demonstrates the benefits of using colexification as a source of information in multilingual NLP. (Comment: EMNLP 2023 Findings)
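    A toy sketch (not the paper's code) of the core graph construction idea: concepts become nodes, and an edge is added whenever some language expresses two concepts with the same ngram. The tiny word list below is an illustrative assumption standing in for the unannotated parallel corpus.

```python
from collections import defaultdict
import networkx as nx

# (language, ngram, concept) triples; a stand-in for corpus-derived data
lexicalizations = [
    ("spa", "sol", "SUN"), ("spa", "sol", "SUNLIGHT"),
    ("rus", "solnce", "SUN"), ("rus", "svet", "SUNLIGHT"),
]

# group the concepts that each (language, ngram) pair expresses
forms = defaultdict(set)
for lang, ngram, concept in lexicalizations:
    forms[(lang, ngram)].add(concept)

G = nx.Graph()   # nodes = concepts, edges = attested colexifications
for concepts in forms.values():
    ordered = sorted(concepts)
    for i, c1 in enumerate(ordered):
        for c2 in ordered[i + 1:]:
            weight = G[c1][c2]["weight"] + 1 if G.has_edge(c1, c2) else 1
            G.add_edge(c1, c2, weight=weight)   # count supporting forms

print(list(G.edges(data=True)))   # [('SUN', 'SUNLIGHT', {'weight': 1})]
```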

    Advances in Meta-Heuristic Optimization Algorithms in Big Data Text Clustering

    Full text link
    This paper presents a comprehensive survey of meta-heuristic optimization algorithms for text clustering applications and highlights their main procedures. These Artificial Intelligence (AI) algorithms are recognized as promising swarm intelligence methods due to their demonstrated ability to solve machine learning problems, especially text clustering problems. The paper reviews the relevant literature on meta-heuristic-based text clustering, including many variants such as basic, modified, hybridized, and multi-objective methods, and presents the main procedures of text clustering together with a critical discussion. The review reports their advantages and disadvantages and recommends potential future research paths. The main keywords considered in this paper are text, clustering, meta-heuristic, optimization, and algorithm
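    As a toy illustration of the setup such surveys cover (not any specific algorithm from the literature), the sketch below treats a cluster assignment of TF-IDF document vectors as the candidate solution and improves a simple cohesion objective with stochastic local search; population-based meta-heuristics such as PSO or harmony search would explore the same search space with different operators. The documents, k, and the iteration budget are assumptions.

```python
import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["cheap flights to rome", "rome hotel deals",
        "neural text clustering survey", "swarm optimization for clustering"]
X = TfidfVectorizer().fit_transform(docs).toarray()
k, iterations = 2, 300
random.seed(0)

def cohesion(assign):
    # sum of cosine similarities between documents and their cluster centroid
    total = 0.0
    for c in range(k):
        members = X[[i for i, a in enumerate(assign) if a == c]]
        if len(members) == 0:
            continue
        centroid = members.mean(axis=0)
        denom = np.linalg.norm(members, axis=1) * np.linalg.norm(centroid) + 1e-9
        total += float((members @ centroid / denom).sum())
    return total

assign = [random.randrange(k) for _ in docs]        # random initial solution
for _ in range(iterations):                         # accept non-worsening moves
    candidate = assign[:]
    candidate[random.randrange(len(docs))] = random.randrange(k)
    if cohesion(candidate) >= cohesion(assign):
        assign = candidate
print(assign)    # cluster index per document
```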

    Cold-start universal information extraction

    Get PDF
    Who? What? When? Where? Why? are fundamental questions asked when gathering knowledge about and understanding a concept, topic, or event. The answers to these questions underpin the key information conveyed in the overwhelming majority, if not all, of language-based communication. At the core of my research in Information Extraction (IE) is the desire to endow machines with the ability to automatically extract, assess, and understand text in order to answer these fundamental questions. IE serves as one of the most important components for many downstream natural language processing (NLP) tasks, such as knowledge base completion, machine reading comprehension, and machine translation. The proliferation of the Web also intensifies the need to deal with enormous amounts of unstructured data from various sources, languages, genres, and domains.

    When building an IE system, the conventional pipeline is to (1) ask expert linguists to rigorously define a target set of knowledge types we wish to extract by examining a large data set, (2) collect resources and human annotations for each type, and (3) design features and train machine learning models to extract knowledge elements. In practice, this process is very expensive, as each step involves extensive human effort that is not always available; for example, to specify the knowledge types for a particular scenario, both consumers and expert linguists need to examine a lot of data from that domain and write detailed annotation guidelines for each type. Hand-crafted schemas, which define the types and complex templates of the expected knowledge elements, often provide low coverage and fail to generalize to new domains. For example, none of the traditional event extraction programs, such as ACE (Automatic Content Extraction) and TAC-KBP, include "donation" and "evacuation" in their schemas, despite their potential relevance to natural disaster management users. Additionally, these approaches are highly dependent on linguistic resources and human-labeled data tuned to pre-defined types, so they suffer from poor scalability and portability when moving to a new language, domain, or genre.

    The focus of this thesis is to develop effective theories and algorithms for IE which not only yield satisfactory quality by incorporating prior linguistic and semantic knowledge, but also offer greater portability and scalability by moving away from the high cost and narrow focus of large-scale manual annotation. This thesis opens up a new research direction called Cold-Start Universal Information Extraction, where the full extraction and analysis starts from scratch and requires little or no prior manual annotation or pre-defined type schema. In addition to this new research paradigm, we also contribute effective algorithms and models towards resolving the following three challenges.

    How can machines extract knowledge without any pre-defined types or any human-annotated data? We develop an effective bottom-up and unsupervised Liberal Information Extraction framework, based on the hypothesis that the meaning and underlying knowledge conveyed by linguistic expressions is usually embodied by their usages in language. This makes it possible to automatically induce a type schema from rich contextual representations of all knowledge elements, combining their symbolic and distributional semantics using unsupervised hierarchical clustering (see the sketch after this abstract).

    How can machines benefit from available resources, e.g., large-scale ontologies or existing human annotations? My research has shown that pre-defined types can also be encoded by rich contextual or structured representations, through which knowledge elements can be mapped to their appropriate types. Therefore, we design a weakly supervised Zero-shot Learning approach and a Semi-Supervised Vector Quantized Variational Auto-Encoder approach that frame IE as a grounding problem instead of classification, where knowledge elements are grounded into any types from an extensible, large-scale target ontology or induced from the corpora, with available annotations for only a few types.

    How can IE approaches be extended to low-resource languages without any extra human effort? There are more than 6,000 living languages in the world, while public gold-standard annotations are available for only a few dominant languages. To facilitate the adaptation of these IE frameworks to other languages, especially low-resource languages, a Multilingual Common Semantic Space is further proposed to serve as a bridge for transferring existing resources and annotated data from dominant languages to more than 300 low-resource languages. Moreover, a Multi-Level Adversarial Transfer framework is also designed to learn language-agnostic features across various languages
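    A minimal sketch, under assumed inputs, of the type-induction step in the Liberal IE idea above: contextual representations of candidate knowledge elements are grouped with unsupervised agglomerative (hierarchical) clustering, and each resulting cluster is treated as an induced type. The random embeddings and the distance threshold are placeholders, not the thesis's actual features or settings.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# stand-in for contextual embeddings of extracted mentions / event triggers
element_embeddings = rng.normal(size=(12, 16))

clusterer = AgglomerativeClustering(
    n_clusters=None,          # let the distance threshold decide how many types emerge
    distance_threshold=5.0,   # assumed cut-off; in practice tuned on the data
    linkage="average",
)
induced_types = clusterer.fit_predict(element_embeddings)
print(induced_types)          # cluster id per element = induced type label
```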

    Introduction: Ways of Machine Seeing

    Get PDF
    How do machines, and, in particular, computational technologies, change the way we see the world? This special issue brings together researchers from a wide range of disciplines to explore the entanglement of machines and their ways of seeing from new critical perspectives. This 'editorial' is for a special issue of AI & Society, which includes contributions from: María Jesús Schultz Abarca, Peter Bell, Tobias Blanke, Benjamin Bratton, Claudio Celis Bueno, Kate Crawford, Iain Emsley, Abelardo Gil-Fournier, Daniel Chávez Heras, Vladan Joler, Nicolas Malevé, Lev Manovich, Nicholas Mirzoeff, Perle Møhl, Bruno Moreschi, Fabian Offert, Trevor Paglan, Jussi Parikka, Luciana Parisi, Matteo Pasquinelli, Gabriel Pereira, Carloalberto Treccani, Rebecca Uliasz, and Manuel van der Veen

    24th Nordic Conference on Computational Linguistics (NoDaLiDa)

    Get PDF

    A semi-supervised method for detecting, classifying and annotating texts extracted from digital environments in a suicide corpus

    Get PDF
    This doctoral thesis, with a mixed qualitative-quantitative approach, falls within the area of sentiment analysis in social networks and is part of the Life project, which seeks to create an integrated platform to detect and provide specialized support to social network users who publish texts with suicidal content. To this end, the Corpus Life was developed for experiments with machine learning algorithms; it originally consisted of 102 suicide-related messages (71 texts in English and 31 in Spanish), 70 of these samples labeled as no risk and 32 as at risk. Because of the small number of samples and the imbalance between them, the results generated were not reliable. The general objective of this research was therefore to develop a semi-supervised method to detect, classify and annotate, in the Corpus Life, texts extracted from digital environments, in order to increase the number of annotations through a process of automatic quality assessment prior to inclusion or exclusion. The annotations were also evaluated manually using Cohen's kappa, with specialized annotators who assessed the texts, reaching an inter-annotator agreement of 0.86, close to the 0.78-0.81 achieved automatically by the semi-supervised method as measured by macro F1. This made it possible to run experiments with a higher degree of reliability, through a structured method with well-defined and interlinked activities, roles and processes
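    A hedged sketch of the kind of semi-supervised expansion loop described above: train a classifier on the seed corpus, label new texts, keep only high-confidence predictions for inclusion, and check quality against human annotations with Cohen's kappa and macro F1. The placeholder texts, the classifier choice, and the 0.9 confidence threshold are illustrative assumptions, not the thesis's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.pipeline import make_pipeline

# placeholder seed corpus: 1 = at-risk text, 0 = no-risk text
seed_texts = ["risk text 1", "neutral text 1", "risk text 2", "neutral text 2"]
seed_labels = [1, 0, 1, 0]
unlabeled = ["candidate text A", "candidate text B"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(seed_texts, seed_labels)

# add only confident automatic annotations to the corpus
for text, probs in zip(unlabeled, model.predict_proba(unlabeled)):
    if probs.max() >= 0.9:
        seed_texts.append(text)
        seed_labels.append(int(probs.argmax()))

# quality check against a (hypothetical) manually annotated sample
human_labels = [1, 0]
machine_labels = model.predict(unlabeled).tolist()
print(cohen_kappa_score(human_labels, machine_labels),
      f1_score(human_labels, machine_labels, average="macro"))
```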