110 research outputs found

    Datasets for Large Language Models: A Comprehensive Survey

    Full text link
    This paper embarks on an exploration of Large Language Model (LLM) datasets, which play a crucial role in the remarkable advancements of LLMs. These datasets serve as the foundational infrastructure, analogous to a root system that sustains and nurtures the development of LLMs. Consequently, the examination of these datasets emerges as a critical research topic. To address the current lack of a comprehensive overview and thorough analysis of LLM datasets, and to gain insights into their current status and future trends, this survey consolidates and categorizes the fundamental aspects of LLM datasets from five perspectives: (1) Pre-training Corpora; (2) Instruction Fine-tuning Datasets; (3) Preference Datasets; (4) Evaluation Datasets; (5) Traditional Natural Language Processing (NLP) Datasets. The survey sheds light on the prevailing challenges and points out potential avenues for future investigation. Additionally, a comprehensive review of the existing available dataset resources is provided, including statistics from 444 datasets covering 8 language categories and spanning 32 domains, with information from 20 dimensions incorporated into the dataset statistics. The total data size surveyed surpasses 774.5 TB for pre-training corpora and 700M instances for other datasets. We aim to present the entire landscape of LLM text datasets, serving as a comprehensive reference for researchers in this field and contributing to future studies. Related resources are available at: https://github.com/lmmlzn/Awesome-LLMs-Datasets. Comment: 181 pages, 21 figures.
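
    As a minimal sketch of what one entry in such a catalogue might look like, the record below encodes a handful of the surveyed dimensions (category, language, domain, size); the field names and the example entry are illustrative assumptions, not the survey's actual 20-dimension schema.

    from dataclasses import dataclass

    # The five perspectives named in the survey, as a closed category set.
    CATEGORIES = {
        "pre-training", "instruction-fine-tuning",
        "preference", "evaluation", "traditional-nlp",
    }

    @dataclass
    class DatasetRecord:
        """One catalogue entry covering a few of the 20 surveyed dimensions."""
        name: str
        category: str          # one of CATEGORIES
        languages: list[str]   # e.g. ["en", "zh"]
        domain: str            # e.g. "general", "medical", "code"
        size: str              # e.g. "825 GB" or "52K instances"
        url: str = ""

        def __post_init__(self) -> None:
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    # Hypothetical entry, for illustration only.
    example = DatasetRecord(
        name="ExampleCorpus", category="pre-training",
        languages=["en"], domain="general", size="1.2 TB",
    )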

    Gender bias in natural language processing

    Get PDF
    Gender bias is a dangerous form of social bias that affects an essential group of people. Its effects propagate into our data, causing the accuracy of model predictions to differ depending on gender. In the deep learning era, models are strongly shaped by their training data, which transfers the negative biases in the data to the models, and Natural Language Processing models can further amplify this bias. Our thesis is devoted to studying gender bias in NLP applications from different points of view. To understand and manage the effect of bias amplification, both evaluation and mitigation approaches have to be explored, and the scientific community has made significant efforts in these two directions to enable solutions to the problem. Our thesis addresses both: proposing evaluation schemes, whether as datasets or mechanisms, and suggesting mitigation techniques. For evaluation, we proposed techniques for evaluating bias in contextualized embeddings and multilingual translation models, and we presented benchmarks for evaluating bias in speech translation and multilingual machine translation models. For mitigation, we proposed different approaches in machine translation models: adding contextual text, using contextual embeddings, or relaxing the architecture's constraints. Our evaluation studies concluded that gender bias is strongly encoded in contextual embeddings representing professions and stereotypical nouns. We also found that algorithms amplify the bias and that the system's architecture influences this behavior. We contributed several evaluation benchmarks, including one that evaluates gender bias in speech translation systems; this research suggests that the current state of speech translation systems does not allow gender bias to be evaluated accurately because of their low translation quality. Additionally, we proposed a toolkit for building multilingual datasets, gender-balanced with respect to occupations, for training and evaluating NMT models, and we found that high-resource languages tend to produce more accurate translations for male references. Our mitigation studies in NMT suggest that the nature of the datasets and languages must be considered when choosing an approach: mitigating bias can rely on adding contextual information, but in other cases we need to rethink the model and relax some bias-influencing constraints that do not affect overall performance yet reduce the effect of bias amplification.
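
    As one concrete illustration of probing gender bias in contextualized embeddings of professions, the sketch below compares profession vectors against gendered anchor words. The model choice, sentence template, word lists, and cosine-difference score are assumptions made for illustration, not the thesis's exact protocol.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(word: str) -> torch.Tensor:
        """Crude contextual embedding: mean-pool the last hidden states
        of the word placed in a neutral template sentence."""
        inputs = tokenizer(f"This is a {word}.", return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden.mean(dim=1).squeeze(0)

    def gender_lean(profession: str) -> float:
        """Positive -> closer to 'man', negative -> closer to 'woman'."""
        cos = torch.nn.functional.cosine_similarity
        p, m, w = embed(profession), embed("man"), embed("woman")
        return (cos(p, m, dim=0) - cos(p, w, dim=0)).item()

    for job in ["nurse", "engineer", "teacher", "mechanic"]:
        print(f"{job:>10s}: {gender_lean(job):+.4f}")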

    Argumentation Mining in User-Generated Web Discourse

    Full text link
    The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges posed by the variety of registers, multiple domains, and unrestricted, noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and the argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source code, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. Comment: Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics 43(1), pp. 125-179.
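
    As a minimal sketch of argument component identification cast as supervised sentence classification, in the spirit of the experiments above: the toy sentences, label set, and TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, not the paper's actual features or models, which are trained on the 90k-token annotated corpus.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy gold annotations; the real corpus labels spans, not whole sentences.
    sentences = [
        "Homeschooling should be banned because children miss social contact.",
        "My neighbour's kids are homeschooled.",
        "Studies show homeschooled children score well on standardized tests.",
        "I read this forum thread yesterday.",
    ]
    labels = ["claim", "none", "premise", "none"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(sentences, labels)

    print(clf.predict(["Children need peers to develop social skills."]))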

    Towards Commentary-Driven Soccer Player Analytics

    Get PDF
    Open information extraction (open IE) has been shown to be useful in a number of NLP tasks, such as question answering, relation extraction, and information retrieval. Soccer is the most watched sport in the world, and the dynamic nature of the game reflects team strategy and individual contribution, the deciding factors in a team's success. Generally, companies collect sports event data manually and very rarely allow third parties free access to these data. However, a large amount of data is freely available on various social media platforms, where different types of users discuss these very events. To rely on expert data, we use live-match commentary as a rich and largely unexplored data source. Our aim in this commentary analysis is first to extract key events from each game, and then to extract key entities from these events: the players involved, player actions, and other player-related attributes. We propose an end-to-end application that extracts commentaries and derives player attributes from them. The study will primarily depend on extensive crowd labelling of the data, with precautionary periodic checks to prevent incorrectly tagged data. This research will contribute significantly towards the analysis of commentary and act as an inexpensive tool providing player performance analysis for small- to intermediate-budget soccer clubs.
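
    As a minimal sketch of the kind of rule-based first pass such an application might use to pull player events out of commentary lines: the commentary strings and event patterns below are invented for illustration, since real feeds vary by provider.

    import re

    # Hypothetical commentary conventions: "Goal! Player Name (Team) ..." and
    # "Player Name (Team) is shown the yellow/red card ...".
    EVENT_PATTERNS = {
        "goal": re.compile(
            r"Goal!\s+.*?\b([A-Z][a-z]+(?: [A-Z][a-z]+)*)\s*\(([^)]+)\)"),
        "card": re.compile(
            r"([A-Z][a-z]+(?: [A-Z][a-z]+)*)\s*\(([^)]+)\)\s+is shown the "
            r"(?:yellow|red) card"),
    }

    def extract_events(line: str) -> list[dict]:
        """Return {event, player, team} dicts for every pattern match."""
        events = []
        for kind, pattern in EVENT_PATTERNS.items():
            for match in pattern.finditer(line):
                events.append({"event": kind,
                               "player": match.group(1),
                               "team": match.group(2)})
        return events

    commentary = [
        "23' Goal! Marcus Vane (Rovers FC) heads in from close range.",
        "41' Jon Park (City United) is shown the yellow card for a late tackle.",
    ]
    for line in commentary:
        print(extract_events(line))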

    Thinking outside the graph: scholarly knowledge graph construction leveraging natural language processing

    Get PDF
    Despite improved digital access to scholarly knowledge in recent decades, scholarly communication remains exclusively document-based. The document-oriented workflows in science publication have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review, and the reproducibility crisis. In this form, scientific knowledge remains locked in representations that are inadequate for machine processing, and as long as scholarly communication remains in this form, we cannot take advantage of the advancements in machine learning and natural language processing techniques. Such techniques would facilitate the transformation from purely text-based representations into (semi-)structured semantic descriptions that are interlinked in a collection of big federated graphs. We are in dire need of a new age of semantically enabled infrastructure adept at storing, manipulating, and querying scholarly knowledge, together with a suite of machine assistance tools designed to populate, curate, and explore the resulting scholarly knowledge graph. In this thesis, we address the issue of constructing a scholarly knowledge graph using natural language processing techniques. First, we tackle the issue of developing a scholarly knowledge graph for structured scholarly communication that can be populated and constructed automatically. We co-design and co-implement the Open Research Knowledge Graph (ORKG), an infrastructure capable of modeling, storing, and automatically curating scholarly communications. Then, we propose methods to automatically extract information into knowledge graphs. With Plumber, we create a framework to dynamically compose open information extraction pipelines based on the input text; such pipelines are composed of community-created information extraction components, in an effort to consolidate individual research contributions under one umbrella. We further present MORTY, a more targeted approach that leverages automatic text summarization to create, from the scholarly article's text, structured summaries containing all required information. In contrast to the pipeline approach, MORTY only extracts the information it is instructed to, making it a more valuable tool for various curation and contribution use cases. Moreover, we study the problem of knowledge graph completion: exBERT performs knowledge graph completion tasks, such as relation and entity prediction, on scholarly knowledge graphs by means of textual triple classification. Lastly, we use the structured descriptions collected from manual and automated sources alike in a question answering approach that builds on the machine-actionable descriptions in the ORKG. We propose JarvisQA, a question answering interface operating on tabular views of scholarly knowledge graphs, i.e., ORKG comparisons. JarvisQA is able to answer a variety of natural language questions and retrieve complex answers from pre-selected sub-graphs. These contributions are key to the broader agenda of studying the feasibility of natural language processing methods on scholarly knowledge graphs, and they lay the foundation for deciding which methods can be used in which cases. Our work indicates the challenges and issues involved in automatically constructing scholarly knowledge graphs and opens up future research directions.
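
    As a minimal sketch of the machine-actionable descriptions this line of work targets, the snippet below stores a research contribution as RDF triples and answers a structured question over them with SPARQL. The namespace, properties, and values are illustrative assumptions, not the actual ORKG data model.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/scholarly/")
    g = Graph()

    # A hypothetical structured description of one paper's contribution.
    paper = EX["paper1"]
    g.add((paper, EX.addressesProblem, Literal("knowledge graph completion")))
    g.add((paper, EX.usesMethod, Literal("textual triple classification")))
    g.add((paper, EX.achievesF1, Literal(0.81)))  # made-up score

    # Structured queries replace full-text search over PDFs.
    results = g.query("""
        PREFIX ex: <http://example.org/scholarly/>
        SELECT ?method WHERE {
            ?p ex:addressesProblem "knowledge graph completion" ;
               ex:usesMethod ?method .
        }
    """)
    for row in results:
        print(row.method)  # -> textual triple classification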

    Cross-Platform Text Mining and Natural Language Processing Interoperability - Proceedings of the LREC2016 conference

    Get PDF
    No abstract available
