
    Digging by Debating: Linking massive datasets to specific arguments

    We will develop and implement a multi-scale workbench, called "InterDebates", with the goal of digging into data provided by hundreds of thousands, eventually millions, of digitized books, bibliographic databases of journal articles, and comprehensive reference works written by experts. Our hypotheses are: that detailed and identifiable arguments drive many aspects of research in the sciences and the humanities; that argumentative structures can be extracted from large datasets using a mixture of automated and social computing techniques; and that the availability of such analyses will enable innovative interdisciplinary research and may also play a role in supporting better-informed critical debates among students and the general public. A key challenge tackled by this project is thus to uncover and represent the argumentative structure of digitized documents, allowing users to find and interpret detailed arguments in the broad semantic landscape of books and articles.
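The hypothesis that arguments have detailed, identifiable structure can be made concrete with a small data-structure sketch. Everything below (class names, fields, the "supports" stance label) is illustrative and not taken from the InterDebates design:

```python
# Minimal sketch of an argument-structure representation: claims linked to
# the premises that support or attack them. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ArgumentNode:
    text: str
    role: str  # "claim" or "premise"

@dataclass
class ArgumentGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (premise_idx, claim_idx, stance)

    def add(self, text, role):
        # Register a node and return its index for later linking.
        self.nodes.append(ArgumentNode(text, role))
        return len(self.nodes) - 1

    def link(self, premise, claim, stance="supports"):
        self.edges.append((premise, claim, stance))

    def premises_for(self, claim_idx):
        # All premises attached to a given claim.
        return [p for p, c, _ in self.edges if c == claim_idx]

g = ArgumentGraph()
c = g.add("Argument mining aids interdisciplinary research", "claim")
p = g.add("Detailed arguments drive research in many fields", "premise")
g.link(p, c)
```

Once arguments are stored in such a graph, queries like "which premises back this claim?" become simple traversals, which is the kind of interpretation support the project describes.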

    Algorithm composition of text thematic segmentation as intellectualization instrument for design of technical systems

    The paper considers the problem of thematic segmentation of extended texts, aimed at supporting the work of technical systems designers. An example shows that different segmentation algorithms select meaningfully different text fragments, so composing algorithms in the classical way, that is, by summarizing the results in order to single out the best one, appears misguided. At the same time, simultaneously displaying several versions of the thematic segmentation lets the reader form an integral representation of the text structure, thereby facilitating the choice of an effective strategy for mastering the text. The paper describes the implemented system for visualizing the thematic segmentation of extended texts, which lets the user select and analyze not the whole text but only the fragments matching their current information needs. The system can display the results of text segmentation performed by various algorithms side by side, enhancing the user's ability to quickly and efficiently analyze and absorb large amounts of textual information.
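The idea of showing several segmentations at once, rather than merging them into one "best" result, can be sketched as follows. Both segmenters are toy stand-ins, not the algorithms the paper evaluates:

```python
# Illustrative sketch: present several thematic segmentations of the same
# text side by side instead of combining them. Toy segmenters only.

def segment_fixed(sentences, size=2):
    # Naive segmenter: boundary before every `size`-th sentence.
    return set(range(size, len(sentences), size))

def segment_cue(sentences, keyword="however"):
    # Naive segmenter: boundary before sentences with a topic-shift cue word.
    return {i for i, s in enumerate(sentences) if keyword in s.lower()}

def side_by_side(sentences, segmenters):
    # For each sentence, record which algorithms place a boundary before it,
    # so the reader can compare segmentations and pick relevant fragments.
    rows = []
    for i, s in enumerate(sentences):
        marks = [name for name, bounds in segmenters.items() if i in bounds]
        rows.append((i, marks, s))
    return rows

sents = ["Topic A intro.", "More on A.", "However, topic B starts.", "B details."]
views = side_by_side(sents, {
    "fixed": segment_fixed(sents),
    "cue":   segment_cue(sents),
})
```

Where both algorithms agree on a boundary (as before the third sentence here), the reader gains confidence in the topic shift; where they disagree, the parallel view itself is informative.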

    Building knowledge graphs from textual documents for the analysis of scientific literature

    Advisor: Julio Cesar dos Reis. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: The number of publications a researcher must absorb has been increasing over recent years. Consequently, among so many options, it is hard for them to identify relevant documents related to their studies. Researchers usually search for review articles to understand how a scientific field is organized and to study its state of the art, but such articles can be unavailable or outdated depending on the studied area, leaving researchers to perform this laborious background research manually. Recent research has developed mechanisms to assist researchers in understanding the structure of scientific fields. However, those mechanisms focus on recommending relevant articles or on explaining how a scientific field is organized at the level of its publications. These methods limit field understanding, as they do not let researchers study the underlying concepts and relations that compose a scientific field and its sub-areas. This M.Sc. dissertation proposes a framework to structure, analyze, and track the evolution of a scientific field at the concept level. Given a set of textual documents such as research papers, it first structures a scientific field as a knowledge graph using the detected concepts as vertices. It then automatically identifies the field's main sub-areas, extracts their keyphrases, and studies their relations. The framework can represent the scientific field in distinct time periods, and comparing these representations identifies how the field's sub-areas changed over time. We evaluated each step of the framework by representing and analyzing scientific data from distinct fields of knowledge in case studies. Our findings indicate success in detecting sub-areas from the graphs generated from natural-language documents, and we observed similar outcomes across the different case studies, indicating that the approach applies to distinct domains. This research also contributes a web-based software tool that lets researchers use the proposed framework graphically to obtain an overview of how a scientific field is structured and how it evolved. Funding: FAPESP grants 2013/08293-7 and 2017/02325-5; CAPES.
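A minimal sketch of the framework's first two steps, under strong simplifying assumptions: concepts are taken as given rather than detected, co-mention within a document defines an edge, and plain connected components stand in for proper sub-area (community) detection:

```python
# Hedged sketch: structure a field as a concept graph, then find sub-areas.
# Real systems use concept extraction and community detection; co-mention
# edges and connected components are simplifications used here.
from collections import defaultdict
from itertools import combinations

def build_graph(docs_concepts):
    # Vertices are concepts; an edge links concepts mentioned together
    # in at least one document.
    graph = defaultdict(set)
    for concepts in docs_concepts:
        for a, b in combinations(sorted(set(concepts)), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def sub_areas(graph):
    # Connected components as a naive stand-in for sub-area detection.
    seen, areas = set(), []
    for start in sorted(graph):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        areas.append(sorted(comp))
    return areas

docs = [["parsing", "tagging"], ["tagging", "bilstm"], ["imaging", "radiology"]]
graph = build_graph(docs)
areas = sub_areas(graph)
```

Running the same construction on document sets from different time periods and diffing the resulting components gives a crude version of the evolution tracking the dissertation describes.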

    Data Analysis Methods for Software Systems

    Using statistics, econometrics, machine learning, and functional data analysis methods, we evaluate the consequences of the COVID-19 lockdown for wage inequality and unemployment. We find that these two indicators mostly reacted to the first lockdown, from March to June 2020. Analysing wage inequality, we also conduct the analysis separately for males and females and for different age groups, and we find that young females were the group most affected by the lockdown. Nevertheless, all the groups reacted to the lockdown to some degree.
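As a hedged illustration of this kind of group-wise comparison (not the study's actual data, indicator, or method), one can compute an inequality measure such as the Gini coefficient per demographic group before and during a lockdown and compare the changes:

```python
# Illustrative only: per-group wage inequality before vs. during a lockdown,
# measured by the Gini coefficient on invented toy wages.

def gini(wages):
    # Mean absolute difference over all ordered pairs, normalized by 2 * mean.
    n = len(wages)
    total = sum(abs(a - b) for a in wages for b in wages)
    return total / (2 * n * n * (sum(wages) / n))

pre  = {"young_female": [900, 1000, 1100], "older_male": [1500, 1600, 1700]}
post = {"young_female": [400, 1000, 1600], "older_male": [1450, 1600, 1750]}

# Change in inequality per group; the largest increase marks the group
# most affected in this toy setting.
impact = {g: gini(post[g]) - gini(pre[g]) for g in pre}
hardest_hit = max(impact, key=impact.get)
```

The toy numbers are constructed so that the dispersion of the young-female group widens most, mirroring the paper's qualitative finding; the real study additionally uses econometric and functional-data methods not shown here.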

    Neural End-to-End Learning for Computational Argumentation Mining

    We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing problem and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate at the argument-component level, we find that framing AM as dependency parsing leads to subpar performance. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios and are able to capture the long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning 'natural' subtasks in a multi-task learning setup improves performance. Comment: To be published at ACL 2017.
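The sequence-tagging framing can be illustrated with plain BIO encoding and decoding of argument-component spans; the tag labels below are illustrative, and the BiLSTM tagger itself is omitted:

```python
# Sketch of argument mining as token-level sequence tagging: component spans
# (e.g. claims, premises) become BIO tags, and decoding recovers the spans.
# A real system would predict the tags with a neural tagger such as a BiLSTM.

def spans_to_bio(n_tokens, spans):
    # spans: list of (start, end_exclusive, label) over token positions.
    tags = ["O"] * n_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

def bio_to_spans(tags):
    # Decode BIO tags back into (start, end_exclusive, label) spans.
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel closes a trailing span
        if (tag.startswith("B-") or tag == "O") and start is not None:
            spans.append((start, i, label))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return spans

gold = [(0, 2, "CLAIM"), (3, 5, "PREMISE")]
tags = spans_to_bio(6, gold)
```

Encoding and decoding round-trip exactly, which is what makes this token-level framing interchangeable with span-level evaluation.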

    The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?

    This book is a reprint of the Special Issue entitled "The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?". Artificial intelligence is extending into the worlds of both digital radiology and digital pathology, and it involves many scholars in the areas of biomedicine, technology, and bioethics. Scholars particularly need to focus both on the innovations in this field and on the problems hampering integration into a robust and effective process within stable health-care models. Many professionals involved in these fields of digital health were encouraged to contribute their experiences, and this book contains contributions from experts across different fields. Aspects of integration into the health domain are addressed, with particular space dedicated to overviewing the challenges, opportunities, and problems in both radiology and pathology. Clinical deep dives are provided for cardiology, the histopathology of breast cancer, and colonoscopy. Dedicated studies were based on surveys that investigated students' and insiders' opinions, attitudes, and self-perceptions regarding the integration of artificial intelligence in this field.

    Document image analysis and recognition: a survey

    This paper analyzes the problems of document image recognition and the existing solutions. Document recognition algorithms have been studied for a long time, yet the topic remains relevant and research continues, as evidenced by the large number of associated publications and reviews. Most of these works and reviews, however, are devoted to individual recognition tasks. This review considers the entire set of methods, approaches, and algorithms necessary for document recognition. A preliminary systematization allowed us to distinguish groups of methods for extracting information from documents of different types: single-page and multi-page; with printed or handwritten content; with a fixed template or a flexible structure; and digitized in different ways: scanning, photographing, or video recording. We consider methods of document recognition and analysis applied to a wide range of tasks: identification and verification of identity, due diligence, machine learning algorithms, questionnaires, and audits. The groups of methods necessary for recognizing a single-page image are examined: classical computer-vision algorithms (keypoints, local feature descriptors, fast Hough transforms, image binarization); modern neural-network models for document boundary detection, document classification, and document structure analysis (localization of text blocks and tables); extraction and recognition of details; and post-processing of recognition results. The review also describes publicly available experimental datasets for training and testing recognition algorithms, as well as methods for optimizing the performance of document image analysis and recognition.
    The reported study was funded by RFBR, project number 20-17-50177. The authors thank Sc. D. Vladimir L. Arlazarov (FRC CSC RAS), Pavel Bezmaternykh (FRC CSC RAS), Elena Limonova (FRC CSC RAS), Ph. D. Dmitry Polevoy (FRC CSC RAS), Daniil Tropin (LLC “Smart Engines Service”), Yuliya Chernysheva (LLC “Smart Engines Service”), and Yuliya Shemyakina (LLC “Smart Engines Service”) for valuable comments and suggestions.
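One of the classical computer-vision steps the survey lists, global image binarization, can be sketched with Otsu's method on a toy grayscale "image" (here just a flat list of 0-255 intensities):

```python
# Sketch of global binarization via Otsu's method: pick the threshold that
# maximizes between-class variance, then split pixels into text/background.
# Plain-Python toy version; real pipelines operate on 2-D images.

def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0
    for t in range(256):
        w_bg += hist[t]            # background weight (intensity <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:         # keep threshold with maximal separation
            best_var, best_t = var, t
    return best_t

def binarize(pixels, threshold):
    return [1 if p > threshold else 0 for p in pixels]

image = [12, 10, 8, 200, 210, 205, 15, 198]  # dark strokes vs light paper
t = otsu_threshold(image)
binary = binarize(image, t)
```

With two well-separated intensity clusters, any threshold between them yields the same binary mask, which is why a global method suffices for cleanly scanned pages; photographed documents with uneven lighting typically need adaptive variants instead.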

    Term-community-based topic detection with variable resolution

    Network-based procedures for topic detection in huge text collections offer an intuitive alternative to probabilistic topic models. We present in detail a method designed especially with the requirements of domain experts in mind. Like similar methods, it employs community detection in term co-occurrence graphs, but it is enhanced with a resolution parameter that changes the targeted topic granularity. We also establish a term ranking and use semantic word embeddings to present term communities in a way that facilitates their interpretation. We demonstrate the application of our method on a widely used corpus of general news articles and report detailed evaluations, by social-sciences experts, of the topics detected at various resolutions. A comparison with topics detected by Latent Dirichlet Allocation is also included. Finally, we discuss factors that influence topic interpretation. Comment: 31 pages, 6 figures.
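The effect of a resolution-style parameter on topic granularity can be illustrated in a simplified way. The paper's method uses modularity-based community detection with a true resolution parameter; below, raising a co-occurrence weight threshold plays an analogous role, splitting one coarse term community into finer ones (all terms and counts are invented):

```python
# Simplified illustration of variable topic granularity over a term
# co-occurrence graph. Thresholding edges + connected components stands in
# for modularity-based community detection with a resolution parameter.
from collections import defaultdict

COOC = {  # toy co-occurrence counts between terms
    ("election", "vote"): 9, ("election", "party"): 7, ("vote", "party"): 6,
    ("party", "budget"): 2, ("budget", "tax"): 8, ("tax", "deficit"): 7,
}

def communities(min_weight):
    # Keep only edges at or above min_weight, then return the connected
    # components: higher thresholds yield finer term communities.
    graph, terms = defaultdict(set), set()
    for (a, b), w in COOC.items():
        terms |= {a, b}
        if w >= min_weight:
            graph[a].add(b)
            graph[b].add(a)
    seen, comms = set(), []
    for start in sorted(terms):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(graph[n] - comp)
        seen |= comp
        comms.append(frozenset(comp))
    return comms

coarse = communities(min_weight=2)  # one broad politics-and-finance topic
fine = communities(min_weight=5)    # splits into elections vs. fiscal topics
```

The weak "party"-"budget" edge is what glues the coarse topic together; dropping it at the finer setting mirrors how a higher resolution parameter breaks loosely connected term communities apart.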

    BAM: Benchmarking Argument Mining on Scientific Documents

    In this paper, we present BAM, a unified Benchmark for Argument Mining (AM). We propose a method to homogenize both the evaluation process and the data, providing a common view that ultimately produces comparable results. Built as a four-stage, end-to-end pipeline, the benchmark allows additional argument miners to be included and evaluated directly. First, our system pre-processes a ground-truth set used for both training and testing. The benchmark then calculates a total of four measures to assess different aspects of the mining process. To showcase an initial implementation of our approach, we apply the procedure and evaluate a set of systems on a corpus of scientific publications. With the comparable results obtained, we can homogeneously assess the current state of AM in this domain.
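The evaluation stage of such a benchmark can be sketched under assumed representations: gold and predicted argument components as (start, end, label) spans, scored by exact-match precision, recall, and F1. The abstract does not specify BAM's four measures, so exact span F1 stands in here as one common choice:

```python
# Hedged sketch of a benchmark evaluation step: exact-match scoring of
# predicted argument-component spans against a gold standard. The span
# representation and measures are assumptions, not BAM's actual design.

def span_scores(gold, predicted):
    # A span counts as correct only if start, end, and label all match.
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

gold = [(0, 2, "CLAIM"), (3, 5, "PREMISE"), (6, 9, "PREMISE")]
pred = [(0, 2, "CLAIM"), (3, 5, "CLAIM")]  # second span mislabeled
scores = span_scores(gold, pred)
```

Because every miner's output is reduced to the same span format before scoring, systems become directly comparable, which is the homogenization the benchmark is built around.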