15 research outputs found

    Text Summarization Across High and Low-Resource Settings

    Natural language processing aims to build automated systems that can both understand and generate natural language text. As the amount of textual data available online has increased exponentially, so has the need for intelligent systems to comprehend it and present it to the world. As a result, automatic text summarization, the process by which a text's salient content is automatically distilled into a concise form, has become a necessary tool. Automatic text summarization approaches and applications vary based on the input summarized, which may comprise single or multiple documents of different genres. Furthermore, the desired output may consist of sentences or sub-sentential units chosen directly from the input (extractive summarization) or a fusion and paraphrase of the input document (abstractive summarization). Despite differences across these use cases, certain themes are common to all of them: the role of large-scale data in training these models, the application of summarization models in real-world scenarios, and the need to adequately evaluate and compare summaries.

    This dissertation presents novel data and modeling techniques for deep neural network-based summarization models trained across high-resource (thousands of supervised training examples) and low-resource (zero to hundreds of supervised training examples) data settings, along with a comprehensive evaluation of model and metric progress in the field. We examine both Recurrent Neural Network (RNN)-based and Transformer-based models to extract and generate summaries from the input. To facilitate the training of large-scale networks, we introduce datasets applicable to multi-document summarization (MDS) for pedagogical applications and for news summarization. While high-resource settings allow models to advance state-of-the-art performance, the failure of such models to adapt to settings outside of those in which they were initially trained requires smarter use of labeled data and motivates work in low-resource summarization. To this end, we propose unsupervised learning techniques for extractive summarization in question answering, abstractive summarization of community question-answering forums using distantly supervised data, and abstractive zero- and few-shot summarization across several domains. To measure the progress made along these axes, we revisit the evaluation of current summarization models. In particular, this dissertation addresses the following research objectives:

    1) High-resource Summarization. We introduce datasets for multi-document summarization, focusing on pedagogical applications for NLP, news summarization, and Wikipedia topic summarization. Large-scale datasets allow models to achieve state-of-the-art performance on these tasks compared to prior modeling techniques, and we introduce a novel model to reduce redundancy. However, we also examine how models trained on these large-scale datasets fare when applied to new settings, showing the need for more generalizable models.

    2) Low-resource Summarization. While high-resource summarization improves model performance, data-efficient models are necessary for practical applications. We propose a pipeline for creating synthetic training data for training extractive question-answering models, a form of query-based extractive summarization with short-phrase summaries.
    In other work, we propose an automatic pipeline for training a multi-document summarizer for answer summarization on community question-answering forums without labeled data. Finally, we push the boundaries of abstractive summarization model performance when little or no training data is available across several domains.

    3) Automatic Summarization Evaluation. To understand the extent of progress made by recent modeling techniques, and to better understand current evaluation protocols, we examine 12 metrics used to compare summarization output quality across 23 deep neural network models, propose better-motivated summarization evaluation guidelines, and point to open problems in summarization evaluation.
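    To make the extractive/abstractive distinction in the abstract above concrete, the following is a minimal, self-contained sketch of a frequency-based extractive summarizer in the style of classic heuristics (Luhn-style sentence scoring). It is illustrative only and is not the dissertation's model; the function names and the small stopword list are invented for the example.

        import re
        from collections import Counter

        # Frequency-based extractive summarization (a Luhn-style heuristic):
        # score each sentence by the average document frequency of its
        # content words and extract the top-k sentences in document order.
        # Illustrative sketch only, not the dissertation's method.

        STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
                     "are", "that", "this", "it", "for", "on", "as", "with"}

        def content_words(text):
            return [w for w in re.findall(r"[a-z']+", text.lower())
                    if w not in STOPWORDS]

        def extractive_summary(text, k=2):
            # Naive sentence splitting; real systems use trained segmenters.
            sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
                         if s.strip()]
            freq = Counter(content_words(text))

            def score(sentence):
                tokens = content_words(sentence)
                return sum(freq[t] for t in tokens) / (len(tokens) or 1)

            ranked = sorted(enumerate(sentences), key=lambda p: score(p[1]),
                            reverse=True)[:k]
            # Re-sort the selected sentences back into document order.
            return " ".join(s for _, s in sorted(ranked))

        doc = ("Automatic summarization distills a text into a concise form. "
               "Extractive methods copy salient sentences verbatim. "
               "Abstractive methods paraphrase and fuse the input instead. "
               "Both families are evaluated against human references.")
        print(extractive_summary(doc, k=2))

    An abstractive system would instead generate new text conditioned on the input, which is why the dissertation's low-resource and zero/few-shot objectives center on generation models rather than sentence selection.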
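    The evaluation study in objective 3 concerns metrics for comparing system output against references, of which the ROUGE family is the most widely used. Below is a simplified, illustrative ROUGE-1 F1 implementation showing the unigram-overlap idea behind such metrics; a published comparison would rely on standard implementations rather than this toy version.

        from collections import Counter

        # Simplified ROUGE-1 F1: clipped unigram overlap between a candidate
        # summary and a single reference. Toy illustration of the metric's
        # core idea; real evaluations use standard packages and multiple
        # references, plus the longest-common-subsequence variant (ROUGE-L).

        def rouge1_f1(candidate, reference):
            cand = Counter(candidate.lower().split())
            ref = Counter(reference.lower().split())
            if not cand or not ref:
                return 0.0
            overlap = sum((cand & ref).values())  # clipped unigram matches
            precision = overlap / sum(cand.values())
            recall = overlap / sum(ref.values())
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        print(rouge1_f1("the cat sat on the mat",
                        "a cat was on the mat"))  # ~0.67

    Overlap metrics like this reward lexical matching rather than meaning, which is one reason a study across many metrics and models, as described above, is needed to ground evaluation guidelines.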

    Preparation for the Novel Crisis: A Curriculum and Pedagogy for Emergent Crisis Leadership

    The context for this study is the convergence of global trends and risks, especially environmental and social changes, with the interconnectedness of the modern world, leading to new, larger-scale, and unforeseeable crises. This convergence has the potential to shift what the author describes as the current resilience paradigm to a new crisis paradigm, labelled the novel crisis. The proportion of the global critical infrastructure in private or non-state ownership exacerbates the challenges for crisis management systems and leadership: a wider range of stakeholders will be involved, testing the skills and knowledge of the individuals confronting crises. This coincides with changes to the nature and provision of Higher Education that are already happening or are expected in the future, and with changes to employment patterns and student profiles. A case study analyses the immediate impact Hurricane Katrina had on New Orleans in 2005 as an exemplar of the novel crisis. Secondary data are used to explore the organisational response of the authorities and the initiatives and leadership networks that emerged to respond to that catastrophe. There is still a need to improve and invest in conventional crisis management structures, but the key to confronting future novel crises will lie with the temporary networks that emerge among those with specialist knowledge, connections, or proximity to the event. An appropriate crisis leadership curriculum and pedagogy are developed from the literature and evidence from the case study to meet their needs.

    CIRA annual report FY 2015/2016

    Reporting period: April 1, 2015 to March 31, 2016.

    Chemistry & Chemical Biology 2013 APR Self-Study & Documents

    UNM Chemistry & Chemical Biology APR self-study report, review team report, response to the review report, and initial action plan for Spring 2013, fulfilling requirements of the Higher Learning Commission.