50 research outputs found

    Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning

    Get PDF
    Most successful information extraction systems operate with access to a large collection of documents. In this work, we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce. This process entails issuing search queries, extraction from new sources and reconciliation of extracted values, which are repeated until sufficient evidence is collected. We approach the problem using a reinforcement learning framework where our model learns to select optimal actions based on contextual information. We employ a deep Q-network, trained to optimize a reward function that reflects extraction accuracy while penalizing extra effort. Our experiments on two databases -- of shooting incidents, and food adulteration cases -- demonstrate that our system significantly outperforms traditional extractors and a competitive meta-classifier baseline. Comment: Appearing in EMNLP 2016 (12 pages incl. supplementary material).
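
    As a concrete illustration of the decision loop this abstract describes, the sketch below shows an epsilon-greedy deep Q-network choosing among a small set of actions (accept the current values, reconcile with a new source, issue another query) and updating toward a reward that trades extraction accuracy against effort. The state dimension, action names, and network shape are illustrative assumptions, not the authors' implementation.

    import random
    import torch
    import torch.nn as nn

    STATE_DIM = 32                                      # assumed: confidence scores plus context features
    ACTIONS = ["accept", "reconcile", "query_again"]    # assumed action set

    class QNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, len(ACTIONS)),
            )

        def forward(self, state):
            return self.net(state)                      # one Q-value per action

    q_net = QNetwork()
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def select_action(state, eps=0.1):
        # Epsilon-greedy choice over the predicted Q-values.
        if random.random() < eps:
            return random.randrange(len(ACTIONS))
        with torch.no_grad():
            return int(q_net(state).argmax())

    def td_update(state, action, reward, next_state, gamma=0.99):
        # One temporal-difference step toward reward + gamma * max_a' Q(next_state, a').
        with torch.no_grad():
            target = reward + gamma * q_net(next_state).max()
        loss = nn.functional.mse_loss(q_net(state)[action], target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()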

    LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models

    Full text link
    Recent advancements in text-to-image generation with diffusion models have yielded remarkable results synthesizing highly realistic and diverse images. However, these models still encounter difficulties when generating images from prompts that demand spatial or common sense reasoning. We propose to equip diffusion models with enhanced reasoning capabilities by using off-the-shelf pretrained large language models (LLMs) in a novel two-stage generation process. First, we adapt an LLM to be a text-guided layout generator through in-context learning. When provided with an image prompt, an LLM outputs a scene layout in the form of bounding boxes along with corresponding individual descriptions. Second, we steer a diffusion model with a novel controller to generate images conditioned on the layout. Both stages utilize frozen pretrained models without any LLM or diffusion model parameter optimization. We validate the superiority of our design by demonstrating its ability to outperform the base diffusion model in accurately generating images according to prompts that necessitate both language and spatial reasoning. Additionally, our method naturally allows dialog-based scene specification and is able to handle prompts in a language that is not well-supported by the underlying diffusion model. Comment: Work in progress.
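
    A minimal sketch of the two-stage process described above, under assumed interfaces: call_llm stands in for any chat-completion API and layout_conditioned_diffusion for a box-conditioned image generator; neither reflects the paper's actual code or prompt wording.

    import json

    LAYOUT_INSTRUCTION = (
        "You are a layout generator. Given an image prompt, return JSON with a "
        '"boxes" list; each entry has a "caption" and a "box" given as '
        "[x, y, width, height] on a 512x512 canvas. Respond with JSON only."
    )

    def generate_layout(prompt, call_llm):
        # Stage 1: in-context prompt a frozen LLM to propose a scene layout.
        reply = call_llm(system=LAYOUT_INSTRUCTION, user=prompt)
        layout = json.loads(reply)
        return [(b["caption"], b["box"]) for b in layout["boxes"]]

    def generate_image(prompt, call_llm, layout_conditioned_diffusion):
        # Stage 2: steer a frozen diffusion model with the generated layout so
        # that each object is synthesized inside its assigned region.
        boxes = generate_layout(prompt, call_llm)
        return layout_conditioned_diffusion(prompt=prompt, boxes=boxes)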

    LLM-grounded Video Diffusion Models

    Full text link
    Text-conditioned diffusion models have emerged as a promising tool for neural video generation. However, current models still struggle with intricate spatiotemporal prompts and often generate restricted or incorrect motion. To address these limitations, we introduce LLM-grounded Video Diffusion (LVD). Instead of directly generating videos from the text inputs, LVD first leverages a large language model (LLM) to generate dynamic scene layouts based on the text inputs and subsequently uses the generated layouts to guide a diffusion model for video generation. We show that LLMs are able to understand complex spatiotemporal dynamics from text alone and generate layouts that align closely with both the prompts and the object motion patterns typically observed in the real world. We then propose to guide video diffusion models with these layouts by adjusting the attention maps. Our approach is training-free and can be integrated into any video diffusion model that admits classifier guidance. Our results demonstrate that LVD significantly outperforms its base video diffusion model and several strong baseline methods in faithfully generating videos with the desired attributes and motion patterns. Comment: ICLR 2024. Project Page: https://llm-grounded-video-diffusion.github.io
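
    One way to read "guiding video diffusion models with layouts by adjusting the attention maps" is as an energy term over cross-attention maps whose gradient nudges the latents at each denoising step, in the spirit of classifier guidance. The sketch below is a hedged illustration with assumed shapes and a caller-supplied attn_from_latents function; it is not the paper's implementation.

    import torch

    def layout_energy(attn, boxes):
        # attn: (num_tokens, H, W) cross-attention maps for one frame.
        # boxes: list of (token_index, (x0, y0, x1, y1)) in pixel coordinates.
        energy = 0.0
        for tok, (x0, y0, x1, y1) in boxes:
            amap = attn[tok]
            inside = amap[y0:y1, x0:x1].sum()
            total = amap.sum() + 1e-8
            energy = energy + (1.0 - inside / total)   # low when attention mass sits in the box
        return energy

    def guide_latents(latents, attn_from_latents, boxes, step_size=0.1):
        # One classifier-guidance-style step: move the latents to reduce the layout energy.
        latents = latents.detach().requires_grad_(True)
        energy = layout_energy(attn_from_latents(latents), boxes)
        grad, = torch.autograd.grad(energy, latents)
        return (latents - step_size * grad).detach()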

    Conformal Language Modeling

    Full text link
    We propose a novel approach to conformal prediction for generative language models (LMs). Standard conformal prediction produces prediction sets -- in place of single predictions -- that have rigorous, statistical performance guarantees. LM responses are typically sampled from the model's predicted distribution over the large, combinatorial output space of natural language. Translating this process to conformal prediction, we calibrate a stopping rule for sampling different outputs from the LM that get added to a growing set of candidates until we are confident that the output set is sufficient. Since some samples may be low-quality, we also simultaneously calibrate and apply a rejection rule for removing candidates from the output set to reduce noise. Similar to conformal prediction, we prove that the sampled set returned by our procedure contains at least one acceptable answer with high probability, while still being empirically precise (i.e., small) on average. Furthermore, within this set of candidate responses, we show that we can also accurately identify subsets of individual components -- such as phrases or sentences -- that are each independently correct (e.g., that are not "hallucinations"), again with statistical guarantees. We demonstrate the promise of our approach on multiple tasks in open-domain question answering, text summarization, and radiology report generation using different LM variants.
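
    The calibrated sampling loop described above can be sketched as follows; sample_lm, quality_score, and set_confidence stand in for the model's sampler and the paper's calibrated scoring functions, and the two thresholds are assumed to come from a separate conformal calibration step.

    def conformal_sample_set(sample_lm, quality_score, set_confidence,
                             reject_threshold, stop_threshold, max_samples=20):
        # Grow a candidate set until the stopping rule says it is confidently sufficient.
        candidates = []
        for _ in range(max_samples):
            y = sample_lm()                             # one response sampled from the LM
            if quality_score(y) < reject_threshold:
                continue                                # calibrated rejection rule: drop low-quality samples
            candidates.append(y)
            if set_confidence(candidates) >= stop_threshold:
                break                                   # calibrated stopping rule: set is sufficient
        return candidates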

    The effect of long-term Aronia melanocarpa extract supplementation on cognitive performance, mood, and vascular function: A randomized controlled trial in healthy, middle-aged individuals

    Get PDF
    Cognitive decline is associated with lifestyle-related factors such as overweight, blood pressure, and dietary composition. Studies have reported beneficial effects of dietary anthocyanins on cognition in older adults and children. However, the effect of anthocyanin-rich Aronia melanocarpa extract (AME) on cognition is unknown. Therefore, this study aimed to determine the effect of long-term supplementation with AME on cognitive performance, mood, and vascular function in healthy, middle-aged, overweight adults. In a randomized double-blind placebo-controlled parallel study, 101 participants consumed 90 mg AME, 150 mg AME, or placebo for 24 weeks. The grooved pegboard test, number cross-out test, and Stroop test were performed as measures for psychomotor speed, attention, and cognitive flexibility. Mood was evaluated with a visual analogue scale, serum brain-derived neurotrophic factor (BDNF) was determined, and vascular function was assessed by carotid ultrasounds and blood pressure measurements. AME improved psychomotor speed compared to placebo (90 mg AME: change = -3.37; p = 0.009). Furthermore, 150 mg AME decreased brachial diastolic blood pressure compared to 90 mg AME (change = 2.44; p = 0.011), but not compared to placebo. Attention, cognitive flexibility, BDNF, and other vascular parameters were not affected. In conclusion, AME supplementation showed an indication of beneficial effects on cognitive performance and blood pressure in individuals at risk of cognitive decline.

    An open-source fine-tuned large language model for radiological impression generation: a multi-reader performance study

    Get PDF
    Background: The impression section integrates key findings of a radiology report but can be subjective and variable. We sought to fine-tune and evaluate an open-source Large Language Model (LLM) in automatically generating impressions from the remainder of a radiology report across different imaging modalities and hospitals. Methods: In this institutional review board-approved retrospective study, we collated a dataset of CT, US, and MRI radiology reports from the University of California San Francisco Medical Center (UCSFMC) (n = 372,716) and the Zuckerberg San Francisco General (ZSFG) Hospital and Trauma Center (n = 60,049), both under a single institution. The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score, an automatic natural language evaluation metric that measures word overlap, was used for automated evaluation. A reader study with five cardiothoracic radiologists was performed to more strictly evaluate the model's performance on a specific modality (CT chest exams) against a radiologist subspecialist baseline. We stratified the results of the reader performance study based on the diagnosis category and the original impression length to gauge case complexity. Results: The LLM achieved ROUGE-L scores of 46.51, 44.2, and 50.96 on UCSFMC and, upon external validation, ROUGE-L scores of 40.74, 37.89, and 24.61 on ZSFG across the CT, US, and MRI modalities respectively, implying a substantial degree of overlap between the model-generated impressions and impressions written by the subspecialist attending radiologists, but with a degree of degradation upon external validation. In our reader study, the model-generated impressions achieved overall mean scores of 3.56/4, 3.92/4, 3.37/4, 18.29 s, 12.32 words, and 84, while the original impressions written by a subspecialist radiologist achieved overall mean scores of 3.75/4, 3.87/4, 3.54/4, 12.2 s, 5.74 words, and 89 for clinical accuracy, grammatical accuracy, stylistic quality, edit time, edit distance, and ROUGE-L score respectively. The LLM achieved the highest clinical accuracy ratings for acute/emergent findings and on shorter impressions. Conclusions: An open-source fine-tuned LLM can generate impressions to a satisfactory level of clinical accuracy, grammatical accuracy, and stylistic quality. Our reader performance study demonstrates the potential of large language models in drafting radiology report impressions that can aid in streamlining radiologists' workflows.
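
    ROUGE-L, the overlap metric reported above, scores a generated impression by the longest common subsequence (LCS) it shares with the reference impression. A minimal illustration with whitespace tokenization follows; the study's exact tokenizer, implementation, and beta weighting are not stated, so those details here are assumptions.

    def lcs_length(a, b):
        # Length of the longest common subsequence of token lists a and b.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    def rouge_l(candidate, reference, beta=1.2):
        # ROUGE-L F-measure between a generated and a reference impression.
        cand, ref = candidate.split(), reference.split()
        lcs = lcs_length(cand, ref)
        if lcs == 0:
            return 0.0
        precision, recall = lcs / len(cand), lcs / len(ref)
        return ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)

    # Hypothetical example: two similar chest CT impressions overlap in 3 of 4 tokens.
    print(rouge_l("no acute cardiopulmonary process", "no acute cardiopulmonary abnormality"))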

    A Metric Framework for quantifying Data Concentration

    Get PDF
    Poor performance of artificial neural nets when applied to credit-related classification problems is investigated and contrasted with logistic regression classification. We propose that artificial neural nets are less successful because of the inherent structure of credit data rather than any particular aspect of the neural net structure. Three metrics are developed to rationalise the result with such data. The metrics exploit the distributional properties of the data to rationalise neural net results. They are used in conjunction with a variant of an established concentration measure that differentiates between class characteristics. The results are contrasted with those obtained using random data, and are compared with results obtained using logistic regression. We find, in general agreement with previous studies, that logistic regressions out-perform neural nets in the majority of cases. An approximate decision criterion is developed in order to explain adverse results
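
    The abstract does not define its three metrics, so as a generic stand-in only, the sketch below computes an established concentration measure, the Herfindahl-Hirschman index, separately for two hypothetical credit classes; it illustrates how concentration can differ between class characteristics but is not the paper's metric.

    import numpy as np

    def hhi(values, bins=10):
        # Herfindahl-Hirschman index of a feature's binned distribution:
        # sum of squared bin proportions, from 1/bins (uniform) up to 1 (all in one bin).
        counts, _ = np.histogram(values, bins=bins)
        p = counts / counts.sum()
        return float((p ** 2).sum())

    rng = np.random.default_rng(0)
    good = rng.normal(0.0, 1.0, 1000)       # hypothetical feature values, "good" class
    bad = rng.normal(0.5, 0.3, 200)         # hypothetical feature values, "bad" class
    print(f"HHI good: {hhi(good):.3f}, HHI bad: {hhi(bad):.3f}")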

    Machine Learning Methods for Image-based Personalized Cancer Screening

    No full text
    While AI has the potential to transform patient care, the development of equitable clinical AI models and their translation to hospitals remains difficult. From a computational perspective, these tools must deliver consistent performance across diverse populations and adapt to diverse clinical needs, while learning from biased and scarce data. Moreover, the development of tools relies on our capacity to balance clinical AI utility and patient privacy concerns. In this thesis, I will discuss our contributions in addressing the above challenges in three areas: 1) cancer risk assessment from imaging, 2) personalized screening policy design, and 3) private data sharing through neural obfuscation. I have demonstrated that our clinical models offer significant improvements over the current standard of care across globally diverse patient populations. The models now underlie prospective clinical trials. Ph.D.

    Improving information extraction by acquiring external evidence with reinforcement learning

    No full text
    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 33-35). Most successful information extraction systems operate with access to a large collection of documents. In this work, we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce. This process entails issuing search queries, extraction from new sources and reconciliation of extracted values, which are repeated until sufficient evidence is collected. We approach the problem using a reinforcement learning framework where our model learns to select optimal actions based on contextual information. We employ a deep Q-network, trained to optimize a reward function that reflects extraction accuracy while penalizing extra effort. Our experiments on two databases - of shooting incidents, and food adulteration cases - demonstrate that our system significantly outperforms traditional extractors and a competitive meta-classifier baseline. by Adam Yala. M. Eng.

    Design of a Residential Tower on Top of an Existing Parking Garage

    No full text