    Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial.

    Introduction: Several methods have been developed to electronically monitor patients for severe sepsis, but few provide predictive capabilities to enable early intervention; furthermore, no severe sepsis prediction system had previously been validated in a randomised study. We tested the use of a machine learning-based severe sepsis prediction system for reductions in average length of stay and in-hospital mortality rate.
    Methods: We conducted a randomised controlled clinical trial at two medical-surgical intensive care units at the University of California, San Francisco Medical Center from December 2016 to February 2017, evaluating the primary outcome of average length of stay and the secondary outcome of in-hospital mortality rate. Adult patients (18+) admitted to participating units were eligible for this factorial, open-label study. Enrolled patients were assigned to a trial arm by a random allocation sequence. In the control group, only the existing severe sepsis detector was used; in the experimental group, the machine learning algorithm (MLA) was also used. On receiving an alert, the care team evaluated the patient and initiated the severe sepsis bundle, if appropriate. Although participants were randomly assigned to a trial arm, group assignments were automatically revealed for any patients who received MLA alerts.
    Results: Outcomes from 75 patients in the control group and 67 patients in the experimental group were analysed. Average length of stay decreased from 13.0 days in the control group to 10.3 days in the experimental group (p=0.042). In-hospital mortality decreased by 12.4 percentage points when using the MLA (p=0.018), a relative reduction of 58.0%. No adverse events were reported during this trial.
    Conclusion: The MLA was associated with improved patient outcomes. This is the first randomised controlled trial of a sepsis surveillance system to demonstrate statistically significant reductions in length of stay and in-hospital mortality.
    Trial registration: NCT03015454.
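    For readers checking the arithmetic: the abstract reports only the absolute drop (12.4 percentage points) and the relative reduction (58.0%). The per-arm mortality rates in the sketch below are back-solved from those two figures for illustration; they are inferred, not reported.

        # Back-solve the implied mortality rates from the two reported numbers.
        # These rates are inferred for illustration; the abstract does not state them.
        absolute_drop_pp = 12.4        # reported absolute decrease, in percentage points
        relative_reduction = 0.580     # reported relative reduction, 58.0%

        control_rate = absolute_drop_pp / relative_reduction   # ~21.4% implied
        experimental_rate = control_rate - absolute_drop_pp    # ~9.0% implied

        print(f"implied control mortality:      {control_rate:.1f}%")
        print(f"implied experimental mortality: {experimental_rate:.1f}%")
        print(f"relative reduction check:       {absolute_drop_pp / control_rate:.1%}")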

    The Transformative Impact of Extracellular Vesicles on Developing Sperm

    Objective: To review the role of extracellular vesicles (EVs) released from the male reproductive tract and their impact on developing sperm. We discuss how sperm exiting the seminiferous tubules, although developmentally mature, require further modification. Acquisition of various functions, including increased motility, transfer of cargoes and the ability to undertake the acrosome reaction, is mediated through the interaction between sperm and EVs.
    Methods: A review of the literature identified that EVs are released from different portions of the male reproductive tract, notably the epididymis and prostate. These EVs interact with sperm as they pass from the seminiferous tubules to the epididymis and vas deferens prior to ejaculation.
    Results: EVs are small lipid-bound particles carrying bespoke RNA, protein and lipid cargoes. These cargoes are loaded based on the state of the parent cell and are used to communicate with recipient cells. In sperm, these cargoes are essential for post-testicular modification.
    Conclusions: Interactions between developing sperm and EVs are important for the subsequent function of sperm. Prior to ejaculation, these interactions confer important changes for the post-testicular modification and development of sperm. Little is known about the interaction between EVs from the testes and the spermatogonial stem cell niche or developing sperm within the seminiferous tubules. However, the numerous roles of EVs in the post-testicular modification of sperm have led many to suspect that they may also play important roles in developing sperm within the testes.

    Trapping of magnetic flux by the plunge region of a black hole accretion disk

    The existence of the radius of marginal stability means that accretion flows around black holes invariably undergo a transition from an MHD-turbulent, disk-like flow to an inward plunging flow. We argue that the plunging inflow can greatly enhance the trapping of large-scale magnetic field on the black hole, and therefore may increase the importance of the Blandford-Znajek (BZ) effect relative to previous estimates that ignore the plunge region. We support this hypothesis by constructing and analyzing a toy model of the dragging and trapping of a large-scale field by a black hole disk, revealing a strong dependence of this effect on the effective magnetic Prandtl number of the MHD-turbulent disk. Furthermore, we show that the enhancement of the BZ effect depends on the geometric thickness of the accretion disk. This may be, at least in part, the physical underpinning of the empirical relation between the inferred geometric thickness of a black hole disk and the presence of a radio jet.
    Comment: 18 pages, 3 figures, accepted for publication in the Astrophysical Journal. See http://www.astro.umd.edu/~chris/publications/movies/flux_trapping.html for animation.
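    The paper's toy model is not reproduced here, but the competition it describes (inward dragging of flux by accretion versus outward slippage by turbulent diffusion, controlled by the effective magnetic Prandtl number Pm) can be illustrated with the standard 1D flux-transport equation for a disk's poloidal field. The sketch below is an illustration under assumed simplifications (a crude inflow profile v_r = -nu/r and diffusivity eta = nu/Pm), not the authors' model; evolve_flux and its parameters are invented for this example.

        import numpy as np

        # evolve_flux integrates d(psi)/dt = -v_r d(psi)/dr + eta * r * d/dr((1/r) d(psi)/dr),
        # where psi = r * A_phi is the poloidal flux function. Pm sets the balance
        # between inward dragging (advection) and outward slippage (diffusion).
        def evolve_flux(Pm, nr=400, r_in=1.0, r_out=100.0, nu=1.0, t_end=200.0):
            r = np.linspace(r_in, r_out, nr)
            dr = r[1] - r[0]
            eta = nu / Pm                    # turbulent magnetic diffusivity
            v_r = -nu / r                    # crude viscous inflow profile (assumed)
            psi = 0.5 * r**2                 # initially uniform vertical field, B_z = 1
            dt = 0.2 * min(dr**2 / (2 * eta), dr / np.abs(v_r).max())  # explicit stability
            t = 0.0
            while t < t_end:
                adv = -v_r * (np.roll(psi, -1) - psi) / dr   # upwind difference for inflow
                diff = eta * r * np.gradient(np.gradient(psi, dr) / r, dr)
                psi[1:-1] += dt * (adv + diff)[1:-1]
                psi[-1] = 0.5 * r[-1]**2     # hold the large-scale field fixed far out
                psi[0] = psi[1]              # zero-gradient inner boundary
                t += dt
            return r, psi

        for Pm in (0.3, 1.0, 3.0):
            r, psi = evolve_flux(Pm)
            i = np.searchsorted(r, 2.0)
            print(f"Pm={Pm}: flux inside r~2, relative to initial: {psi[i] / (0.5 * r[i]**2):.2f}")

    Higher Pm (weaker diffusion relative to the inflow) accumulates more flux at small radii, the qualitative dependence the abstract highlights.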

    Extracellular vesicles in urological malignancies

    Extracellular vesicles (EVs) are small lipid-bound structures released from cells, containing bioactive cargoes. Both the type and amount of cargo loaded vary relative to that of the parent cell. The characterisation of EVs in cancers of the male urogenital tract has identified several cargoes with promising diagnostic and disease-monitoring potential. EVs released by cancers of the male urogenital tract promote cell-to-cell communication, migration and cancer progression, and manipulate the immune system, promoting metastasis by evading the immune response. Their use as diagnostic biomarkers represents a new area of screening and disease detection, potentially reducing the need for invasive biopsies. Many validated EV cargoes have been found to offer sensitivity and specificity superior to those of diagnostic tools currently in use. The use of EVs to improve disease monitoring and develop novel therapeutics will enable clinicians to individualise patient management in the exciting era of personalised medicine.

    Meta-Learning Online Adaptation of Language Models

    Large language models encode surprisingly broad knowledge about the world into their parameters. However, the knowledge in static language models can fall out of date, limiting the model's effective "shelf life." While online fine-tuning can reduce this degradation, we find that fine-tuning on a stream of documents using standard optimizers such as Adam leads to a disappointingly low level of information uptake. We hypothesize that online fine-tuning does not sufficiently 'attend' to important information. That is, the gradient signal from important tokens representing factual information is drowned out by the gradient from inherently noisy tokens, suggesting a dynamic, context-aware learning rate may be beneficial. To test this hypothesis, we meta-train a small, autoregressive model to reweight the language modeling loss for each token during online fine-tuning, with the objective of maximizing the out-of-date base language model's ability to answer questions about a document after a single weighted gradient step. We call this approach Context-aware Meta-learned Loss Scaling (CaMeLS). Across three different distributions of documents, our experiments find that fine-tuning on streams of thousands of documents with CaMeLS substantially improves knowledge retention compared to standard online fine-tuning. Finally, we find that the meta-learned weights are general, and that a single reweighting model can be used to enhance the online adaptation of many LMs.
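    A rough PyTorch sketch of the core mechanism follows; it is not the released CaMeLS code. The weight_model interface (one non-negative weight per token) and the HF-style .logits attribute on the base LM are assumptions for this example. The weighting model scores each token, the base LM takes one gradient step on the reweighted LM loss, and during meta-training the step stays differentiable so a QA loss on the updated parameters can train the weighting model.

        import torch
        import torch.nn.functional as F

        def weighted_lm_loss(base_lm, weight_model, input_ids):
            # Per-token next-token cross-entropy under the base LM
            logits = base_lm(input_ids).logits[:, :-1]        # (B, T-1, V); assumes HF-style output
            targets = input_ids[:, 1:]                        # (B, T-1)
            per_token = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                targets.reshape(-1),
                reduction="none",
            ).view_as(targets)
            # Small autoregressive weighting model emits one non-negative weight per token
            weights = weight_model(input_ids)[:, 1:]          # (B, T-1); assumed interface
            return (weights * per_token).sum() / weights.sum()

        def adapt_on_document(base_lm, weight_model, doc_ids, lr=1e-5):
            # One weighted inner update; create_graph=True keeps the step
            # differentiable, so an outer QA loss on the updated parameters
            # can backpropagate into weight_model during meta-training.
            loss = weighted_lm_loss(base_lm, weight_model, doc_ids)
            params = list(base_lm.parameters())
            grads = torch.autograd.grad(loss, params, create_graph=True)
            return [p - lr * g for p, g in zip(params, grads)]  # updated parameters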

    DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature

    The increasing fluency and widespread usage of large language models (LLMs) highlight the desirability of corresponding tools aiding detection of LLM-generated text. In this paper, we identify a property of the structure of an LLM's probability function that is useful for such detection. Specifically, we demonstrate that text sampled from an LLM tends to occupy negative curvature regions of the model's log probability function. Leveraging this observation, we then define a new curvature-based criterion for judging if a passage is generated from a given LLM. This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5). We find DetectGPT is more discriminative than existing zero-shot methods for model sample detection, notably improving detection of fake news articles generated by 20B parameter GPT-NeoX from 0.81 AUROC for the strongest zero-shot baseline to 0.95 AUROC for DetectGPT. See https://ericmitchell.ai/detectgpt for code, data, and other project information.
    Comment: ICML 2023.
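    The criterion itself is compact. A minimal sketch follows, where log_prob (the candidate model's total log probability of a passage) and perturb (a T5-style mask-and-refill rewriter) are stand-in callables you must supply, not part of any released API.

        import numpy as np

        def detectgpt_score(text, log_prob, perturb, n_perturbations=100):
            # Machine-written text tends to sit near a local maximum (negative
            # curvature) of the model's log-probability surface, so small
            # rewrites lower log p more than they do for human-written text.
            lp_original = log_prob(text)
            lp_perturbed = np.array(
                [log_prob(perturb(text)) for _ in range(n_perturbations)]
            )
            d = lp_original - lp_perturbed.mean()     # perturbation discrepancy
            return d / (lp_perturbed.std() + 1e-8)    # normalized; higher => more likely model-generated

        # A passage is flagged as machine-generated when the score exceeds a
        # threshold tuned on validation data.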

    Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models

    A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and beneficial machine learning systems. To mitigate this risk, we propose the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks while retaining good performance on desired tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning, showing that it can largely prevent a BERT-based model from learning to perform gender identification without harming the model's ability to perform profession classification. We conclude with a discussion of future directions.
    Comment: Presented at the First Workshop of Pre-training: Perspectives, Pitfalls, and Paths Forward (ICML, 2022) and the New Frontiers in Adversarial Machine Learning Workshop (ICML, 2022).
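    A minimal sketch of the task-blocking objective follows. It assumes a classifier-style model that maps inputs to logits and uses torch.func.functional_call for a differentiable simulated fine-tune; this illustrates the paradigm, not the paper's exact meta-learning/adversarial algorithm.

        import torch
        import torch.nn.functional as F
        from torch.func import functional_call

        def functional_loss(model, params, batch):
            # Evaluate the model with an explicit parameter dict (keeps updates differentiable)
            inputs, labels = batch
            logits = functional_call(model, params, (inputs,))
            return F.cross_entropy(logits, labels)

        def task_blocking_loss(model, harmful_batch, desired_batch,
                               inner_lr=1e-3, inner_steps=3):
            # Simulate an adversary fine-tuning the released weights on the
            # harmful task for a few steps, keeping the updates differentiable.
            params = dict(model.named_parameters())
            adapted = params
            for _ in range(inner_steps):
                h_loss = functional_loss(model, adapted, harmful_batch)
                grads = torch.autograd.grad(h_loss, list(adapted.values()),
                                            create_graph=True)
                adapted = {k: p - inner_lr * g
                           for (k, p), g in zip(adapted.items(), grads)}
            # Train so the desired task stays easy while the harmful task
            # remains hard even AFTER the simulated adaptation.
            post_adapt_harm = functional_loss(model, adapted, harmful_batch)
            desired = functional_loss(model, params, desired_batch)
            return desired - post_adapt_harm   # minimize this during pre-training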

    Fine-tuning Language Models for Factuality

    The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines. Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations.' These errors can inadvertently spread misinformation or harmfully perpetuate misconceptions. Further, manual fact-checking of model responses is a time-consuming process, making human factuality labels expensive to acquire. In this work, we fine-tune language models to be more factual without human labeling, targeting more open-ended generation settings than past work. We leverage two key recent innovations in NLP to do so. First, several recent works have proposed methods for judging the factuality of open-ended text by measuring consistency with an external knowledge base or simply with a large model's confidence scores. Second, the direct preference optimization algorithm enables straightforward fine-tuning of language models on objectives other than supervised imitation, using a preference ranking over possible model responses. We show that learning from automatically generated factuality preference rankings, generated either through existing retrieval systems or our novel retrieval-free approach, significantly improves the factuality (percent of generated claims that are correct) of Llama-2 on held-out topics compared with RLHF or decoding strategies targeted at factuality. At 7B scale, compared to Llama-2-chat, we observe 58% and 40% reductions in factual error rate when generating biographies and answering medical questions, respectively.
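    The DPO objective at the center of this recipe is a single logistic loss on log-probability ratios. A minimal sketch follows; the four inputs are summed log-probabilities of the preferred (more factual) and dispreferred responses under the trained policy and the frozen reference model. How the pairs are ranked (retrieval-based or confidence-based factuality scoring) is described in the abstract and not shown here.

        import torch.nn.functional as F

        def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
            # Implicit reward of each response: beta * log(pi / pi_ref)
            chosen = beta * (policy_logp_w - ref_logp_w)
            rejected = beta * (policy_logp_l - ref_logp_l)
            # Push the factual response's implicit reward above the other's
            return -F.logsigmoid(chosen - rejected).mean()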