16 research outputs found

    Morphine for elective endotracheal intubation in neonates: a randomized trial [ISRCTN43546373]

    Get PDF
    BACKGROUND: Elective endotracheal intubations are still commonly performed without premedication in many institutions. The hypothesis tested in this study was that morphine given prior to elective intubations in neonates would decrease fluctuations in vital signs, shorten the duration of intubation and reduce the number of attempts. METHODS: From December 1999 to September 2000, infants of all gestations admitted to a level III neonatal intensive care unit and requiring an elective endotracheal intubation were randomly assigned to receive morphine 0.2 mg/kg IV or placebo 5 minutes before intubation. Duration of severe hypoxemia (HR < 90/min and SpO2 < 85%), duration of procedure, duration of hypoxemia (SpO2 < 85%), number of attempts and change in mean blood pressure were compared between groups. RESULTS: 34 infants (median 989 g and 28 weeks gestation) were included. The duration of severe hypoxemia was similar between groups. Duration of procedure, duration of hypoxemia, number of attempts and increases in mean blood pressure were also similar between groups. 94% of infants experienced bradycardia during the procedure. CONCLUSION: We failed to demonstrate the effectiveness of morphine in reducing the physiological instability or time needed to perform elective intubations. Alternatives, perhaps with more rapid onset of action, should be considered.

    Get it from the Source: Identifying Library Resources and Software Used in Faculty Research

    Get PDF
    Libraries and Information Technology departments aim to support the educational and research needs of students, researchers, and faculty members. Close matches between the resources those departments provide and the resources the institution’s community members actually use highlight the value of the departments, demonstrate fiscal responsibility, and show attentiveness to the community’s needs. Traditionally, libraries rely on usage statistics to guide collection development decisions, but usage statistics can only imply value. Identifying a resource by name in a publication demonstrates the value of that resource more clearly. This pilot project examined the full text of articles published in 2016-2017 by faculty members at a mid-sized, special-focus institution to answer the questions “Do faculty members have university-provided access to the research tools they need to publish?” and “If not, where are they getting them?” Using a custom database, the presenters indexed every publication by author, publication, resources used, availability of the identified resources, and more. This pilot study can be adapted to projects at other institutions, allowing them to gain a better understanding of the strengths and weaknesses of their own institution’s offerings. In addition, they will be able to identify ways to use that data to negotiate for additional resources, inform strategic partnerships, and facilitate open discussions with the institution’s community.
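
    The custom index described above lends itself to a small relational schema. Below is a minimal sketch in Python/SQLite; the table and column names (publication, resource, publication_resource, library_provided) are assumptions for illustration, not the presenters' actual database design.

```python
import sqlite3

# Minimal sketch of an index of publications and the resources they cite.
# All table and column names are assumed for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE publication (
    id     INTEGER PRIMARY KEY,
    author TEXT NOT NULL,
    title  TEXT NOT NULL,
    year   INTEGER
);
CREATE TABLE resource (
    id               INTEGER PRIMARY KEY,
    name             TEXT NOT NULL,      -- database, dataset, or software named in the article
    library_provided INTEGER NOT NULL    -- 1 if the institution provides access, else 0
);
CREATE TABLE publication_resource (
    publication_id INTEGER REFERENCES publication(id),
    resource_id    INTEGER REFERENCES resource(id)
);
""")

# One question the pilot asks: which cited resources lack university-provided
# access, and how often are they used in faculty publications?
rows = conn.execute("""
    SELECT r.name, COUNT(*) AS mentions
    FROM resource r
    JOIN publication_resource pr ON pr.resource_id = r.id
    WHERE r.library_provided = 0
    GROUP BY r.name
    ORDER BY mentions DESC
""").fetchall()
print(rows)
```

    A query of this kind would surface the "where are they getting them?" gap directly from the index rather than from implied usage statistics.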

    Systematic reviews and tech mining: A methodological comparison with case study

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/147169/1/jrsm1318_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147169/2/jrsm1318.pd

    Finishing the euchromatic sequence of the human genome

    Get PDF
    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Comparison of Amitriptyline and US Food and Drug Administration-Approved Treatments for Fibromyalgia: A Systematic Review and Network Meta-analysis

    No full text
    IMPORTANCE: Amitriptyline is an established medication used off-label for the treatment of fibromyalgia, but pregabalin, duloxetine, and milnacipran are the only pharmacological agents approved by the US Food and Drug Administration (FDA) to treat fibromyalgia. OBJECTIVE: To investigate the comparative effectiveness and acceptability associated with pharmacological treatment options for fibromyalgia. DATA SOURCES: Searches of PubMed/MEDLINE, Cochrane Library, Embase, and ClinicalTrials.gov were conducted on November 20, 2018, and updated on July 29, 2020. STUDY SELECTION: Randomized clinical trials (RCTs) comparing amitriptyline or any FDA-approved doses of investigated drugs. DATA EXTRACTION AND SYNTHESIS: This study follows the Preferred Reporting Items for Systematic Reviews and Meta-analyses reporting guideline. Four independent reviewers extracted data using a standardized data extraction sheet and assessed quality of RCTs. A random-effects Bayesian network meta-analysis (NMA) was conducted. Data were analyzed from August 2020 to January 2021. MAIN OUTCOMES AND MEASURES: Comparative effectiveness and acceptability (defined as discontinuation of treatment owing to adverse drug reactions) associated with amitriptyline (off-label), pregabalin, duloxetine, and milnacipran (on-label) in reducing fibromyalgia symptoms. The following doses were compared: 60-mg and 120-mg duloxetine; 150-mg, 300-mg, 450-mg, and 600-mg pregabalin; 100-mg and 200-mg milnacipran; and amitriptyline. Effect sizes are reported as standardized mean differences (SMDs) for continuous outcomes and odds ratios (ORs) for dichotomous outcomes with 95% credible intervals (95% CrIs). Findings were considered statistically significant when the 95% CrI did not include the null value (0 for SMD and 1 for OR). Relative treatment ranking using the surface under the cumulative ranking curve (SUCRA) was also evaluated. RESULTS: A total of 36 studies (11 930 patients) were included. The mean (SD) age of patients was 48.4 (10.4) years, and 11 261 patients (94.4%) were women. Compared with placebo, amitriptyline was associated with reduced sleep disturbances (SMD, -0.97; 95% CrI, -1.10 to -0.83), fatigue (SMD, -0.64; 95% CrI, -0.75 to -0.53), and improved quality of life (SMD, -0.80; 95% CrI, -0.94 to -0.65). Duloxetine 120 mg was associated with the highest improvement in pain (SMD, -0.33; 95% CrI, -0.36 to -0.30) and depression (SMD, -0.25; 95% CrI, -0.32 to -0.17) vs placebo. All treatments were associated with inferior acceptability (higher dropout rate) than placebo, except amitriptyline (OR, 0.78; 95% CrI, 0.31 to 1.66). According to the SUCRA-based relative ranking of treatments, duloxetine 120 mg was associated with higher efficacy for treating pain and depression, while amitriptyline was associated with higher efficacy for improving sleep, fatigue, and overall quality of life. CONCLUSIONS AND RELEVANCE: These findings suggest that clinicians should consider how treatments could be tailored to individual symptoms, weighing the benefits and acceptability, when prescribing medications to patients with fibromyalgia.
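
    Two quantitative conventions in this abstract can be made concrete with a short sketch: the credible-interval rule used to call a finding significant, and the SUCRA ranking metric. The Python below is illustrative only, not the review's code; the interval values are taken from the abstract, while the rank probabilities are hypothetical.

```python
def excludes_null(lower, upper, null_value):
    """Significance rule from the abstract: a 95% CrI that does not
    contain the null value (0 for SMD, 1 for OR) is called significant."""
    return not (lower <= null_value <= upper)

# Values taken from the abstract above.
print(excludes_null(-1.10, -0.83, 0.0))  # amitriptyline vs placebo, sleep SMD -> True (significant)
print(excludes_null(0.31, 1.66, 1.0))    # amitriptyline acceptability OR -> False (not significant)

def sucra(rank_probabilities):
    """SUCRA for one treatment, given its posterior rank probabilities
    (rank_probabilities[j] = probability of holding rank j+1, best first).
    SUCRA is the mean of the cumulative rank probabilities over the first
    a-1 ranks: 1.0 means certainly best, 0.0 certainly worst."""
    a = len(rank_probabilities)
    cumulative = 0.0
    total = 0.0
    for p in rank_probabilities[:-1]:   # ranks 1 .. a-1
        cumulative += p
        total += cumulative
    return total / (a - 1)

# Hypothetical rank probabilities for one treatment in a 4-arm network
# (illustrative numbers only, not results from the review).
print(round(sucra([0.6, 0.3, 0.08, 0.02]), 3))  # 0.827
```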

    FDA-approved machine learning algorithms in neuroradiology: A systematic review of the current evidence for approval

    No full text
    Over the past decade, machine learning (ML) and artificial intelligence (AI) have become increasingly prevalent in the medical field. In the United States, the Food and Drug Administration (FDA) is responsible for regulating AI algorithms as medical devices to ensure patient safety. However, recent work has shown that the FDA approval process may be deficient. In this study, we evaluate the evidence supporting FDA-approved neuroalgorithms, the subset of machine learning algorithms with applications in the central nervous system (CNS), through a systematic review of the primary literature. Articles covering the 53 FDA-approved algorithms with applications in the CNS published in PubMed, EMBASE, Google Scholar and Scopus between database inception and January 25, 2022 were queried. Initial searches identified 1505 studies, of which 92 articles met the criteria for extraction and inclusion. Studies were identified for 26 of the 53 neuroalgorithms, of which 10 algorithms had only a single peer-reviewed publication. Performance metrics were available for 15 algorithms, external validation studies were available for 24 algorithms, and studies exploring the use of algorithms in clinical practice were available for 7 algorithms. Papers studying the clinical utility of these algorithms focused on three domains: workflow efficiency, cost savings, and clinical outcomes. Our analysis suggests that there is a meaningful gap between the FDA approval of machine learning algorithms and their clinical utilization. There appears to be room for process improvement by implementation of the following recommendations: the provision of compelling evidence that algorithms perform as intended, mandating minimum sample sizes, reporting of a predefined set of performance metrics for all algorithms, and clinical application of algorithms prior to widespread use. This work will serve as a baseline for future research into the ideal regulatory framework for AI applications worldwide.

    Systematic Reviews and Tech Mining: A Methodological Comparison with Case Study

    No full text
    When the Medical Library Association identified questions critical for the future of the profession, it assigned groups to use systematic reviews to find the answers to these questions. Group 6, whose question was on emerging technologies, recognized early on that the systematic review process would not work well for this question, which looks forward to predict future trends, whereas the systematic review process looks back in time. We searched for new methodologies that were more appropriate to our question, developing a process that combined systematic review, text mining, and visualization techniques. We then discovered tech mining, which is very similar to the process we had created. In this paper, we describe our research design and compare tech mining and systematic review methodologies. There are similarities and differences in each process: both use a defined research question, deliberate database selection, careful and iterative search strategy development, broad data collection, and thoughtful data analysis. However, the focus of the research differs significantly, with systematic reviews looking to the past and tech mining mainly to the future. Our comparison demonstrates that each process can be enhanced by a purposeful consideration of the procedures of the other. Tech mining would benefit from the inclusion of a librarian on the research team and from greater attention to standards and collaboration in the research project. Systematic reviews would gain from the use of tech mining tools to enrich their data analysis and from corporate management communication techniques to promote the adoption of their findings.
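
    As a rough illustration of the kind of text-mining step such a combined process might include, the sketch below counts candidate technology terms by publication year to surface possible emerging topics. The corpus, years, and term list are invented for the example and are not data from the paper.

```python
import re
from collections import Counter

# Illustrative sketch only: count candidate technology terms per publication
# year to surface possible emerging topics. The corpus and term list are
# hypothetical, not data from the study.
corpus = [
    (2015, "Linked data and discovery layers in academic libraries"),
    (2017, "Machine learning approaches to metadata enrichment"),
    (2018, "Machine learning and text mining for collection analysis"),
]
terms = ["machine learning", "linked data", "text mining"]

counts_by_year = {}
for year, title in corpus:
    text = title.lower()
    bucket = counts_by_year.setdefault(year, Counter())
    for term in terms:
        bucket[term] += len(re.findall(re.escape(term), text))

for year in sorted(counts_by_year):
    print(year, dict(counts_by_year[year]))
```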