2,483 research outputs found

    In vivo UTE-MRI reveals positive effects of raloxifene on skeletal bound water in skeletally mature beagle dogs

    Raloxifene positively affects mechanical properties of the bone matrix in part through modification of skeletal bound water. The goal of this study was to determine whether raloxifene-induced alterations in skeletal hydration could be measured in vivo using ultrashort echo time magnetic resonance imaging (UTE-MRI). Twelve skeletally mature female beagle dogs (n=6/group) were treated for 6 months with oral doses of saline vehicle (VEH, 1 ml/kg/day) or raloxifene (RAL, 0.5 mg/kg/day). Following six months of treatment, all animals underwent in vivo UTE-MRI of the proximal tibial cortical bone. UTE-MRI signal intensity versus echo time curves were analyzed by fitting a double exponential to determine the short and long relaxation times of water within the bone (estimates of bound and free water, respectively). Raloxifene-treated animals had significantly higher bound water (+14%; p = 0.05) and lower free water (-20%) compared to vehicle-treated animals. These data provide the first evidence that drug-induced changes in skeletal hydration can be non-invasively assessed using UTE-MRI. Funding for this study was provided by NIH (AR 62002 and a BIRT supplement). Raloxifene was provided through an MTA with Eli Lilly.
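    A minimal sketch of the double-exponential fit described in this abstract, using synthetic echo-time data (the echo times, amplitudes, and relaxation values below are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-exponential decay: the short-T2* component tracks bound water,
# the long-T2* component tracks free (pore) water.
def biexponential(te, a_short, t2_short, a_long, t2_long):
    return a_short * np.exp(-te / t2_short) + a_long * np.exp(-te / t2_long)

# Illustrative echo times (ms) and a noisy synthetic signal; real data
# would come from the UTE-MRI acquisition.
te = np.linspace(0.05, 20.0, 40)
rng = np.random.default_rng(0)
signal = biexponential(te, 0.7, 0.35, 0.3, 8.0) + rng.normal(0, 0.005, te.size)

# Bounds keep the short and long components from swapping during the fit.
popt, _ = curve_fit(biexponential, te, signal, p0=[0.5, 0.3, 0.5, 5.0],
                    bounds=([0, 0.01, 0, 1.0], [2, 1.0, 2, 50.0]))
a_s, t_s, a_l, t_l = popt
bound_fraction = a_s / (a_s + a_l)  # apparent bound-water signal fraction
print(f"T2*_short={t_s:.2f} ms, T2*_long={t_l:.2f} ms, "
      f"bound fraction={bound_fraction:.2f}")
```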

    Systematic review finds "spin" practices and poor reporting standards in studies on machine learning-based prediction models

    Objectives We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques. Study Design and Setting We systematically searched PubMed from 01/2018 to 12/2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty. Results We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6% [95% CI 63.4–83.3]) and 53/81 main texts (65.4% [95% CI 54.6–74.9]). Of the 21 abstracts that recommended the model be used in daily practice, 20 (95.2% [95% CI 77.3–99.8]) lacked any external validation of the developed models. Likewise, 74/133 (55.6% [95% CI 47.2–63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in 13/152 (8.6% [95% CI 5.1–14.1]) studies. Conclusion Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin will enhance the sound reporting of prediction model studies.
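    The proportions quoted above are consistent with Wilson score confidence intervals; a short sketch, assuming that choice of interval, reproduces the 53/71 figure from the abstract:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Reproduces 53/71 = 74.6% [95% CI 63.4-83.3] from the abstract.
lo, hi = wilson_ci(53, 71)
print(f"{53/71:.1%} [{lo:.1%}, {hi:.1%}]")
```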

    Overinterpretation of findings in machine learning prediction model studies in oncology: a systematic review

    Objectives In biomedical research, spin (the overinterpretation of findings) is a growing concern. To date, the presence of spin has not been evaluated in prognostic model research in oncology, including studies developing and validating models for individualized risk prediction. Study Design and Setting We conducted a systematic review, searching MEDLINE and EMBASE for oncology-related studies that developed and validated a prognostic model using machine learning published between 1st January, 2019, and 5th September, 2019. We used existing spin frameworks and described areas of highly suggestive spin practices. Results We included 62 publications (152 developed models; 37 validated models). Reporting was inconsistent between the methods and results in 27% of studies, owing to additional analyses and selective reporting. Thirty-two studies (of 36 applicable) reported comparisons between developed models in their discussion and predominantly used discrimination measures to support their claims (78%). Thirty-five studies (56%) used an overly strong or leading word in their title, abstract, results, discussion, or conclusion. Conclusion The potential for spin needs to be considered when reading, interpreting, and using studies that developed and validated prognostic models in oncology. Researchers should carefully report their prognostic model research using words that reflect their actual results and strength of evidence.
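    A toy illustration of one check such spin frameworks describe, flagging overly strong or leading words in a title or abstract (the word list here is invented for illustration and is not the review's actual framework):

```python
import re

# Illustrative (not the review's) list of overly strong or leading words.
LEADING_WORDS = ["excellent", "outstanding", "perfectly", "robust",
                 "superior", "promising", "novel"]

def flag_leading_words(text):
    """Return any leading words found in a title or abstract."""
    return [w for w in LEADING_WORDS
            if re.search(rf"\b{w}\b", text, flags=re.IGNORECASE)]

title = "A novel model shows excellent performance for predicting survival"
print(flag_leading_words(title))  # ['excellent', 'novel']
```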

    Systematic review identifies the design and methodological conduct of studies on machine learning-based prediction models

    Background and Objectives We sought to summarize the study design, modelling strategies, and performance measures reported in studies on clinical prediction models developed using machine learning techniques. Methods We searched PubMed for articles published between 01/01/2018 and 31/12/2019, describing the development or the development with external validation of a multivariable prediction model using any supervised machine learning technique. No restrictions were made based on study design, data source, or predicted patient-related health outcomes. Results We included 152 studies: 58 (38.2% [95% CI 30.8–46.1]) were diagnostic and 94 (61.8% [95% CI 53.9–69.2]) prognostic studies. Most studies reported only the development of prediction models (n = 133, 87.5% [95% CI 81.3–91.8]), focused on binary outcomes (n = 131, 86.2% [95% CI 79.8–90.8]), and did not report a sample size calculation (n = 125, 82.2% [95% CI 75.4–87.5]). The most common algorithms used were support vector machine (n = 86/522, 16.5% [95% CI 13.5–19.9]) and random forest (n = 73/522, 14% [95% CI 11.3–17.2]). Values for area under the Receiver Operating Characteristic curve ranged from 0.45 to 1.00. Calibration metrics were often missing (n = 494/522, 94.6% [95% CI 92.4–96.3]). Conclusion Our review revealed that focus is required on handling of missing values, methods for internal validation, and reporting of calibration to improve the methodological conduct of studies on machine learning-based prediction models. Systematic review registration: PROSPERO CRD42019161764.
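    To illustrate why calibration should be reported alongside discrimination, a hedged scikit-learn sketch on synthetic data (the dataset, model, and metric choices are illustrative, not drawn from the reviewed studies):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative binary-outcome data; the reviewed models used clinical data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = np.clip(model.predict_proba(X_te)[:, 1], 1e-6, 1 - 1e-6)

# Discrimination: area under the ROC curve (what most studies reported).
print("AUROC:", roc_auc_score(y_te, p))

# Calibration: Brier score, plus the calibration slope obtained by
# refitting the outcome against the logit of the predicted risks.
print("Brier score:", brier_score_loss(y_te, p))
logit_p = np.log(p / (1 - p)).reshape(-1, 1)
slope = LogisticRegression(max_iter=1000).fit(logit_p, y_te).coef_[0, 0]
print("Calibration slope:", slope)  # ~1.0 indicates good calibration
```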

    Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed

    Objectives: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement. Study Design and Setting: We searched PubMed (January 2018–May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs. Results: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD. Conclusion: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.
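    One common way to incorporate risk of bias assessments into a meta-analysis, in the spirit of the recommendations above, is a sensitivity analysis that re-pools after excluding high risk of bias studies; a minimal fixed-effect inverse-variance sketch with invented study data:

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g., log odds ratios), their
# standard errors, and an overall risk-of-bias judgment per study.
effects = np.array([0.30, 0.10, 0.45, 0.20, 0.60])
ses = np.array([0.10, 0.15, 0.12, 0.08, 0.20])
rob = np.array(["low", "low", "high", "low", "high"])

def pool_fixed(effects, ses):
    """Fixed-effect inverse-variance pooled estimate and standard error."""
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))

est_all, se_all = pool_fixed(effects, ses)
keep = rob == "low"
est_low, se_low = pool_fixed(effects[keep], ses[keep])
print(f"All studies:  {est_all:.3f} (SE {se_all:.3f})")
print(f"Low RoB only: {est_low:.3f} (SE {se_low:.3f})")
```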

    Iron Phosphate Glass-Containing Hanford Waste Simulant

    Resolution of the nation's high-level tank waste legacy requires the design, construction, and operation of large, technically complex, one-of-a-kind waste processing, treatment, and vitrification facilities. While the ultimate limits for waste loading and melter efficiency have yet to be defined or realized, significant reductions in glass volumes for disposal and in mission life may be possible with advancements in melter technologies and/or glass formulations. This test report describes the experimental results from a small-scale test using the research-scale melter (RSM) at Pacific Northwest National Laboratory (PNNL) to demonstrate the viability of iron-phosphate-based glass with a selected waste composition that is high in sulfate (4.37 wt% SO3). The primary objective of the test was to develop data to support a cost-benefit analysis related to the implementation of phosphate-based glasses for Hanford low-activity waste (LAW) and/or other high-level waste streams within the U.S. Department of Energy complex. The testing was performed by PNNL and supported by Idaho National Laboratory, Savannah River National Laboratory, Missouri University of Science and Technology, and Mo-Sci Corporation.

    3D Coronal Density Reconstruction and Retrieving the Magnetic Field Structure during Solar Minimum

    Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal phenomena at all scales. We employed STEREO/COR1 data obtained during a deep minimum of solar activity in February 2008 (Carrington rotation CR 2066) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 Rsun using a tomography method. With this, we qualitatively deduced structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 Å band obtained by tomography for the same CR. A global 3D MHD model of the solar corona was used to relate the reconstructed 3D density and emissivity to open/closed magnetic field structures. We show that the density maximum locations can serve as an indicator of current sheet position, while the locations of the density gradient maximum can be a reliable indicator of coronal hole boundaries. We find that the magnetic field configuration during CR 2066 has a tendency to become radially open at heliocentric distances greater than 2.5 Rsun. We also find that the potential field model with a fixed source surface (PFSS) is inconsistent with the boundaries between the regions with open and closed magnetic field structures. This indicates that the assumption of the potential nature of the coronal global magnetic field is not satisfied even during the deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling. (Published in Solar Physics.)
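    A toy sketch of the two boundary indicators described above: on a synthetic latitudinal density profile at fixed heliocentric height, the density maximum marks the current sheet and the steepest density gradient marks a coronal hole boundary (the profile is invented; the study used tomographic STEREO/COR1 reconstructions):

```python
import numpy as np

# Synthetic latitudinal density profile at a fixed height: dense near the
# equatorial streamer belt, tenuous over the polar coronal holes.
lat = np.linspace(-90, 90, 361)                   # latitude, degrees
density = 1.0 + 4.0 * np.exp(-(lat / 25.0) ** 2)  # arbitrary units

# Density maximum -> indicator of the current sheet position.
print("current sheet near lat =", lat[np.argmax(density)])

# Maximum |d(density)/d(lat)| -> indicator of a coronal hole boundary.
grad = np.gradient(density, lat)
print("hole boundary near lat =", lat[np.argmax(np.abs(grad))])
```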