    Artificial intelligence in lung cancer diagnostic imaging: a review of the reporting and conduct of research published 2018–2019

    Objective: This study aimed to describe the methodologies used to develop and evaluate models that use artificial intelligence (AI) to analyse lung images in order to detect, segment (outline the borders of), or classify pulmonary nodules as benign or malignant. Methods: In October 2019, we systematically searched the literature for original studies published between 2018 and 2019 that described prediction models using AI to evaluate human pulmonary nodules on diagnostic chest images. Two evaluators independently extracted information from each study, such as study aims, sample size, AI type, patient characteristics, and performance. We summarised the data descriptively. Results: The review included 153 studies: 136 (89%) development-only studies, 12 (8%) development-and-validation studies, and 5 (3%) validation-only studies. CT was the most common type of image used (83%), often acquired from public databases (58%). Eight studies (5%) compared model outputs with biopsy results, and only 41 studies (26.8%) reported patient characteristics. The models were based on different units of analysis, such as patients, images, nodules, or image slices or patches. Conclusion: The methods used to develop and evaluate prediction models that use AI to detect, segment, or classify pulmonary nodules in medical imaging vary, are poorly reported, and are therefore difficult to evaluate. Transparent and complete reporting of methods, results, and code would fill the gaps in information we observed in the study publications. Advances in knowledge: We reviewed the methodology of AI models that detect nodules on lung images and found that the models were poorly reported, often lacked a description of patient characteristics, and only rarely compared model outputs with biopsy results. When lung biopsy is not available, Lung-RADS could help standardise comparisons between the human radiologist and the machine. The field of radiology should not abandon the principles of diagnostic accuracy studies, such as the choice of a correct ground truth, simply because AI is used. Clear and complete reporting of the reference standard used would help radiologists trust the performance that AI models claim. This review presents clear recommendations about the essential methodological aspects of diagnostic models that should be incorporated into studies using AI to detect or segment lung nodules, and it reinforces the need for more complete and transparent reporting, which the recommended reporting guidelines can support.
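
    As a rough illustration of the diagnostic-accuracy reporting the review calls for, here is a minimal Python sketch that compares per-nodule model predictions against a biopsy reference standard and reports sensitivity and specificity with Wilson confidence intervals. The labels are invented for illustration; nothing here reflects the reviewed models.

        import math

        def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
            """Wilson score interval for a binomial proportion k/n."""
            p = k / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
            return (centre - half, centre + half)

        # Hypothetical per-nodule labels: model prediction vs biopsy reference
        # standard (1 = malignant, 0 = benign).
        predicted = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
        biopsy    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

        tp = sum(p == 1 and b == 1 for p, b in zip(predicted, biopsy))
        fn = sum(p == 0 and b == 1 for p, b in zip(predicted, biopsy))
        tn = sum(p == 0 and b == 0 for p, b in zip(predicted, biopsy))
        fp = sum(p == 1 and b == 0 for p, b in zip(predicted, biopsy))

        sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
        spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
        print(f"sensitivity = {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
        print(f"specificity = {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")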

    Poor handling of continuous predictors in clinical prediction models using logistic regression: a systematic review

    Background and Objectives: When developing a clinical prediction model, assuming a linear relationship between the continuous predictors and the outcome is not recommended, since incorrect specification of the functional form of continuous predictors can reduce predictive accuracy. We examined how continuous predictors are handled in studies developing a clinical prediction model. Methods: We searched PubMed for clinical prediction model studies developing a logistic regression model for a binary outcome, published between July 01, 2020, and July 30, 2020. Results: In total, 118 studies were included in the review; 18 studies (15%) assessed the linearity assumption or used methods to handle nonlinearity, and 100 studies (85%) did not. Transformations and splines were the methods most commonly used to handle nonlinearity, in 7/18 (39%) and 6/18 (33%) studies, respectively. Categorization was the most frequently used way of handling continuous predictors (67/118, 56.8%), and most of these studies used dichotomization (40/67, 60%). Only ten models included nonlinear terms in the final model (10/18, 56%). Conclusion: Although it is widely recommended not to categorize continuous predictors or to assume a linear relationship between continuous predictors and the outcome, most studies categorize continuous predictors, few assess the linearity assumption, and even fewer use methods to account for nonlinearity. Methodological guidance is provided to help researchers handle continuous predictors when developing a clinical prediction model.
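
    To make the contrast concrete, here is a minimal sketch that fits the same synthetic binary outcome twice: once with a dichotomized predictor and once with a natural cubic spline basis (patsy's cr() via the statsmodels formula API). The data-generating curve is invented; the point is only that a single cut-point cannot represent a nonlinear risk relationship.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 1000
        age = rng.uniform(30, 90, n)
        # U-shaped log-odds in age: poorly captured by a single cut-point.
        logit_true = 0.002 * (age - 60) ** 2 - 1.0
        y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))
        df = pd.DataFrame({"y": y, "age": age, "age_high": (age >= 60).astype(int)})

        # Model 1: dichotomized predictor (common but discouraged).
        m_cut = smf.logit("y ~ age_high", data=df).fit(disp=False)
        # Model 2: natural cubic spline with 4 degrees of freedom.
        m_spline = smf.logit("y ~ cr(age, df=4)", data=df).fit(disp=False)

        print(f"AIC, dichotomized: {m_cut.aic:.1f}")
        print(f"AIC, spline:       {m_spline.aic:.1f}")  # markedly lower here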

    Sample size requirements are not being considered in studies developing prediction models for binary outcomes: a systematic review

    Background: Having an appropriate sample size is important when developing a clinical prediction model. We aimed to review how sample size is considered in studies developing a prediction model for a binary outcome. Methods: We searched PubMed for studies published between 01/07/2020 and 30/07/2020 and reviewed the sample size calculations used to develop the prediction models. Using the available information, we calculated the minimum sample size that would be needed to estimate overall risk and minimise overfitting in each study, and summarised the difference between the calculated and the used sample size. Results: A total of 119 studies were included, of which nine (8%) provided a sample size justification. The recommended minimum sample size could be calculated for 94 studies: 73% (95% CI: 63–82%) used sample sizes lower than required to estimate overall risk and minimise overfitting, including 26% of studies that used sample sizes lower than required to estimate overall risk alone. A similar proportion of studies did not meet the ≥ 10 EPV criterion (75%, 95% CI: 66–84%). The median deficit in the number of events used to develop a model was 75 (IQR: 234 lower to 7 higher), which reduced to 63 (IQR: 225 lower to 7 higher) if the total available data (before any data splitting) were used. Studies that met the minimum required sample size had a median c-statistic of 0.84 (IQR: 0.80 to 0.90), and studies where the minimum sample size was not met had a median c-statistic of 0.83 (IQR: 0.75 to 0.90). Studies that met the ≥ 10 EPV criterion had a median c-statistic of 0.80 (IQR: 0.73 to 0.84). Conclusions: Prediction models are often developed with no sample size calculation, and as a consequence many are too small to precisely estimate the overall risk. We encourage researchers to justify, perform, and report sample size calculations when developing a prediction model.
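
    For researchers wanting to perform such a calculation, the sketch below implements two of the minimum-sample-size criteria of Riley et al. (BMJ 2020) for a binary outcome, in the spirit of the pmsampsize package; treat it as a sketch rather than a reference implementation, and note that the prevalence, number of candidate parameters, and anticipated Cox-Snell R² below are invented inputs.

        import math

        def min_n_overall_risk(phi: float, margin: float = 0.05) -> int:
            """Estimate the overall outcome risk phi to within +/- margin."""
            return math.ceil((1.96 / margin) ** 2 * phi * (1 - phi))

        def min_n_shrinkage(p: int, r2_cs: float, s: float = 0.9) -> int:
            """Target an expected uniform shrinkage factor >= s, given p
            candidate parameters and an anticipated Cox-Snell R^2."""
            return math.ceil(p / ((s - 1) * math.log(1 - r2_cs / s)))

        phi, p, r2_cs = 0.30, 20, 0.20   # hypothetical study inputs
        n1, n2 = min_n_overall_risk(phi), min_n_shrinkage(p, r2_cs)
        n_min = max(n1, n2)
        print(f"overall-risk criterion: n >= {n1}")
        print(f"shrinkage criterion:    n >= {n2}")
        print(f"minimum sample size:    n >= {n_min} "
              f"(>= {math.ceil(n_min * phi)} events)")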

    The Evolution of the Dark Halo Spin Parameters lambda and lambda' in a LCDM Universe: The Role of Minor and Major Mergers

    The evolution of the spin parameter of dark halos and its dependence on the halo merging history is investigated in a set of dissipationless cosmological LCDM simulations. Special focus is placed on the differences between the two commonly used versions of the spin parameter, namely lambda = J*|E|^(1/2)/(G*M^(5/2)) (Peebles 1980) and lambda' = J/(sqrt(2)*M_vir*R_vir*V_vir) (Bullock et al. 2001). Though the distribution of the spin transfer rate, defined as the ratio of the spin parameters after and prior to a merger, is similar to a high degree for both lambda and lambda', we find considerable differences in the time evolution: while lambda' is roughly independent of redshift, lambda turns out to increase significantly with decreasing redshift. This distinct behaviour arises from small differences in the spin transfer during accretion events. The evolution of the spin parameter is strongly coupled with the virial ratio eta := 2*E_kin/|E_pot| of dark halos. Major mergers disturb halos and increase both their virial ratio and spin parameter for 1-2 Gyr. At high redshifts (z=2-3) many halos are disturbed, with an average virial ratio of eta = 1.3 which approaches unity by z=0. We find that the redshift evolution of the spin parameters is dominated by the huge number of minor mergers rather than the rare major merger events. Comment: 10 pages, 11 figures, submitted to MNRAS
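
    Both definitions can be evaluated directly from bulk halo properties. Below is a minimal sketch for a hypothetical Milky Way-sized halo (all numbers invented); with the rough total-energy estimate E = -M_vir*V_vir^2/2 used here, the two definitions agree by construction, so the interesting differences only appear for realistic energies and angular momenta.

        import math

        G = 6.674e-11  # gravitational constant, SI units

        def lambda_peebles(J: float, E: float, M: float) -> float:
            """Peebles (1980): lambda = J*|E|^(1/2) / (G*M^(5/2))."""
            return J * math.sqrt(abs(E)) / (G * M**2.5)

        def lambda_bullock(J: float, M_vir: float, R_vir: float, V_vir: float) -> float:
            """Bullock et al. (2001): lambda' = J / (sqrt(2)*M_vir*R_vir*V_vir)."""
            return J / (math.sqrt(2) * M_vir * R_vir * V_vir)

        # Hypothetical Milky Way-sized halo.
        M_vir = 1.0e12 * 1.989e30             # kg (10^12 solar masses)
        R_vir = 250 * 3.086e19                # m (250 kpc)
        V_vir = math.sqrt(G * M_vir / R_vir)  # circular velocity at R_vir
        J = 0.035 * math.sqrt(2) * M_vir * R_vir * V_vir  # so that lambda' = 0.035
        E = -0.5 * M_vir * V_vir**2           # rough total-energy estimate

        print(f"lambda' (Bullock) = {lambda_bullock(J, M_vir, R_vir, V_vir):.3f}")
        print(f"lambda  (Peebles) = {lambda_peebles(J, E, M_vir):.3f}")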

    Effects of halo substructure on the power spectrum and bispectrum

    We study the effects of halo substructure and of a distribution in the concentration parameter of haloes on large-scale structure statistics. The effects on the power spectrum and bispectrum are studied on the smallest scales accessible to future surveys. We compare halo-model predictions with results based on N-body simulations, but also extend our predictions to 10-kpc scales, which will be probed by future simulations. We find that weak-lensing surveys proposed for the coming decade can probe the power spectrum on small enough scales to detect substructure in massive haloes. We discuss the prospects of constraining the mass fraction in substructure in view of partial degeneracies with parameters such as the tilt and running of the primordial power spectrum. Comment: 9 pages, 10 figures; replaced with version published in MNRAS; removed grey-scale versions of figures which were being included at the end by the server
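
    The small-scale effect of concentration scatter can be illustrated with the normalized Fourier transform of a truncated NFW profile, averaged over a lognormal concentration distribution; this is only the |u(k)|^2 ingredient of the halo-model one-halo term, and the halo parameters below are illustrative rather than taken from the paper.

        import numpy as np
        from scipy.special import sici

        def u_nfw(k, c, r_vir):
            """Normalized Fourier transform of a truncated NFW profile."""
            x = k * (r_vir / c)
            si1, ci1 = sici(x)
            si2, ci2 = sici((1 + c) * x)
            norm = np.log(1 + c) - c / (1 + c)
            return (np.sin(x) * (si2 - si1)
                    - np.sin(c * x) / ((1 + c) * x)
                    + np.cos(x) * (ci2 - ci1)) / norm

        k = np.logspace(0, 3, 200)                 # h/Mpc, down to ~10-kpc scales
        r_vir, c_med, sigma_lnc = 1.0, 8.0, 0.35   # illustrative halo

        # Monte Carlo average of |u|^2 over a lognormal p(c).
        rng = np.random.default_rng(1)
        cs = c_med * np.exp(sigma_lnc * rng.standard_normal(2000))
        u2_scatter = np.mean([u_nfw(k, c, r_vir) ** 2 for c in cs], axis=0)
        u2_fixed = u_nfw(k, c_med, r_vir) ** 2

        boost = u2_scatter / u2_fixed   # > 1 at high k: scatter adds power
        print(f"boost at k = {k[-1]:.0f} h/Mpc: {boost[-1]:.2f}x")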

    Sequestration of Martian CO2 by mineral carbonation

    Carbonation is the water-mediated replacement of silicate minerals, such as olivine, by carbonate, and is commonplace in the Earth’s crust. This reaction can remove significant quantities of CO2 from the atmosphere and store it over geological timescales. Here we present the first direct evidence for CO2 sequestration and storage on Mars by mineral carbonation. Electron beam imaging and analysis show that olivine and a plagioclase feldspar-rich mesostasis in the Lafayette meteorite have been replaced by carbonate. The susceptibility of olivine to replacement was enhanced by the presence of smectite veins along which CO2-rich fluids gained access to grain interiors. Lafayette was partially carbonated during the Amazonian, when liquid water was available intermittently and atmospheric CO2 concentrations were close to their present-day values. Earlier in Mars’ history, when the planet had a much thicker atmosphere and an active hydrosphere, carbonation is likely to have been an effective mechanism for sequestration of CO2
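
    The storage capacity implied by this reaction is straightforward to estimate. Here is a back-of-the-envelope sketch treating the olivine as pure forsterite and using the textbook carbonation reaction Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2; the Lafayette olivine is actually an Fe-bearing solid solution, so this is a simplifying assumption.

        # Molar masses in g/mol.
        M_MG, M_SI, M_O, M_C = 24.305, 28.085, 15.999, 12.011

        m_forsterite = 2 * M_MG + M_SI + 4 * M_O   # Mg2SiO4, ~140.7 g/mol
        m_co2 = M_C + 2 * M_O                      # ~44.0 g/mol

        # Two moles of CO2 are fixed per mole of forsterite carbonated.
        ratio = 2 * m_co2 / m_forsterite
        print(f"forsterite molar mass: {m_forsterite:.2f} g/mol")
        print(f"CO2 fixed per kg of olivine: {ratio:.2f} kg")   # ~0.63 kg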

    The influence of surface energy on the self-cleaning of insect adhesive devices

    The ability of insects to adhere to surfaces is facilitated by adhesive organs found on the terminal leg segments. These adhesive pads are inherently 'tacky' and are expected to be subject to contamination by particulates, leading to loss of function. Here, we investigated the self-cleaning of ants and beetles by comparing the abilities of both hairy and smooth pad forms to self-clean on both high- and low-energy surfaces after being fouled with microspheres of two sizes and surface energies. We focused on the time taken to regain adhesive potential in unrestrained Hymenopterans (Polyrhachis dives and Myrmica scabrinodis) and Coccinellids (Harmonia axyridis and Adalia bipunctata) fouled with microspheres. We found that the reattainment of adhesion is influenced by particle type and size in Hymenopterans, with an interaction between the surface energy of the contaminating particle and the substrate. In Coccinellids, reattainment of adhesion was influenced only by particle size and substrate properties. The adhesive organs of Coccinellids appear to possess superior self-cleaning abilities compared with those of Hymenopterans, although Hymenopterans exhibit better adhesion to both surface types.

    Power spectrum for the small-scale Universe

    The first objects to arise in a cold dark matter universe present a daunting challenge for models of structure formation. In the ultra-small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses, and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720^3 - 1584^3) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) propto k^n, with -2.5 <= n <= -1. Self-similar scaling is established for n=-1 and n=-2 more convincingly than in previous, lower-resolution simulations, and for the first time self-similar scaling is established for an n=-2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n=-2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasilinear regime. In the nonlinear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al., and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis vs. halo model debate are discussed. Our power spectra are inconsistent with the predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model. Comment: 30 pages including 10 figures; accepted for publication in MNRAS
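
    The self-similar scaling tested here follows from dimensional analysis: for P(k) propto k^n in an Einstein-de Sitter universe (linear growth D propto a), the nonlinear wavenumber defined by Delta^2_lin(k_NL, a) = 1 obeys k_NL(a) propto a^(-2/(n+3)), so spectra measured at different epochs should collapse onto one curve when plotted against k/k_NL(a). A minimal sketch (the normalization A is arbitrary):

        import numpy as np

        def k_nl(a: float, n: float, A: float = 1.0) -> float:
            """Nonlinear scale from Delta^2_lin = A * a^2 * k^(n+3) = 1."""
            return (A * a**2) ** (-1.0 / (n + 3))

        for n in (-1.0, -2.0, -2.25):
            ks = [k_nl(a, n) for a in (1.0, 2.0, 4.0)]
            # Doubling a should multiply k_NL by 2^(-2/(n+3)).
            print(f"n = {n:5.2f}: ratios {ks[1]/ks[0]:.3f}, {ks[2]/ks[1]:.3f}; "
                  f"expected {2 ** (-2 / (n + 3)):.3f}")

    The divergence of the exponent 2/(n+3) as n approaches -3 is the formal statement of the near-simultaneous collapse across scales described in the abstract.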

    A redshift distortion free correlation function at third order in the nonlinear regime

    The zeroth-order component of the cosine expansion of the projected three-point correlation function is proposed for clustering analysis of cosmic large-scale structure. These functions are third-order statistics but can be measured similarly to the projected two-point correlations. Numerical experiments with N-body simulations indicate that the advocated statistics are free of redshift distortion to within 10% in the nonlinear regime on scales ~0.2-10 Mpc/h. The halo model prediction for the zeroth-order component of the projected three-point correlation function agrees with simulations to within ~10%. This lays the groundwork for using these functions to perform joint analyses with the projected two-point correlation functions, exploring galaxy clustering properties in the framework of the halo model and relevant extensions. Comment: 10 pages, 6 figures; MNRAS accepted
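
    The zeroth-order cosine component can be extracted from binned measurements by a simple projection. A toy sketch follows; the binned signal is invented, and the normalization convention is the generic cosine-series one, which may differ from the convention adopted in the paper.

        import numpy as np

        theta = np.linspace(0.0, np.pi, 64)   # opening-angle bins
        zeta = 1.0 + 0.4 * np.cos(theta) + 0.1 * np.cos(2.0 * theta)  # toy signal

        def trapz(y, x):
            """Trapezoid rule (avoids NumPy version differences)."""
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        def cosine_moment(zeta, theta, n):
            """n-th coefficient of zeta(theta) = sum_n zeta_n cos(n*theta) on [0, pi]."""
            weight = 1.0 / np.pi if n == 0 else 2.0 / np.pi
            return weight * trapz(zeta * np.cos(n * theta), theta)

        for n in range(3):
            print(f"zeta_{n} = {cosine_moment(zeta, theta, n):+.3f}")
        # Recovers ~ +1.000, +0.400, +0.100 for the toy input.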