    A practical solution to estimate the sample size required for clinical prediction models generated from observational research on data

    Background: Estimating the required sample size is crucial when developing and validating clinical prediction models, yet there is no consensus on how to determine it in this setting. The goal here was to compare available methods and define a practical solution to sample size estimation for clinical prediction models, using the Horizon 2020 PRIMAGE project as a case study. Methods: Three methods (Riley's method and the "rule of thumb" with 10 and 5 events per predictor) were used to calculate the sample size required to develop predictive models, and the variation in sample size was analysed as a function of different parameters. Subsequently, the sample size for model validation was also estimated. Results: To develop reliable predictive models, 1397 neuroblastoma patients, 1060 high-risk neuroblastoma patients, and 1345 diffuse intrinsic pontine glioma (DIPG) patients are required. This sample size can be lowered by reducing the number of variables included in the model, by including direct measures of the outcome to be predicted, and/or by increasing the follow-up period. For model validation, the estimated sample size was 326 patients for neuroblastoma, 246 for high-risk neuroblastoma, and 592 for DIPG. Conclusions: Given the variability of the sample sizes obtained, we recommend methods based on epidemiological data and the nature of the results, as these are tailored to the specific clinical problem. In addition, the sample size can be reduced by lowering the number of predictor parameters and by including direct measures of the outcome of interest.
    This work is funded by the HORIZON 2020 PRIMAGE project (RIA, topic SC1-DTH-07-2018) from the EU Framework Programme for Research and Innovation of the European Commission.
    Baeza-Delgado, C.; Cerdá Alberich, L.; Carot Sierra, JM.; Veiga-Canuto, D.; Martinez De Las Heras, B.; Raza, B.; Marti-Bonmati, L. (2022). A practical solution to estimate the sample size required for clinical prediction models generated from observational research on data. European Radiology Experimental. 6(1). https://doi.org/10.1186/s41747-022-00276-y
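
    As an illustration of how the events-per-predictor "rule of thumb" drives such numbers, here is a minimal Python sketch (not the authors' code; the predictor count and outcome prevalence are hypothetical example values):

```python
# Minimal sketch of the events-per-variable (EPV) "rule of thumb":
# the expected number of outcome events must be at least EPV times the
# number of candidate predictors. The prevalence and predictor count
# below are hypothetical example values, not figures from the paper.
import math

def epv_sample_size(n_predictors: int, outcome_prevalence: float, epv: int = 10) -> int:
    """Total patients needed so that expected events >= epv * n_predictors."""
    required_events = epv * n_predictors
    return math.ceil(required_events / outcome_prevalence)

# Example: 8 candidate predictors, 30% event rate.
for rule in (10, 5):
    print(f"EPV={rule}: {epv_sample_size(8, 0.30, epv=rule)} patients")
```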

    Independent Validation of a Deep Learning nnU-Net Tool for Neuroblastoma Detection and Segmentation in MR Images

    Tumor segmentation is a key step in oncologic image processing. We recently developed a model to detect and segment neuroblastic tumors on MR images based on the nnU-Net deep learning architecture. In this work, we performed an independent validation of the automatic segmentation tool with a large heterogeneous dataset, reviewing the automatic segmentations and manually editing them when necessary. The network located and segmented the primary tumor on T2-weighted images in the majority of cases, with extremely high agreement between the automatic tool and the manually edited masks, and the time needed for manual adjustment was very low. Objectives: To externally validate and assess the accuracy of a previously trained, fully automatic nnU-Net CNN algorithm to identify and segment primary neuroblastoma tumors in MR images in a large pediatric cohort. Methods: An international multicenter, multivendor imaging repository of patients with neuroblastic tumors was used to validate the performance of a trained machine learning (ML) tool to identify and delineate primary neuroblastoma tumors. The dataset was heterogeneous and completely independent from the one used to train and tune the model, consisting of 300 children with neuroblastic tumors and 535 T2-weighted MR sequences (486 at diagnosis and 49 after completion of the first phase of chemotherapy). The automatic segmentation algorithm was based on an nnU-Net architecture developed within the PRIMAGE project. For comparison, the segmentation masks were manually edited by an expert radiologist, and the time for manual editing was recorded. Different overlap and spatial metrics were calculated to compare both masks. Results: The median Dice Similarity Coefficient (DSC) was high: 0.997 (Q1-Q3, 0.944-1.000). In 18 MR sequences (6%), the network was unable to identify or segment the tumor. No differences were found regarding MR magnetic field strength, type of T2 sequence, or tumor location, and no significant differences in performance were found in patients with an MR performed after chemotherapy. The time for visual inspection of the generated masks was 7.9 ± 7.5 s (mean ± standard deviation, SD); the cases that required manual editing (136 masks) took 124 ± 120 s. Conclusions: The automatic CNN located and segmented the primary tumor on T2-weighted images in 94% of cases, with extremely high agreement between the automatic tool and the manually edited masks. This is the first study to validate an automatic segmentation model for neuroblastic tumor identification and segmentation with body MR images. The semi-automatic approach, with minor manual editing of the deep learning segmentation, increases the radiologist's confidence in the solution with a minor workload for the radiologist.
    This study was funded by PRIMAGE (PRedictive In-silico Multiscale Analytics to support cancer personalized diagnosis and prognosis, empowered by imaging biomarkers), a Horizon 2020 RIA project (topic SC1-DTH-07-2018), grant agreement no. 826494.
    Veiga-Canuto, D.; Cerdá-Alberich, L.; Jimenez-Pastor, A.; Carot Sierra, JM.; Gomis-Maya, A.; Sangüesa Nebot, C.; Fernandez-Patón, M.... (2023). Independent Validation of a Deep Learning nnU-Net Tool for Neuroblastoma Detection and Segmentation in MR Images. Cancers. 15(5). https://doi.org/10.3390/cancers15051622
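
    For reference, the Dice Similarity Coefficient used to compare the automatic and manually edited masks can be computed as in this minimal sketch (not the PRIMAGE implementation; the toy masks are invented for illustration):

```python
# Minimal sketch of the Dice Similarity Coefficient (DSC) between two
# binary segmentation masks of identical shape (toy example).
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2*|A intersect B| / (|A| + |B|); 1.0 if both masks are empty."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy 3D masks: the "manual edit" removes one voxel from the automatic mask.
auto = np.zeros((4, 4, 4), dtype=bool)
auto[1:3, 1:3, 1:3] = True
edited = auto.copy()
edited[1, 1, 1] = False
print(f"DSC = {dice_similarity(auto, edited):.3f}")  # 0.933
```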

    Artificial Intelligence on FDG PET Images Identifies Mild Cognitive Impairment Patients with Neurodegenerative Disease

    The purpose of this project was to develop and validate a deep learning (DL) FDG PET imaging algorithm able to identify patients with any neurodegenerative disease (Alzheimer's disease (AD), frontotemporal degeneration (FTD), or dementia with Lewy bodies (DLB)) among patients with mild cognitive impairment (MCI). A 3D convolutional neural network was trained using images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The ADNI dataset used for model training and testing consisted of 822 subjects (472 AD and 350 MCI). Validation was performed on an independent dataset from La Fe University and Polytechnic Hospital containing 90 subjects with MCI: 71 of them developed a neurodegenerative disease (64 AD, 4 FTD, and 3 DLB), while 19 did not. The model achieved 79% accuracy, 88% sensitivity, and 71% specificity in identifying patients with neurodegenerative diseases when tested on the held-out 10% of the ADNI dataset, with an area under the receiver operating characteristic curve (AUC) of 0.90. On external validation, the model maintained 80% balanced accuracy, 75% sensitivity, 84% specificity, and an AUC of 0.86. This binary classifier based on FDG PET images allows the early prediction of neurodegenerative diseases in MCI patients in standard clinical settings with an overall 80% balanced classification accuracy.
    This work was financially supported by INBIO 2019 (DEEPBRAIN) and INNVA1/2020/83 (DEEPPET), funded by Generalitat Valenciana, and PID2019-107790RB-C22, funded by MCIN/AEI/10.13039/501100011033/. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research provides funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
    Prats-Climent, J.; Gandia-Ferrero, MT.; Torres-Espallardo, I.; Álvarez-Sanchez, L.; Martinez-Sanchis, B.; Cháfer-Pericás, C.; Gómez-Rico, I.... (2022). Artificial Intelligence on FDG PET Images Identifies Mild Cognitive Impairment Patients with Neurodegenerative Disease. Journal of Medical Systems. 46(8):1-13. https://doi.org/10.1007/s10916-022-01836-w
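
    The reported metrics follow directly from a binary confusion matrix; a minimal sketch (the error counts below are invented to roughly match the external validation figures, not taken from the paper):

```python
# Minimal sketch relating the reported metrics to a binary confusion matrix.
# Counts are hypothetical, chosen so that the 71/19 split of the external
# cohort roughly reproduces the reported figures.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)   # recall on subjects who developed disease
    specificity = tn / (tn + fp)   # recall on subjects who did not
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "balanced_accuracy": (sensitivity + specificity) / 2,
    }

print(classification_metrics(tp=53, fp=3, tn=16, fn=18))
# sensitivity ~0.75, specificity ~0.84, balanced accuracy ~0.79
```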

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical, and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations, and authorities. This work describes the FUTURE-AI guideline, the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness, and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal, and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development, and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed, and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI to clinical practice.

    Measurement of Higgs boson properties in the diphoton decay channel and a search for di-Higgs production in the gamma gamma b anti-b final state with the ATLAS detector

    The hunt for the Higgs boson was the centerpiece of the physics programs of the experiments at the Large Hadron Collider during Run 1 of data-taking. The discovery of this particle, announced on July 4th, 2012 by the ATLAS and CMS collaborations, represented a milestone in clarifying the mechanism of electroweak symmetry breaking, by which fundamental particles acquire mass. It is now essential that the Higgs boson be studied extensively: precise measurements of its properties will confirm its nature, and any deviation from the Standard Model prediction will represent a clear sign of new physics. This thesis presents two physics analyses performed with the ATLAS detector at the Large Hadron Collider, using proton-proton collision data collected during 2015 and 2016 at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 36.1 fb^-1.
    The first analysis is a search for resonant and non-resonant Higgs boson pair production in the gamma gamma b anti-b final state. No significant deviations from the Standard Model predictions are observed. The observed (expected) 95% CL upper limit on the cross section for non-resonant production is 0.73 pb (0.93 pb), corresponding to 22 (28) times the predicted Standard Model cross section, which improves the previous ATLAS Run 2 non-resonant result by a factor of five. For resonant di-Higgs production decaying to gamma gamma b anti-b, a limit is presented in the narrow-width approximation as a function of the resonance mass. The observed (expected) limits range between 1.14 pb (0.90 pb) and 0.12 pb (0.15 pb) for resonance masses from 260 GeV to 1000 GeV.
    The second analysis is the measurement of the total Higgs boson production-mode cross sections, signal strengths, and simplified template cross sections, as well as the measurement of the fiducial and differential cross sections in the diphoton decay channel. The measured signal strength confirms the ATLAS Run 1 diphoton signal strength measurement, with around a factor of two improvement in each component of the uncertainty. The precision of these measurements is currently dominated by their statistical uncertainties, but it is expected to improve over the coming years of the LHC: as more data are collected, the statistical uncertainty will decrease. Special emphasis is given to the strategy followed to estimate the uncertainty in the modeling of the parton shower, underlying event, and hadronization, which is especially challenging due to the difficulty of generating sufficient events for each event reconstruction category, fiducial region, or bin of a fiducial differential cross section; this thesis lays the foundations for the estimation of this uncertainty, which is expected to improve in the near future.
    The reconstruction and identification techniques for the relevant objects, such as photons and jets, are covered extensively, and a validation of the calorimeter energy scale is performed using the response of the ATLAS Tile Calorimeter (TileCal) to single isolated charged hadrons in proton-proton collision data obtained at center-of-mass energies of 7 and 8 TeV during 2010-2012. The results show that the double ratio of the mean ⟨E/p⟩ between data and Monte Carlo simulation is approximately one, with deviations from unity below 5%, possibly due to miscalibration of the electromagnetic scale in the data or to differences in the Monte Carlo description of the relatively complex hadronic shower development. In the Long Barrel region of the calorimeter, agreement at the 3% level is maintained despite sizeable changes in beam conditions.
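
    For clarity, the double ratio referred to above has the standard form below (a sketch, assuming E is the calorimeter energy matched to a track and p the track momentum measured in the inner detector; the precise selection is that of the thesis):

```latex
R = \frac{\langle E/p \rangle_{\mathrm{data}}}{\langle E/p \rangle_{\mathrm{MC}}}
```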

    A Confidence Habitats Methodology in MR Quantitative Diffusion for the Classification of Neuroblastic Tumors

    Background/Aim: In recent years, the apparent diffusion coefficient (ADC) has been used in many oncology applications as a surrogate marker of tumor cellularity and aggressiveness, although several factors may introduce bias when calculating this coefficient. The goal of this study was to develop a novel methodology (Fit-Cluster-Fit) based on confidence habitats that could be applied to quantitative diffusion-weighted magnetic resonance images (DWIs) to enhance the power of ADC values to discriminate between benign and malignant neuroblastic tumor profiles in children. Methods: Histogram analysis and clustering-based algorithms were applied to DWIs from 33 patients to discriminate tumor voxels into two classes. Voxel uncertainties were quantified and incorporated to obtain a more reproducible and meaningful estimate of ADC values within a tumor habitat. Computational experiments were performed by smearing the ADC values to obtain confidence maps that help identify and remove noisy, low-quality voxels within high-signal clustered regions. The proposed Fit-Cluster-Fit methodology was compared with two other approaches: a conventional voxel-based strategy and a cluster-based strategy. Results: The cluster-based and Fit-Cluster-Fit models successfully differentiated benign and malignant neuroblastic tumor profiles when using values from the lower ADC habitat. In particular, the best sensitivity (91%) and specificity (89%) of all the combinations and methods explored were achieved by removing uncertainties at a 70% confidence threshold, improving the standard voxel-based sensitivity and negative predictive value by 4% and 10%, respectively. Conclusions: The Fit-Cluster-Fit method improves the performance of imaging biomarkers in classifying pediatric solid tumors, and it can probably be adapted to dynamic signal evaluation for any tumor.
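
    The clustering step of such a habitat analysis can be sketched as follows (a hypothetical illustration with synthetic ADC values, not the authors' Fit-Cluster-Fit code):

```python
# Hypothetical sketch of the habitat-clustering step: tumor-voxel ADC values
# are split into two habitats with k-means and the lower-ADC habitat is
# summarized. All values are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic voxel populations, ADC in 1e-6 mm^2/s
adc = np.concatenate([rng.normal(700, 80, 400), rng.normal(1400, 120, 300)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(adc.reshape(-1, 1))
low = min(range(2), key=lambda k: adc[labels == k].mean())  # lower-ADC habitat
print(f"lower-ADC habitat: mean ADC = {adc[labels == low].mean():.0f}, "
      f"n = {int((labels == low).sum())} voxels")
```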