
    Applications of machine learning algorithms using texture analysis-derived features extracted from computed tomography and magnetic resonance images

    Radiomics relies on post-processing of images derived from diagnostic examinations such as ultrasound, computed tomography (CT), magnetic resonance (MR) or positron emission tomography, by means of purpose-built algorithms that extract large amounts of quantitative data. One of the main applications of radiomics is texture analysis (TA), a post-processing imaging technique that analyzes the spatial variation of pixel intensity levels within an image, obtaining quantitative data that reflect image heterogeneity. Machine learning (ML) is an application of artificial intelligence for recognizing patterns; applied to medical images, it enables the development of algorithms that can learn and make predictions. The aim of the present work is to illustrate our experience in the TA and ML field using MR and CT images acquired in patients with adrenal lesions and with head and neck cancer, respectively. In particular, we aimed to assess the accuracy of ML algorithms in the differential diagnosis of adrenal lesions and to predict tumor grade and nodal involvement in oropharynx (OP) and oral cavity (OC) squamous cell carcinoma (SCC) using MR and CT images, respectively. According to our results, the ML algorithm using MR-derived texture features correctly classified 80% of adrenal lesions, performing better than a senior radiologist. When applied to CT-derived texture features, the ML classifier also accurately predicted tumor grade and the presence of nodal involvement, and defined N stage, in patients with OC and OP SCC, with diagnostic accuracies of 91.6%, 85.5% and 90%, respectively. Our results support the potential use of ML software employing TA-derived features for the differential diagnosis of solid lesions as well as for the prediction of histological features and the presence of nodal metastases in oncologic patients.
The proven potential of ML to provide quantitative imaging biomarkers, together with the fast development of this technique, will probably lead to its clinical implementation in radiological practice.
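The texture-analysis-plus-classifier pipeline described in this abstract can be sketched in a few lines: compute a grey-level co-occurrence matrix (GLCM) per image, extract classic features such as contrast and homogeneity, and fit a classifier to them. The sketch below is illustrative only: it uses pure NumPy, synthetic "lesion" patches, and a nearest-centroid classifier standing in for the authors' actual ML software, none of which come from the abstract.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    # Quantize intensities in [0, 1] into `levels` grey levels.
    q = np.minimum((img * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(img):
    """Contrast and homogeneity, two classic GLCM-derived texture features."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, homogeneity])

# Toy "lesions": smooth (homogeneous) vs noisy (heterogeneous) patches.
rng = np.random.default_rng(0)
smooth = [np.full((16, 16), 0.5) + rng.normal(0, 0.02, (16, 16)) for _ in range(10)]
noisy = [rng.random((16, 16)) for _ in range(10)]

X = np.array([texture_features(np.clip(p, 0, 1)) for p in smooth + noisy])
y = np.array([0] * 10 + [1] * 10)

# Minimal nearest-centroid classifier standing in for the ML model.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```

Heterogeneous patches produce high GLCM contrast and low homogeneity, so even this trivial classifier separates the two synthetic classes; real radiomics pipelines extract dozens of such features and use far stronger models.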

    Radiological Society of North America (RSNA) 3D printing Special Interest Group (SIG): Guidelines for medical 3D printing and appropriateness for clinical scenarios

    This issue of the journal Cadernos de Estudos Sociais was being organized when we were struck by the death of the sociologist Ernesto Laclau. His passing on 13 April 2014 came as a surprise to everyone, and in particular to the editor Joanildo Burity, who was his doctoral advisee at the University of Essex, England, and who had recently brought him to the Fundação Joaquim Nabuco for a lecture, allowing many to engage in dialogue with one of the great contemporary Latin American intellectuals. We therefore pay tribute to the Argentine sociologist by publishing a previously unpublished interview given during his visit to Recife in 2013, closing this issue with a special section on his career.

    Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images

    [EN] Simple Summary: Tumor segmentation is a key step in oncologic image processing and is a time-consuming process usually performed manually by radiologists. To facilitate it, there is growing interest in applying deep-learning segmentation algorithms. We therefore explore the variability between two observers performing manual segmentation and use the state-of-the-art deep learning architecture nnU-Net to develop a model to detect and segment neuroblastic tumors on MR images. We show that the variability between nnU-Net and manual segmentation is similar to the inter-observer variability in manual segmentation. Furthermore, we compared the time needed to manually segment the tumors from scratch with the time required for the automatic model to segment the same cases, followed by human validation with manual adjustment when needed.
    Tumor segmentation is one of the key steps in image processing. The goals of this study were to assess the inter-observer variability in manual segmentation of neuroblastic tumors and to analyze whether the state-of-the-art deep learning architecture nnU-Net can provide a robust solution to detect and segment tumors on MR images. A retrospective multicenter study of 132 patients with neuroblastic tumors was performed. The Dice Similarity Coefficient (DSC) and the Area Under the Receiver Operating Characteristic Curve (AUC ROC) were used to compare segmentation sets. Two further metrics were devised to understand the direction of the errors: a modified version of the False Positive rate (FPRm) and the False Negative rate (FNR). Two radiologists manually segmented 46 tumors and a comparative study was performed. nnU-Net was trained and tuned with 106 cases divided into five balanced folds for cross-validation. The five resulting models were used as an ensemble to measure training (n = 106) and validation (n = 26) performance independently. The time needed by the model to automatically segment 20 cases was compared to the time required for manual segmentation. The median DSC for manual segmentation sets was 0.969 (IQR +/- 0.032). The median DSC for the automatic tool was 0.965 (IQR +/- 0.018). The automatic segmentation model achieved better performance regarding the FPRm. MR image segmentation variability is similar between radiologists and nnU-Net. The time saved when using the automatic model with posterior visual validation and manual adjustment corresponds to 92.8%.
    This study was funded by PRIMAGE (PRedictive In silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, empowered by imaging biomarkers), a Horizon 2020 RIA project (Topic SC1-DTH-07-2018), grant agreement no. 826494.
    Veiga-Canuto, D.; Cerdà-Alberich, L.; Sangüesa Nebot, C.; Martínez De Las Heras, B.; Pötschger, U.; Gabelloni, M.; Carot Sierra, J. M., et al. (2022). Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images. Cancers. 14(15):1-15. https://doi.org/10.3390/cancers14153648
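The comparison metrics named in this abstract are straightforward to compute on binary masks. Below is a minimal NumPy sketch of the DSC, together with one plausible reading of false-positive and false-negative rates normalized by reference size; the paper defines its own FPRm/FNR variants, so treat these exact formulas as illustrative assumptions.

```python
import numpy as np

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def fpr_m(pred, ref):
    """Over-segmented voxels relative to reference size (assumed FPRm-style metric)."""
    return np.logical_and(pred, ~ref).sum() / ref.sum()

def fnr(pred, ref):
    """Missed reference voxels relative to reference size."""
    return np.logical_and(~pred, ref).sum() / ref.sum()

# Two overlapping toy segmentations of the same "tumor", offset by one voxel.
ref = np.zeros((10, 10), bool); ref[2:8, 2:8] = True    # 36 reference voxels
pred = np.zeros((10, 10), bool); pred[3:9, 3:9] = True  # 25 of them overlap

print(dsc(pred, ref), fpr_m(pred, ref), fnr(pred, ref))
```

DSC rewards overlap symmetrically, while the two rate metrics separate the error direction: here the prediction both adds 11 spurious voxels (FPRm) and misses 11 reference voxels (FNR), which a single DSC value of about 0.69 cannot distinguish.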

    U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation

    Convolutional Neural Networks (CNNs) and Transformers have been the most popular architectures for biomedical image segmentation, but both have a limited ability to handle long-range dependencies, due to inherent locality or computational complexity. To address this challenge, we introduce U-Mamba, a general-purpose network for biomedical image segmentation. Inspired by State Space Sequence Models (SSMs), a new family of deep sequence models known for their strong capability in handling long sequences, we design a hybrid CNN-SSM block that integrates the local feature extraction power of convolutional layers with the strength of SSMs in capturing long-range dependencies. Moreover, U-Mamba enjoys a self-configuring mechanism, allowing it to automatically adapt to various datasets without manual intervention. We conduct extensive experiments on four diverse tasks: 3D abdominal organ segmentation in CT and MR images, instrument segmentation in endoscopy images, and cell segmentation in microscopy images. The results reveal that U-Mamba outperforms state-of-the-art CNN-based and Transformer-based segmentation networks across all tasks. This opens new avenues for efficient long-range dependency modeling in biomedical image analysis. The code, models, and data are publicly available at https://wanglab.ai/u-mamba.html
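The long-range behavior that motivates the hybrid CNN-SSM block can be illustrated with the textbook discrete linear state-space recurrence h[k] = A·h[k-1] + B·u[k], y[k] = C·h[k]. The NumPy sketch below is not U-Mamba's selective scan; it is a generic SSM scan under assumed toy parameters, showing how a slowly decaying state carries information from early inputs to late outputs at linear cost in sequence length.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Run the discrete linear state-space recurrence over a 1-D input sequence:
        h[k] = A @ h[k-1] + B * u[k]
        y[k] = C @ h[k]
    Every output depends on the entire input prefix, which is what gives SSMs
    their long-range receptive field while scanning in O(sequence length)."""
    h = np.zeros(A.shape[0])
    ys = []
    for uk in u:
        h = A @ h + B * uk
        ys.append(C @ h)
    return np.array(ys)

# Tiny 2-state system with one slowly decaying mode (eigenvalue 0.99):
# information injected early persists in the state for many steps.
A = np.array([[0.99, 0.0],
              [0.0,  0.5]])
B = np.array([1.0, 1.0])
C = np.array([0.5, 0.5])

u = np.zeros(100)
u[0] = 1.0                  # a single impulse at the start of the sequence
y = ssm_scan(u, A, B, C)
print(y[0], y[99])          # the impulse is still visible at step 99
```

A convolution of small kernel size would need many stacked layers for step 99 to "see" step 0, and self-attention pays quadratic cost for the same reach; the recurrence gets it in one linear pass, which is the trade-off the abstract's hybrid block exploits.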