
    Solid Spherical Energy (SSE) CNNs for Efficient 3D Medical Image Analysis

    Invariance to local rotation, as opposed to the global rotation of images and objects, is required in various texture analysis problems. It has led to several breakthrough methods such as local binary patterns, maximum response filterbanks and steerable filterbanks. In particular, textures in medical images often exhibit local structures at arbitrary orientations. Locally Rotation Invariant (LRI) Convolutional Neural Networks (CNNs) were recently proposed using 3D steerable filters to combine LRI with Directional Sensitivity (DS). The steerability avoids the expensive cost of convolutions with rotated kernels and comes with a parametric representation that drastically reduces the number of trainable parameters. Yet, the potential bottleneck (memory and computation) of this approach lies in the need to recombine responses for a set of predefined discretized orientations. In this paper, we propose to calculate invariants from the responses to a set of spherical harmonics projected onto 3D kernels, in the form of a lightweight Solid Spherical Energy (SSE) CNN. It offers a compromise between the high kernel specificity of the LRI-CNN and a low memory/operation requirement. The computational gain is evaluated on 3D synthetic and pulmonary nodule classification experiments. The performance of the proposed approach is compared with steerable LRI-CNNs and standard 3D CNNs, showing results competitive with the state of the art.
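
    As an illustration of the energy-based invariants described above, the following is a minimal numpy/scipy sketch, not the authors' implementation: the kernel size, Gaussian radial profile and maximum degree are illustrative assumptions. The volume is convolved with solid spherical harmonic kernels and, for each degree, the summed squared magnitude of the responses over the orders is invariant to local rotations.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import sph_harm

def solid_sh_kernel(degree, order, size=7, sigma=2.0):
    """Solid spherical harmonic r^n Y_n^m windowed by a Gaussian radial profile (illustrative choice)."""
    half = size // 2
    z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1].astype(float)
    r = np.sqrt(x**2 + y**2 + z**2) + 1e-9
    theta = np.arctan2(y, x)                          # azimuth
    phi = np.arccos(np.clip(z / r, -1.0, 1.0))        # polar angle
    return (r ** degree) * np.exp(-r**2 / (2 * sigma**2)) * sph_harm(order, degree, theta, phi)

def sse_features(volume, max_degree=2, size=7, sigma=2.0):
    """For each degree n, sum_m |volume * kernel_{n,m}|^2 is invariant to local rotations."""
    feats = []
    for n in range(max_degree + 1):
        energy = np.zeros(volume.shape)
        for m in range(-n, n + 1):
            resp = fftconvolve(volume, solid_sh_kernel(n, m, size, sigma), mode='same')
            energy += np.abs(resp) ** 2
        feats.append(energy)
    return np.stack(feats)                            # (max_degree + 1, D, H, W)

patch = np.random.rand(32, 32, 32)
print(sse_features(patch).shape)                      # (3, 32, 32, 32)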

    Validating and optimizing the specificity of imaging biomarkers for personalized medicine

    In the last decade, biomedical image analysis has advanced significantly, driven by the use of radiomics and Convolutional Neural Networks (CNNs) to construct predictive models for personalized medicine. These models are built from image features that can be divided into three main categories: intensity, shape, and texture. Although intensity and shape features are essential, texture features have the potential to reveal complex relationships between tumor architecture and patient outcomes. The advancement and comprehension of texture features therefore hold promise as effective strategies for clinicians to enhance disease characterization and facilitate personalized medicine. However, common texture features suffer from several limitations. In this thesis, we reviewed the most common texture features, explaining their advantages and disadvantages and focusing on their robustness to rotations of the images and structures of interest. We proposed a novel method for designing directional image operators that are Locally Rotation Invariant (LRI), implemented using the power spectrum and bispectrum of the circular harmonics expansion for 2D images or the spherical harmonics expansion for 3D images. We further integrated these LRI operators into a convolutional layer and used them in CNNs to obtain various LRI CNNs. We tested several shallow 3D LRI CNNs for classifying benign versus malignant lung nodules and demonstrated the advantages of bispectral LRI CNNs in terms of accuracy and data efficiency. Additionally, we evaluated our bispectral LRI layer in a 2D U-Net for nucleus segmentation in histopathological images and obtained comparable performance between the LRI U-Net and a standard U-Net. Furthermore, we showed that the LRI U-Net was more resilient to input rotations than the standard U-Net. The development of machine learning in biomedical imaging requires large datasets and benchmarks to create robust predictive models. The second contribution of this thesis was therefore to participate in organizing the HEad and neCK tumOR segmentation and outcome prediction in PET/CT images (HECKTOR) challenge. The challenge aimed to benchmark automatic head and neck tumor segmentation methods and prognostic radiomics models. The previous editions attracted strong participation and yielded valuable scientific outcomes, and the third edition of the challenge is currently ongoing. A notable finding was that fully automatic prognosis methods can be effectively tested on large datasets without requiring manual segmentation, which could open the door to more comprehensive benchmarking efforts in the field.
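
    A small self-contained numpy illustration (ours, not the thesis code) of the key property used above: a local rotation only shifts the phases of circular-harmonic coefficients, so the power spectrum and bispectrum built from them are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=5) + 1j * rng.normal(size=5)    # circular-harmonic coefficients c_m, m = 0..4
alpha = 0.7                                         # arbitrary local rotation angle
c_rot = c * np.exp(-1j * np.arange(5) * alpha)      # rotation acts as a per-order phase shift

power = np.abs(c) ** 2                              # power spectrum |c_m|^2
bispec = c[1] * c[2] * np.conj(c[3])                # one bispectrum term c_1 c_2 conj(c_{1+2})

print(np.allclose(power, np.abs(c_rot) ** 2))                        # True
print(np.isclose(bispec, c_rot[1] * c_rot[2] * np.conj(c_rot[3])))   # True
```

    The phases e^{-i m alpha} cancel exactly in both quantities, which is why no discretization of the orientation and no local alignment criterion are needed.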

    Oropharynx detection in PET-CT for tumor segmentation

    We propose an automatic detection of the oropharyngeal region in PET-CT images. This detection can be used to preprocess images for efficient segmentation of Head and Neck (H&N) tumors in the cropped regions by a Convolutional Neural Network (CNN), for treatment planning and large-scale radiomics studies (e.g. prognosis prediction). The method is based on simple image processing steps that segment the brain on the PET image and retrieve a fixed-size bounding box around the extended oropharyngeal region. We evaluate the results by measuring whether the primary Gross Tumor Volume (GTV) is fully contained in the bounding box: 194 out of 201 regions (96.5%) are correctly detected. The code is available on our GitHub repository.
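
    A rough numpy/scipy sketch in the spirit of the pipeline described above; the SUV threshold, box size and anchoring rule are hypothetical placeholders, not the paper's values. The brain is located as the largest high-uptake component on the PET volume and a fixed-size box is cropped just below it.

```python
import numpy as np
from scipy import ndimage

def crop_oropharynx_region(pet_suv, suv_threshold=3.0, box=(64, 96, 96)):
    """pet_suv: 3D SUV volume ordered (z, y, x), with z increasing towards the head."""
    mask = pet_suv > suv_threshold                      # high-uptake voxels
    labels, n = ndimage.label(mask)
    if n == 0:
        raise ValueError("no voxels above the SUV threshold")
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    brain = labels == (int(np.argmax(sizes)) + 1)       # largest component ~ brain
    _, yc, xc = ndimage.center_of_mass(brain)
    z_low = int(brain.any(axis=(1, 2)).argmax())        # inferior-most brain slice
    # fixed-size box anchored just below the brain, centred on its (y, x) centroid
    z0 = max(z_low - box[0], 0)
    y0 = int(np.clip(yc - box[1] // 2, 0, pet_suv.shape[1] - box[1]))
    x0 = int(np.clip(xc - box[2] // 2, 0, pet_suv.shape[2] - box[2]))
    return pet_suv[z0:z0 + box[0], y0:y0 + box[1], x0:x0 + box[2]]
```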

    Head and neck tumor segmentation : first challenge, HECKTOR 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings

    This book constitutes the proceedings of the First 3D Head and Neck Tumor Segmentation in PET/CT Challenge, HECKTOR 2020, which was held in conjunction with the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020, in Lima, Peru, in October 2020. The challenge took place virtually due to the COVID-19 pandemic. The 2 full and 8 short papers presented in this volume, together with an overview paper, were carefully reviewed and selected from numerous submissions. The challenge aims to evaluate and compare the current state-of-the-art methods for automatic head and neck tumor segmentation. In the context of this challenge, a dataset of 204 delineated PET/CT images was made available for training, as well as 53 PET/CT images for testing. Various deep learning methods were developed by the participants, with excellent results.

    Exploring local rotation invariance in 3D CNNs with steerable filters

    Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications and in particular in medical imaging where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI was very little investigated in the context of deep learning. We use trainable 3D steerable filters in CNNs in order to obtain LRI with directional sensitivity, i.e. non-isotropic. Pooling across orientation channels after the first convolution layer releases the constraint on finite rotation groups as assumed in several recent works. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations. We only convolve the input volume with a set of Spherical Harmonics (SHs) modulated by trainable radial supports and directly steer the responses, resulting in a drastic reduction of trainable parameters and of convolution operations, as well as avoiding approximations due to interpolation of rotated kernels. The proposed method is evaluated and compared to standard CNNs on 3D texture datasets including synthetic volumes with rotated patterns and pulmonary nodule classification in CT. The results show the importance of LRI in CNNs and the need for a fine rotation sampling.
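
    A toy sketch of the steer-then-pool idea for the simplest (first-degree) case, assuming a numpy/scipy stack; it illustrates the principle rather than the paper's trainable SH parameterization. Responses to three Gaussian-derivative basis filters are linearly steered to sampled orientations, and a max over orientations gives a locally rotation invariant response.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def first_order_basis(volume, sigma=1.5):
    """Responses to derivative-of-Gaussian kernels along z, y and x."""
    orders = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    return np.stack([gaussian_filter(volume, sigma, order=o) for o in orders])

def orientation_pooled_response(volume, n_dirs=64, sigma=1.5, seed=0):
    basis = first_order_basis(volume, sigma)              # (3, D, H, W)
    dirs = np.random.default_rng(seed).normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # sampled unit orientations
    steered = np.tensordot(dirs, basis, axes=([1], [0]))  # steering: u . (g_z, g_y, g_x)
    return steered.max(axis=0)                            # pool across orientation channels

vol = np.random.rand(24, 24, 24)
print(orientation_pooled_response(vol).shape)             # (24, 24, 24)
```

    Because a first-degree filter oriented along u responds with the dot product of u and the basis responses, only three convolutions are needed regardless of how many orientations are sampled.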

    Local rotation invariance in 3D CNNs

    Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications and in particular in medical imaging where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI was very little investigated in the context of deep learning. LRI designs allow learning filters accounting for all orientations, which enables a drastic reduction of trainable parameters and training data when compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or by using steerable filters; a minimal sketch of the rotated-kernel variant follows below. These orientation channels constitute a locally rotation equivariant representation of the data. Local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), which are products of SHs with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, and pulmonary nodule classification in CT. The results show the importance of LRI image analysis while resulting in a drastic reduction of trainable parameters, outperforming standard 3D CNNs trained with rotational data augmentation.
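
    For the orientation-channels-by-rotated-kernels variant mentioned above, a minimal numpy/scipy sketch is given here; the rotation set and interpolation order are arbitrary illustrative choices, and only in-plane rotations are sampled, which corresponds to the finite-rotation-group approximation discussed in the abstract.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def lri_response_rotated_kernels(volume, kernel, angles_deg=(0, 90, 180, 270)):
    """One kernel, several rotated copies (in-plane only here); max-pool across channels."""
    channels = [fftconvolve(volume,
                            rotate(kernel, a, axes=(1, 2), reshape=False, order=1),
                            mode='same')
                for a in angles_deg]
    return np.max(np.stack(channels), axis=0)

kernel = np.random.rand(5, 5, 5) - 0.5
vol = np.random.rand(32, 32, 32)
print(lri_response_rotated_kernels(vol, kernel).shape)    # (32, 32, 32)
```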

    Robust multi-organ nucleus segmentation using a locally rotation invariant bispectral U-Net

    Locally Rotation Invariant (LRI) operators have shown great potential to robustly identify biomedical textures where discriminative patterns appear at random positions and orientations. We build LRI operators through the local projection of the image on circular harmonics followed by the computation of the bispectrum, which is LRI by design. This formulation avoids discretizing the orientations and does not require any criterion to locally align the descriptors. The operator is used in a convolutional layer, resulting in LRI Convolutional Neural Networks (LRI CNNs). To evaluate the relevance of this approach, we used it to segment cellular nuclei in histopathological images. We compared the proposed bispectral LRI layer against a standard convolutional layer in a U-Net architecture. While they performed equally in terms of F-score, the LRI CNN provided segmentation that was more robust to orientation, even when rotational data augmentation was used. This robustness is essential when the relevant pattern may vary in orientation, which is often the case in medical images.
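
    A hedged sketch of the core operation (our own illustration, not the paper's U-Net layer): per-pixel circular-harmonic coefficients are obtained by convolving with kernels w(r) e^{i m theta}, where the Gaussian radial window w and kernel size are assumptions, and a bispectrum map built from them is locally rotation invariant by design.

```python
import numpy as np
from scipy.signal import fftconvolve

def circular_harmonic_kernel(m, size=9, sigma=2.0):
    """h_m(r, theta) = w(r) * exp(i m theta) with a Gaussian radial window w."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.exp(-r**2 / (2 * sigma**2)) * np.exp(1j * m * theta)

def bispectrum_map(image, m1=1, m2=1):
    """Per-pixel bispectrum c_{m1} c_{m2} conj(c_{m1+m2}); a rotation of the local
    neighbourhood around a pixel cancels out of this product, hence LRI."""
    c = {m: fftconvolve(image, circular_harmonic_kernel(m), mode='same')
         for m in {m1, m2, m1 + m2}}
    return c[m1] * c[m2] * np.conj(c[m1 + m2])

img = np.random.rand(64, 64)
print(bispectrum_map(img).shape)   # (64, 64), complex-valued feature map
```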