Spin-related magnetoresistance of n-type ZnO:Al and Zn_{1-x}Mn_{x}O:Al thin films
Effects of spin-orbit coupling and s-d exchange interaction are probed by magnetoresistance measurements carried out down to 50 mK on ZnO and Zn_{1-x}Mn_{x}O with x = 3 and 7%. The films were obtained by laser ablation and doped with Al to electron concentrations of ~10^{20} cm^{-3}. A quantitative description of the data for ZnO:Al in terms of weak-localization theory makes it possible to determine the coupling constant \lambda_{so} = (4.4 ± 0.4)×10^{-11} eV·cm of the k·p Hamiltonian for the wurtzite structure, H_{so} = \lambda_{so} c·(s × k). The complex and large magnetoresistance of Zn_{1-x}Mn_{x}O:Al is interpreted in terms of the influence of the s-d spin-splitting and of magnetic-polaron formation on the disorder-modified electron-electron interactions. It is suggested that the proposed model explains the origin of the magnetoresistance observed recently in many magnetic oxide systems.
Comment: 4 pages, 4 figures
Making Radiomics More Reproducible across Scanner and Imaging Protocol Variations: A Review of Harmonization Methods
Radiomics converts medical images into mineable data via a high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validation of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss various harmonization solutions that make radiomic features more reproducible across different scanners and protocol settings. The harmonization solutions are divided into two main categories: image domain and feature domain. The image domain category comprises methods such as the standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature domain category consists of methods such as the identification of reproducible features and normalization techniques such as statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect upon the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially those using generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods, especially GAN- and NST-based methods, in more detail than previous reviews.
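Among the feature-domain techniques listed above, ComBat adjusts each feature's distribution with a batch-wise location-scale model. The sketch below illustrates only that core idea, without the empirical-Bayes shrinkage or biological-covariate preservation of the full ComBat method; function and variable names are illustrative:

```python
import numpy as np

def combat_location_scale(features, batches):
    """Simplified location-scale harmonization in the spirit of ComBat.

    features: (n_samples, n_features) radiomic feature matrix
    batches:  (n_samples,) scanner/protocol label per sample
    Returns features with each batch's per-feature mean and variance
    aligned to the pooled mean and variance.
    """
    features = np.asarray(features, dtype=float)
    batches = np.asarray(batches)
    out = np.empty_like(features)
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0, ddof=1)
    for b in np.unique(batches):
        mask = batches == b
        mu = features[mask].mean(axis=0)             # batch location
        sigma = features[mask].std(axis=0, ddof=1)   # batch scale
        out[mask] = (features[mask] - mu) / sigma * grand_std + grand_mean
    return out
```

Note that real ComBat additionally shrinks the per-batch estimates with empirical Bayes and protects clinically relevant covariates; this fragment shows only the location-scale adjustment that those refinements build on.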
Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge
By focusing on metabolic and morphological tissue properties respectively, FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) modalities include complementary and synergistic information for cancerous lesion delineation and characterization (e.g. for outcome prediction), in addition to the usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of the second edition of HECKTOR, held at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition, by providing a larger population (adding patients from a new clinical center) and proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, the participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda count of the ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at the 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was obtained according to the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, for a total of 448 submissions and 29 papers.
The best method in the segmentation task obtained an average DSC of 0.759, and the best predictions of PFS obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that the best PFS predictions were obtained with deep learning approaches (with or without explicit tumor segmentation; 4 of the 5 best-ranked methods) rather than with standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated and/or larger volumes encompassing surrounding tissues) rather than the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still a margin for improvement in some difficult cases. For the first time, the prediction of outcome was also addressed, and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step forward toward large-scale outcome prediction studies in HNC.
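The segmentation ranking above relies on the Dice Similarity Coefficient, which measures the voxel-wise overlap between a predicted mask and the ground-truth contour. A minimal illustration of the metric (not the challenge's evaluation code; the empty-mask convention is an assumption):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: treated here as perfect agreement
        return 1.0
    return 2.0 * intersection / total
```

For example, a prediction covering two voxels of which one lies inside a one-voxel ground truth yields DSC = 2·1/(2+1) ≈ 0.667.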
Ferromagnetic semiconductors
The current status and prospects of research on ferromagnetism in semiconductors are reviewed. The question of the origin of ferromagnetism in europium chalcogenides, chromium spinels and, particularly, in diluted magnetic semiconductors is addressed. The nature of the electronic states derived from the 3d shells of magnetic impurities is discussed in some detail. Results of a quantitative comparison between experimental and theoretical results, notably for Mn-based III-V and II-VI compounds, are presented. This comparison demonstrates that the current theory of the exchange interactions mediated by holes in the valence band correctly describes the values of the Curie temperature T_C, the magnetic anisotropy, the domain structure, and the magnetic circular dichroism. On this basis, chemical trends are examined and shown to lead to the prediction of semiconductor systems with T_C that may exceed room temperature, an expectation that is being confirmed by recent findings. Results for materials containing magnetic ions other than Mn are also presented, emphasizing that double exchange involving hopping through d states may operate in those systems.
Comment: 18 pages, 8 figures; special issue of Semicond. Sci. Technol. on semiconductor spintronics
Why is the Winner the Best?
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
Image Magnification Regression Using DenseNet for Exploiting Histopathology Open Access Content
Open access medical content databases such as PubMed Central and TCGA offer possibilities to obtain large amounts of images for training deep learning models. Nevertheless, accurate labels are often unavailable for large-scale medical datasets, which makes using them challenging. Predicting unknown magnification levels and standardizing staining procedures are necessary preprocessing steps for using these data in retrieval and classification tasks. In this paper, a CNN-based regression approach to learn the magnification of histopathology images is presented, comparing two deep learning architectures tailored to regress the magnification. The performance of the models is compared on a dataset of 34,441 breast cancer patches at several magnifications. The best model, a fusion of DenseNet-based CNNs, obtained a kappa score of 0.888. The methods are also evaluated qualitatively on a set of images from biomedical journals and on TCGA prostate patches.
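The kappa score reported above measures agreement between predicted and true magnification classes beyond what chance would produce. A generic, unweighted Cohen's kappa sketch (the paper may use a weighted variant; this only illustrates the statistic):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement rate and p_e the agreement expected by chance from
    the two raters' marginal class frequencies."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)                     # observed agreement
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (p_o - p_e) / (1.0 - p_e)
```

Unlike raw accuracy, kappa stays near 0 for a predictor that merely reproduces the class frequencies, which makes it a stricter score for tasks such as magnification-level prediction.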
Local rotation invariance in 3D CNNs
Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications, and in particular in medical imaging, where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI has been little investigated in the context of deep learning. LRI designs allow learning filters accounting for all orientations, which enables a drastic reduction of trainable parameters and training data when compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or by using steerable filters. These orientation channels constitute a locally rotation equivariant representation of the data. Local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), which are products of SH with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI, based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, and pulmonary nodule classification in CT. The results show the importance of LRI image analysis while resulting in a drastic reduction of trainable parameters, outperforming standard 3D CNNs trained with rotational data augmentation.
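The orientation-channel mechanism above (responses to rotated copies of a kernel, followed by pooling across orientations) can be demonstrated in 2D with exact 90-degree rotations. This toy sketch only illustrates why orientation pooling yields invariance; it is not the steerable solid-SH parameterization used in the paper:

```python
import numpy as np

def orientation_pooled_response(image, kernel):
    """Correlate an image with four 90°-rotated copies of a kernel
    (orientation channels) and max-pool across orientations, giving a
    response map insensitive to 90° rotations of the local pattern."""
    h, w = kernel.shape
    H, W = image.shape
    channels = []
    for k in range(4):                       # four orientation channels
        rk = np.rot90(kernel, k)
        resp = np.zeros((H - h + 1, W - w + 1))
        for i in range(resp.shape[0]):       # valid cross-correlation
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(image[i:i+h, j:j+w] * rk)
        channels.append(resp)
    # pooling across the orientation channels yields the LRI response
    return np.max(np.stack(channels), axis=0)
```

Because rotating the input simply permutes (and rotates) the orientation channels, the pooled response of a rotated image is a rotation of the pooled response of the original, so any subsequent rotation-invariant readout (e.g. a global maximum) is unchanged.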
Generalizing convolution neural networks on stain color heterogeneous data for computational pathology
Hematoxylin and Eosin (H&E) is one of the main tissue stains used in histopathology to discriminate between nuclei and extracellular material during visual analysis of the tissue. However, histopathology slides are often characterized by stain color heterogeneity, due to different tissue preparation settings at different pathology institutes. Stain color heterogeneity poses challenges for machine-learning-based computational analysis, increasing the difficulty of producing consistent diagnostic results and systems that generalize well. In other words, it is challenging for a deep learning architecture to generalize on stain color heterogeneous data when the data are acquired at several centers, and particularly when test data come from a center not represented in the training data. In this paper, several methods that deal with stain color heterogeneity are compared regarding their capability to solve center-dependent heterogeneity. Systematic and extensive experimentation is performed on a normal versus tumor tissue classification problem. Stain color normalization and augmentation procedures are used while training a convolutional neural network (CNN) to generalize on unseen data from several centers. The performance is compared on an internal test set (test data from the same pathology institutes as the training set) and an external test set (test data from institutes not included in the training set), which also allows measuring generalization performance. An improved performance is observed when the predictions of the two best-performing stain color normalization methods with augmentation are aggregated. The average AUC and F1-score on the external test set are 0.892 ± 0.021 and 0.817 ± 0.032, compared to the baseline values of 0.860 ± 0.027 and 0.772 ± 0.024, respectively.
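A common form of the stain color augmentation compared above deconvolves RGB pixels into stain concentrations, jitters them, and recomposes the image. A minimal sketch using the widely used Ruifrok-Johnston stain vectors; the exact matrix, jitter ranges, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Ruifrok-Johnston optical-density vectors for Hematoxylin, Eosin and a
# residual channel (rows are stains, columns are R, G, B).
STAIN_MATRIX = np.array([[0.65, 0.70, 0.29],
                         [0.07, 0.99, 0.11],
                         [0.27, 0.57, 0.78]])

def augment_he(rgb, alpha=0.05, beta=0.01, rng=None):
    """Perturb stain concentrations, c' = c * (1 + a) + b, with a drawn from
    U(-alpha, alpha) and b from U(-beta, beta) per stain, then rebuild RGB.

    rgb: float array of shape (H, W, 3) with values in (0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    od = -np.log(np.clip(rgb, 1e-6, 1.0))                    # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)   # stain concentrations
    a = rng.uniform(-alpha, alpha, size=3)
    b = rng.uniform(-beta, beta, size=3)
    conc = conc * (1 + a) + b                                # jitter per stain
    od_aug = conc @ STAIN_MATRIX
    return np.clip(np.exp(-od_aug), 0.0, 1.0).reshape(rgb.shape)
```

Jittering in the stain-concentration space, rather than in RGB, produces color variations that mimic real staining differences between laboratories, which is why such augmentation helps CNNs generalize to unseen centers.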
Fully Automatic Head and Neck Cancer Prognosis Prediction in PET/CT
Several recent PET/CT radiomics studies have shown promising results for the prediction of patient outcomes in Head and Neck (H&N) cancer. These studies, however, are most often conducted on relatively small cohorts (up to 300 patients) and using manually delineated tumors. Recently, deep learning reached high performance in the automatic segmentation of H&N primary tumors in PET/CT. The automatic segmentation could be used to validate these studies on larger-scale cohorts while obviating the burden of manual delineation. We propose a complete PET/CT processing pipeline gathering the automatic segmentation of primary tumors and the prognosis prediction of patients with H&N cancer treated with radiotherapy and chemotherapy. Automatic contours of the primary Gross Tumor Volume (GTVt) are obtained from a 3D UNet. A radiomics pipeline that automatically predicts the patient outcome (Disease-Free Survival, DFS) is compared when using either the automatically generated or the manually annotated contours. In addition, we extract deep features from the bottleneck layers of the 3D UNet to compare them with standard radiomics features (first- and second-order as well as shape features) and to test the performance gain when they are added. The models are evaluated on the HECKTOR 2020 dataset, consisting of 239 H&N patients from five centers with PET, CT, GTVt contours and DFS data available. Using Hand-Crafted (HC) radiomics features extracted from the manual GTVt achieved the best performance, with an average Concordance (C) index of 0.672. The fully automatic pipeline (including deep and HC features from the automatic GTVt) achieved an average C-index of 0.626, which is lower but relatively close to that obtained with the manual GTVt (p-value = 0.20). This suggests that large-scale studies could be conducted with a fully automatic pipeline to further validate the current state of the art in H&N radiomics.
The code will be shared publicly for reproducibility. © 2021, Springer Nature Switzerland AG
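The C-index used above to compare prognostic models is the fraction of comparable patient pairs in which the predicted risk orders the observed outcomes correctly. A minimal sketch for right-censored survival data in the spirit of Harrell's C (without tie-handling refinements; names are illustrative):

```python
import numpy as np

def concordance_index(times, events, risks):
    """Harrell-style C-index. Among comparable pairs (the earlier time is
    an observed event), count pairs where the higher predicted risk belongs
    to the patient with the shorter survival time; risk ties count 0.5.

    times:  observed follow-up times
    events: 1 if the event (e.g. progression) was observed, 0 if censored
    risks:  predicted risk scores (higher = worse prognosis)
    """
    times, events, risks = map(np.asarray, (times, events, risks))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if patient i's event is
            # observed strictly before patient j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which puts the reported values of 0.672 and 0.626 in context as moderate but clearly better-than-chance discrimination.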