11 research outputs found

    Near- and mid-infrared bandgaps in a 1D photonic crystal containing superconductor and semiconductor-metamaterial

    No full text
    Using the transfer matrix method, a theoretical investigation of near- and mid-infrared bandgaps was carried out for a periodic multilayered structure composed of a superconductor (SC) and a semiconductor-metamaterial. Two bandgaps appear within the computational regions and can be effectively tuned by manipulating the thickness of the SC film, the fill factor of the semiconductor-metamaterial, and the incidence angle of the electromagnetic wave. Increasing the SC film thickness or the fill factor of the semiconductor-metamaterial red-shifts the bandgaps, while increasing the angle of incidence blue-shifts them for both transverse electric (TE) and transverse magnetic (TM) waves. Notably, for the TM wave, the bandgaps disappear at incident angles of approximately 60°. Such properties are useful for designing new types of edge filters and other optical devices at near- and mid-infrared frequencies.
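
    The transfer matrix method named above can be sketched as follows. This is a minimal illustration at normal incidence for a hypothetical two-layer unit cell with constant refractive indices; the paper's dispersive superconductor and semiconductor-metamaterial models, oblique incidence, and TE/TM splitting are not reproduced here.

```python
# Minimal transfer-matrix sketch for a 1D photonic crystal at normal
# incidence. The layer indices and thicknesses below are illustrative
# assumptions, not the materials from the paper.
import cmath
import math

def layer_matrix(n, d, wavelength):
    """Characteristic 2x2 matrix of one homogeneous, non-magnetic layer."""
    delta = 2 * math.pi * n * d / wavelength   # phase thickness
    eta = n                                    # optical admittance at normal incidence
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / eta],
            [1j * eta * cmath.sin(delta), cmath.cos(delta)]]

def mat_mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Transmittance of a stack of (refractive_index, thickness) layers."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = mat_mul(m, layer_matrix(n, d, wavelength))
    t = 2 * n_in / (n_in*m[0][0] + n_in*n_out*m[0][1] + m[1][0] + n_out*m[1][1])
    return (n_out / n_in) * abs(t) ** 2

# Eight periods of a quarter-wave stack tuned to 1.55 um (thicknesses in um):
unit = [(2.0, 1.55 / (4 * 2.0)), (1.5, 1.55 / (4 * 1.5))]
stack = unit * 8
print(transmittance(stack, 1.55))   # low inside the stop band
```

    At the design wavelength each layer is a quarter wave thick, so transmittance collapses toward zero as periods are added: the bandgap the abstract optimizes by varying layer thickness.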

    Breast tumor segmentation in DCE-MRI using fully convolutional networks with an application in radiogenomics

    No full text
    Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active and challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown the high promise of deep learning-based methods in various segmentation problems. However, these methods usually face the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCN). Our strategy is to first decompose the original difficult problem into several sub-problems and then solve these relatively simpler sub-problems in a hierarchical manner. To precisely identify locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on nipples. Finally, based on both segmentation probability maps and our identified landmarks, we propose to select biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data for 272 patients, and achieve a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
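
    The Dice similarity coefficient (DSC) reported above measures voxel-wise overlap between a predicted and a reference mask. A minimal sketch on flat binary masks (the example masks are made up for illustration):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |intersection| / (|pred| + |truth|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))   # 2*2 / (3+3) = 0.666...
```

    A DSC of 0.72, as reported for the 272-patient validation, means roughly 72% overlap by this measure; 1.0 is a perfect match.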

    Breast mass detection in mammography and tomosynthesis via fully convolutional network-based heatmap regression

    No full text
    Breast mass detection in mammography and digital breast tomosynthesis (DBT) is an essential step in computerized breast cancer analysis. Deep learning-based methods incorporate feature extraction and model learning into a unified framework and have achieved impressive performance in various medical applications (e.g., disease diagnosis, tumor detection, and landmark detection). However, these methods require large-scale, accurately annotated data. Unfortunately, it is challenging to obtain precise annotations of breast masses. To address this issue, we propose a fully convolutional network (FCN)-based heatmap regression method for breast mass detection, using only weakly annotated mass regions in mammography images. Specifically, we first generate heatmaps of masses based on human-annotated rough regions for breast masses. We then develop an FCN model for end-to-end heatmap regression with an F-score loss function, where the mammography images are regarded as the input and heatmaps for breast masses are used as the output. Finally, the probability map of mass locations can be estimated with the trained model. Experimental results on a mammography dataset with 439 subjects demonstrate the effectiveness of our method. Furthermore, we evaluate whether we can use mammography data to improve detection models for DBT, since mammography shares a similar structure with tomosynthesis. We propose a transfer learning strategy that fine-tunes the FCN model learned from mammography images. We test this approach on a small tomosynthesis dataset with only 40 subjects, and we show an improvement in detection performance as compared to training the model from scratch.
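
    An F-score loss of the kind named above is typically a differentiable ("soft") F-measure computed from continuous predictions. The paper's exact formulation is not given in the abstract, so the following is one common sketch, with made-up example heatmaps:

```python
def soft_fscore_loss(pred, target, beta=1.0, eps=1e-7):
    """1 - soft F_beta between a predicted and a target heatmap (flat lists
    of values in [0, 1]); eps keeps the loss defined when both are empty."""
    tp = sum(p * t for p, t in zip(pred, target))          # soft true positives
    fp = sum(p * (1 - t) for p, t in zip(pred, target))    # soft false positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))    # soft false negatives
    b2 = beta * beta
    f = ((1 + b2) * tp + eps) / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - f

target = [0.0, 1.0, 1.0, 0.0]
print(soft_fscore_loss(target, target))        # ~0: perfect overlap
print(soft_fscore_loss([0, 0, 0, 0], target))  # ~1: mass missed entirely
```

    Because the loss is dominated by the overlap terms rather than the background, it is less sensitive to the mass/background imbalance than a plain per-pixel squared error.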

    Breast cancer molecular subtype classification using deep features: Preliminary results

    No full text
    Radiogenomics is a field of investigation that attempts to examine the relationship between imaging characteristics of cancerous lesions and their genomic composition. This could offer a noninvasive alternative to establishing genomic characteristics of tumors and aid cancer treatment planning. While deep learning has shown its superiority in many detection and classification tasks, breast cancer radiogenomic data suffers from a very limited number of training examples, which makes directly training a neural network for this problem, without pretraining, very difficult. In this study, we investigated an alternative deep learning approach, referred to as the deep features or off-the-shelf network approach, to classify breast cancer molecular subtypes using breast dynamic contrast-enhanced MRI. We used the feature maps of different convolutional layers and fully connected layers as features and trained support vector machines on these features for prediction. For feature maps with multiple channels, max-pooling was performed along each channel. We focused on distinguishing the luminal A subtype from other subtypes. To evaluate the models, 10-fold cross-validation was performed and the final AUC was obtained by averaging the performance over all folds. The highest average AUC obtained was 0.64 (95% CI: 0.57-0.71), using the feature maps of the last fully connected layer. This indicates the promise of using this approach to predict breast cancer molecular subtypes. Since the best performance appears in the last fully connected layer, it also implies that breast cancer molecular subtypes may relate to high-level image features.
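
    The channel-wise max-pooling step described above reduces each multi-channel feature map to a fixed-length vector that a support vector machine can consume. A minimal sketch on a made-up C x H x W feature map (nested lists stand in for the network's tensors):

```python
def channel_max_pool(feature_map):
    """Global max-pooling along each channel: a C x H x W feature map
    (nested lists) becomes a length-C feature vector."""
    return [max(max(row) for row in channel) for channel in feature_map]

# Hypothetical 2-channel, 2x2 feature map from a pretrained network.
fmap = [[[0.1, 0.5], [0.3, 0.2]],
        [[0.9, 0.0], [0.4, 0.7]]]
print(channel_max_pool(fmap))   # [0.5, 0.9]
```

    The resulting vector has one entry per channel regardless of the spatial size of the input, which is what makes features from different layers comparable inputs for the SVM.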

    Deep learning-based features of breast MRI for prediction of occult invasive disease following a diagnosis of ductal carcinoma in situ: Preliminary data

    No full text
    Approximately 25% of patients with ductal carcinoma in situ (DCIS) diagnosed from core needle biopsy are subsequently upstaged to invasive cancer at surgical excision. Identifying patients with occult invasive disease is important, as it changes treatment and precludes enrollment in active surveillance for DCIS. In this study, we investigated upstaging of DCIS to invasive disease using deep features. While deep neural networks require large amounts of training data, the available data for predicting DCIS upstaging is sparse, and thus directly training a neural network is unlikely to be successful. In this work, a pre-trained neural network is used as a feature extractor, and a support vector machine (SVM) is trained on the extracted features. We used the dynamic contrast-enhanced (DCE) MRIs of patients at our institution from January 1, 2000, through March 23, 2014, who underwent MRI following a diagnosis of DCIS. Among the 131 DCIS patients, 35 were upstaged to invasive cancer. The area under the ROC curve within a 10-fold cross-validation scheme was used for validation of our predictive model. The use of deep features achieved an AUC of 0.68 (95% CI: 0.56-0.78) for predicting occult invasive disease. This preliminary work demonstrates the promise of deep features to predict surgical upstaging following a diagnosis of DCIS.
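
    The AUC used for validation above equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U interpretation). A minimal sketch, with made-up classifier scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of positive/negative pairs ranked correctly (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

upstaged     = [0.9, 0.7, 0.6]   # hypothetical scores for upstaged cases
not_upstaged = [0.8, 0.4, 0.3]
print(auc(upstaged, not_upstaged))   # 7 of 9 pairs ranked correctly
```

    An AUC of 0.68, as reported for the 131-patient cohort, means the model ranks an upstaged case above a non-upstaged one about 68% of the time; 0.5 is chance.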

    Convolutional encoder-decoder for breast mass segmentation in digital breast tomosynthesis

    No full text
    Digital breast tomosynthesis (DBT) is a relatively new modality for breast imaging that can provide detailed assessment of dense tissue within the breast. In the domains of cancer diagnosis, radiogenomics, and resident education, it is important to accurately segment breast masses. However, breast mass segmentation is a very challenging task, since mass regions have low contrast with their neighboring tissues. Notably, the task becomes more difficult in cases assigned the BI-RADS 0 category, since this category includes many lesions that are of low conspicuity and locations that were deemed to be overlapping normal tissue upon further imaging and were not sent to biopsy. Segmentation of such lesions is of particular importance in the domain of reader performance analysis and education. In this paper, we propose a novel deep learning-based method for segmentation of BI-RADS 0 lesions in DBT. The key components of our framework are an encoding path for local-to-global feature extraction and a decoding path that expands the feature maps back to the image resolution. To address the issue of limited training data, in the training stage we propose to sample patches not only in mass regions but also in non-mass regions. We utilize a Dice-like loss function in the proposed network to alleviate the class-imbalance problem. Preliminary results on 40 subjects show the promise of our method. In addition to a quantitative evaluation of the method, we present a visualization of the results that demonstrates both the performance of the algorithm and the difficulty of the task at hand.
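
    The patch-sampling strategy described above draws training patches from both mass and non-mass regions so the network is not trained on background alone. A minimal sketch with a made-up binary mask; the function name and sampling counts are illustrative assumptions, not the paper's implementation:

```python
import random

def sample_patch_centers(mask, n_mass, n_background, seed=0):
    """Sample training patch centres from both mass (1) and non-mass (0)
    pixels of a binary mask (list of rows), balancing the two classes."""
    rng = random.Random(seed)
    mass = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    background = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if not v]
    return (rng.sample(mass, min(n_mass, len(mass))) +
            rng.sample(background, min(n_background, len(background))))

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
centers = sample_patch_centers(mask, n_mass=2, n_background=2)
print(centers)   # two mass-pixel and two background-pixel centres
```

    Together with a Dice-like loss, which scores overlap rather than per-pixel accuracy, this keeps the rare mass pixels from being drowned out by the background.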
