
    Comparative Study on Local Binary Patterns for Mammographic Density and Risk Scoring

    Breast density is considered one of the major risk factors for developing breast cancer. High breast density can also reduce the accuracy of mammographic abnormality detection because of the characteristics and patterns of dense breast tissue. We reviewed variants of the local binary pattern descriptor, which are widely used as texture descriptors for local feature extraction, for classifying breast tissue. In our study, we compared classification results for LBP variants: classic LBP (Local Binary Pattern), ELBP (Elliptical Local Binary Pattern), Uniform ELBP, LDP (Local Directional Pattern) and M-ELBP (Mean-ELBP). A wider comparison with alternative texture analysis techniques was carried out to investigate the potential of LBP variants for density classification. In addition, we investigated the effect on classification of computing descriptors for the fibroglandular disk region versus the whole breast region, and studied the effect of the Region-of-Interest (ROI) size and location, the descriptor size, and the choice of classifier. The classification results were evaluated on the MIAS database using a ten-run ten-fold cross-validation approach. The experimental results showed that the Elliptical Local Binary Pattern and Local Directional Pattern descriptors extracted the most relevant features for mammographic tissue classification, indicating the relevance of directional filters. The study also showed that classifying features from ROIs of the fibroglandular disk region performed better than classification based on the whole breast region.
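The classic LBP descriptor compared in the abstract can be sketched as follows: each pixel is encoded by thresholding its 8 neighbours against the centre value, and the histogram of the resulting codes serves as the texture feature. This is a minimal illustrative sketch of the general technique, not the paper's implementation; the function names and the plain 3x3 neighbourhood are assumptions.

```python
import numpy as np

def lbp_image(img):
    """Classic 3x3 local binary pattern: each interior pixel is encoded
    as an 8-bit code by thresholding its 8 neighbours against the
    centre value (1 where neighbour >= centre, 0 otherwise)."""
    # offsets of the 8 neighbours, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= ((neigh >= centre).astype(np.uint8) << bit)
    return out

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: the texture feature vector
    that a density classifier would consume."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Variants such as ELBP change the sampling neighbourhood from a circle to an ellipse, and "uniform" variants merge the codes with more than two 0/1 transitions into a single bin, shrinking the feature vector.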

    Breast mass segmentation from mammograms with deep transfer learning

    Abstract. Mammography is an X-ray imaging method used in breast cancer screening, which is a time-consuming process. Many computer-assisted diagnosis systems have been created to speed up the image analysis. Deep learning is the use of multilayered neural networks for solving different tasks, and deep learning methods are becoming more advanced and popular for segmenting images. One deep transfer learning approach is to use such networks with pretrained weights, which typically improves their performance. In this thesis, deep transfer learning was used to segment cancerous masses from mammography images. The convolutional neural networks used were pretrained and fine-tuned, and they had an encoder-decoder architecture. The ResNet22 encoder was pretrained with mammography images, while the ResNet34 encoder was pretrained with various color images. These encoders were paired with either a U-Net or a Feature Pyramid Network decoder. Additionally, a U-Net model with random initialization was tested. The five different models were trained and tested on the Oulu Dataset of Screening Mammography (9204 images) and on the Portuguese INbreast dataset (410 images) with two different loss functions: binary cross-entropy loss combined with soft Jaccard loss, and a loss function based on the focal Tversky index. The best models were trained on the Oulu Dataset of Screening Mammography with the focal Tversky loss. The best segmentation result achieved was a Dice similarity coefficient of 0.816 on correctly segmented masses and a classification accuracy of 88.7% on the INbreast dataset. On the Oulu Dataset of Screening Mammography, the best results were a Dice score of 0.763 and a classification accuracy of 83.3%. The results for the pretrained models were similar to one another, and the pretrained models had better results than the non-pretrained models.
    In conclusion, deep transfer learning is well suited for mammographic mass segmentation, and the choice of loss function had a large impact on the results.
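The focal Tversky loss and the Dice coefficient mentioned above are closely related and can be sketched in a few lines: the Tversky index generalizes Dice by weighting false positives and false negatives separately, and the focal variant raises the complement to a power to emphasise hard examples. The alpha, beta, and gamma values below are common illustrative defaults, not necessarily those used in the thesis.

```python
import numpy as np

def tversky_index(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky index between a soft prediction and a binary mask.
    alpha weighs false positives, beta false negatives; with
    alpha = beta = 0.5 it reduces to the Dice coefficient."""
    tp = np.sum(pred * target)            # true positives
    fp = np.sum(pred * (1 - target))      # false positives
    fn = np.sum((1 - pred) * target)      # false negatives
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75):
    """Focal Tversky loss: (1 - TI) ** gamma concentrates the gradient
    on samples with a low Tversky index (hard, small masses)."""
    return (1.0 - tversky_index(pred, target, alpha, beta)) ** gamma

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2*TP / (2*TP + FP + FN): the evaluation metric reported above
    return tversky_index(pred, target, alpha=0.5, beta=0.5, eps=eps)
```

Weighting false negatives more heavily than false positives (beta > alpha, or equivalently a larger penalty on missed mass pixels) is a common motivation for preferring Tversky-style losses over plain Dice in mass segmentation, where masses occupy a small fraction of the image.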

    A Systematic Survey of Classification Algorithms for Cancer Detection

    Cancer is a fatal disease induced by an accumulation of inherited defects and pathological changes. Malignant cells are dangerous abnormal growths that can develop in any part of the human body, posing a life-threatening risk. To establish what treatment options are available, cancer, also referred to as a tumor, should be detected early and precisely. The classification of images for cancer diagnosis is a complex process influenced by a diverse set of parameters. In recent years, computer vision frameworks have focused on image classification as a key problem. Most approaches currently rely on hand-crafted features to represent an image in a specific manner, with learned classifiers such as random forests and decision trees used to reach a final judgment. The difficulty arises when there is a vast number of images to consider. Hence, in this paper, we analyze, review, categorize, and discuss current breakthroughs in cancer detection that utilize machine learning techniques for image recognition and classification. We review machine learning approaches such as logistic regression (LR), Naïve Bayes (NB), K-nearest neighbors (KNN), decision trees (DT), and Support Vector Machines (SVM).
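Of the classifiers the survey reviews, K-nearest neighbors is the simplest to state concretely: label a new sample by majority vote among its k closest training samples. The sketch below assumes numeric feature vectors (e.g. texture or intensity features extracted from an image) and Euclidean distance; it illustrates the technique generically and is not taken from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training points under Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

The other reviewed methods trade this instance-based simplicity for an explicit learned model: LR and SVM fit a decision boundary, NB fits per-class feature distributions, and DT recursively partitions the feature space.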

    A Self-adaptive Discriminative Autoencoder for Medical Applications

    Computer-aided diagnosis (CAD) systems play an essential role in the early detection and diagnosis of developing disease in medical applications. To obtain a highly recognizable representation of medical images, a self-adaptive discriminative autoencoder (SADAE) is proposed in this paper. The proposed SADAE system is implemented under a deep metric learning framework consisting of K local autoencoders, employed to learn the K subspaces that represent the diverse distribution of the underlying data, and a global autoencoder that restricts the spatial scale of the learned image representations. This community of autoencoders is aided by a self-adaptive metric learning method that extracts discriminative features for recognizing the different categories in the given images. The quality of the features extracted by SADAE is compared against that of features extracted by other state-of-the-art deep learning and metric learning methods on five popular medical image data sets. The experimental results demonstrate that the medical image recognition results gained by SADAE are much improved over those of the alternatives.
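The building block that SADAE composes K+1 times is an ordinary autoencoder: encode an input to a low-dimensional representation and decode it back, scoring the sample by reconstruction error. The sketch below shows that building block only; the tied single-layer weights and the reconstruction-error routing rule for choosing among the K local autoencoders are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden):
    """Random untrained weights for a one-layer, tied-weight autoencoder."""
    return rng.normal(scale=0.1, size=(n_hidden, n_in))

def encode(W, x):
    return np.tanh(W @ x)        # low-dimensional representation

def decode(W, h):
    return W.T @ h               # reconstruction with tied (transposed) weights

def reconstruction_error(W, x):
    r = x - decode(W, encode(W, x))
    return float(r @ r)

# One plausible way to use K local autoencoders, each fitted to one
# subspace of the data (an assumption for illustration): route each
# sample to the autoencoder that reconstructs it best.
def route(Ws, x):
    return min(range(len(Ws)), key=lambda k: reconstruction_error(Ws[k], x))
```

In the paper's framework the global autoencoder and the metric learning term then constrain these per-subspace representations so that samples from different categories stay separable.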