Studies on Category Prediction of Ovarian Cancers Based on Magnetic Resonance Images
Ovarian cancer is a gynecological malignancy with a low early-diagnosis rate and high mortality. Ovarian epithelial cancer (OEC) is the most common subtype of ovarian cancer. Pathologically, OEC is divided into two subtypes: Type I and Type II. These two subtypes have different biological characteristics and treatment responses. Therefore, it is important to accurately categorize these two groups of patients and provide a reference for clinicians when designing treatment plans.
In current magnetic resonance (MR) examinations, the diagnoses given by radiologists are largely based on individual judgment and are not sufficiently accurate. Because of the low accuracy of the results and the risk of having Type II OEC, most patients will undergo fine-needle aspiration, which may harm patients' bodies. Therefore, there is a need for a method of OEC subtype classification based on MR images.
This thesis proposes an automatic diagnosis system for ovarian cancer based on the combination of deep learning and radiomics. The method utilizes four sequences commonly useful for ovarian cancer diagnosis: sagittal fat-suppressed T2WI (Sag-fs-T2WI), coronal T2WI (Cor-T2WI), axial T1WI (Axi-T1WI), and the apparent diffusion coefficient (ADC) map, to establish a multi-sequence diagnostic model. The system starts with the segmentation of the ovarian tumors, and then obtains radiomic features from the lesions together with network features. Selected features are used to build models that predict the malignancy of ovarian tumors, the subtype of OEC, and the survival condition.
Bi-atten-ResUnet is proposed in this thesis as the segmentation model. The network is built on the basis of U-Net, adopting residual blocks and non-local attention modules, and preserves the classic encoder/decoder architecture of U-Net. The encoder is reconstructed from a pretrained ResNet to make use of transfer learning, and bi-non-local attention modules are added to each level of the decoder. These techniques enhance the network's segmentation performance: the model achieves Dice coefficients of 0.918, 0.905, 0.831, and 0.820 on the four MR sequences, respectively.
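The Dice coefficient used to evaluate the segmentations measures the overlap between the predicted and reference masks; a minimal pure-Python sketch on toy binary masks (illustrative only, not the thesis code):

```python
def dice_coefficient(pred, target):
    """Dice = 2*|A intersect B| / (|A| + |B|) for flat binary masks."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D example: two overlapping binary masks.
pred   = [1, 1, 1, 0, 0]
target = [0, 1, 1, 1, 0]
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

A Dice score of 1.0 means the masks agree exactly; the thesis values of 0.82 to 0.92 indicate strong but imperfect overlap with the reference segmentations.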
After segmentation, the thesis proposes a diagnostic model with three steps: quantitative descriptive feature extraction, feature selection, and establishment of prediction models. First, radiomic features and network features are obtained. Then, an iterative sparse representation (ISR) method is adopted for feature selection to reduce redundancy and correlation. The selected features are used to establish a predictive model, with a support vector machine (SVM) as the classifier.
The model achieves an AUC of 0.967 in distinguishing between benign and malignant ovarian tumors. For discriminating Type I from Type II OEC, the model yields an AUC of 0.823. In survival prediction, patients in the high-risk group are more likely to have a poor prognosis, with a hazard ratio of 4.169.
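The AUC values reported here can be computed from predicted scores via the Mann-Whitney formulation (the probability that a random positive is scored above a random negative); a minimal pure-Python sketch with toy data, not the thesis results:

```python
def roc_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher score should mean "malignant" (label 1).
scores = [0.9, 0.8, 0.35, 0.4, 0.1]
labels = [1,   1,   1,    0,   0]
print(roc_auc(scores, labels))  # → 5/6 ≈ 0.833
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why 0.967 for benign-vs-malignant is a strong result while 0.823 for the harder subtype task is more modest.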
Analysis of interpretability methods applied to DCE-MRI Breast Images
Integrated master's thesis, Engenharia Biomédica e Biofísica (Sinais e Imagens Biomédicas), 2022, Universidade de Lisboa, Faculdade de Ciências.
Breast cancer is a disease that affects a large number of women worldwide [1]. Physical examinations and mammography are the most effective ways of detecting lesions and nodules in the breast. However, these methods can prove inconclusive. One way to solidify a breast cancer diagnosis is to perform supplementary tests, such as magnetic resonance imaging. The most common MRI exam for detecting breast cancer is DCE-MRI, which obtains images through the injection of a contrast agent [2]. The diagnosis can also be consolidated by means of machine learning. Several machine learning methods have been helping technicians perform tasks such as tumor detection and segmentation. Although these methods are effective, the tasks they perform carry a high degree of responsibility, since they are directly related to the well-being of a human being. This creates the need to justify the results of these methods in order to increase trust in them. Techniques that attempt to explain the results of machine learning methods belong to the field of Explainable Artificial Intelligence [3].
This dissertation focuses on applying and analyzing state-of-the-art Explainable Artificial Intelligence methods on machine learning models. As these models were built on breast DCE-MR images, the methods applied to them aim to explain their results visually. One of the methods applied was SHAP (SHapley Additive exPlanations). This method can be applied to a variety of models and relies on Shapley values from game theory to explain the importance of image features to the model's results [4]. Another method applied was Local Interpretable Model-agnostic Explanations (LIME). This method creates perturbed images and tests them on the trained models. The perturbed images are weighted according to the degree of perturbation. When they are tested on the models, LIME determines which perturbations change the model's output and, consequently, finds the areas of the image that matter most for its classification [5]. The last method applied was Gradient-weighted Class Activation Mapping (Grad-CAM). This method can be applied to many models, being a generalization of the CAM method [6], but only to classification tasks. Grad-CAM uses class-specific gradient values and the feature maps extracted from convolutional layers to highlight class-discriminative areas in the image. These layers are important components that make up the body of the models. Beyond these methods, the convolutional matrices (filters) used by the convolutional layers to produce their outputs were extracted and analyzed, in order to observe the patterns being filtered in these layers.
To apply these methods, several models had to be built and trained. Three models with the same structure were created to perform regression tasks. These models have an architecture consisting of three convolutional layers followed by a linear layer, a dropout layer, and another linear layer. One of the models measures the tumor area in maximum intensity projections of the volumes. The other two models measure the percentage of tumor shrinkage given two maximum intensity projections. The difference between these two models lies in the labels created for the inputs: one uses values computed from the difference between the tumor areas of the two maximum intensity projections, while the other uses tumor-area regression values provided by technicians. The performance of these models was evaluated by computing Pearson and Spearman correlation coefficients. These coefficients are calculated using the covariance and the product of the standard deviations of two variables; they differ in that the Pearson coefficient only captures linear relationships, whereas the Spearman coefficient captures any monotonic relationship. The model that measures the tumor area obtained Pearson and Spearman coefficients of 0.53 and 0.69, respectively. The model that computes the percentage of tumor shrinkage using the calculated labels had the best performance of the three, with Pearson and Spearman coefficients of 0.82 and 0.87, respectively. The last model could not correctly predict the values provided by the technicians and was consequently discarded. Next, the filter visualization and SHAP methods were applied to the two remaining models. The filter visualization technique showed which parts of the image are being filtered in the convolutional layers, making it possible to observe certain patterns in these filters. The SHAP method highlighted areas of the breast that contributed to the models' predictions. Since both tasks focus on computing something from the tumor area, SHAP images were considered successful when they highlighted tumor areas. With this in mind, the images obtained through the SHAP method had a success rate of 57% for the model that measures the tumor area and of 69% for the model that measures the percentage of tumor shrinkage.
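The Pearson and Spearman coefficients used to evaluate the regression models can be computed exactly as described (covariance over the product of standard deviations, with Spearman applying Pearson to rank-transformed data); a minimal pure-Python sketch with toy data and no tie handling in the ranking:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson r: covariance divided by the product of std deviations."""
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (pstdev(x) * pstdev(y))

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank-transformed data
    (simplified: assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

# A monotonic but non-linear relation: Spearman is 1, Pearson is lower.
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
print(round(pearson(x, y), 3), round(spearman(x, y), 3))  # → 0.981 1.0
```

The toy example shows why both coefficients are reported: a model can track the target monotonically (high Spearman) while still deviating from a strictly linear fit (lower Pearson), as in the 0.53 vs 0.69 result above.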
Another model was built to classify pairs of maximum intensity projections according to the percentage of tumor-area shrinkage. Each pair was previously assigned to one of four classes, each corresponding to a 25% shrinkage increment: the first class corresponds to a tumor shrinkage of 0% to 25%, while the last corresponds to a shrinkage of 75% to 100%. This model has an architecture similar to that of ResNet18 [7]. Its performance was evaluated through a confusion matrix, which shows a 70% rate of correct predictions. Next, the three methods, SHAP, LIME, and Grad-CAM, were applied to this model. Since the goal of this model is to classify images according to the percentage of tumor shrinkage, SHAP images were again considered successful when they highlighted tumor areas; a success rate of 82% was observed in highlighting the tumor zone in the maximum intensity projections. The perturbations created for the LIME method correspond to square areas of the image, which LIME randomly sets to zero; the perturbed images are weighted according to the level of perturbation they undergo. In this work, two different perturbations were created: square areas of 10 by 10 pixels and square areas of 25 by 25 pixels. After perturbation, the images were fed back into the model and the differences in its predictions were learned by the LIME algorithm. Images created with the smaller perturbations had a higher success rate than the larger ones, highlighting perturbations in the tumor area with a success rate of 48%. Nevertheless, the images created with the 25-by-25-pixel perturbations gave the clearest results for locating the tumor, since the size of the perturbations allowed the whole tumor to be covered. Finally, the Grad-CAM method was applied to all the important convolutional layers of the model. This method was very effective at locating class-discriminative areas, locating the tumor quite easily when applied to the last convolutional layer. Beyond this, it was possible to observe the class-discriminative areas in the images when the method was applied to intermediate convolutional layers.
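The LIME-style square occlusions described above can be sketched in plain Python; the function names and the 2x2 toy box size are illustrative stand-ins for the dissertation's 10x10 and 25x25 perturbations:

```python
import random

def occlude(image, box, x0, y0):
    """Return a copy of a 2-D image with a box x box square zeroed at (y0, x0)."""
    out = [row[:] for row in image]
    for y in range(y0, min(y0 + box, len(image))):
        for x in range(x0, min(x0 + box, len(image[0]))):
            out[y][x] = 0
    return out

def lime_style_perturbations(image, box, n, seed=0):
    """Generate n randomly occluded copies of the image, mimicking the
    square perturbations fed to LIME."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    return [occlude(image, box,
                    rng.randrange(0, w - box + 1),
                    rng.randrange(0, h - box + 1))
            for _ in range(n)]

# Toy 8x8 all-ones image with five random 2x2 occlusions.
img = [[1] * 8 for _ in range(8)]
perturbed = lime_style_perturbations(img, box=2, n=5)
zeroed = sum(v == 0 for row in perturbed[0] for v in row)
print(len(perturbed), zeroed)  # → 5 4
```

Each occluded copy is then re-scored by the model; perturbations that change the prediction mark the image regions the model depends on, which is how LIME localizes the tumor area.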
In conclusion, the application of these techniques made it possible to explain part of the decisions made by the machine learning models in the analysis of breast cancer DCE-MRI images.
Computer aided diagnosis has had an exponential growth in medical imaging. Machine learning has
helped technicians in tasks such as tumor segmentation and tumor detection. Despite the growth in this
area, there is still a need to justify and fully understand the computer results, in order to increase the
trust of medical professionals in these computer tasks. By applying explainable methods to the machine
learning algorithms, we can extract information from techniques that are often considered black boxes.
This dissertation focuses on applying and analyzing state-of-the-art XAI (eXplainable Artificial
Intelligence) methods to machine learning models that handle DCE-MR (Dynamic Contrast-Enhanced
Magnetic Resonance) breast images. The methods used to justify the model’s decisions were SHAP
(SHapley Additive exPlanations) [4], LIME (Local Interpretable Model-agnostic Explanations) [5] and
Grad-CAM (Gradient-weighted Class Activation Mapping) [8], which correspond to three visual
explanation methods. SHAP uses Shapley Values from game theory to explain the importance of
features in the image to the model’s prediction. LIME is a method that uses weighted perturbed images
and tests them using the existing models. From the model’s response to these perturbed images, the
algorithm can find which perturbations cause the model to change its prediction and, consequently, can
find the important areas in the image that lead to the model’s prediction. Grad-CAM is a visual
explanation method that can be applied to a variety of neural network architectures. It uses gradient
scores from a specific class and feature maps extracted from convolutional layers to highlight class-discriminative regions in the images.
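SHAP's foundation, the Shapley value, can be computed exactly for a toy model with very few features by enumerating all feature subsets; a minimal pure-Python sketch (illustrative only — the SHAP library uses efficient approximations rather than this brute-force form):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, instance):
    """Exact Shapley values: average marginal contribution of each
    feature over all subsets; absent features take their baseline value."""
    n = len(instance)
    def value(subset):
        x = [instance[i] if i in subset else baseline[i] for i in range(n)]
        return model(x)
    phis = []
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(rest, size):
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis

# Toy linear "model": Shapley values recover each feature's contribution.
model = lambda x: 2 * x[0] + 3 * x[1] + x[2]
phi = shapley_values(model, baseline=[0, 0, 0], instance=[1, 1, 1])
print([round(p, 6) for p in phi])  # → [2.0, 3.0, 1.0]
```

The values sum to the difference between the model's output on the instance and on the baseline, which is the additivity property that makes SHAP attributions interpretable pixel by pixel.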
Two neural network models were built to perform regression tasks such as measuring tumor area and
measuring tumor shrinkage. To justify the network’s results, filters were extracted from the network’s
convolutional layers and the SHAP method was applied. The filter visualization technique was able to
demonstrate which parts of the image are being convoluted by the layer’s filters while the SHAP method
highlighted the areas of the tumor that contributed most to the model’s predictions. The SHAP method
had a success rate of 57% at highlighting the correct area of the breast when applied to the neural network
which measured the tumor area, and a success rate of 69% when applied to the neural network which
measured the tumor shrinkage. Another model was created using a Resnet18’s architecture. This
network had the task of classifying the breast images according to the shrinkage of the tumor and the
SHAP, LIME and Grad-CAM methods were applied to it. The SHAP method had a success rate of 82%.
The LIME method was applied two times by using perturbations of different sizes. The smaller sized
perturbations performed better, having a success rate of 48% at highlighting the tumor area, but the
larger sized perturbations had better results in terms of locating the entire tumor, because the area
covered was larger. Lastly, the Grad-CAM method excelled at locating the tumor in the breast when
applied to the last important convolutional layer in the network.
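The Grad-CAM combination described above (global-average-pool the gradients to get one weight per feature map, take the weighted sum of the maps, then apply ReLU) can be sketched with plain Python lists; a toy illustration, not the dissertation's implementation:

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap: weight each feature map by the global-average-
    pooled gradient of the class score, sum them, and apply ReLU."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(feature_maps, gradients):
        alpha = sum(sum(row) for row in grad) / (h * w)   # pooled weight
        for y in range(h):
            for x in range(w):
                cam[y][x] += alpha * fmap[y][x]
    return [[max(v, 0.0) for v in row] for row in cam]    # ReLU

# Two 2x2 feature maps with hand-made gradients: the map whose gradient
# is positive survives the ReLU; the negatively weighted one is suppressed.
fmaps = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(fmaps, grads))  # → [[1.0, 0.0], [0.0, 0.0]]
```

Upsampled to the input resolution, this heatmap is what highlights the tumor region when the method is applied to the last convolutional layer.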
Machine learning in oral squamous cell carcinoma: current status, clinical concerns and prospects for the future - A systematic review
Background: Oral cancer can show heterogeneous patterns of behavior. For proper and effective management of oral cancer, early diagnosis and accurate prediction of prognosis are important. To achieve this, artificial intelligence (AI) or its subfield, machine learning, has been touted for its potential to revolutionize cancer management through improved diagnostic precision and prediction of outcomes. Yet, to date, it has made only a few contributions to actual medical practice or patient care. Objectives: This study provides a systematic review of diagnostic and prognostic applications of machine learning in oral squamous cell carcinoma (OSCC) and also highlights some of the limitations and concerns of clinicians towards the implementation of machine learning-based models in daily clinical practice. Data sources: We searched OvidMedline, PubMed, Scopus, Web of Science, and Institute of Electrical and Electronics Engineers (IEEE) databases from inception until February 2020 for articles that used machine learning for diagnostic or prognostic purposes in OSCC. Eligibility criteria: Only original studies that examined the application of machine learning models for prognostic and/or diagnostic purposes were considered. Data extraction: Independent extraction of articles was done by two researchers (A.R. & O.Y) using predefined study selection criteria. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) in the searching and screening processes. We also used the Prediction model Risk of Bias Assessment Tool (PROBAST) for assessing the risk of bias (ROB) and quality of the included studies. Results: A total of 41 studies were found to have used machine learning to aid in the diagnosis and/or prognosis of OSCC. The majority of these studies used support vector machine (SVM) and artificial neural network (ANN) algorithms as machine learning techniques.
Their specificity ranged from 0.57 to 1.00, sensitivity from 0.70 to 1.00, and accuracy from 63.4% to 100.0% in these studies. The main limitations and concerns can be grouped as either challenges inherent to the science of machine learning or challenges relating to clinical implementation. Conclusion: Machine learning models have been reported to show promising performance for diagnostic and prognostic analyses in studies of oral cancer. These models should be further developed to enhance explainability and interpretability, and externally validated for generalizability, in order to be safely integrated into daily clinical practice. Also, regulatory frameworks for the adoption of these models in clinical practice are necessary. Peer reviewed.
Machine learning application in cancer research: Mini Review
Nowadays, due to the significant growth of medical data production, the use of interdisciplinary sciences such as data mining is also increasing. Data mining is a helpful tool for discovering knowledge from enormous quantities of medical data. One of the common data mining techniques is machine learning, an approach that enables computers to learn through sets of algorithms without being explicitly programmed. In the past few years, many studies have applied machine learning algorithms in cancer research. This mini-review defines the concepts of machine learning and reviews its application to cancer data. The reviewed studies are divided into four categories: identification of high-risk people, prediction of cancer staging, prediction of cancer clinical outcomes, and medical image analysis. The studies show that the use of machine learning in medical fields is increasing and that there is promising progress in this area.
Dual-path convolutional neural network using micro-FTIR imaging to predict breast cancer subtypes and biomarkers levels: estrogen receptor, progesterone receptor, HER2 and Ki67
Breast cancer molecular subtype classification plays an important role in sorting patients with divergent prognoses. The biomarkers used are Estrogen Receptor (ER), Progesterone Receptor (PR), HER2, and Ki67. Based on the expression levels of these biomarkers, subtypes are classified as Luminal A (LA), Luminal B (LB), HER2 subtype, and Triple-Negative Breast Cancer (TNBC). Immunohistochemistry is used to classify subtypes, although interlaboratory and interobserver variations can affect its accuracy, and it is a time-consuming technique. Fourier-transform infrared (FTIR) micro-spectroscopy may be coupled with deep learning for cancer evaluation, but studies on subtype and biomarker-level prediction are still lacking. This study presents a novel 2D deep learning approach to achieve these predictions. Sixty micro-FTIR images of 320x320 pixels were collected from a human breast biopsy microarray. Data were clustered by K-means, preprocessed, and 32x32 patches were generated using a fully automated approach. CaReNet-V2, a novel convolutional neural network, was developed to classify breast cancer (CA) vs adjacent tissue (AT) and molecular subtypes, and to predict biomarker levels. The clustering method enabled the removal of non-tissue pixels. Test accuracies for CA vs AT and subtype classification were above 0.84. The model enabled the prediction of ER, PR, and HER2 levels, where borderline values showed lower performance (minimum accuracy of 0.54). Ki67 percentage regression demonstrated a mean error of 3.6%. Thus, CaReNet-V2 is a potential technique for breast cancer biopsy evaluation, standing out as a screening analysis technique and helping to prioritize patients.
Utilizing Deep Machine Learning for Prognostication of Oral Squamous Cell Carcinoma—A Systematic Review
Peer reviewed.
NOVEL APPLICATIONS OF MACHINE LEARNING IN BIOINFORMATICS
Technological advances in next-generation sequencing and biomedical imaging have led to a rapid increase in biomedical data dimension and acquisition rate, which is challenging the conventional data analysis strategies. Modern machine learning techniques promise to leverage large data sets for finding hidden patterns within them, and for making accurate predictions. This dissertation aims to design novel machine learning-based models to transform biomedical big data into valuable biological insights. The research presented in this dissertation focuses on three bioinformatics domains: splice junction classification, gene regulatory network reconstruction, and lesion detection in mammograms.
A critical step in defining gene structures and mRNA transcript variants is to accurately identify splice junctions. In the first work, we built the first deep learning-based splice junction classifier, DeepSplice. It outperforms the state-of-the-art classification tools in terms of both classification accuracy and computational efficiency. In the second work, to uncover transcription factors governing metabolic reprogramming in non-small-cell lung cancer patients, we developed TFmeta, a machine learning approach that reconstructs relationships between transcription factors and their target genes. Our approach achieves the best performance on benchmark data sets. In the third work, we designed deep learning-based architectures to perform lesion detection in both 2D and 3D whole mammogram images.
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has reached the highest incidence rate worldwide among all
malignancies since 2020. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in the deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammogram, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Analyzing the breast tissue in mammograms using deep learning
Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue area that appears white and bright on mammograms, commonly referred to as breast percent density (PD%). MBD is a risk factor for breast cancer and a risk factor for masking tumors. However, accurate MBD estimation by visual assessment is still a challenge due to faint contrast and significant variations in the background fatty tissue of mammograms. In addition, correctly interpreting mammogram images requires highly trained medical experts: it is difficult, time-consuming, expensive, and error-prone. Dense breast tissue can make it harder to identify breast cancer and is associated with an increased risk of breast cancer; for example, it has been reported that women with high breast density have a four- to six-fold higher risk of developing the disease than women with low breast density.
The primary key to breast density computation and breast density classification is to detect the dense tissues in mammographic images correctly. Many methods have been proposed for breast density estimation; however, most are not automated, and they have been badly affected by the low signal-to-noise ratio and the variability of density in appearance and texture. It would be helpful to have a computer-aided diagnosis (CAD) system to assist the doctor in analyzing and diagnosing mammograms automatically. Current developments in deep learning methods motivate us to improve current breast density analysis systems.
The main focus of the present thesis is to develop a system that automates breast density analysis (breast density segmentation (BDS), breast density percentage (BDP), and breast density classification (BDC)) using deep learning techniques, and to apply it to temporal mammograms after treatment in order to analyze breast density changes and find risky and suspicious patients.
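The breast density percentage described above follows directly from two binary masks (dense tissue and whole breast); a minimal sketch with hypothetical toy masks, not the thesis's segmentation output:

```python
def percent_density(dense_mask, breast_mask):
    """PD% = dense-tissue pixels as a percentage of breast pixels."""
    dense = sum(d and b for drow, brow in zip(dense_mask, breast_mask)
                for d, b in zip(drow, brow))
    breast = sum(b for row in breast_mask for b in row)
    return 100.0 * dense / breast

# Toy 4x4 masks: 8 breast pixels, 2 of them dense → PD% = 25.0.
breast = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [1, 1, 0, 0],
          [1, 1, 0, 0]]
dense  = [[1, 0, 0, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(percent_density(dense, breast))  # → 25.0
```

In the thesis pipeline the two masks would come from the deep learning segmentation step (BDS), and the resulting PD% feeds the density classification (BDC) and the temporal comparison across mammograms.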
Circulating tumor identification using neural networks for monitoring cancer progression
Thesis submitted in partial fulfillment of the requirements for the Degree of Master of Science in Information Technology (MSIT) at Strathmore University.
Cancer is the third leading cause of death in Kenya after infectious and cardiovascular diseases. It contributes to a significant portion of annual national deaths, led by breast and prostate cancer. Existing cancer treatment methods vary from one patient to another based on the type and stage of tumor development. Treatment modalities such as surgery, chemotherapy, and radiation have been successful when the disease is detected early and constantly monitored. Ineffective treatment methods and the development of complications such as cancer relapse must be monitored, as they are likely to cause more deaths. Detection of circulating tumor cells (CTCs) is a pivotal monitoring method which involves identification of cancer-related substances known as tumor markers, often excreted by primary tumors into the patient's blood. The presence, absence, or number of CTCs can be used to evaluate a patient's disease progression and determine the effectiveness of the current treatment option. This research work proposed an adaptive learning-based computational model to help in cancer monitoring. It identifies and enumerates CTCs based on features auto-learned from stained CTC images using deep learning. The model, with a 3.0% error rate, automatically learned the best set of representative features from labelled samples without human intervention. These representations were used in enumerating and identifying CTCs given a new test example.