39 research outputs found
Advancing Artificial Intelligence in Sensors, Signals, and Imaging Informatics.
Objective: To identify research works that exemplify recent developments in the field of sensors, signals, and imaging informatics.
Method: A broad literature search was conducted using PubMed and Web of Science, supplemented with individual papers nominated by section editors. A predefined query combining Medical Subject Heading (MeSH) terms and keywords was used to search both sources. Section editors then filtered the entire set of retrieved papers, with each paper reviewed by two section editors on a three-point Likert scale from 0 (do not include) to 2 (should be included). Only papers with a combined score of 2 or above were considered.
Results: A search for papers was executed at the start of January 2019, resulting in a combined set of 1,459 records published in 2018 in 119 unique journals. Section editors jointly filtered the list of candidates down to 14 nominations. The 14 candidate best papers were then ranked by a group of eight external reviewers. Four papers, representing different international groups and journals, were selected as the best papers by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board.
Conclusions: The fields of sensors, signals, and imaging informatics have rapidly evolved with the application of novel artificial intelligence/machine learning techniques. Studies have been able to discover hidden patterns and integrate different types of data toward improving diagnostic accuracy and patient outcomes. However, the quality of papers varied widely without clear reporting standards for these types of models. Nevertheless, a number of papers have demonstrated useful techniques to improve the generalizability, interpretability, and reproducibility of increasingly sophisticated models.
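The two-editor scoring rule described above can be sketched in a few lines. This is a hypothetical illustration; the paper names and scores are invented, not taken from the review.

```python
# Hypothetical illustration of the paper-screening rule described above:
# each paper is scored 0-2 by two section editors, and only papers whose
# combined score is 2 or higher advance. Names and scores are invented.
papers = {
    "paper_a": (2, 1),  # combined 3 -> kept
    "paper_b": (1, 1),  # combined 2 -> kept
    "paper_c": (1, 0),  # combined 1 -> dropped
    "paper_d": (0, 0),  # combined 0 -> dropped
}

shortlisted = [name for name, scores in papers.items() if sum(scores) >= 2]
print(shortlisted)  # -> ['paper_a', 'paper_b']
```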
Analyzing the breast tissue in mammograms using deep learning
Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue that appears white and bright on mammograms, commonly quantified as breast percent density (PD%). MBD is a risk factor for breast cancer and for masking tumors. However, accurate MBD estimation by visual assessment remains a challenge due to faint contrast and significant variations in the background fatty tissue of mammograms. In addition, correctly interpreting mammogram images requires highly trained medical experts: it is difficult, time-consuming, expensive, and error-prone. Dense breast tissue can make breast cancer harder to identify and is associated with an increased risk of breast cancer; for example, women with high breast density have been reported to have a four- to six-fold greater risk of developing the disease than women with low breast density.
The key to breast density computation and classification is to correctly detect the dense tissue in mammographic images. Many methods have been proposed for breast density estimation; however, most are not automated, and they are severely affected by the low signal-to-noise ratio and the variability of density in appearance and texture. A computer-aided diagnosis (CAD) system that assists the physician with automatic analysis and diagnosis would therefore be valuable. Recent developments in deep learning methods motivate us to improve current breast density analysis systems.
The main focus of the present thesis is to develop a system that automates breast density analysis, namely breast density segmentation (BDS), breast density percentage (BDP), and breast density classification (BDC), using deep learning techniques, and to apply it to temporal mammograms acquired after treatment to analyze breast density changes and identify at-risk, suspicious patients.
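The percent density (PD%) measure defined above reduces to a ratio of segmented areas. A minimal sketch, using synthetic binary masks in place of real segmentation outputs:

```python
import numpy as np

# Minimal sketch of the breast percent density (PD%) computation implied
# above: given a binary breast mask and a binary dense-tissue mask (here
# synthetic arrays standing in for real segmentation outputs), PD% is the
# dense-tissue area as a percentage of the whole breast area.
breast_mask = np.zeros((8, 8), dtype=bool)
breast_mask[1:7, 1:7] = True           # 36 breast pixels
dense_mask = np.zeros_like(breast_mask)
dense_mask[2:5, 2:5] = True            # 9 dense pixels (inside the breast)

pd_percent = 100.0 * dense_mask.sum() / breast_mask.sum()
print(round(pd_percent, 1))  # -> 25.0
```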
Investigating the role of machine learning and deep learning techniques in medical image segmentation
This work originates from the growing interest of the medical imaging community in the application of
machine learning and, in particular, deep learning techniques to improve the accuracy of cancer screening. The thesis
is structured into two different tasks.
In the first part, magnetic resonance images were analysed in order to support clinical experts in the
treatment of patients with brain tumour metastases (BM). The main topic of this study was to
investigate whether BM segmentation may be approached successfully by two supervised ML classifiers,
belonging to feature-based and deep learning approaches, respectively. An SVM and a V-Net convolutional neural
network model were selected from the literature as representative of the two approaches.
The second task of this thesis illustrates the development of a deep learning study aimed at processing
and classifying lesions in mammograms with the use of slender neural networks. Mammography has a central
role in the screening and diagnosis of breast lesions. Deep convolutional neural networks have shown great
potential to address the issue of early detection of breast cancer with an acceptable level of accuracy and
reproducibility. A traditional convolutional network was compared with a novel one built with
much more efficient depthwise separable convolution layers.
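The "slender" network idea rests on the parameter savings of depthwise separable convolutions. A back-of-the-envelope comparison, with arbitrary example layer sizes rather than those of the thesis networks:

```python
# Parameter counts for a standard convolution versus a depthwise separable
# convolution, illustrating why the latter yields a slender network.
# Bias terms are ignored; the layer sizes are arbitrary example values.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 8,768 parameters
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction
```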
As a final goal, to integrate the developed system into clinical practice, all the medical
imaging and pattern recognition algorithmic solutions for both fields studied have been integrated into a MATLAB® software
package.
Deep Learning-Based Artificial Intelligence for Mammography
During the past decade, researchers have investigated the use of computer-aided mammography interpretation. With the application of deep learning technology, artificial intelligence (AI)-based algorithms for mammography have shown promising results in the quantitative assessment of parenchymal density, detection and diagnosis of breast cancer, and prediction of breast cancer risk, enabling more precise patient management. AI-based algorithms may also enhance the efficiency of the interpretation workflow by reducing both the workload and interpretation time. However, more in-depth investigation is required to conclusively prove the effectiveness of AI-based algorithms. This review article discusses how AI algorithms can be applied to mammography interpretation as well as the current challenges in their implementation in real-world practice.
Literary review of algorithms for segmentation and classification of Artificial Intelligence pathologies applied to breast cancer
According to data, breast cancer is a significant health issue and has a considerable economic
impact. This clearly justifies the need for breast cancer screening. However, the current diagnostic
process used in clinical settings is prone to errors. Consequently, there is a requirement for a tool
that can help doctors categorize mammograms into the four BI-RADS categories.
This study presents an approach that uses deep learning. It examines the challenges and difficulties
encountered and evaluates and compares its effectiveness. One dataset of mammograms was
used, in which experts had already classified the radiological images using the BI-RADS guidelines.
The images in this dataset belong to categories 1 to 4.
The deep learning approach employed in this study is based on a Convolutional Neural Network
(CNN), namely a ResNet22. The proposal is to use two inputs, one for the Cranio-Caudal (CC) view
and another for the Medio-Lateral Oblique (MLO) view. Each input comprises a mammogram image
and two heatmaps. Consequently, we have named the architecture MammoHeatNet (MHN).
The algorithm initially processes the mammogram image by cropping it, extracting optimal centers,
and obtaining the heatmaps. Once the pre-processing is complete, the inputs are fed into the
model, which then classifies them into four BI-RADS categories. To obtain the best model, various
parameter configurations have been tested.
The final model attained a maximum accuracy of 74.19%. Training and testing
the model was time-intensive, requiring 150 hours to obtain the best possible model.
In conclusion, the deep learning model used in this study achieves good performance. However, with
the incorporation of a larger training dataset and various modifications to the model, even better
results could be achieved. The main contribution of this work is the implementation of a deep
neural network that processes the images as a human specialist would, using two views of the
same mammogram.
Computer-aided Detection of Breast Cancer in Digital Tomosynthesis Imaging Using Deep and Multiple Instance Learning
Breast cancer is the most common cancer among women in the world. Nevertheless, early detection of breast cancer improves the chance of successful treatment. Digital breast tomosynthesis (DBT) is a new tomographic technique developed to minimize the limitations of conventional digital mammography screening. A DBT is a quasi-three-dimensional image reconstructed from a small number of two-dimensional (2D) low-dose X-ray images acquired over a limited angular range around the breast.
Our research aims to introduce computer-aided detection (CAD) frameworks to detect early signs of breast cancer in DBTs. In this thesis, we propose three CAD frameworks for the detection of breast cancer in DBTs. The first CAD framework is based on hand-crafted feature extraction. Addressing the early signs of breast cancer, namely masses, micro-calcifications, and bilateral asymmetry between the left and right breast, the system includes three separate channels to detect each sign. The next two CAD frameworks automatically learn complex patterns of 2D slices using a deep convolutional neural network and deep cardinality-restricted Boltzmann machines. Finally, the CAD frameworks employ a multiple-instance learning approach with a randomized trees algorithm to classify DBT images based on information extracted from the 2D slices. The frameworks operate on 2D slices generated from DBT volumes. They were developed and evaluated using 5,040 2D image slices obtained from 87 DBT volumes. We demonstrate the validity and usefulness of the proposed CAD frameworks in empirical experiments for detecting breast cancer in DBTs.
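The multiple-instance learning idea above treats a DBT volume as a "bag" of 2D slice "instances". A minimal sketch under the standard MIL assumption (a bag is positive if at least one instance is positive); the slice scores are made-up probabilities, not outputs of the thesis classifiers:

```python
# Sketch of multiple-instance learning aggregation: a DBT volume (the bag)
# is labeled from its 2D slices (the instances). Under the standard MIL
# assumption, the bag is positive if any instance is positive.
def classify_volume(slice_scores, threshold=0.5):
    """Return 1 (suspicious) if any slice score exceeds the threshold."""
    return int(max(slice_scores) > threshold)

# Invented per-slice probabilities for two hypothetical volumes.
healthy_volume = [0.05, 0.12, 0.08, 0.20]
suspect_volume = [0.10, 0.85, 0.30, 0.15]
print(classify_volume(healthy_volume), classify_volume(suspect_volume))  # -> 0 1
```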
Multi-fractal dimension features by enhancing and segmenting mammogram images of breast cancer
Breast cancer is the most common malignancy causing death in women. Early detection of breast cancer using mammographic images can help reduce the mortality rate and the probability of recurrence. Through mammographic examination, breast lesions can be detected and classified. Breast lesions can be detected using many popular tools such as Magnetic Resonance Imaging (MRI), ultrasonography, and mammography. Although mammography is very useful in the diagnosis of breast cancer, the pattern similarities between normal and pathologic cases make the process of diagnosis difficult. Therefore, in this thesis Computer-Aided Diagnosis (CAD) systems have been developed to help doctors and technicians in detecting lesions. The thesis aims to increase the accuracy of diagnosing breast cancer for optimal classification of cancer, achieved using Machine Learning (ML) and image processing techniques on mammogram images. This thesis also proposes an improved automated extraction of powerful texture features for classification by enhancing and segmenting breast cancer mammogram images. The proposed CAD system consists of five stages, namely pre-processing, segmentation, feature extraction, feature selection, and classification. The first stage, pre-processing, is used for noise reduction in the mammogram images. Based on the frequency domain, this thesis employs the wavelet transform to enhance mammogram images in the pre-processing stage for two purposes: to highlight the border of mammogram images for the segmentation stage, and to enhance the region of interest (ROI) using an adaptive threshold for the feature extraction stage. The second stage is the segmentation process, which identifies the ROI in mammogram images. This is a difficult task because of several landmarks such as the breast boundary and artifacts, as well as the pectoral muscle in the Medio-Lateral Oblique (MLO) view.
Thus, this thesis presents an automatic segmentation algorithm based on new thresholding combined with image processing techniques. Experimental results demonstrate that the proposed model increases the segmentation accuracy of the ROI with respect to the breast background, landmarks, and pectoral muscle. The third stage is feature extraction, where an enhancement model based on fractal dimension is proposed to derive significant mammogram image texture features. Based on the proposed model, powerful texture features for classification are extracted. The fourth stage is feature selection, where a Genetic Algorithm (GA) has been used to select the important features. In the last stage, classification, an Artificial Neural Network (ANN) has been used to differentiate between benign and malignant classes of cancer using the most relevant texture features. In conclusion, the classification accuracy, sensitivity, and specificity obtained by the proposed CAD system are improved in comparison to previous studies. This thesis makes a practical contribution to the identification of breast cancer using mammogram images and better classification accuracy of benign and malignant lesions using ML and image processing techniques.
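Fractal dimension, the basis of the texture features above, is commonly estimated by box counting. The sketch below is the textbook single-dimension version, not the thesis's multi-fractal model; as a sanity check, a completely filled square should come out with dimension close to 2.

```python
import numpy as np

# Illustrative box-counting estimate of fractal dimension, the kind of
# texture feature the pipeline above extracts (the thesis's multi-fractal
# model is more elaborate; this is the textbook single-dimension version).
def box_counting_dimension(img, box_sizes):
    """Estimate the fractal dimension of a binary image via box counting."""
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():  # box contains foreground
                    n += 1
        counts.append(n)
    # Slope of log N(s) against log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A completely filled square has dimension 2.
filled = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(filled, [1, 2, 4, 8]), 2))  # -> 2.0
```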
Advanced Computational Methods for Oncological Image Analysis
Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.
Diagnostic accuracy of machine learning models on mammography in breast cancer classification: a meta-analysis
In this meta-analysis, we aimed to estimate the diagnostic accuracy of machine learning models on digital mammograms and tomosynthesis in breast cancer classification and to assess the factors affecting their diagnostic accuracy. We searched for related studies in Web of Science, Scopus, PubMed, Google Scholar and Embase. The studies were screened in two stages to exclude the unrelated studies and duplicates. Finally, 36 studies containing 68 machine learning models were included in this meta-analysis. The area under the curve (AUC), hierarchical summary receiver operating characteristics (HSROC) curve, pooled sensitivity and pooled specificity were estimated using a bivariate Reitsma model. Overall AUC, pooled sensitivity and pooled specificity were 0.90 (95% CI: 0.85–0.90), 0.83 (95% CI: 0.78–0.87) and 0.84 (95% CI: 0.81–0.87), respectively. Additionally, the three significant covariates identified in this study were country (p = 0.003), source (p = 0.002) and classifier (p = 0.016). The type of data covariate was not statistically significant (p = 0.121). Additionally, Deeks’ linear regression test indicated that publication bias exists in the included studies (p = 0.002). Thus, the results should be interpreted with caution.
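The meta-analysis above fits a bivariate Reitsma model; as a much simpler illustration of the pooling step, the sketch below does univariate fixed-effect inverse-variance pooling of sensitivities on the logit scale. The (true positive, false negative) counts are invented stand-ins, not data from the included studies.

```python
import math

# Simplified illustration of pooling sensitivities across studies: a
# univariate fixed-effect inverse-variance average on the logit scale
# (NOT the bivariate Reitsma model used in the meta-analysis above).
studies = [(80, 20), (45, 15), (120, 30)]  # invented (tp, fn) counts

weights, logits = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    logit = math.log(sens / (1 - sens))
    var = 1.0 / tp + 1.0 / fn   # approximate variance of the logit
    weights.append(1.0 / var)   # inverse-variance weight
    logits.append(logit)

pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform
print(round(pooled_sens, 3))  # -> 0.79
```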