Convolutional neural networks for gastric landmark detection (Redes neurais convolucionais para deteção de landmarks gástricas)
Gastric cancer has the fifth highest incidence of any cancer in the world and, when
diagnosed at an advanced stage, its survival rate is only 5%-25%, so it is essential
that the cancer is detected at an early stage. However, physicians specialized in
this diagnosis have difficulty detecting early lesions during the diagnostic
examination, esophagogastroduodenoscopy (EGD). Early lesions on the walls of
the digestive system are nearly imperceptible and easily confused with the stomach
mucosa, making them difficult to detect. Moreover, physicians run the risk of not
covering all areas of the stomach during diagnosis, including areas that may have
lesions. The introduction of artificial intelligence into this diagnostic method may
help to detect gastric cancer at an earlier stage. A system capable of monitoring
all areas of the digestive system during EGD would help prevent gastric cancer from
being diagnosed only at advanced stages. This work focuses on monitoring upper
gastrointestinal (GI) landmarks: anatomical areas of the digestive system that are
more prone to the appearance of lesions and whose coverage allows better control
of the areas missed during the EGD exam.
The use of convolutional neural networks (CNNs) for GI landmark monitoring has
been studied extensively by the scientific community, as such networks are well
suited to extracting the features that best characterize EGD images.
The aim of this work was to test new automatic algorithms, specifically
CNN-based systems able to detect upper GI landmarks, to avoid blind spots
during EGD and thereby increase the quality of endoscopic exams.
In contrast with related works in the literature, in this work we used upper GI
landmarks images closer to real-world environments. In particular, images for each
anatomical landmark class include both examples affected by pathologies and
healthy tissue.
We tested several pre-trained architectures, such as ResNet-50, DenseNet-121, and
VGG-16. For each pre-trained architecture, we tested different learning
approaches, including the use of class weights (CW), batch normalization and
dropout layers, and data augmentation during training. The CW ResNet-50
achieved an accuracy of 71.79% and a Matthews correlation coefficient (MCC)
of 65.06%.
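The class-weight (CW) strategy mentioned above is commonly implemented as inverse-frequency weighting of the loss. A minimal NumPy sketch; the function names and the exact weighting scheme are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency class weights: rarer classes get larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each sample scaled by its true-class weight."""
    eps = 1e-12
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * ce))

labels = np.array([0, 0, 0, 1])               # toy imbalanced label set
w = class_weights(labels, 2)                  # w ~ [0.667, 2.0]
probs = np.array([[0.9, 0.1], [0.8, 0.2],
                  [0.7, 0.3], [0.4, 0.6]])    # toy predicted probabilities
loss = weighted_cross_entropy(probs, labels, w)
```

Errors on the rare class are thus penalized more heavily, which counteracts the imbalance between landmark classes.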
In current state-of-the-art studies, only supervised learning approaches have been
used to classify EGD images. In our work, by contrast, we also tested unsupervised
learning to increase classification performance. In particular, we used
convolutional autoencoder architectures to extract representative features from
unlabeled GI images and concatenated their outputs with the CW ResNet-50
architecture. We achieved an accuracy of 72.45% and an MCC of 65.08%.
Mestrado em Engenharia Biomédica (Master's in Biomedical Engineering)
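The feature-concatenation scheme described above can be sketched as follows. The 2048-d penultimate layer matches ResNet-50; the 256-d autoencoder bottleneck and the random stand-in extractors are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def resnet_features(image):
    """Stand-in for the CW ResNet-50 penultimate-layer features (2048-d)."""
    return rng.standard_normal(2048)

def autoencoder_latent(image):
    """Stand-in for a convolutional autoencoder bottleneck (256-d, assumed)."""
    return rng.standard_normal(256)

image = rng.standard_normal((224, 224, 3))    # one EGD frame (toy data)
fused = np.concatenate([resnet_features(image), autoencoder_latent(image)])
# The classification head is then trained on the 2304-d fused vector.
```

The autoencoder can be trained on unlabeled frames, so the fused representation benefits from data that supervised training alone cannot use.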
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering
In recent years, artificial intelligence has played an important role in
medicine and disease diagnosis, with many applications to be mentioned, one of
which is Medical Visual Question Answering (MedVQA). By combining computer
vision and natural language processing, MedVQA systems can assist experts in
extracting relevant information from medical images based on a given question
and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023
challenge carried out a visual question answering task in the gastrointestinal
domain, which includes gastroscopy and colonoscopy images. Our team approached
Task 1 of the challenge by proposing a multimodal learning method with image
enhancement to improve VQA performance on gastrointestinal images. The
multimodal architecture is set up with a BERT encoder and different pre-trained
vision models based on convolutional neural network (CNN) and Transformer
architectures for feature extraction from the question and endoscopy image. The
results of this study highlight the dominance of Transformer-based vision
models over CNNs and demonstrate the effectiveness of the image
enhancement process, with six of the eight vision models achieving a better
F1-Score. Our best method, which takes advantage of BERT+BEiT fusion and image
enhancement, achieves up to 87.25% accuracy and 91.85% F1-Score on the
development test set, while also producing good results on the private test set
with an accuracy of 82.01%.
Comment: ImageCLEF2023 published version:
https://ceur-ws.org/Vol-3497/paper-129.pd
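The BERT+vision fusion described above can be sketched as projecting each modality into a shared space and concatenating the results. The dimensions and projection matrices below are toy assumptions, not the challenge submission:

```python
import numpy as np

def fuse(text_emb, img_emb, W_t, W_i):
    """Project question and image embeddings to a shared space, then concatenate."""
    return np.concatenate([text_emb @ W_t, img_emb @ W_i])

rng = np.random.default_rng(1)
text_emb = rng.standard_normal(768)           # BERT [CLS] embedding size
img_emb = rng.standard_normal(768)            # BEiT pooled embedding size
W_t = rng.standard_normal((768, 256)) * 0.02  # learned projections (toy values)
W_i = rng.standard_normal((768, 256)) * 0.02
joint = fuse(text_emb, img_emb, W_t, W_i)     # 512-d joint representation
# An answer classifier would then operate on `joint`.
```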
Classification of Anomalies in Gastrointestinal Tract Using Deep Learning
Automatic detection of diseases and anatomical landmarks in medical images is an important and challenging task that could support medical diagnosis, reduce the cost and time of investigational procedures, and refine health care systems all over the world. Recently, gastrointestinal (GI) tract disease diagnosis through endoscopic image classification has become an active research area in the biomedical field. Several GI tract disease classification methods based on image processing and machine learning techniques have been proposed by diverse research groups in the recent past. However, an effective and comprehensive deep ensemble neural network-based classification model with highly accurate classification results is not yet available in the literature. In this thesis, we review ways to apply deep learning techniques to multi-disease computer-aided detection in the gastrointestinal tract and to classify these images. We re-trained five state-of-the-art neural network architectures, VGG16, ResNet, MobileNet, Inception-v3, and Xception, on the Kvasir dataset to classify eight categories that include anatomical landmarks (pylorus, z-line, cecum), diseased states (esophagitis, ulcerative colitis, polyps), and medical procedures (dyed lifted polyps, dyed resection margins) in the gastrointestinal tract. Our models showed promising accuracy, a remarkable performance with respect to state-of-the-art approaches. The accuracies achieved using VGG, ResNet, MobileNet, Inception-v3, and Xception were 98.3%, 92.3%, 97.6%, 90%, and 98.2%, respectively. The most accurate results were achieved when re-training the VGG16 and Xception networks, with accuracies reaching 98%, owing to their strong performance when trained on the ImageNet dataset and internal structures that support classification problems.
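One common variant of the re-training setup described above freezes a pretrained backbone and learns only a new 8-class softmax head. The backbone stand-in, feature size, and initialization below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(2)

def frozen_backbone(image):
    """Stand-in for a pretrained feature extractor (VGG16, ResNet, ...)."""
    return rng.standard_normal(512)

N_CLASSES = 8          # Kvasir: 3 landmarks, 3 diseased states, 2 procedures
W_head = rng.standard_normal((512, N_CLASSES)) * 0.01   # the re-trained head
image = rng.standard_normal((224, 224, 3))
probs = softmax(frozen_backbone(image) @ W_head)        # one probability per class
```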
Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis
Automating the analysis of imagery of the Gastrointestinal (GI) tract
captured during endoscopy procedures has substantial potential benefits for
patients, as it can provide diagnostic support to medical practitioners and
reduce mistakes caused by human error. To further the development of such methods, we
propose a two-stream model for endoscopic image analysis. Our model fuses two
streams of deep feature inputs by mapping their inherent relations through a
novel relational network model, to better model symptoms and classify the
image. In contrast to handcrafted feature-based models, our proposed network is
able to learn features automatically and outperforms existing state-of-the-art
methods on two public datasets: KVASIR and Nerthus. Our extensive evaluations
illustrate the importance of having two streams of inputs instead of a single
stream and also demonstrate the merits of the proposed relational network
architecture for combining those streams.
Comment: Accepted for Publication at MICCAI 202
Deep Learning-based Solutions to Improve Diagnosis in Wireless Capsule Endoscopy
Deep Learning (DL) models have gained extensive attention due to their remarkable performance in a wide range of real-world applications, particularly in computer vision. This achievement, combined with the increase in available medical records, has made it possible to open up new opportunities for analyzing and interpreting healthcare data. This symbiotic relationship can enhance the diagnostic process by identifying abnormalities, patterns, and trends, resulting in more precise, personalized, and effective healthcare for patients.
Wireless Capsule Endoscopy (WCE) is a non-invasive medical imaging technique used to visualize the entire Gastrointestinal (GI) tract. Up to this moment, physicians meticulously review the captured frames to identify pathologies and diagnose patients. This manual process is time-consuming and prone to errors due to the challenges of interpreting the complex nature of WCE procedures. Thus, it demands a high level of attention, expertise, and experience. To overcome these drawbacks, shorten the screening process, and improve the diagnosis, efficient and accurate DL methods are required.
This thesis proposes DL solutions to the following problems encountered in the analysis of WCE studies: pathology detection, anatomical landmark identification, and Out-of-Distribution (OOD) sample handling. These solutions aim to achieve robust systems that minimize the duration of the video analysis and reduce the number of undetected lesions.
Throughout their development, several DL drawbacks have appeared, including small and imbalanced datasets. These limitations have also been addressed, ensuring that they do not hinder the generalization of neural networks, leading to suboptimal performance and overfitting.
To address the previous WCE problems and overcome the DL challenges, the proposed systems adopt various strategies that leverage the advantages of Triplet Loss (TL) and Self-Supervised Learning (SSL) techniques. Mainly, TL has been used to improve the generalization of the models, while SSL methods have been employed to exploit unlabeled data and obtain useful representations. The presented methods achieve state-of-the-art results in the aforementioned medical problems and contribute to the ongoing research to improve the diagnosis of WCE studies.
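The Triplet Loss (TL) mentioned above has a standard form: pull an anchor embedding toward a same-class positive and push it at least a margin away from a different-class negative. A minimal NumPy sketch with toy embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a,p) - d(a,n) + margin) with Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])       # anchor embedding
p = np.array([0.1, 0.0])       # same-class sample, close to the anchor
n = np.array([3.0, 0.0])       # different-class sample, far from the anchor
easy = triplet_loss(a, p, n)   # 0.0: negative already beyond the margin
hard = triplet_loss(a, n, p)   # 3.9: roles swapped, margin strongly violated
```

Minimizing this loss shapes an embedding space in which frames of the same pathology cluster together, which is what improves generalization on small WCE datasets.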
Teeth Localization and Lesion Segmentation in CBCT Images using SpatialConfiguration-Net and U-Net
The localization of teeth and segmentation of periapical lesions in cone-beam
computed tomography (CBCT) images are crucial tasks for clinical diagnosis and
treatment planning, which are often time-consuming and require a high level of
expertise. However, automating these tasks is challenging due to variations in
shape, size, and orientation of lesions, as well as similar topologies among
teeth. Moreover, the small volumes occupied by lesions in CBCT images pose a
class imbalance problem that needs to be addressed. In this study, we propose a
deep learning-based method utilizing two convolutional neural networks: the
SpatialConfiguration-Net (SCN) and a modified version of the U-Net. The SCN
accurately predicts the coordinates of all teeth present in an image, enabling
precise cropping of teeth volumes that are then fed into the U-Net which
detects lesions via segmentation. To address class imbalance, we compare the
performance of three reweighting loss functions. After evaluation on 144 CBCT
images, our method achieves a 97.3% accuracy for teeth localization, along with
a promising sensitivity and specificity of 0.97 and 0.88, respectively, for
subsequent lesion detection.
Comment: Accepted for VISIGRAPP 2024 (Track: VISAPP), 8 pages
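One widely used reweighting-style loss for the lesion/background imbalance described above is the Dice loss, which scores overlap rather than per-voxel accuracy and so is not swamped by the dominant background class. This sketch is illustrative and not necessarily one of the three losses the authors compared:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap; small lesions are not swamped by background voxels."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

target = np.zeros((8, 8))
target[3:5, 3:5] = 1.0        # a small lesion: only 4 of 64 voxels
perfect = dice_loss(target.copy(), target)            # ~0: exact overlap
all_background = dice_loss(np.zeros((8, 8)), target)  # ~1: lesion missed
```

Note that predicting all background, which would score 94% per-voxel accuracy here, still receives a loss near 1.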
Learning Through Guidance: Knowledge Distillation for Endoscopic Image Classification
Endoscopy plays a major role in identifying any underlying abnormalities
within the gastrointestinal (GI) tract. There are multiple GI tract diseases
that are life-threatening, such as precancerous lesions and other intestinal
cancers. In the usual process, a diagnosis is made by a medical expert, a
process that can be prone to human error, and the accuracy of the test is also
entirely dependent on the expert's level of experience. Deep learning, specifically
Convolutional Neural Networks (CNNs), which are designed to perform automatic
feature learning without any prior feature engineering, has recently reported
great benefits for GI endoscopy image analysis. Previous research has developed
models that focus only on improving performance, as such, the majority of
introduced models contain complex deep network architectures with a large
number of parameters that require longer training times. However, there is a
lack of focus on developing lightweight models which can run in low-resource
environments, which are typically encountered in medical clinics. We
investigate three knowledge distillation (KD) based learning frameworks,
response-based, feature-based, and relation-based mechanisms, and introduce a
novel multi-head attention-based feature fusion mechanism to support
relation-based learning. Compared to the
existing relation-based methods that follow simplistic aggregation techniques
of multi-teacher response/feature-based knowledge, we adopt the multi-head
attention technique to provide flexibility towards localising and transferring
important details from each teacher to better guide the student. We perform
extensive evaluations on two widely used public datasets, KVASIR-V2 and
Hyper-KVASIR, and our experimental results signify the merits of our proposed
relation-based framework in achieving an improved lightweight model (only 51.8k
trainable parameters) that can run in a resource-limited environment.
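Response-based KD, the simplest of the three frameworks investigated, trains the student to match the teacher's temperature-softened output distribution. A minimal NumPy sketch; the temperature and logits are toy values, not the paper's configuration:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-T softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q))) * T * T   # T^2 gradient scaling

teacher = [5.0, 1.0, 0.5]
close = kd_loss([4.8, 1.1, 0.4], teacher)   # student nearly matches the teacher
far = kd_loss([0.0, 5.0, 1.0], teacher)     # student disagrees strongly
```

The softened targets expose the teacher's inter-class similarity structure, which is the "guidance" a small student cannot learn from hard labels alone.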
Machine learning based small bowel video capsule endoscopy analysis: Challenges and opportunities
Video capsule endoscopy (VCE) is a revolutionary technology for the early diagnosis of gastric disorders. However, owing to the high redundancy and subtle manifestation of anomalies among thousands of frames, the manual review of VCE videos requires considerable patience, focus, and time. The automatic analysis of these videos using computational methods is a challenge, as the capsule's motion is uncontrolled and frames are often captured poorly. Several machine learning (ML) methods, including recent deep convolutional neural network approaches, have been adopted after evaluating their potential to improve VCE analysis. However, the clinical impact of these methods is yet to be investigated. This survey aimed to highlight the gaps between existing ML-based research methodologies and clinically significant rules recently established by gastroenterologists based on VCE. A framework for interpreting raw frames into contextually relevant frame-level findings and subsequently merging these findings with meta-data to obtain a disease-level diagnosis was formulated. Frame-level findings can be more intelligible for discriminative learning when organized in a taxonomical hierarchy. The proposed taxonomical hierarchy, which is formulated based on pathological and visual similarities, may yield better classification metrics by setting inference classes at a higher level than training classes. Mapping from the frame level to the disease level was structured in the form of a graph based on clinical relevance, inspired by the recent international consensus developed by domain experts. Furthermore, existing methods for VCE summarization, classification, segmentation, detection, and localization were critically evaluated and compared based on aspects deemed significant by clinicians. Numerous studies pertain to single-anomaly detection rather than a pragmatic approach in a clinical setting. The challenges and opportunities associated with VCE analysis were delineated.
A focus on maximizing the discriminative power of features corresponding to various subtle lesions and anomalies may help cope with the diverse and mimicking nature of different VCE frames. Large multicenter datasets must be created to cope with data sparsity, bias, and class imbalance. Explainability, reliability, traceability, and transparency are important for an ML-based diagnostic system in VCE. Existing ethical and legal bindings narrow the scope of possibilities where ML can potentially be leveraged in healthcare. Despite these limitations, ML-based video capsule endoscopy will revolutionize clinical practice, aiding clinicians in rapid and accurate diagnosis.
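The frame-level-to-disease-level graph described in the survey can be sketched as a two-stage lookup: frame findings roll up to parent classes in the taxonomy, which then map to disease-level candidates. All labels below are hypothetical placeholders, not the published consensus taxonomy:

```python
# All labels are hypothetical placeholders for illustration only.
FRAME_TAXONOMY = {                # frame-level finding -> parent finding class
    "aphthous_ulcer": "ulcer",
    "deep_ulcer": "ulcer",
    "angiectasia": "vascular_lesion",
}
DISEASE_GRAPH = {                 # parent class -> disease-level candidates
    "ulcer": ["inflammatory_disease"],
    "vascular_lesion": ["bleeding_source"],
}

def diagnose(frame_findings):
    """Merge frame-level findings into disease-level candidate diagnoses."""
    parents = {FRAME_TAXONOMY[f] for f in frame_findings if f in FRAME_TAXONOMY}
    diseases = set()
    for parent in parents:
        diseases.update(DISEASE_GRAPH.get(parent, []))
    return sorted(diseases)

candidates = diagnose(["aphthous_ulcer", "deep_ulcer"])
```

Training at the fine-grained frame level while inferring at the coarser parent level is what the survey suggests may improve classification metrics.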