75 research outputs found
Overview of ImageCLEFlifelog 2018: daily living understanding and lifelog moment retrieval
Benchmarking has a long tradition and an important position within the multimedia and retrieval research communities. Benchmarks such as the MediaEval Multimedia Benchmark or CLEF are well established and well served by the community. Beyond comparing different methods and approaches, a major goal of these competitions is to create or promote new research directions within multimedia; one example is the Medico task at MediaEval, which targets medical multimedia analysis. Lifelogging attracts considerable attention in the community, as shown by the several workshops and special sessions hosted on the topic, and several lifelogging-related benchmarks already exist, for example the previous edition of the lifelog task at ImageCLEF. Last year's ImageCLEFlifelog task was well received but had some barriers that made participation difficult for some researchers (data size, multi-modal features, etc.). ImageCLEFlifelog 2018 tries to overcome these problems and make the task accessible to an even broader audience (e.g., pre-extracted features are provided). Furthermore, the task is divided into two subtasks (challenges): lifelog moment retrieval (LMRT) and Activities of Daily Living understanding (ADLT). In all, seven teams participated with a total of 41 runs, a significant increase compared to the previous year.
An objective validation of polyp and instrument segmentation methods in colonoscopy through Medico 2020 polyp segmentation and MedAI 2021 transparency challenges
Automatic analysis of colonoscopy images has been an active field of research
motivated by the importance of early detection of precancerous polyps. However,
detecting polyps during the live examination can be challenging due to various
factors such as variation of skills and experience among the endoscopists, lack
of attentiveness, and fatigue leading to a high polyp miss-rate. Deep learning
has emerged as a promising solution to this challenge as it can assist
endoscopists in detecting and classifying overlooked polyps and abnormalities
in real time. In addition to the algorithm's accuracy, transparency and
interpretability are crucial to explaining the whys and hows of the algorithm's
prediction. Further, most algorithms are developed on private data, as closed
source, or with proprietary software, and the methods lack reproducibility. Therefore,
to promote the development of efficient and transparent methods, we have
organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI:
Transparency in Medical Image Segmentation (MedAI 2021)" competitions. We
present a comprehensive summary and analyze each contribution, highlight the
strengths of the best-performing methods, and discuss the possibility of
translating such methods into clinical practice. For the transparency
task, a multi-disciplinary team, including expert gastroenterologists, assessed
each submission and evaluated each team on open-source practices, failure-case
analysis, ablation studies, and the usability and understandability of the
evaluations, to gain a deeper understanding of the models' credibility for clinical
deployment. Through the comprehensive analysis of the challenge, we not only
highlight the advancements in polyp and surgical instrument segmentation but
also encourage qualitative evaluation for building more transparent and
understandable AI-based colonoscopy systems.
Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis
Automating the analysis of imagery of the Gastrointestinal (GI) tract
captured during endoscopy procedures has substantial potential benefits for
patients, as it can provide diagnostic support to medical practitioners and
reduce mistakes arising from human error. To further the development of such methods, we
propose a two-stream model for endoscopic image analysis. Our model fuses two
streams of deep feature inputs by mapping their inherent relations through a
novel relational network model, to better model symptoms and classify the
image. In contrast to handcrafted feature-based models, our proposed network is
able to learn features automatically and outperforms existing state-of-the-art
methods on two public datasets: KVASIR and Nerthus. Our extensive evaluations
illustrate the importance of having two streams of inputs instead of a single
stream, and also demonstrate the merits of the proposed relational network
architecture for combining those streams.
Comment: Accepted for Publication at MICCAI 202
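The relational fusion idea described in this abstract can be illustrated with a minimal sketch: scores are computed over pairs of components drawn from the two feature streams and then aggregated. All names here (`relation`, `fuse_streams`) are illustrative assumptions, not the paper's actual architecture, and the toy product score stands in for what would be a learned network.

```python
# Hedged sketch of fusing two deep-feature streams via pairwise relations.
# In a real relational network, `relation` would be a learned MLP g(o_i, o_j);
# here a simple product stands in so the example stays self-contained.

def relation(a: float, b: float) -> float:
    """Toy pairwise relation score between two feature components."""
    return a * b

def fuse_streams(stream_a, stream_b) -> float:
    """Aggregate relation scores over all cross-stream component pairs,
    mirroring the sum over g(o_i, o_j) in relational-network models."""
    return sum(relation(a, b) for a in stream_a for b in stream_b)

# Two toy feature vectors (e.g., outputs of two CNN backbones)
f1 = [0.2, 0.5]
f2 = [1.0, 0.5]
fused = fuse_streams(f1, f2)
```

In practice the aggregated relation representation would feed a classifier head; the point of the sketch is only that the fused score depends on interactions between the streams, not on either stream alone.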
Real-time polyp segmentation using U-net with IoU loss
Colorectal cancer is the third leading cause of cancer deaths worldwide. While automated segmentation methods can help detect polyps and thereby improve their surgical removal, the clinical usability of these methods requires a trade-off between accuracy and speed. In this work, we exploit the traditional U-Net architecture and compare different segmentation loss functions. Our results demonstrate that the IoU loss yields improved segmentation performance (nearly a 3% improvement in Dice) for real-time polyp segmentation.
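A soft (differentiable) IoU loss of the kind this abstract refers to can be sketched as below. This is a common formulation, not necessarily the paper's exact one; `soft_iou_loss` and the `eps` smoothing term are assumptions for illustration.

```python
# Hedged sketch of a soft IoU (Jaccard) loss for binary segmentation.
# Predictions are probabilities in [0, 1]; targets are binary masks,
# both flattened to flat sequences for simplicity.

def soft_iou_loss(pred, target, eps=1e-6):
    """Return 1 - IoU, where intersection and union are computed
    softly so the loss is differentiable with respect to `pred`."""
    intersection = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return 1.0 - (intersection + eps) / (union + eps)

# A perfect prediction drives the loss toward 0
loss_perfect = soft_iou_loss([1.0, 0.0, 1.0], [1, 0, 1])
# A fully disjoint prediction drives the loss toward 1
loss_disjoint = soft_iou_loss([1.0, 0.0], [0, 1])
```

Unlike a per-pixel cross-entropy, this loss directly optimizes the overlap metric used for evaluation, which is one common motivation for preferring it in segmentation tasks.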