2017 Robotic Instrument Segmentation Challenge
In mainstream computer vision and machine learning, public datasets such as
ImageNet, COCO and KITTI have helped drive enormous improvements by enabling
researchers to understand the strengths and limitations of different algorithms
via performance comparison. However, this type of approach has had limited
translation to problems in robot-assisted surgery, as the field has never
established the same level of common datasets and benchmarking methods. In
2015, a sub-challenge was introduced at the EndoVis workshop in which a set of
robotic images was provided with annotations generated automatically from
robot forward kinematics. However, this dataset had issues: limited background
variation, a lack of complex motion, and inaccuracies in the annotation. In
this work we present the results of the 2017 challenge on robotic instrument
segmentation, in which 10 teams participated in binary, parts and type based
segmentation of articulated da Vinci robotic instruments.
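The three task variants differ only in how pixel labels are grouped: binary separates instrument from background, parts labels each articulated segment, and type assigns one id per instrument model. A minimal numpy sketch, using a hypothetical label convention that is not taken from the challenge specification:

```python
import numpy as np

# Hypothetical part ids (illustrative only, not the challenge's scheme):
# 0 = background, 1 = shaft, 2 = wrist, 3 = jaws
parts = np.array([
    [0, 0, 1, 1],
    [0, 1, 2, 2],
    [1, 2, 3, 3],
])

# Binary segmentation: instrument vs background
binary = (parts > 0).astype(np.uint8)

# Parts segmentation: keep the per-part labels as they are.
# Type segmentation would instead assign one id per instrument model;
# here all parts of this (single) instrument collapse to type id 1.
instrument_type = np.where(parts > 0, 1, 0)

print(binary.sum())  # number of instrument pixels
```

The point is that all three ground truths can be derived from one sufficiently fine-grained annotation, which is why challenges often release a single labeled mask per frame.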
Automatic grade classification of Barrett's Esophagus through feature enhancement
Barrett's Esophagus (BE) is a precancerous condition that affects the esophageal tube and carries the risk of developing into esophageal adenocarcinoma. BE is the process of metaplastic intestinal epithelium developing and replacing the normal cells in the esophageal area. The detection of BE is considered difficult due to its appearance and properties. The diagnosis is usually made through both endoscopy and biopsy. Recently, Computer Aided Diagnosis systems have been developed to support physicians' opinions when facing difficulty in the detection/classification of different types of diseases. In this paper, an automatic classification of the Barrett's Esophagus condition is introduced. The presented method enhances the internal features of a Confocal Laser Endomicroscopy (CLE) image by utilizing a proposed enhancement filter. This filter depends on fractional differentiation and integration to improve the features in the discrete wavelet transform of an image. Various features are then extracted from each enhanced image at different levels for the multi-classification process. Our approach is validated on a dataset of 32 patients with 262 images of different histology grades. The experimental results demonstrate the efficiency of the proposed technique. Our method helps clinicians achieve more accurate classification. This can potentially reduce the number of biopsies needed for diagnosis, facilitate regular monitoring of a patient's treatment and disease development, and help train doctors in the new endoscopy technology. Accurate automatic classification is particularly important for the Intestinal Metaplasia (IM) type, which can progress to deadly cancer. Hence, this work contributes an automatic classification that facilitates early intervention/treatment and decreases the number of biopsy samples needed.
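The paper's enhancement filter is based on fractional differentiation and integration applied to wavelet coefficients; the sketch below shows only the general pattern of modifying detail coefficients in a one-level 2-D Haar wavelet transform and reconstructing, with a simple constant gain standing in for the proposed fractional filter (an assumption, not the paper's method):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def enhance(img, gain=1.5):
    """Boost high-frequency (detail) coefficients, then reconstruct."""
    ll, lh, hl, hh = haar_dwt2(img)
    return haar_idwt2(ll, gain * lh, gain * hl, gain * hh)

# Sanity check: gain = 1 reproduces the input exactly
img = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(haar_idwt2(*haar_dwt2(img)), img)
```

In practice a library such as PyWavelets would supply the transform; the fractional differintegral would replace the constant `gain` with a frequency-dependent weighting of the coefficients.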
Confident texture-based laryngeal tissue classification for early stage diagnosis support
Moccia, Sara; De Momi, Elena; Guarnaschelli, Marco; Savazzi, Matteo; Laborai, Andrea; Guastini, Luca; Peretti, Giorgio; Mattos, Leonardo S.
Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation
In this paper, we examine the recent Segment Anything Model (SAM) on medical
images, and report both quantitative and qualitative zero-shot segmentation
results on nine medical image segmentation benchmarks, covering various imaging
modalities, such as optical coherence tomography (OCT), magnetic resonance
imaging (MRI), and computed tomography (CT), as well as different applications
including dermatology, ophthalmology, and radiology. Those benchmarks are
representative and commonly used in model development. Our experimental results
indicate that while SAM presents remarkable segmentation performance on images
from the general domain, its zero-shot segmentation ability remains restricted
for out-of-distribution images, e.g., medical images. In addition, SAM exhibits
inconsistent zero-shot segmentation performance across different unseen medical
domains. For certain structured targets, e.g., blood vessels, the zero-shot
segmentation of SAM completely failed. In contrast, a simple fine-tuning of it
with a small amount of data could lead to remarkable improvement of the
segmentation quality, showing the great potential and feasibility of using
fine-tuned SAM to achieve accurate medical image segmentation for precision
diagnostics. Our study indicates the versatility of generalist vision
foundation models on medical imaging, and their great potential to achieve
the desired performance through fine-tuning and eventually address the
challenges associated with accessing large and diverse medical datasets in
support of clinical diagnostics.
Comment: Published in Diagnostic
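Quantitative benchmarks of the kind reported above typically score predicted masks against ground truth with overlap metrics such as the Dice similarity coefficient. A minimal sketch on toy masks (illustrative data, not the paper's benchmarks):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: prediction covers 2 of the 4 ground-truth pixels
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1            # 4 foreground pixels
pred = np.zeros_like(gt)
pred[1:3, 1:2] = 1          # 2 pixels, both inside gt

print(round(dice(pred, gt), 3))  # 2*2 / (2+4) ≈ 0.667
```

A Dice of 1.0 means perfect overlap and 0.0 means none; reporting it per dataset is what makes the zero-shot vs fine-tuned comparison in the abstract quantitative.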