Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images
This paper investigates, using prior shape models and the concept of ball
scale (b-scale), ways of automatically recognizing objects in 3D images without
performing elaborate searches or optimization. That is, the goal is to place
the model in a single shot close to the right pose (position, orientation, and
scale) in a given image so that the model boundaries fall in the close vicinity
of object boundaries in the image. This is achieved via the following set of
key ideas: (a) A semi-automatic way of constructing a multi-object shape model
assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship
between objects in the training images and their intensity patterns captured in
b-scale images. (c) A hierarchical mechanism of positioning the model, in a
one-shot way, in a given image from knowledge of the learnt pose relationship
and the b-scale image of the given image to be segmented. The evaluation
results on a set of 20 routine clinical abdominal female and male CT data sets
indicate the following: (1) Incorporating a large number of objects improves
the recognition accuracy dramatically. (2) The recognition algorithm can be
thought of as a hierarchical framework in which quick placement of the model
assembly constitutes coarse recognition, while delineation itself constitutes
the finest recognition. (3) Scale yields useful information about the relationship
between the model assembly and any given image such that the recognition
results in a placement of the model close to the actual pose without doing any
elaborate searches or optimization. (4) Effective object recognition can make
delineation considerably more accurate. Comment: This paper was published and presented at SPIE Medical Imaging 201
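The one-shot placement idea in (c) can be illustrated with a heavily simplified, translation-only sketch. The object names, the detected reference object, and the mean-offset pose model below are assumptions for illustration only, not the paper's actual b-scale encoding.

```python
import numpy as np

# Hypothetical sketch of one-shot pose placement: learn each object's mean
# centroid offset relative to a reference object, then place every model in
# a single shot from the detected reference pose. Names are illustrative.

def learn_relative_poses(training_centroids):
    """Learn the mean offset of each object's centroid relative to the
    'reference' object across training images.

    training_centroids: list of dicts mapping object name -> (z, y, x).
    """
    offsets = {}
    for name in training_centroids[0]:
        diffs = [np.array(tc[name]) - np.array(tc["reference"])
                 for tc in training_centroids]
        offsets[name] = np.mean(diffs, axis=0)
    return offsets

def place_models(reference_centroid, offsets):
    """Place every model in one shot from the detected reference pose."""
    return {name: np.array(reference_centroid) + off
            for name, off in offsets.items()}

# Toy usage: two "training" images, one "test" image.
train = [
    {"reference": (10, 20, 30), "liver": (12, 25, 33)},
    {"reference": (11, 21, 29), "liver": (13, 27, 34)},
]
offsets = learn_relative_poses(train)
placed = place_models((15, 22, 31), offsets)
```

Because placement is a single table lookup plus an addition per object, no iterative search or optimization over poses is needed at recognition time.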
Medical image segmentation using object atlas versus object cloud models
Medical image segmentation is crucial for quantitative organ analysis and surgical planning. Since interactive segmentation is not practical in a production-mode clinical setting, automatic methods based on 3D object appearance models have been proposed. Among them, approaches based on object atlases are the most actively investigated. A key drawback of these approaches is that they require a time-costly image registration process to build and deploy the atlas. Object cloud models (OCM) have been introduced to avoid registration, considerably speeding up the whole process, but they have not been compared to object atlas models (OAM). The present paper fills this gap by presenting a comparative analysis of the two approaches in the task of individually segmenting nine anatomical structures of the human body. Our results indicate that OCM achieve a statistically significantly better accuracy for seven anatomical structures, in terms of Dice Similarity Coefficient and Average Symmetric Surface Distance.
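The accuracy comparison above uses the Dice Similarity Coefficient, whose computation on binary masks is standard; a minimal sketch follows (the Average Symmetric Surface Distance additionally requires surface extraction and is omitted here).

```python
import numpy as np

# Dice Similarity Coefficient between two binary segmentation masks:
# DSC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.

def dice(seg, gt):
    """Dice coefficient between two binary masks (nonzero = object)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: 4-voxel square vs. overlapping 6-voxel rectangle (4 shared).
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
```

Here `dice(a, b)` is 2*4/(4+6) = 0.8, illustrating how the score rewards overlap while penalizing size disagreement.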
CIDI-Lung-Seg: A Single-Click Annotation Tool for Automatic Delineation of Lungs from CT Scans
Accurate and fast extraction of lung volumes from computed tomography (CT)
scans remains in great demand in the clinical environment because the
available methods fail to provide a generic solution due to wide anatomical
variations of lungs and the existence of pathologies. Manual annotation, the
current gold standard, is time consuming and often subject to human bias. On the other
hand, current state-of-the-art fully automated lung segmentation methods fail
to make their way into the clinical practice due to their inability to
efficiently incorporate human input for handling misclassifications and praxis.
This paper presents a lung annotation tool for CT images that is interactive,
efficient, and robust. The proposed annotation tool produces an "as accurate as
possible" initial annotation based on the fuzzy-connectedness image
segmentation, followed by efficient manual fixation of the initial extraction
if deemed necessary by the practitioner. To provide maximum flexibility to the
users, our annotation tool is supported in three major operating systems
(Windows, Linux, and Mac OS X). The quantitative results comparing our free
software with commercially available lung segmentation tools show higher degree
of consistency and precision of our software with a considerable potential to
enhance the performance of routine clinical tasks. Comment: 4 pages, 6 figures;
to appear in the proceedings of the 36th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC 2014).
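The fuzzy-connectedness idea behind the tool's initial annotation can be sketched in a few lines: a path's strength is its weakest affinity link, and a voxel's connectedness to the seed is the strongest path reaching it, computable with a Dijkstra-style sweep. The simple intensity-difference affinity below is an illustrative assumption, not the tool's actual affinity function.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Fuzzy-connectedness map from a single seed (2D, 4-neighbour sketch).

    conn[v] = max over paths seed->v of (min affinity along the path).
    """
    h, w = image.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                       # max-heap via negated strength
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[y, x]:
            continue                            # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Affinity: high when neighbouring intensities are similar.
                aff = np.exp(-abs(float(image[y, x]) - float(image[ny, nx])) / sigma)
                cand = min(strength, aff)       # path strength = weakest link
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Toy image: a bright region (left) sharply separated from a dark one (right).
img = np.array([[100, 100, 0],
                [100, 100, 0],
                [100, 100, 0]])
conn = fuzzy_connectedness(img, (0, 0))
mask = conn > 0.5                               # thresholded initial annotation
```

The homogeneous region containing the seed receives connectedness near 1, while voxels across the sharp boundary receive values near 0, which is what makes a single click sufficient for a first-pass delineation.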
Hybrid Segmentation of Anatomical Data
We propose new hybrid methods for automated segmentation of radiological patient data and the Visible Human data. In this paper, we integrate boundary-based and region-based segmentation methods in a way that amplifies the strengths and reduces the weaknesses of both approaches. The novelty comes from combining a boundary-based method, deformable model-based segmentation, with region-based methods, fuzzy connectedness and Voronoi diagram-based segmentation, to develop hybrid methods that yield high precision, accuracy, and efficiency. This work is part of an NLM-funded effort to provide a fully implemented and tested Visible Human Project Segmentation and Registration Toolkit (Insight).
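The hybrid pattern of a region-based proposal followed by a boundary-based refinement can be illustrated with a loose sketch; the thresholding step and the majority-vote smoothing below are deliberately crude stand-ins for the fuzzy-connectedness and deformable-model stages, not the paper's actual algorithms.

```python
import numpy as np

def region_step(image, low, high):
    """Region-based proposal: simple intensity-window thresholding."""
    return (image >= low) & (image <= high)

def boundary_refine(mask):
    """Boundary-refinement stand-in: 3x3 majority vote over the mask."""
    padded = np.pad(mask.astype(int), 1)
    votes = sum(padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return votes >= 5                      # keep pixels most neighbours agree on

# Toy image: a bright 3x3 object plus one isolated bright noise pixel.
img = np.full((5, 5), 50)
img[1:4, 1:4] = 150
img[0, 4] = 150                            # noise the region step will pick up
coarse = region_step(img, 100, 200)
refined = boundary_refine(coarse)
```

The region step captures the object but also the noise pixel; the boundary-style pass then removes the isolated response, mirroring how each family of methods compensates for the other's weakness.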
GazeGNN: A Gaze-Guided Graph Neural Network for Disease Classification
The application of eye-tracking techniques in medical image analysis has
become increasingly popular in recent years. It collects the visual search
patterns of the domain experts, containing much important information about
health and disease. Therefore, how to efficiently integrate radiologists' gaze
patterns into the diagnostic analysis becomes a critical question. Existing
works usually transform gaze information into visual attention maps (VAMs) to
supervise the learning process. However, this time-consuming procedure makes it
difficult to develop end-to-end algorithms. In this work, we propose a novel
gaze-guided graph neural network (GNN), GazeGNN, to perform disease
classification from medical scans. In GazeGNN, we create a unified
representation graph that models both the image and gaze pattern information.
Hence, the eye-gaze information is directly utilized without being converted
into VAMs. With this benefit, we develop a real-time, real-world, end-to-end
disease classification algorithm for the first time and avoid the noise and
time consumption introduced during VAM preparation. To the best of our knowledge,
GazeGNN is the first work that adopts a GNN to integrate image and eye-gaze data.
Our experiments on the public chest X-ray dataset show that our proposed method
achieves the best classification performance among existing methods.
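A unified image-plus-gaze graph of the kind described can be sketched as follows: patch nodes form a grid, and each gaze fixation becomes an extra node linked to the patch it falls in. The node features, edge rules, and sizes below are illustrative assumptions, not the GazeGNN architecture.

```python
import numpy as np

def build_graph(image, fixations, patch=8):
    """Return (features, adjacency) for patch nodes plus gaze nodes.

    image: 2D array; fixations: list of (y, x) gaze points.
    """
    h, w = image.shape
    gy, gx = h // patch, w // patch
    n_patches = gy * gx
    n = n_patches + len(fixations)
    feat = np.zeros((n, 1))
    adj = np.zeros((n, n), dtype=int)
    for i in range(gy):                         # patch nodes on a 4-neighbour grid
        for j in range(gx):
            k = i * gx + j
            feat[k, 0] = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch].mean()
            if j + 1 < gx:
                adj[k, k + 1] = adj[k + 1, k] = 1
            if i + 1 < gy:
                adj[k, k + gx] = adj[k + gx, k] = 1
    for m, (y, x) in enumerate(fixations):      # gaze nodes attach to their patch
        g = n_patches + m
        k = (y // patch) * gx + (x // patch)
        feat[g, 0] = 1.0                        # simple gaze-marker feature
        adj[g, k] = adj[k, g] = 1
    return feat, adj

# Toy 16x16 "scan" with one fixation landing in the top-right patch.
img = np.arange(16 * 16, dtype=float).reshape(16, 16)
feat, adj = build_graph(img, fixations=[(3, 12)], patch=8)
```

Because the gaze information lives directly in the graph, a GNN can consume it end to end, with no intermediate visual attention map to render.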
CAVASS: A Computer-Assisted Visualization and Analysis Software System
The Medical Image Processing Group at the University of Pennsylvania has been developing (and distributing with source code) medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available and open source, and it is integrated with toolkits such as the Insight Toolkit and the Visualization Toolkit. CAVASS runs on Windows, Unix, Linux, and Mac but shares a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides for parallel processing via inexpensive clusters of workstations for the more time-consuming algorithms. Most importantly, CAVASS is directed at the visualization, processing, and analysis of 3-dimensional and higher-dimensional medical imagery, so support for Digital Imaging and Communications in Medicine (DICOM) data and the efficient implementation of algorithms is given paramount importance.
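The slice-parallel pattern used for time-consuming volume algorithms can be sketched in miniature; a thread pool stands in for the workstation cluster, and the mean filter is an illustrative workload, not a CAVASS algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def smooth_slice(slice2d):
    """Smooth one 2D slice with a 3x3 mean filter (edge-padded)."""
    p = np.pad(slice2d, 1, mode="edge")
    return sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def smooth_volume(volume, workers=4):
    """Distribute slices across a worker pool and reassemble the volume.

    In a cluster setting each worker would be a separate machine; slices
    are an easy unit of distribution because they are independent here.
    """
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.stack(list(ex.map(smooth_slice, volume)))

# Toy volume: two 4x4 slices, one bright voxel in the first slice.
vol = np.zeros((2, 4, 4))
vol[0, 1, 1] = 9.0
out = smooth_volume(vol)
```

The per-slice decomposition is what lets inexpensive clusters substitute for a multiprocessor machine: each unit of work is independent and the results are simply stacked back together.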