Automated multimodal volume registration based on supervised 3D anatomical landmark detection
We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two imaging modalities. Experiments are carried out with this method on a dataset of pelvis CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results that are highly competitive with a manually assisted state-of-the-art rigid registration algorithm.
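The least-squares rigid fit the abstract refers to is, in its standard form, the Kabsch/orthogonal-Procrustes solution via SVD. A minimal sketch with synthetic landmark coordinates (not the paper's data or exact implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src landmarks onto dst landmarks (Kabsch algorithm). Both are (N, 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Stand-ins for landmarks detected in CT (src) and CBCT (dst):
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
dst = src @ R_true.T + np.array([2., -1., 0.5])

R, t = rigid_fit(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```

With noisy, automatically detected landmarks the recovered transform is the least-squares optimum rather than an exact match, which is exactly the setting the paper evaluates.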
Relational Reasoning Network (RRN) for Anatomical Landmarking
Accurately identifying anatomical landmarks is a crucial step in deformation
analysis and surgical planning for craniomaxillofacial (CMF) bones. Available
methods require segmentation of the object of interest for precise landmarking.
Unlike those, our purpose in this study is to perform anatomical landmarking
using the inherent relation of CMF bones without explicitly segmenting them. We
propose a new deep network architecture, called relational reasoning network
(RRN), to accurately learn the local and the global relations of the landmarks.
Specifically, we are interested in learning landmarks in the CMF region: mandible,
maxilla, and nasal bones. The proposed RRN works in an end-to-end manner,
utilizing learned relations of the landmarks based on dense-block units and
without the need for segmentation. Given a few landmarks as input, the
proposed system accurately and efficiently localizes the remaining landmarks on
the aforementioned bones. For a comprehensive evaluation of RRN, we used
cone-beam computed tomography (CBCT) scans of 250 patients. The proposed system
identifies the landmark locations very accurately even when there are severe
pathologies or deformations in the bones. The proposed RRN has also revealed unique relationships among the landmarks that help us draw inferences about the informativeness of the landmark points. RRN is invariant to the order of the landmarks, which allowed us to discover the optimal configurations (number and location) of landmarks to be localized within the object of interest (mandible) or nearby objects (maxilla and nasal bones). To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning.
Comment: 10 pages, 6 figures, 3 tables
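The order-invariance claim follows from how relational architectures aggregate pairwise terms with a symmetric operation. A toy sketch of that generic idea (random, untrained weights and a plain MLP relation function, standing in for the paper's learned dense-block units):

```python
import numpy as np

rng = np.random.default_rng(0)

def relation(xi, xj, W1, W2):
    """Hypothetical pairwise relation MLP: maps a concatenated landmark
    pair to a 3-D position vote (weights would be learned in practice)."""
    h = np.maximum(0.0, W1 @ np.concatenate([xi, xj]))  # ReLU hidden layer
    return W2 @ h

def predict_missing(known, W1, W2):
    """Aggregate relation outputs over all ordered pairs of known
    landmarks -- averaging makes the prediction order-invariant."""
    votes = [relation(a, b, W1, W2)
             for i, a in enumerate(known)
             for j, b in enumerate(known) if i != j]
    return np.mean(votes, axis=0)

# Toy example: 4 known landmark positions, randomly initialised weights.
known = rng.normal(size=(4, 3))
W1 = rng.normal(size=(16, 6)) * 0.1
W2 = rng.normal(size=(3, 16)) * 0.1
pred = predict_missing(known, W1, W2)
print(pred.shape)  # (3,)

# Permuting the input landmarks leaves the output unchanged.
print(np.allclose(pred, predict_missing(known[::-1], W1, W2)))  # True
```

Because the sum over all pairs does not depend on input ordering, any permutation of the given landmarks yields the same prediction, mirroring the invariance RRN reports.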
Keypoint Transfer for Fast Whole-Body Segmentation
We introduce an approach for image segmentation based on sparse
correspondences between keypoints in testing and training images. Keypoints
represent automatically identified distinctive image locations, where each
keypoint correspondence suggests a transformation between images. We use these
correspondences to transfer label maps of entire organs from the training
images to the test image. The keypoint transfer algorithm includes three steps:
(i) keypoint matching, (ii) voting-based keypoint labeling, and (iii)
keypoint-based probabilistic transfer of organ segmentations. We report
segmentation results for abdominal organs in whole-body CT and MRI, as well as
in contrast-enhanced CT and MRI. Our method offers a speed-up of about three
orders of magnitude in comparison to common multi-atlas segmentation, while
achieving an accuracy that compares favorably. Moreover, keypoint transfer does
not require the registration to an atlas or a training phase. Finally, the
method allows for the segmentation of scans with a highly variable field-of-view.
Comment: Accepted for publication at IEEE Transactions on Medical Imaging
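Steps (i) and (ii) above can be illustrated with a simple nearest-descriptor match followed by a majority vote over the k best matches. This is a toy sketch with made-up 2-D descriptors and organ labels, not the paper's actual keypoint features:

```python
import numpy as np
from collections import Counter

def label_by_voting(test_desc, train_desc, train_labels, k=3):
    """Steps (i)+(ii): match each test keypoint to its k nearest training
    descriptors (Euclidean) and assign the majority organ label."""
    d = np.linalg.norm(test_desc[:, None, :] - train_desc[None, :, :], axis=2)
    knn = np.argsort(d, axis=1)[:, :k]          # k best matches per keypoint
    return [Counter(train_labels[idx].tolist()).most_common(1)[0][0]
            for idx in knn]

# Toy descriptors: two clusters standing in for "liver" vs "kidney" keypoints.
train_desc = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.], [4.9, 5.2]])
train_labels = np.array(["liver", "liver", "kidney", "kidney", "kidney"])
test_desc = np.array([[0.05, 0.05], [5.0, 5.1]])

print(label_by_voting(test_desc, train_desc, train_labels))
# ['liver', 'kidney']
```

Step (iii), the probabilistic transfer of full organ label maps along the matched correspondences, builds on these per-keypoint labels.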
Stratified decision forests for accurate anatomical landmark localization in cardiac images
Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning-based approaches, such as classifier and/or regressor models. However, trained models may not generalize well on heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient-specific models, we propose a novel stratification-based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression- and classification-based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.
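The core idea of stratification, partitioning heterogeneous training images into more homogeneous groups and fitting one model per group, can be sketched in a few lines. Note the paper integrates stratification *inside* decision-forest training; this hypothetical per-stratum least-squares regressor only illustrates the grouping-and-routing concept:

```python
import numpy as np

class StratifiedRegressor:
    """Toy stand-in for stratified training: group training samples by a
    coarse pose/shape key and fit one simple regressor per stratum, so
    each model sees more homogeneous data."""

    def fit(self, keys, features, landmarks):
        self.models = {}
        for k in set(keys):
            idx = [i for i, ki in enumerate(keys) if ki == k]
            X, y = features[idx], landmarks[idx]
            # Per-stratum least-squares linear map, features -> landmark.
            W, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(idx))], y, rcond=None)
            self.models[k] = W
        return self

    def predict(self, key, feature):
        W = self.models[key]                    # route to the stratum's model
        return np.append(feature, 1.0) @ W

# Toy data: two "pose" strata with different feature-to-landmark mappings.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
keys = ["small"] * 20 + ["large"] * 20
y = np.where(np.arange(40)[:, None] < 20, X * 2.0, X * -3.0 + 1.0)

model = StratifiedRegressor().fit(keys, X, y)
print(np.allclose(model.predict("small", np.array([1.0, 1.0])), [2.0, 2.0]))  # True
```

A single model fit to both strata would average the two conflicting mappings; per-stratum models recover each one exactly, which is the generalization benefit the abstract describes.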
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images
The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings for a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without adding any interpretation that might lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting information from the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often pursued independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to enter routine clinical use due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and by performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking to aid diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data to solve large-scale problems.
Global Localization and Orientation of the Cervical Spine in X-ray Imaging
Injuries in cervical spine X-ray images are often missed by emergency physicians. Many of these missed injuries cause further complications. Automated analysis of the images has the potential to reduce the chance of missing injuries. Towards this goal, this paper proposes an automatic localization of the spinal column in cervical spine X-ray images. The framework employs a random classification forest algorithm with a kernel density estimation-based voting accumulation method to localize the spinal column and to detect its orientation. The algorithm has been evaluated with 90 emergency room X-ray images and has achieved an average detection accuracy of 91% and an orientation error of 3.6°. The framework can be used to narrow the search area for other advanced injury detection systems.
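Kernel-density voting accumulation of the kind described here smooths many noisy per-pixel location votes into one robust estimate by taking the mode of a Gaussian KDE. A simplified 2-D sketch with synthetic votes (bandwidth and grid resolution chosen arbitrarily, not the paper's settings):

```python
import numpy as np

def kde_mode(votes, bandwidth=2.0, grid_step=1.0):
    """Accumulate 2-D location votes with a Gaussian kernel density
    estimate and return the densest grid point -- a stand-in for the
    paper's voting-accumulation step."""
    lo = votes.min(axis=0) - 3 * bandwidth
    hi = votes.max(axis=0) + 3 * bandwidth
    xs = np.arange(lo[0], hi[0], grid_step)
    ys = np.arange(lo[1], hi[1], grid_step)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)         # candidates
    d2 = ((grid[:, None, :] - votes[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)  # kernel sum
    return grid[density.argmax()]

# Votes clustered around a true column position (60, 40), plus outliers.
rng = np.random.default_rng(0)
votes = np.vstack([rng.normal([60.0, 40.0], 1.5, size=(200, 2)),
                   rng.normal([10.0, 90.0], 1.5, size=(10, 2))])
center = kde_mode(votes)
print(np.linalg.norm(center - [60.0, 40.0]) < 2.0)  # True
```

Because the outlier votes contribute little density near the dominant cluster, the KDE mode stays on the true location, which is why this kind of accumulation is robust to spurious classifier responses.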
Cloud-Based Benchmarking of Medical Image Analysis
Medical imaging