Multiscale Centerline Detection
Finding the centerline and estimating the radius of linear structures is a critical first step in many applications, ranging from road delineation in 2D aerial images to modeling blood vessels, lung bronchi, and dendritic arbors in 3D biomedical image stacks. Existing techniques rely either on filters designed to respond to ideal cylindrical structures or on classification techniques. The former tend to become unreliable when the linear structures are very irregular, while the latter often have difficulty distinguishing centerline locations from neighboring ones, thus losing accuracy. We solve this problem by reformulating centerline detection as a \emph{regression} problem. We first train regressors to return the distances to the closest centerline in scale-space, and we apply them to the input images or volumes. The centerlines and the corresponding scales then correspond to the regressors' local maxima, which can be easily identified. We show that our method outperforms state-of-the-art techniques on various 2D and 3D datasets. Moreover, our approach is very generic and also performs well on contour detection: we show an improvement over recent contour detection algorithms on the BSDS500 dataset.
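The core idea above can be illustrated with a minimal sketch: construct the distance-to-centerline regression target from a binary centerline mask, then recover centerline locations as local maxima of the (negated) regressed distance. The mask and neighborhood size here are illustrative assumptions, not the paper's training setup.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

# Hypothetical 2D binary centerline mask (True = centerline pixel).
mask = np.zeros((32, 32), dtype=bool)
mask[16, 4:28] = True  # a horizontal centerline

# Regression target: distance to the nearest centerline pixel.
# The paper's regressors are trained to predict such distances in
# scale-space; here we only illustrate the target construction.
dist = distance_transform_edt(~mask)

# A score that peaks on the centerline (negated distance).
score = -dist

# Centerline pixels correspond to local maxima of the regressed score.
local_max = (score == maximum_filter(score, size=3)) & (dist < 1)
```

In the actual method the distance map comes from a trained regressor evaluated over scale-space rather than from a known mask, but the local-maximum extraction step is the same.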
Feature detection from echocardiography images using local phase information
Ultrasound images are characterized by their distinctive speckle appearance, low contrast, and low signal-to-noise ratio. It is always challenging to extract important clinical information from these images. An important step before formal analysis is to transform the image into significant features of interest. Intensity-based methods do not perform particularly well on ultrasound images. However, it has been previously shown that these images respond well to local phase-based methods, which are theoretically intensity-invariant and thus suitable for ultrasound images. We extend the previous local phase-based method to detect features using the local phase computed from the monogenic signal, which is an isotropic extension of the analytic signal. We apply our method of multiscale feature-asymmetry measurement and local phase-gradient computation to cardiac ultrasound (echocardiography) images for the detection of the endocardial, epicardial and myocardial centerline.
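The monogenic signal mentioned above is built from the Riesz transform, which extends the 1D Hilbert transform isotropically to 2D. A minimal frequency-domain sketch (the bandpass filtering the method would apply beforehand is omitted, and the function names are illustrative):

```python
import numpy as np

def monogenic_signal(img):
    """Riesz-transform components of a 2D image (illustrative sketch).

    Returns (even, odd1, odd2): the even part and the two Riesz (odd)
    components, from which local phase and feature asymmetry can be
    derived. A real implementation would first bandpass-filter `img`.
    """
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0  # avoid division by zero at DC
    # Riesz transfer functions: H1 = -i*u/|w|, H2 = -i*v/|w|
    H1 = -1j * u / radius
    H2 = -1j * v / radius
    F = np.fft.fft2(img)
    odd1 = np.real(np.fft.ifft2(F * H1))
    odd2 = np.real(np.fft.ifft2(F * H2))
    return img, odd1, odd2

def local_phase(even, odd1, odd2):
    """Local phase from the monogenic components."""
    odd = np.sqrt(odd1**2 + odd2**2)
    return np.arctan2(odd, even)
```

Because the Riesz transform is isotropic, the resulting phase is orientation-independent, which is what makes the feature-asymmetry measure intensity-invariant and well suited to speckle-dominated ultrasound data.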
Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier
Coronary artery centerline extraction in cardiac CT angiography (CCTA) images
is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We
propose an algorithm that extracts coronary artery centerlines in CCTA using a
convolutional neural network (CNN).
A 3D dilated CNN is trained to predict the most likely direction and radius
of an artery at any given point in a CCTA image based on a local image patch.
Starting from a single seed point placed manually or automatically anywhere in
a coronary artery, a tracker follows the vessel centerline in two directions
using the predictions of the CNN. Tracking is terminated when no direction can
be identified with high certainty.
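The iterative tracking procedure described above can be sketched as a simple loop; `predict` stands in for the paper's CNN, and the step size, confidence threshold, and function names are assumptions for illustration, not the published values.

```python
import numpy as np

def track_centerline(predict, seed, step=0.5, max_steps=1000, min_conf=0.9):
    """Bidirectional centerline tracker (illustrative sketch).

    `predict(point)` stands in for the trained CNN: it must return
    (direction, radius, confidence) at a 3D point.
    """
    centerline = [np.asarray(seed, dtype=float)]
    for sign in (+1.0, -1.0):            # trace both directions from the seed
        point = np.asarray(seed, dtype=float)
        prev_dir = None
        for _ in range(max_steps):
            direction, radius, conf = predict(point)
            if conf < min_conf:          # terminate when no direction is certain
                break
            direction = sign * np.asarray(direction, dtype=float)
            if prev_dir is not None and np.dot(direction, prev_dir) < 0:
                direction = -direction   # keep a consistent heading
            point = point + step * direction
            prev_dir = direction
            centerline.append(point.copy())
    return centerline
```

The key design point is that a single local model (direction + radius + confidence) suffices: tracking, bidirectional growth, and termination all fall out of repeatedly querying it.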
The CNN was trained using 32 manually annotated centerlines in a training set
consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery
Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08
challenge showed that extracted centerlines had an average overlap of 93.7%
with 96 manually annotated reference centerlines. Extracted centerline points
were highly accurate, with an average distance of 0.21 mm to reference
centerline points. In a second test set consisting of 50 CCTA scans, 5,448
markers in the coronary arteries were used as seed points to extract single
centerlines. This showed strong correspondence between extracted centerlines
and manually placed markers. In a third test set containing 36 CCTA scans,
fully automatic seeding and centerline extraction led to extraction of on
average 92% of clinically relevant coronary artery segments.
The proposed method is able to accurately and efficiently determine the
direction and radius of coronary arteries. The method can be trained with
limited training data, and once trained allows fast automatic or interactive
extraction of coronary artery trees from CCTA images.Comment: Accepted in Medical Image Analysi
Vessel tractography using an intensity based tensor model
In this paper, we propose a novel tubular structure segmentation method based on an intensity-based tensor that fits to a vessel. Our model is initialized with a single seed point and is capable of capturing the whole vessel tree by an automatic branch detection algorithm. The centerline of the vessel, as well as its thickness, is extracted. We demonstrated the performance of our algorithm on 3 complex, contrast-varying tubular-structured synthetic datasets for quantitative validation. Additionally, arteries extracted from 10 CTA (Computed Tomography Angiography) volumes are qualitatively evaluated by an expert cardiologist's visual scores.
Vessel tractography using an intensity based tensor model with branch detection
In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor image (DTI) modeling. The constructed anisotropic tensor, which is fitted inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and is capable of capturing whole vessel trees by an automatic branch detection algorithm developed in the same framework. The centerline of the vessel, as well as its thickness, is extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques: 96.4% average overlap with ground truth delineated by experts is obtained, in addition to other measures reported in the paper. Moreover, we demonstrate further quantitative results over synthetic vascular datasets, and we provide quantitative experiments for branch detection on patient Computed Tomography Angiography (CTA) volumes, as well as qualitative evaluations on the same CTA datasets based on visual scores from an expert cardiologist.
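The tensor construction these two papers share can be sketched in simplified 2D form: sample intensities along a fan of directions and accumulate intensity-weighted outer products, so that the tensor's principal eigenvector aligns with the vessel. The actual method works in 3D and fits the tensor inside the vessel; the function below is only an assumed, simplified illustration.

```python
import numpy as np

def intensity_tensor(sample, center, radius, n_dirs=64):
    """Second-order tensor from directional intensity measurements
    (a simplified 2D sketch of the idea).

    `sample(point)` returns image intensity at a continuous position.
    """
    T = np.zeros((2, 2))
    for k in range(n_dirs):
        theta = 2 * np.pi * k / n_dirs
        d = np.array([np.cos(theta), np.sin(theta)])
        w = sample(center + radius * d)   # intensity along direction d
        T += w * np.outer(d, d)           # intensity-weighted outer product
    return T / n_dirs

# The tensor's principal eigenvector gives the local vessel direction,
# driving a tractography-style step, analogously to DTI.
```

Directions with bright (vessel-like) intensity dominate the sum, so the leading eigenvector of the tensor points along the vessel axis.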
Automated artemia length measurement using U-shaped fully convolutional networks and second-order anisotropic Gaussian kernels
The brine shrimp Artemia, a small crustacean zooplankton organism, is universally used as live prey for larval fish and shrimps in aquaculture. In Artemia studies, it would be highly desirable to have automated techniques to obtain length information from Artemia images. However, this problem has so far not been addressed in the literature. Moreover, conventional image-based length measurement approaches cannot be readily transferred to measure the Artemia length, due to the distortion of non-rigid bodies, the variation over growth stages and the interference from the antennae and other appendages. To address this problem, we compile a dataset containing 250 images as well as the corresponding label maps of length measuring lines. We propose an automated Artemia length measurement method using U-shaped fully convolutional networks (UNet) and second-order anisotropic Gaussian kernels. For a given Artemia image, the designed UNet model is used to extract a length measuring line structure, and, subsequently, the second-order Gaussian kernels are employed to transform the length measuring line structure into a thin measuring line. For comparison, we also follow conventional fish length measurement approaches and develop a non-learning-based method using mathematical morphology and polynomial curve fitting. We evaluate the proposed method and the competing methods on 100 test images taken from the dataset compiled. Experimental results show that the proposed method can accurately measure the length of Artemia objects in images, obtaining a mean absolute percentage error of 1.16%.
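A second-order anisotropic Gaussian kernel of the kind mentioned above can be built as the second derivative of an oriented, elongated Gaussian along its short axis; convolving with it responds strongly to thin line structures at the kernel's orientation. The parameterization below is a generic line-detector sketch, not the paper's exact kernel.

```python
import numpy as np

def second_order_anisotropic_gaussian(size, sigma_u, sigma_v, theta):
    """Second-order anisotropic Gaussian kernel (line-detector sketch).

    The kernel is the second derivative of an anisotropic Gaussian
    along its short axis v, elongated along u at orientation `theta`.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate image coordinates into the kernel frame
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    # second derivative along v (the short axis)
    kernel = (v**2 / sigma_v**4 - 1 / sigma_v**2) * g
    return kernel - kernel.mean()   # zero DC response
```

A filter bank of such kernels over orientations, with the maximum response taken per pixel, is a standard way to thin a ridge-like line structure into a one-pixel-wide measuring line.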
Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis
In patients with coronary artery stenoses of intermediate severity, the
functional significance needs to be determined. Fractional flow reserve (FFR)
measurement, performed during invasive coronary angiography (ICA), is most
often used in clinical practice. To reduce the number of ICA procedures, we
present a method for automatic identification of patients with functionally
significant coronary artery stenoses, employing deep learning analysis of the
left ventricle (LV) myocardium in rest coronary CT angiography (CCTA). The
study includes consecutively acquired CCTA scans of 166 patients with FFR
measurements. To identify patients with a functionally significant coronary
artery stenosis, analysis is performed in several stages. First, the LV
myocardium is segmented using a multiscale convolutional neural network (CNN).
To characterize the segmented LV myocardium, it is subsequently encoded using
unsupervised convolutional autoencoder (CAE). Thereafter, patients are
classified according to the presence of functionally significant stenosis using
an SVM classifier based on the extracted and clustered encodings. Quantitative
evaluation of LV myocardium segmentation in 20 images resulted in an average
Dice coefficient of 0.91 and an average mean absolute distance between the
segmented and reference LV boundaries of 0.7 mm. Classification of patients was
evaluated in the remaining 126 CCTA scans in 50 10-fold cross-validation
experiments and resulted in an area under the receiver operating characteristic
curve of 0.74 ± 0.02. At sensitivity levels 0.60, 0.70 and 0.80, the
corresponding specificity was 0.77, 0.71 and 0.59, respectively. The results
demonstrate that automatic analysis of the LV myocardium in a single CCTA scan
acquired at rest, without assessment of the anatomy of the coronary arteries,
can be used to identify patients with functionally significant coronary artery
stenosis.
Comment: This paper was submitted in April 2017 and accepted in November 2017 for publication in Medical Image Analysis. Please cite as: Zreik et al., Medical Image Analysis, 2018, vol. 44, pp. 72-8
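The patient-level descriptor in the pipeline above (cluster the patch encodings, then describe each patient by their distribution over clusters) can be sketched with numpy; the encodings, cluster centres, and dimensions below are synthetic stand-ins for the paper's learned CAE features and k-means output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the learned pieces (synthetic, for illustration only):
# per-patient CAE encodings of LV-myocardium patches.
n_patients, patches_per_patient, dim, n_clusters = 12, 30, 8, 5
encodings = rng.normal(size=(n_patients, patches_per_patient, dim))

# Cluster centres would come from clustering all patch encodings;
# here they are fixed random vectors for the sketch.
centres = rng.normal(size=(n_clusters, dim))

# Assign each patch encoding to its nearest cluster centre.
flat = encodings.reshape(-1, dim)
d2 = ((flat[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
assign = d2.argmin(axis=1).reshape(n_patients, patches_per_patient)

# Patient descriptor: normalized histogram of cluster assignments.
# In the paper's pipeline this descriptor feeds the SVM classifier.
features = np.stack([np.bincount(a, minlength=n_clusters) / patches_per_patient
                     for a in assign])
```

The effect is a bag-of-words-style summary: each patient is reduced to a fixed-length vector regardless of how many myocardial patches their scan yields, which is what makes a standard SVM applicable at the patient level.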