12,665 research outputs found
An Automatic Method for Complete Brain Matter Segmentation from Multislice CT scan
Computed tomography imaging is well accepted for its imaging speed, image
contrast, resolution, and cost, and therefore has wide use in the detection
and diagnosis of brain diseases. Unfortunately, reported work on CT
segmentation remains limited. In this paper, a robust automatic segmentation
system is presented that is capable of segmenting the complete brain matter
from CT slices without any loss of information. The proposed method is
simple, fast, accurate, and completely automatic, and it can handle a
multislice CT scan in a single run. From a given multislice CT dataset, one
slice is selected automatically to form masks for segmentation. Two types of
masks are created to handle nasal slices in a better way. The masks are
created from the selected reference slice using automatic seed-point
selection and region growing. One mask is designed for brain matter and the
other includes the skull of the reference slice. The second mask is used as
a global reference mask for all slices, whereas the brain-matter mask is
applied only to adjacent slices and is continuously modified for better
segmentation. The slices in a given dataset are divided into two batches,
before and after the reference slice, and each batch is segmented
separately. Successive propagation of the brain-matter mask has demonstrated
very high potential in the reported segmentation. The presented results show
the highest sensitivity and more than 96% accuracy in all cases. The
resulting segmented images can be used for brain disease diagnosis or
further image analysis.
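The mask-construction step above (automatic seed selection followed by region growing) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the centre-brightest-pixel seed heuristic and the fixed intensity tolerance are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

def auto_seed(image):
    """Illustrative automatic seed selection: pick the brightest pixel
    in the central quarter of the slice (brain matter is usually near
    the centre).  The paper's actual criterion may differ."""
    h, w = image.shape
    centre = image[h // 4:3 * h // 4, w // 4:3 * w // 4]
    r, c = np.unravel_index(np.argmax(centre), centre.shape)
    return (r + h // 4, c + w // 4)
```

On a synthetic slice with a bright 4x4 block, `auto_seed` lands inside the block and `region_grow` recovers exactly its 16 pixels.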
Region and Location Based Indexing and Retrieval of MR-T2 Brain Tumor Images
In this paper, region based and location based retrieval systems have been
implemented for retrieval of MR-T2 axial 2-D brain images. This is done by
extracting and characterizing the tumor portion of 2-D brain slices by use of a
suitable threshold computed over the entire image. Indexing and retrieval is
then performed by computing texture features based on the gray-tone
spatial-dependence matrix of the segmented regions. A hash structure is used
to index all images, and a combined index points to all images that are
similar in terms of the texture features. At query time, only those images
that fall in the same hash bucket as the query image are compared for
similarity, thus reducing the search space and time.
Invariant Spectral Hashing of Image Saliency Graph
Image hashing is the process of associating a short vector of bits to an
image. The resulting summaries are useful in many applications including image
indexing, image authentication and pattern recognition. These hashes need to be
invariant under transformations of the image that result in similar visual
content, but should drastically differ for conceptually distinct contents. This
paper proposes an image hashing method that is invariant under rotation,
scaling and translation of the image. The gist of our approach relies on the
geometric characterization of salient point distribution in the image. This is
achieved by the definition of a "saliency graph" connecting these points
jointly with an image intensity function on the graph nodes. An invariant hash
is then obtained by considering the spectrum of this function in the
eigenvector basis of the graph Laplacian, that is, its graph Fourier transform.
Interestingly, this spectrum is invariant under any relabeling of the graph
nodes. The graph reveals geometric information of the image, making the hash
robust to image transformation, yet distinct for different visual content. The
efficiency of the proposed method is assessed on a set of 2-D MRI slices and
on a database of faces.
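The core operation, expanding a function on the graph nodes in the eigenvector basis of the graph Laplacian, can be sketched with NumPy; the tiny path graph and signal below are illustrative only.

```python
import numpy as np

def graph_fourier(adjacency, signal):
    """Graph Fourier transform of a node signal: project it onto the
    eigenvectors of the combinatorial Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # eigh returns eigenvalues in ascending order with orthonormal eigenvectors
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvals, eigvecs.T @ signal
```

Relabeling the nodes permutes rows and columns of the adjacency matrix but leaves the Laplacian spectrum, and the magnitudes of the spectral coefficients, unchanged, which is the invariance the hash exploits.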
Robust Group Comparison Using Non-Parametric Block-Based Statistics
Voxel-based analysis methods localize brain structural differences by
performing voxel-wise statistical comparisons on two groups of images aligned
to a common space. This procedure requires highly accurate registration as well
as a sufficiently large dataset. However, in practice, the registration
algorithms are not perfect due to noise, artifacts, and complex structural
variations. The sample size is also limited due to low disease prevalence,
recruitment difficulties, and demographic matching issues. To address these
issues, in this paper, we propose a method, called block-based statistic (BBS),
for robust group comparison. BBS consists of two major components: block
matching and a permutation test. Specifically, given two groups of images
aligned to a common space, we first perform block matching so that structural
misalignments can be corrected. Then, based on the block-matching results, we
conduct robust non-parametric statistical inference using a permutation test.
Extensive experiments were performed on synthetic data and on real diffusion
MR data of mild cognitive impairment patients. The experimental results
indicate that BBS significantly improves statistical power, notwithstanding
the small sample size.
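The permutation-test component can be sketched as a label-shuffling loop; the difference-of-means statistic and permutation count below are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference of means:
    shuffle the group labels many times and count how often the
    permuted statistic is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = abs(np.mean(group_a) - np.mean(group_b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # smoothed p-value, never exactly 0
```

Because it makes no distributional assumption, the test stays valid for the small, noisy samples the abstract describes.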
Dental pathology detection in 3D cone-beam CT
Cone-beam computed tomography (CBCT) is a valuable imaging method in dental
diagnostics that provides information not available in traditional 2D imaging.
However, interpretation of CBCT images is a time-consuming process that
requires a physician to work with complicated software. In this work we propose
an automated pipeline composed of several deep convolutional neural networks
and algorithmic heuristics. Our task is two-fold: a) find locations of each
present tooth inside a 3D image volume, and b) detect several common tooth
conditions in each tooth. The proposed system achieves 96.3% accuracy in
tooth localization and an average AUROC of 0.94 for six common tooth
conditions.
Human Recognition Using Face in Computed Tomography
With the mushrooming use of computed tomography (CT) images in clinical
decision making, management of CT data becomes increasingly difficult. From the
patient identification perspective, using the standard DICOM tag to track
patient information is challenged by issues such as misspellings, lost files,
and site variation. In this paper, we explore the feasibility of leveraging
the faces in 3D CT images as biometric features. Specifically, we propose an
automatic processing pipeline that first detects facial landmarks in 3D for ROI
extraction and then generates aligned 2D depth images, which are used for
automatic recognition. To boost recognition performance, we employ transfer
learning to reduce the data-sparsity issue and introduce a group-sampling
strategy to increase inter-class discrimination when training the recognition
network. Our proposed method is capable of capturing underlying identity
characteristics in medical images while reducing memory consumption. To test
its effectiveness, we curate 600 3D CT images of 280 patients from multiple
sources for performance evaluation. Experimental results demonstrate that our
method achieves a 1:56 identification accuracy of 92.53% and a 1:1 verification
accuracy of 96.12%, outperforming other competing approaches.
A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on Brain Computed Tomography
Brain CT has become a standard imaging tool for emergent evaluation of brain
condition, and measurement of midline shift (MLS) is one of the most important
features to address for brain CT assessment. We present a simple method to
estimate MLS and propose a new alternative parameter to MLS: the ratio of MLS
over the maximal width of intracranial region (MLS/ICWMAX). Three neurosurgeons
and our automated system were asked to measure MLS and MLS/ICWMAX in the same
sets of axial CT images obtained from 41 patients admitted to the ICU under
the neurosurgical service. A weighted midline (WML) was plotted based on
individual pixel intensities, with higher weight given to the darker
portions. The MLS
could then be measured as the distance between the WML and ideal midline (IML)
near the foramen of Monro. The average processing time to output an automatic
MLS measurement was around 10 seconds. Our automated system achieved an overall
accuracy of 90.24% when the CT images were calibrated automatically, and
performed better when the calibrations of head rotation were done manually
(accuracy: 92.68%). MLS/ICWMAX and MLS yielded the same confusion matrices
and produced similar ROC curves. We demonstrated a simple, fast, and accurate
automated system for MLS measurement and introduced a new parameter
(MLS/ICWMAX) as a good alternative to MLS for estimating the degree of brain
deformation, especially when non-DICOM images (e.g., JPEG) are more easily
accessed.
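A toy version of the intensity-weighted midline idea: the weight is the maximum intensity minus the pixel value, so darker pixels dominate. The per-row centroid and the mean-deviation summary are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def weighted_midline(ct_slice):
    """Per-row horizontal centroid with weight = max intensity minus
    pixel value, so darker pixels pull the midline toward them -- a
    toy stand-in for the paper's weighted midline (WML)."""
    weights = ct_slice.max() - ct_slice.astype(float)
    cols = np.arange(ct_slice.shape[1])
    return (weights * cols).sum(axis=1) / weights.sum(axis=1)

def midline_shift(ct_slice):
    """MLS estimate in pixels: mean deviation of the WML from the
    ideal (geometric) midline of the slice."""
    ideal = (ct_slice.shape[1] - 1) / 2.0
    return float(np.mean(weighted_midline(ct_slice) - ideal))
```

On a synthetic slice with a dark band at the geometric centre the estimated shift is zero; moving the band sideways shifts the estimate by the same number of pixels.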
Local Structure Matching Driven by Joint-Saliency-Structure Adaptive Kernel Regression
For nonrigid image registration, matching the particular structures (or the
outliers) that have missing correspondence and/or local large deformations, can
be more difficult than matching the common structures with small deformations
in the two images. Most existing works depend heavily on outlier
segmentation to remove the outlier effect from the registration; moreover,
these works do not simultaneously handle missing correspondences and local
large deformations. In this paper, we define nonrigid image registration as
a local adaptive kernel regression that locally reconstructs the moving
image's dense deformation vectors from the sparse deformation vectors
obtained by multi-resolution block matching. The kernel function of the
kernel regression
adapts its shape and orientation to the reference image's structure to gather
more deformation vector samples of the same structure for the iterative
regression computation, whereby the moving image's local deformations could be
compliant with the reference image's local structures. To estimate the local
deformations around the outliers, we use a joint saliency map that highlights
the corresponding salient structures (called Joint Saliency Structures,
JSSs) in the two images to guide the dense deformation reconstruction by
emphasizing those JSSs' sparse deformation vectors in the kernel regression.
The experimental results demonstrate that, by using local JSS adaptive
kernel regression, the proposed method achieves nearly the best alignment
performance on all challenging image pairs with outlier structures, compared
with five other state-of-the-art nonrigid registration algorithms.
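The regression step can be sketched as plain Nadaraya-Watson kernel regression over the sparse block-matching displacements. An isotropic Gaussian kernel is used here for simplicity; the paper's kernel adapts its shape and orientation to the reference image's local structure.

```python
import numpy as np

def kernel_regress(sparse_pos, sparse_disp, query_pos, bandwidth=2.0):
    """Nadaraya-Watson kernel regression: the dense displacement at
    each query position is a Gaussian-weighted average of the sparse
    block-matching displacements."""
    # pairwise squared distances: (n_query, n_sparse)
    diffs = query_pos[:, None, :] - sparse_pos[None, :, :]
    sq_dist = (diffs ** 2).sum(axis=-1)
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    # weighted average of displacement vectors per query point
    num = (w[..., None] * sparse_disp[None, :, :]).sum(axis=1)
    return num / w.sum(axis=1, keepdims=True)
```

When all sparse samples carry the same displacement, every query point recovers that displacement exactly, which is a quick sanity check on the weighting.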
Symmetric functions for fast image retrieval with persistent homology
Persistence diagrams, which combine geometry and topology for shape
description in pattern recognition, have already proven to be an effective
tool for shape representation with respect to a chosen filtering function.
Comparing the persistence diagram of a query with those of a database allows
automatic classification or retrieval, but unfortunately, the standard method
for comparing persistence diagrams, the bottleneck distance, has a high
computational cost. A possible algebraic solution to this problem is to switch
to comparisons between the complex polynomials whose roots are the cornerpoints
of the persistence diagrams. This strategy significantly reduces the
computational cost, thereby making persistent-homology-based applications
suitable for large-scale databases. The definition of new distances in the
polynomial framework poses some interesting problems, both theoretical and
practical. In this paper, these questions are addressed by considering
possible transformations of the half-plane where the persistence diagrams
lie onto the complex plane, and a certain re-normalisation of the symmetric
functions associated with the polynomial roots of the resulting transformed
polynomial. The encouraging numerical results, obtained in a dermatology
application test, suggest that the proposed method may even improve on the
results obtained by the standard method using persistence diagrams and the
bottleneck distance.
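The polynomial trick can be sketched as follows. Mapping a cornerpoint (b, d) to the complex root b + i*d is one possible half-plane-to-complex-plane transformation (the paper considers several), and comparing raw coefficient vectors stands in for its re-normalised symmetric functions.

```python
import numpy as np

def diagram_polynomial(cornerpoints):
    """Map the cornerpoints (birth, death) of a persistence diagram to
    complex roots b + i*d, and return the coefficients of the monic
    polynomial with those roots (the elementary symmetric functions of
    the roots, up to sign)."""
    roots = [complex(b, d) for b, d in cornerpoints]
    return np.poly(roots)

def polynomial_distance(diagram_a, diagram_b):
    """Euclidean distance between coefficient vectors -- far cheaper
    than the bottleneck distance (diagrams are assumed to have the
    same number of cornerpoints in this simplified sketch)."""
    return float(np.linalg.norm(
        diagram_polynomial(diagram_a) - diagram_polynomial(diagram_b)))
```

Because symmetric functions are order-independent, the distance is unchanged if the same cornerpoints are listed in a different order.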
Adapted and Oversegmenting Graphs: Application to Geometric Deep Learning
We propose a novel iterative method to adapt a graph to d-dimensional image
data. The method drives the nodes of the graph towards image features. The
adaptation process naturally lends itself to a measure of feature saliency
which can then be used to retain meaningful nodes and edges in the graph. From
the adapted graph, we also propose the computation of a dual graph, which
inherits the saliency measure from the adapted graph, and whose edges run along
image features, hence producing an oversegmenting graph. The proposed method is
computationally efficient and fully parallelisable. We propose two distance
measures to find image saliency along graph edges, and evaluate the performance
on synthetic images and on natural images from publicly available databases. In
both cases, the most salient nodes of the graph achieve average boundary recall
over 90%. We also apply our method to image classification on the MNIST
hand-written digit dataset, using a recently proposed Deep Geometric Learning
architecture, and achieving state-of-the-art classification accuracy, for a
graph-based method, of 97.86%.