Curvilinear Structure Enhancement in Biomedical Images
Curvilinear structures can appear in many different areas and at a variety of scales: axons and dendrites in the brain, blood vessels in the fundus, streets, rivers, or cracks in buildings, among others. Studying curvilinear structures is therefore essential to image processing in fields such as neuroscience, biology, and cartography.
Image processing plays an important role in biomedical imaging, especially in aiding disease diagnosis, and image enhancement is an early step of image analysis.
In this thesis, I focus on the research, development, implementation, and validation of newly developed 2D and 3D curvilinear structure enhancement methods. The proposed methods are based on phase congruency, mathematical morphology, and tensor representation concepts.
First, I have introduced a 3D contrast-independent phase congruency-based enhancement approach. The obtained results demonstrate that the proposed approach is robust to contrast variations in 3D biomedical images.
Second, I have proposed a new mathematical morphology-based approach called the bowler-hat transform. In this approach, I have combined the mathematical morphology with a local tensor representation of curvilinear structures in images.
The bowler-hat transform is shown to give better results than comparison methods on challenging data such as retinal/fundus images. The proposed method is particularly successful at enhancing curvilinear structures at junctions.
Finally, I have extended the bowler-hat approach to 3D to demonstrate its applicability and reliability on 3D data.
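As a rough illustration of the bowler-hat idea, the sketch below takes, at each scale, the difference between the maximal grey-level opening over a bank of oriented line segments and the opening with a disc of the same diameter; line-like pixels survive some line opening but not the disc opening. This is a hedged sketch of the concept, not the thesis implementation; function names and parameter defaults are illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    # Binary footprint of a centred line segment of given length/orientation.
    r = length // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    theta = np.deg2rad(angle_deg)
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))   # across the line
    along = np.abs(x * np.cos(theta) + y * np.sin(theta))   # along the line
    return (dist <= 0.5) & (along <= r)

def disk_footprint(diameter):
    r = diameter // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def bowler_hat(img, diameters=(5, 9), n_angles=8):
    # Curvilinear pixels survive some line opening but not the disc opening.
    out = np.zeros(img.shape, dtype=float)
    for d in diameters:
        disc_open = grey_opening(img, footprint=disk_footprint(d))
        line_open = np.max([grey_opening(img, footprint=line_footprint(d, a))
                            for a in np.linspace(0, 180, n_angles, endpoint=False)],
                           axis=0)
        out = np.maximum(out, line_open - disc_open)
    return out
```

On a one-pixel-wide bright line the disc opening is flat while the best-oriented line opening preserves the line, so the response is high; on a blob both openings agree and the response vanishes, which is consistent with the junction behaviour described above.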
Multi-stage Suture Detection for Robot Assisted Anastomosis based on Deep Learning
In robotic surgery, task automation and learning from demonstration combined
with human supervision is an emerging trend for many new surgical robot
platforms. One such task is automated anastomosis, which requires bimanual
needle handling and suture detection. Due to the complexity of the surgical
environment and varying patient anatomies, reliable suture detection is
difficult, which is further complicated by occlusion and thread topologies. In
this paper, we propose a multi-stage framework for suture thread detection
based on deep learning. Fully convolutional neural networks are used to obtain
the initial detection and the overlapping status of suture thread, which are
later fused with the original image to learn a gradient road map of the thread.
Based on the gradient road map, multiple segments of the thread are extracted
and linked to form the whole thread using a curvilinear structure detector.
Experiments on two different types of sutures demonstrate the accuracy of the
proposed framework.
Comment: Submitted to ICRA 201
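The last stage described above, linking extracted thread segments into a whole thread, can be pictured as a greedy endpoint matcher that chains segments whose endpoints are close and whose directions agree. This is only an illustrative stand-in for the paper's curvilinear structure detector; the gap and angle thresholds are invented for the sketch:

```python
import numpy as np

def _direction(seg):
    # Unit vector from a segment's first to its last point.
    v = seg[-1] - seg[0]
    return v / (np.linalg.norm(v) + 1e-12)

def link_segments(segments, max_gap=10.0, max_angle_deg=30.0):
    # Greedily chain segments end-to-start by endpoint proximity and
    # direction consistency, seeded with the first segment.
    segs = [np.asarray(s, dtype=float) for s in segments]
    chain = [segs.pop(0)]
    while segs:
        tail = chain[-1]
        d_tail = _direction(tail)
        best, best_gap = None, max_gap
        for i, s in enumerate(segs):
            gap = np.linalg.norm(s[0] - tail[-1])
            if gap <= best_gap and d_tail @ _direction(s) >= np.cos(np.deg2rad(max_angle_deg)):
                best, best_gap = i, gap
        if best is None:
            break                      # no compatible continuation left
        chain.append(segs.pop(best))
    return np.vstack(chain)
```

In the paper the candidate segments come from the learned gradient road map; here they are simply taken as given point lists.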
Detection of curved lines with B-COSFIRE filters: A case study on crack delineation
The detection of curvilinear structures is an important step for various
computer vision applications, ranging from medical image analysis for
segmentation of blood vessels, to remote sensing for the identification of
roads and rivers, and to biometrics and robotics, among others. The visual
system of the brain has remarkable abilities to detect curvilinear structures
in noisy images. This is a nontrivial task, especially for the detection of thin
or incomplete curvilinear structures surrounded by noise. We propose a
general purpose curvilinear structure detector that uses the brain-inspired
trainable B-COSFIRE filters. It consists of four main steps, namely nonlinear
filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis
thresholding and morphological closing. We demonstrate its effectiveness on a
data set of noisy images with cracked pavements, where we achieve
state-of-the-art results (F-measure=0.865). The proposed method can be employed
in any computer vision methodology that requires the delineation of curvilinear
and elongated structures.
Comment: Accepted at Computer Analysis of Images and Patterns (CAIP) 201
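Of the four steps listed above, hysteresis thresholding is the one most often re-implemented; a standard realization keeps weak filter responses only when their connected component also contains a strong response, followed by the morphological closing. A minimal sketch under that assumption (not the B-COSFIRE code itself):

```python
import numpy as np
from scipy import ndimage as ndi

def hysteresis_threshold(response, low, high):
    # Keep weak pixels (>= low) only if their connected component
    # contains at least one strong pixel (>= high).
    weak = response >= low
    strong = response >= high
    labels, n = ndi.label(weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                      # background label is never kept
    return keep[labels]

def postprocess(response, low, high):
    # Hysteresis thresholding followed by morphological closing,
    # mirroring the last two steps of the pipeline described above.
    return ndi.binary_closing(hysteresis_threshold(response, low, high))
```

Thinning by non-maximum suppression would run before this, on the oriented B-COSFIRE responses.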
ShearLab 3D: Faithful Digital Shearlet Transforms based on Compactly Supported Shearlets
Wavelets and their associated transforms are highly efficient when
approximating and analyzing one-dimensional signals. However, multivariate
signals such as images or videos typically exhibit curvilinear singularities,
which wavelets provably cannot approximate sparsely, nor analyze in the sense
of, for instance, detecting their direction. Shearlets
are a directional representation system extending the wavelet framework, which
overcomes those deficiencies. Similar to wavelets, shearlets allow a faithful
implementation and fast associated transforms. In this paper, we will introduce
a comprehensive carefully documented software package coined ShearLab 3D
(www.ShearLab.org) and discuss its algorithmic details. This package provides
MATLAB code for a novel faithful algorithmic realization of the 2D and 3D
shearlet transform (and their inverses) associated with compactly supported
universal shearlet systems incorporating the option of using CUDA. We will
present extensive numerical experiments in 2D and 3D concerning denoising,
inpainting, and feature extraction, comparing the performance of ShearLab 3D
with similar transform-based algorithms such as curvelets, contourlets, or
surfacelets. In the spirit of reproducible research, all scripts are
accessible on www.ShearLab.org.
Comment: There is another shearlet software package
(http://www.mathematik.uni-kl.de/imagepro/members/haeuser/ffst/) by S.
Häuser and G. Steidl. We will include this in a revision.
Segmentation of Loops from Coronal EUV Images
We present a procedure which extracts bright loop features from solar EUV
images. In terms of image intensities, these features are elongated ridge-like
intensity maxima. To discriminate the maxima, we need information about the
spatial derivatives of the image intensity. Commonly, the derivative estimates
are strongly affected by image noise. We therefore use a regularized estimation
of the derivative, which is then used to interpolate a discrete vector field of
ridge points, "ridgels", which are positioned at the ridge center and have the
intrinsic orientation of the local ridge direction. A scheme is proposed to
connect ridgels to smooth, spline-represented curves which fit the observed
loops. Finally, a semi-automated user interface allows one to merge or split,
eliminate or select loop fits obtained from the above procedure. In this paper
we apply our tool to one of the first EUV images observed by the SECCHI
instrument onboard the recently launched STEREO spacecraft. We compare the
extracted loops with projected field lines computed from
almost-simultaneously-taken magnetograms measured by the SOHO/MDI Doppler
imager. The field lines were calculated using a linear force-free field model.
This comparison allows one to verify faint and spurious loop connections
produced by our segmentation tool, and it also helps to assess the quality of
the magnetic-field model where well-identified loop structures comply with
field-line projections. We also discuss further potential applications of our
tool such as loop oscillations and stereoscopy.
Comment: 13 pages, 9 figures, Solar Physics, online first
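The regularized derivative estimation described above is commonly realized with Gaussian-smoothed derivatives; the sketch below (an illustration, not the authors' code) flags ridge candidates where the smaller eigenvalue of the Hessian of the smoothed intensity is strongly negative, i.e. where the intensity falls off sharply across an elongated maximum:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_candidates(img, sigma=2.0, thresh=0.01):
    # Hessian of the Gaussian-regularized intensity (noise-robust
    # derivative estimates via smoothed derivatives).
    iyy = gaussian_filter(img, sigma, order=(2, 0))
    ixx = gaussian_filter(img, sigma, order=(0, 2))
    ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Smaller eigenvalue of [[iyy, ixy], [ixy, ixx]]; strongly negative
    # across bright ridge-like intensity maxima.
    lam_min = (ixx + iyy) / 2 - np.sqrt(((ixx - iyy) / 2) ** 2 + ixy ** 2)
    return lam_min < -thresh
```

Subpixel ridgel positions and orientations would then come from the eigenvector of the smaller eigenvalue, which is what the spline-fitting stage consumes.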
Enforcing connectivity of 3D linear structures using their 2D projections
Many biological and medical tasks require the delineation of 3D curvilinear
structures such as blood vessels and neurites from image volumes. This is
typically done using neural networks trained by minimizing voxel-wise loss
functions that do not capture the topological properties of these structures.
As a result, the connectivity of the recovered structures is often wrong, which
lessens their usefulness. In this paper, we propose to improve the 3D
connectivity of our results by minimizing a sum of topology-aware losses on
their 2D projections. This suffices to increase accuracy and to reduce the
effort required to provide annotated training data.
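The projection trick is compact enough to sketch: project prediction and ground truth along each axis with a maximum, evaluate a 2D loss on each projection, and sum. Note that the paper minimizes topology-aware losses on the projections; the plain soft-Dice loss below is only an illustrative stand-in:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss on a 2D map; 0 for a perfect match.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def projected_loss(pred_vol, target_vol):
    # Sum a 2D loss over maximum-intensity projections along each
    # of the three volume axes.
    return sum(dice_loss(pred_vol.max(axis=a), target_vol.max(axis=a))
               for a in range(3))
```

In training, the same reduction would be applied to the network's soft output so gradients flow through the projections.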
Beyond KernelBoost
In this Technical Report we propose a set of improvements with respect to the
KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with
a scheme inspired by Auto-Context, but that is suitable in situations where the
lack of large training sets poses a potential problem of overfitting. The aim
is to capture the interactions between neighboring image pixels to better
regularize the boundaries of segmented regions. As in Auto-Context [Tu et al.,
PAMI 2009] the segmentation process is iterative and, at each iteration, the
segmentation results for the previous iterations are taken into account in
conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009],
we organize our recursion so that the classifiers can progressively focus on
difficult-to-classify locations. This lets us exploit the power of the
decision-tree paradigm while avoiding over-fitting. In the context of this
architecture, KernelBoost represents a powerful building block due to its
ability to learn on the score maps coming from previous iterations. We first
introduce two important mechanisms to empower the KernelBoost classifier,
namely pooling and the clustering of positive samples based on the appearance
of the corresponding ground-truth. These operations significantly contribute to
increase the effectiveness of the system on biomedical images, where texture
plays a major role in the recognition of the different image components. We
then present some other techniques that can be easily integrated in the
KernelBoost framework to further improve the accuracy of the final
segmentation. We show extensive results on different medical image datasets,
including some multi-label tasks, on which our method is shown to outperform
state-of-the-art approaches. The resulting segmentations display high accuracy,
neat contours, and reduced noise.
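The recursion described above is independent of the underlying learner, so it can be sketched generically: at each iteration a classifier is trained on the image features stacked with the previous iteration's score map. `train_fn` below is a placeholder for the KernelBoost learner; this is a structural sketch, not the report's implementation:

```python
import numpy as np

def autocontext_train(image, labels, train_fn, n_iters=3):
    # Iteratively train classifiers that see each pixel's intensity
    # together with the score map from the previous iteration.
    score = np.zeros(image.shape, dtype=float)
    models = []
    for _ in range(n_iters):
        feats = np.stack([image, score], axis=-1).reshape(-1, 2)
        model = train_fn(feats, labels.ravel())
        score = model.predict(feats).reshape(image.shape)
        models.append(model)
    return models, score
```

Because later classifiers receive earlier score maps as features, they can concentrate on the difficult-to-classify locations, which is the focusing behaviour the report describes.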
Modeling Brain Circuitry over a Wide Range of Scales
If we are ever to unravel the mysteries of brain function at its most
fundamental level, we will need a precise understanding of how its component
neurons connect to each other. Electron Microscopes (EM) can now provide the
nanometer resolution that is needed to image synapses, and therefore
connections, while Light Microscopes (LM) see at the micrometer resolution
required to model the 3D structure of the dendritic network. Since both the
topology and the connection strength are integral parts of the brain's wiring
diagram, being able to combine these two modalities is critically important.
In fact, these microscopes now routinely produce high-resolution imagery in
such large quantities that the bottleneck becomes automated processing and
interpretation, which is needed for such data to be exploited to its full
potential. In this paper, we briefly review the Computer Vision techniques we
have developed at EPFL to address this need. They include delineating dendritic
arbors from LM imagery, segmenting organelles from EM, and combining the two
into a consistent representation.