Variational methods and its applications to computer vision
Many computer vision applications such as image segmentation can be formulated in a ''variational'' way as energy minimization problems. Unfortunately, minimizing these energies is usually difficult: it generally involves non-convex functions over spaces with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, these are ill-posed inverse problems and are therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate into the mathematical model appropriate regularizations, which require complex computations.
The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Because of the complex geometry of such structures, classical regularization techniques cannot be adopted, as they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it is easily extensible to graphs and can be successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite signals (e.g. streets, rivers). In particular, we show results and performance for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
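To make the variational formulation above concrete, here is a minimal numpy sketch of energy minimization with a classical quadratic (Tikhonov) smoothness regularizer, precisely the kind of classical regularization the abstract notes tends to wash out low-contrast curvilinear detail. The energy, step size and iteration count are illustrative choices, not the authors' method.

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian with replicated (Neumann) borders
    up = np.pad(u, 1, mode="edge")
    return up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u

def energy(u, f, lam):
    # E(u) = 0.5*||u - f||^2 + 0.5*lam*||grad u||^2
    gx, gy = np.diff(u, axis=1), np.diff(u, axis=0)
    return 0.5 * np.sum((u - f) ** 2) + 0.5 * lam * (np.sum(gx**2) + np.sum(gy**2))

def restore(f, lam=1.0, step=0.1, iters=300):
    """Gradient descent on the Tikhonov-regularized energy above.
    Illustrative only; the work described here uses regularizers adapted
    to curvilinear structures, not this quadratic prior."""
    u = f.copy()
    for _ in range(iters):
        u -= step * ((u - f) - lam * laplacian(u))
    return u
```

The data term pulls the estimate toward the noisy observation while the regularizer penalizes gradients; gradient descent trades the two off, which is why a quadratic prior smooths away thin, low-contrast structures along with the noise.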
Adaptive multiscale detection of filamentary structures in a background of uniform random points
We are given a set of $n$ points that might be uniformly distributed in the unit square $[0,1]^2$. We wish to test whether the set, although mostly consisting of uniformly scattered points, also contains a small fraction of points sampled from some (a priori unknown) curve with $C^\alpha$-norm bounded by $\beta$. An asymptotic detection threshold exists in this problem; for a constant $T_-(\alpha,\beta)>0$, if the number of points sampled from the curve is smaller than $T_-(\alpha,\beta)\,n^{1/(1+\alpha)}$, reliable detection is not possible for large $n$. We describe a multiscale significant-runs algorithm that can reliably detect concentration of data near a smooth curve, without knowing the smoothness information $\alpha$ or $\beta$ in advance, provided that the number of points on the curve exceeds $T_+(\alpha,\beta)\,n^{1/(1+\alpha)}\log n$. This algorithm therefore has an optimal detection threshold, up to a factor $\log n$. At the heart of our approach is an analysis of the data by counting membership in multiscale multianisotropic strips. The strips have area $2/n$ and exhibit a variety of lengths, orientations and anisotropies. The strips are partitioned into anisotropy classes; each class is organized as a directed graph whose vertices all are strips of the same anisotropy and whose edges link such strips to their ``good continuations.'' The point-cloud data are reduced to counts that measure membership in strips. Each anisotropy graph is reduced to a subgraph consisting of strips with significant counts. The algorithm rejects $H_0$ whenever some such subgraph contains a path that connects many consecutive significant counts.
Comment: Published at http://dx.doi.org/10.1214/009053605000000787 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
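The counting-and-runs idea can be illustrated with a deliberately simplified, axis-aligned version: take one horizontal strip, split it into cells along x, flag cells whose point count is significantly above the uniform expectation, and report the longest run of consecutive flagged cells. All thresholds and strip shapes below are illustrative simplifications of the paper's multiscale, multianisotropic construction.

```python
import numpy as np

def longest_significant_run(points, y0, w=0.05, n_strips=16):
    """Toy version of the significant-runs test. Restrict to the band
    |y - y0| < w, split it into n_strips cells along x, flag cells whose
    count exceeds a roughly 3-sigma Poisson bound for uniform data, and
    return the longest run of consecutive flagged cells. A long run
    suggests a filament near y = y0."""
    n = len(points)
    band = points[np.abs(points[:, 1] - y0) < w]
    counts, _ = np.histogram(band[:, 0], bins=n_strips, range=(0.0, 1.0))
    mean = n * (2 * w) / n_strips          # expected cell count under H0
    thresh = mean + 3.0 * np.sqrt(mean)    # crude significance cutoff
    flagged = counts > thresh
    best = run = 0
    for f in flagged:
        run = run + 1 if f else 0
        best = max(best, run)
    return best
```

Under uniformity, flagged cells are rare and isolated; points concentrated near a curve inside the band produce a long chain of flagged cells, which is the "path of consecutive significant counts" the algorithm looks for.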
From Multiview Image Curves to 3D Drawings
Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
ground truth on synthetic and real datasets.
Comment: Expanded ECCV 2016 version with tweaked figures and including an overview of the supplementary material available at multiview-3d-drawing.sourceforge.net
Active skeleton for bacteria modeling
The investigation of spatio-temporal dynamics of bacterial cells and their
molecular components requires automated image analysis tools to track cell
shape properties and molecular component locations inside the cells. In the
study of bacteria aging, the molecular components of interest are protein
aggregates accumulated near bacteria boundaries. This particular location makes the correspondence between aggregates and cells highly ambiguous, since accurately computing bacteria boundaries in phase-contrast time-lapse imaging is a challenging task. This paper proposes an active skeleton formulation for
bacteria modeling which provides several advantages: an easy computation of
shape properties (perimeter, length, thickness, orientation), an improved
boundary accuracy in noisy images, and a natural bacteria-centered coordinate
system that permits the intrinsic location of molecular components inside the
cell. Starting from an initial skeleton estimate, the medial axis of the
bacterium is obtained by minimizing an energy function which incorporates
bacteria shape constraints. Experimental results on biological images and
comparative evaluation of the performances validate the proposed approach for
modeling cigar-shaped bacteria such as Escherichia coli. The ImageJ plugin of the proposed method can be found online at http://fluobactracker.inrialpes.fr.
Comment: Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, to appear.
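The skeleton-by-energy-minimization idea can be sketched in one dimension: the heights of the medial axis minimize a data-attachment term plus a curvature (second-difference) smoothness penalty. This is a hedged toy version; the paper's energy also encodes bacteria-specific shape constraints (perimeter, length, thickness, orientation) that are omitted here.

```python
import numpy as np

def fit_skeleton(observed_y, lam=2.0, step=0.01, iters=500):
    """Minimize E(y) = sum_i (y_i - obs_i)^2
                     + lam * sum_i (y_{i-1} - 2*y_i + y_{i+1})^2
    by gradient descent: a 1-D stand-in for an active skeleton, where the
    data term attracts the axis to observations and the second-difference
    term keeps it smooth. Parameters are illustrative."""
    y = observed_y.copy()
    for _ in range(iters):
        curv = y[:-2] - 2 * y[1:-1] + y[2:]     # second differences
        grad_smooth = np.zeros_like(y)
        grad_smooth[:-2] += curv                # d(curv^2)/dy_{i-1}
        grad_smooth[1:-1] += -2 * curv          # d(curv^2)/dy_i
        grad_smooth[2:] += curv                 # d(curv^2)/dy_{i+1}
        y -= step * (2 * (y - observed_y) + 2 * lam * grad_smooth)
    return y
```

Starting from the noisy observations themselves (the "initial skeleton estimate"), the descent converges to an axis that tracks the data while suppressing jagged, noise-driven wiggles.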
Research issues in data modeling for scientific visualization
This article summarizes some topics of modeling as they impinge on the future development of scientific data visualization. The benefits from visualization techniques in analyzing data are well established, but to build on these pioneering efforts, one must recognize modeling as a distinct structural component in the larger context of visualization and problem-solving systems. Volume modeling is the entry way to this arena of future development, and model-based rendering describes how scientists will view the results. Important side developments such as multiresolution modeling and model-based segmentation will contribute structural capability to these systems. All of these components ultimately depend on the mathematical foundations of scattered data modeling and on model validation and standards to incorporate this modeling methodology into effective tools for scientific inquiry.
Learning Approach to Delineation of Curvilinear Structures in 2D and 3D Images
Detection of curvilinear structures has long been of interest due to its wide range of applications. Large amounts of imaging data could be readily used in many fields, but it is practically impossible to analyze them manually. Hence the need for automated delineation approaches. In recent years, Computer Vision witnessed a paradigm shift from mathematical modelling to data-driven methods based on Machine Learning. This led to improvements in performance and robustness of the detection algorithms. Nonetheless, most Machine Learning methods are general-purpose and do not exploit the specificity of the delineation problem. In this thesis, we present learning methods suited for this task and apply them to various kinds of microscopic and natural images, demonstrating the general applicability of the presented solutions.
First, we introduce a topology loss - a new training loss term, which captures higher-level features of curvilinear networks such as smoothness, connectivity and continuity. This is in contrast to most Deep Learning segmentation methods that do not take into account the geometry of the resulting prediction. In order to compute the new loss term, we extract topology features of prediction and ground-truth using a pre-trained network, whose filters are activated by structures at different scales and orientations. We show that this approach yields better results in terms of conventional segmentation metrics and overall topology of the resulting delineation.
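The topology-loss idea above can be sketched as comparing filter responses of the prediction and the ground truth. In the thesis a pre-trained network extracts these features; in the hedged sketch below a hand-built bank of oriented line filters stands in for it, so the names, filter shapes and sizes are illustrative assumptions, not the actual method.

```python
import numpy as np

def oriented_filters(size=7, n_orient=4):
    """Tiny bank of zero-mean oriented line detectors, standing in for a
    pre-trained network whose filters respond to structures at different
    scales and orientations."""
    c = size // 2
    yy, xx = np.mgrid[-c:c + 1, -c:c + 1]
    filters = []
    for k in range(n_orient):
        t = np.pi * k / n_orient
        d = xx * np.sin(t) - yy * np.cos(t)   # distance to a line at angle t
        f = np.exp(-d**2 / 2.0)
        filters.append(f - f.mean())
    return filters

def conv2_valid(img, f):
    # Naive 'valid' 2-D correlation, kept explicit for readability.
    h, w = f.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * f)
    return out

def topology_loss(pred, gt, filters):
    """Mean squared difference between filter responses of prediction and
    ground truth: a sketch of a feature-space (topology) loss term."""
    return sum(np.mean((conv2_valid(pred, f) - conv2_valid(gt, f)) ** 2)
               for f in filters) / len(filters)
```

Because the filters respond to extended oriented structures, a prediction that breaks a curvilinear network (e.g. introduces a gap) changes the responses and is penalized, even if its per-pixel error is small.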
Although segmentation of curvilinear structures provides useful information, it is not always sufficient. In many cases, such as neuroscience and cartography, it is crucial to estimate the network connectivity. In order to find the graph representation of the structure depicted in the image, we propose an approach for joint segmentation and connection classification. Apart from pixel probabilities, this approach also returns the likelihood of a proposed path being a part of the reconstructed network. We show that segmentation and path classification are closely related tasks and can benefit from the synergy.
The aforementioned methods rely on Machine Learning, which requires significant amounts of annotated ground-truth data to train models. The labelling process often requires expertise, it is costly and tiresome. To alleviate this problem, we introduce an Active Learning method that significantly decreases the time spent on annotating images. It queries the annotator only about the most informative examples, in this case the hypothetical paths belonging to the structure of interest. Contrary to conventional Active Learning methods, our approach exploits local consistency of linear paths to pick the ones that stand out from their neighborhood.
Our final contribution is a method suited for both Active Learning and proofreading the result, which often requires more time than the automated delineation itself. It investigates edges of the delineation graph and determines the ones that are especially significant for the global reconstruction by perturbing their weights. Our Active Learning and proofreading strategies are combined with a new efficient formulation of an optimal subgraph computation and reduce the annotation effort by up to 80%.
From whole-organ imaging to in-silico blood flow modeling: a new multi-scale network analysis for revisiting tissue functional anatomy
We present a multi-disciplinary, image-based blood flow perfusion model of a whole-organ vascular network for analyzing both its structural and functional properties. We show how the use of Light-Sheet Fluorescence Microscopy (LSFM) permits whole-organ micro-vascular imaging, analysis and modelling. Using an adapted image post-processing workflow, we could segment, vectorize and reconstruct the entire micro-vascular network, composed of 1.7 million vessels, from the tissue scale, inside a ~25 × 5 × 1 = 125 mm³ volume of the mouse fat pad, hundreds of times larger than previous studies, down to the cellular scale at micron resolution, with the entire blood perfusion modeled. Adapted network analysis revealed the structural and functional organization of meso-scale tissue as strongly connected communities of vessels. These communities share a distinct heterogeneous core region and a more homogeneous peripheral region, consistent with known biological functions of fat tissue. Graph clustering analysis also revealed two distinct, robust meso-scale typical sizes (from 10 to several hundred times the cellular size), revealing, for the first time, strongly connected functional vascular communities. These community networks support heterogeneous micro-environments. This work provides a proof of concept that in-silico all-tissue perfusion modeling can reveal new structural and functional exchanges between micro-regions in tissues, found from community clusters in the vascular graph.
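The community analysis described above can be illustrated in miniature. The sketch below uses spectral bisection (splitting a graph by the sign of the Fiedler vector of its Laplacian), a generic clustering technique chosen purely for illustration; the abstract does not specify the actual clustering method. The two-clique toy graph stands in for strongly connected vessel communities joined by a few bridging vessels.

```python
import numpy as np

def spectral_bisect(A):
    """Split a graph (dense adjacency matrix A) into two communities via
    the sign of the Fiedler vector, i.e. the eigenvector of the graph
    Laplacian L = D - A with the second-smallest eigenvalue. Cuts through
    the sparsest 'bridge' between densely connected groups."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    _, v = np.linalg.eigh(L)       # eigh returns eigenvalues in ascending order
    fiedler = v[:, 1]
    return fiedler >= 0            # boolean community assignment
```

On a vascular graph, repeated bisections of this kind would carve the network into the "strongly connected communities of vessels" the text describes, with cuts falling on the few inter-community vessels.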
Neuropathy Classification of Corneal Nerve Images Using Artificial Intelligence
Nerve variations in the human cornea have been associated with alterations in
the neuropathy state of a patient suffering from chronic diseases. For some diseases,
such as diabetes, detection of neuropathy prior to visible symptoms is important,
whereas for others, such as multiple sclerosis, early prediction of disease worsening is
crucial. Whereas current methods fail to provide early diagnosis of neuropathy, in vivo
corneal confocal microscopy enables very early insight into nerve damage by
illuminating and magnifying the human cornea. This non-invasive method captures a
sequence of images from the corneal sub-basal nerve plexus. Current practices of
manual nerve tracing and classification impede the advancement of medical research in
this domain. Since corneal nerve analysis for neuropathy is in its initial stages, there is
a dire need for process automation.
To address this limitation, we seek to automate the two stages of this process:
nerve segmentation and neuropathy classification of images. For nerve segmentation,
we compare the performance of two existing solutions on multiple datasets to select the
appropriate method and proceed to the classification stage. Consequently, we approach
neuropathy classification of the images through artificial intelligence using Adaptive
Neuro-Fuzzy Inference System, Support Vector Machines, Naïve Bayes and k-nearest
neighbors. We further compare the performance of machine learning classifiers with
deep learning. We ascertained that nerve segmentation using convolutional neural networks provided a significant improvement in sensitivity and false negative rate by
at least 5% over the state-of-the-art software. For classification, ANFIS yielded the best
classification accuracy of 93.7% compared to the other classifiers. Furthermore, for this
problem, machine learning approaches performed better in terms of classification
accuracy than deep learning.
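As a concrete illustration of one of the classifiers compared above, here is a plain k-nearest-neighbours implementation in numpy. The two-dimensional feature vectors in the usage below are synthetic stand-ins; the study's real inputs are nerve-morphology descriptors extracted from corneal confocal images.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """k-nearest-neighbours classification: each test point gets the
    majority label of its k closest training points (squared Euclidean
    distance). A minimal sketch, not the study's tuned pipeline."""
    preds = []
    for x in X_test:
        d = np.sum((X_train - x) ** 2, axis=1)       # distances to all训 samples
        nearest = y_train[np.argsort(d)[:k]]         # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)
```

Instance-based classifiers like this need no training phase, which is part of why classical machine learning can remain competitive with deep learning when labeled medical datasets are small.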