
    Learning Approach to Delineation of Curvilinear Structures in 2D and 3D Images

    Detection of curvilinear structures has long been of interest due to its wide range of applications. Large amounts of imaging data are readily available in many fields, but analyzing them manually is impractical; hence the need for automated delineation approaches. In recent years, Computer Vision has witnessed a paradigm shift from mathematical modelling to data-driven methods based on Machine Learning, which has improved the performance and robustness of detection algorithms. Nonetheless, most Machine Learning methods are general-purpose and do not exploit the specificity of the delineation problem. In this thesis, we present learning methods suited for this task and apply them to various kinds of microscopic and natural images, demonstrating the general applicability of the presented solutions. First, we introduce a topology loss - a new training loss term that captures higher-level features of curvilinear networks such as smoothness, connectivity and continuity. This is in contrast to most Deep Learning segmentation methods, which do not take into account the geometry of the resulting prediction. To compute the new loss term, we extract topological features of the prediction and the ground truth using a pre-trained network whose filters are activated by structures at different scales and orientations. We show that this approach yields better results in terms of both conventional segmentation metrics and the overall topology of the resulting delineation. Although segmentation of curvilinear structures provides useful information, it is not always sufficient. In many domains, such as neuroscience and cartography, it is crucial to estimate the network connectivity. To find the graph representation of the structure depicted in the image, we propose an approach for joint segmentation and connection classification. 
Apart from pixel probabilities, this approach also returns the likelihood that a proposed path is part of the reconstructed network. We show that segmentation and path classification are closely related tasks that benefit from this synergy. The aforementioned methods rely on Machine Learning, which requires significant amounts of annotated ground-truth data to train models. The labelling process often requires expertise and is costly and tedious. To alleviate this problem, we introduce an Active Learning method that significantly decreases the time spent annotating images. It queries the annotator only about the most informative examples, in this case the hypothetical paths belonging to the structure of interest. Contrary to conventional Active Learning methods, our approach exploits the local consistency of linear paths to pick the ones that stand out from their neighborhood. Our final contribution is a method suited for both Active Learning and proofreading the result, which often requires more time than the automated delineation itself. It investigates the edges of the delineation graph and, by perturbing their weights, determines the ones that are especially significant for the global reconstruction. Our Active Learning and proofreading strategies are combined with a new, efficient formulation of an optimal subgraph computation and reduce the annotation effort by up to 80%.
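The topology-loss idea above (comparing multi-scale, multi-orientation filter responses of the prediction and the ground truth) can be sketched in a few lines. This is a minimal numpy illustration under stated assumptions, not the thesis implementation: hand-built Gaussian-derivative filters stand in for the pre-trained network, and the names `multiscale_features` and `topology_loss` are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_features(img, sigmas=(1.0, 2.0, 4.0)):
    """Stand-in for the pre-trained filter bank: responses of
    Gaussian-derivative filters at several scales and orientations."""
    feats = []
    for s in sigmas:
        # first-order derivatives along each axis approximate
        # oriented edge/ridge filters at scale s
        feats.append(gaussian_filter(img, s, order=(0, 1)))
        feats.append(gaussian_filter(img, s, order=(1, 0)))
    return np.stack(feats)

def topology_loss(pred, gt, sigmas=(1.0, 2.0, 4.0)):
    """Mean squared difference between the feature responses of the
    prediction and the ground truth (illustrative loss term)."""
    fp = multiscale_features(pred, sigmas)
    fg = multiscale_features(gt, sigmas)
    return float(np.mean((fp - fg) ** 2))

# identical maps incur zero topology loss
gt = np.zeros((32, 32)); gt[16, 4:28] = 1.0
assert topology_loss(gt, gt) == 0.0
# a broken (disconnected) prediction is penalised
broken = gt.copy(); broken[16, 14:18] = 0.0
assert topology_loss(broken, gt) > 0.0
```

Because the filter responses summarise structure over neighbourhoods, a small break in a line changes them over a wide area, which is what lets such a term penalise connectivity errors more than a per-pixel loss would.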

    Live extraction of curvilinear structures from lidar raw data

    In this paper, a general framework is proposed for live extraction of curvilinear structures such as roads or ridges from airborne LiDAR raw data, in the scope of studies of present and past man-environment interactions. Unlike most approaches in the literature, classified ground points are processed directly, rather than derived products such as digital terrain models (DTMs). This makes it possible to detect gaps in the ground points caused by LiDAR signal occlusions under dense coniferous canopies. An efficient and simple solution based on discrete geometry tools is described for a supervised context in which the user simply indicates where the extraction should take place. Fast response times are required to ensure good human-system interaction. The framework's performance is first evaluated on the extraction of forest roads in a mountainous area, as these objects are well marked in the DTM and hence provide a kind of ground truth. Good execution times and accuracy levels are reported. The framework is then applied to the detection of prominent curvilinear structures, which are much more diffuse objects, but of greater interest than roads in the scope of the present project. The achieved results show the high potential of the proposed approach to help archaeologists and geomorphologists find areas of interest for future prospection using LiDAR data.
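The occlusion-detection idea (spotting stretches along a user-indicated line where classified ground points are missing) can be illustrated with a toy sketch. This is not the paper's discrete-geometry method; the helper `ground_point_gaps`, its binning scheme, and the sample data are all invented for illustration.

```python
import numpy as np

def ground_point_gaps(points, p0, p1, n_bins=20, min_count=1):
    """Project classified ground points onto the user-drawn segment
    p0 -> p1 and flag bins with too few points: candidate occlusions
    (hypothetical helper, not the paper's algorithm)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # normalised position of each point along the segment
    t = (np.asarray(points, float) - p0) @ d / (d @ d)
    t = t[(t >= 0.0) & (t <= 1.0)]
    counts, _ = np.histogram(t, bins=n_bins, range=(0.0, 1.0))
    return np.where(counts < min_count)[0]  # indices of sparse bins

# simulate a road with a canopy occlusion between x = 4 and x = 6
pts = np.array([[x, 0.0] for x in np.linspace(0.0, 10.0, 50)])
pts = pts[(pts[:, 0] < 4.0) | (pts[:, 0] > 6.0)]
gaps = ground_point_gaps(pts, (0.0, 0.0), (10.0, 0.0), n_bins=10)
assert set(gaps) == {4, 5}
```

Working on the raw classified points, as the paper advocates, is what makes such gaps visible at all; an interpolated DTM would smooth straight over them.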

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses of multiple subjects are essential for the group-level studies on which clinical prediction and therapeutic inference ultimately depend. This first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, including data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, the topology of brain vessels is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join along minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph-matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. 
Efficient approximations of blood flow and pressure drops account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analogue equivalents derived from clinical angiographies. Additionally, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations within an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validate the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk-prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
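The pruning of the over-connected geodesic graph into a tree can be illustrated with a standard minimum spanning tree computation. This is a toy SciPy sketch, not the VTrails pipeline: the five nodes and the "geodesic" edge costs below are invented for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Over-connected graph: nodes are vascular keypoints, weights are
# (hypothetical) geodesic path costs between them; 0 means no edge.
n = 5
w = np.array([[0, 2, 9, 0, 0],
              [2, 0, 3, 9, 0],
              [9, 3, 0, 1, 7],
              [0, 9, 1, 0, 2],
              [0, 0, 7, 2, 0]], dtype=float)

# the MST keeps the cheapest edges that leave every node connected,
# discarding the redundant connections of the over-connected graph
mst = minimum_spanning_tree(csr_matrix(w))

assert mst.nnz == n - 1                # a tree over n nodes: n-1 edges
assert np.isclose(mst.sum(), 8.0)      # edges of cost 1 + 2 + 2 + 3
```

Deliberately over-connecting first and pruning with an MST afterwards lets the cheap geodesic edges decide the final topology, instead of committing to connections during segmentation.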

    Automatic Multi-Model Fitting for Blood Vessel Extraction

    Blood vessel extraction and visualization in 2D images or 3D volumes is an essential clinical task. A blood vessel system is an example of a tubular tree-like structure, and fully automated reconstruction of such structures remains an open computer vision problem. Most vessel extraction methods are based on a vesselness measure, usually derived from the eigenvalues of the Hessian matrix, which assigns a high value to a voxel that is likely to be part of a blood vessel. After the vesselness measure is computed, most methods extract vessels along the shortest paths connecting voxels with a high measure of vesselness. Our approach is quite different. We also start with the vesselness measure, but instead of computing shortest paths, we propose to fit a geometric model of the vessel system to the vesselness measure. Fitting a geometric model has the advantage that we can choose a model with desired properties and an appropriate goodness-of-fit function to control the fitting results. Changing the model and the goodness-of-fit function allows us to change the properties of the reconstructed vessel system in a principled way. In contrast, with shortest paths, undesirable reconstruction properties, such as short-cutting, are addressed by developing ad-hoc procedures that are not easy to control. Since the geometric model has to be fitted to a discrete set of points, we threshold the vesselness measure to extract voxels that are likely to be vessels, and fit our geometric model to these thresholded voxels. Our geometric model is a piecewise line-segment model. That is, we approximate the vessel structure as a collection of 3D straight line segments of various lengths and widths. This can be regarded as the problem of fitting multiple line segments, that is, a multi-model fitting problem. We approach the multi-model fitting problem in a global energy optimization framework. 
That is, we formulate a global energy function that reflects the goodness of fit of our piecewise line-segment model to the thresholded vesselness voxels, and we use the efficient and effective graph cut algorithm to optimize the energy. Our global energy function consists of data, smoothness, and label cost terms. The data cost encourages a good geometric fit of each voxel to the line segment it is assigned to. The smoothness cost encourages nearby line segments to have similar angles, thus encouraging smoother blood vessels. The label cost penalizes overly complex models; that is, it encourages explaining the data with fewer line segments. We apply our algorithm to challenging 3D micro-CT images of a mouse heart and obtain promising results.
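The shape of such a data + smoothness + label-cost energy can be sketched for one candidate labelling. This is a simplified 2D illustration under stated assumptions, not the abstract's method: the real energy is defined over voxels and minimized with graph cuts, whereas `fitting_energy`, its weights, and the toy data below are all invented, and we merely evaluate the energy rather than optimize it.

```python
import numpy as np

def seg_dir(seg):
    """Unit direction of a line segment given as (a, b)."""
    a, b = seg
    d = b - a
    return d / np.linalg.norm(d)

def fitting_energy(points, labels, segments, neighbors=(),
                   lam=1.0, beta=0.5):
    """Toy data + smoothness + label-cost energy for one labelling."""
    # data term: squared distance of each point to its segment
    data = 0.0
    for p, l in zip(points, labels):
        a, b = segments[l]
        d = b - a
        t = np.clip(np.dot(p - a, d) / np.dot(d, d), 0.0, 1.0)
        data += np.sum((p - (a + t * d)) ** 2)
    # smoothness term: neighbouring points should be assigned to
    # segments with similar directions
    smooth = sum(1.0 - abs(np.dot(seg_dir(segments[labels[i]]),
                                  seg_dir(segments[labels[j]])))
                 for i, j in neighbors)
    # label cost: penalise the number of distinct models in use
    return data + lam * smooth + beta * len(set(labels))

# two parallel segments and one point lying on each of them
segs = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),
        (np.array([0.0, 1.0]), np.array([1.0, 1.0]))]
pts = [np.array([0.5, 0.0]), np.array([0.5, 1.0])]
e_two = fitting_energy(pts, [0, 1], segs, neighbors=[(0, 1)])
e_one = fitting_energy(pts, [0, 0], segs, neighbors=[(0, 1)])
assert e_two < e_one  # the geometrically correct labelling wins
```

The label cost is what keeps the minimization from trivially assigning every point its own perfectly fitting segment: adding a model only pays off when it reduces the data cost by more than `beta`.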