
    Automatic normal orientation in point clouds of building interiors

    Orienting surface normals correctly and consistently is a fundamental problem in geometry processing. Applications such as visualization, feature detection, and geometry reconstruction often rely on the availability of correctly oriented normals. Many existing approaches for automatically orienting normals on meshes or point clouds make strong assumptions about the input data or the topology of the underlying object that do not hold for real-world measurements of urban scenes. In contrast, our approach is specifically tailored to the challenging case of unstructured indoor point cloud scans of multi-story, multi-room buildings. We evaluate the correctness and speed of our approach on multiple real-world point cloud datasets.
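The abstract does not spell out the paper's indoor-specific algorithm, but the classic baseline it improves on is orientation propagation: flip each normal to agree with an already-oriented neighbour while walking a nearest-neighbour graph. A minimal sketch of that baseline (the brute-force neighbour search and `k` parameter are illustrative choices, not the paper's method):

```python
from collections import deque

def orient_normals(points, normals, k=2):
    """Greedy normal orientation: BFS over a k-nearest-neighbour graph,
    flipping each normal to agree with its already-oriented neighbour.
    Classic propagation baseline, not the paper's indoor-tailored method."""
    n = len(points)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Brute-force k nearest neighbours; fine for a sketch, O(n^2 log n).
    nbrs = [sorted(range(n), key=lambda j: dist2(points[i], points[j]))[1:k + 1]
            for i in range(n)]

    oriented = [None] * n
    oriented[0] = normals[0]          # seed: trust the first normal
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in nbrs[i]:
            if oriented[j] is None:
                # Flip the neighbour's normal if it disagrees with ours.
                dot = sum(a * b for a, b in zip(oriented[i], normals[j]))
                oriented[j] = normals[j] if dot >= 0 else tuple(-c for c in normals[j])
                queue.append(j)
    return oriented
```

This greedy scheme is exactly what breaks down on noisy multi-room scans (propagation can leak through walls), which is the gap the paper targets.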

    Introducing Geometry in Active Learning for Image Segmentation

    We propose an Active Learning approach to training a segmentation classifier that exploits geometric priors to streamline the annotation process in 3D image volumes. To this end, we use these priors not only to select the voxels most in need of annotation but also to guarantee that they lie on a 2D planar patch, which makes them much easier to annotate than if they were randomly distributed in the volume. A simplified version of this approach is also effective in natural 2D images. We evaluated our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on natural images. Comparing our approach against several accepted baselines demonstrates a marked performance increase.
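The core idea, selecting uncertain voxels constrained to one planar patch rather than scattered points, can be sketched by scoring every axis-aligned square patch on every slice by its summed uncertainty and returning the best one. The patch size and the "sum of uncertainty" score are illustrative assumptions, not the paper's exact selection criterion:

```python
def select_planar_patch(uncertainty, patch=2):
    """Pick the axis-aligned 2D patch (a square on a z-slice) with the
    highest total uncertainty, so the annotator labels one planar region
    instead of scattered voxels. Illustrative stand-in for the paper's
    geometry-aware selection."""
    Z, Y, X = len(uncertainty), len(uncertainty[0]), len(uncertainty[0][0])
    best, best_score = None, -1.0
    for z in range(Z):
        for y in range(Y - patch + 1):
            for x in range(X - patch + 1):
                s = sum(uncertainty[z][y + dy][x + dx]
                        for dy in range(patch) for dx in range(patch))
                if s > best_score:
                    best_score, best = s, (z, y, x)
    return best, best_score  # top-left corner of the winning patch, and its score
```

The exhaustive scan is cubic in the volume size; the point of the sketch is only the planarity constraint on the annotation query.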

    A Fast Level Set Method for Synthetic Aperture Radar Ocean Image Segmentation

    Segmentation of high-noise imagery such as Synthetic Aperture Radar (SAR) images remains one of the most challenging tasks in image processing. While the level set method, an approach based on the analysis of the motion of an interface, can address this challenge, its cell-based iterations can make segmentation remarkably slow, especially for large images. For this reason, fast level set algorithms such as narrow band and fast marching have been proposed. Building on these, this paper presents an improved fast level set method for SAR ocean image segmentation. The method depends on both an intensity-driven speed and curvature flow, which together yield a stable and smooth boundary. Notably, it tracks the moving interface using a single list and a fast up-wind scheme iteration, keeping up with the point-wise propagation of the boundary. The list enables efficient insertion and deletion of pixels on the propagation front, while the local up-wind scheme updates the motion of the front without solving partial differential equations. Experiments on extracting surface slick features from ERS-2 SAR images substantiate the efficacy of the proposed fast level set method.
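To make the front-propagation machinery concrete, here is a heavily simplified fast-marching sketch: first-arrival times of a front expanding from seed pixels under a spatially varying speed, maintained in a single heap (the "narrow band") with a first-order upwind-style update. Reducing the update to a Dijkstra-like rule is a simplification for clarity, not the paper's optimized single-list variant:

```python
import heapq

def fast_march(speed, seeds):
    """Simplified fast marching on a pixel grid: arrival time T of a front
    expanding from seed pixels with local speed F, using one heap as the
    narrow band. Sketch of the idea behind narrow-band/fast-marching
    level set acceleration."""
    H, W = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * W for _ in range(H)]
    frozen = [[False] * W for _ in range(H)]
    band = []
    for (y, x) in seeds:
        T[y][x] = 0.0
        heapq.heappush(band, (0.0, y, x))
    while band:
        t, y, x = heapq.heappop(band)
        if frozen[y][x]:
            continue
        frozen[y][x] = True           # this pixel's arrival time is final
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not frozen[ny][nx]:
                # Upwind flavour: propagate only from the smaller arrival time.
                cand = t + 1.0 / speed[ny][nx]
                if cand < T[ny][nx]:
                    T[ny][nx] = cand
                    heapq.heappush(band, (cand, ny, nx))
    return T
```

Because only the band around the front is ever touched, the cost scales with the front length rather than the full image, which is the speed-up the paper exploits for large SAR scenes.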

    The Deep Poincare Map: A Novel Approach for Left Ventricle Segmentation

    Precise segmentation of the left ventricle (LV) in cardiac MRI images is a prerequisite for the quantitative measurement of heart function. However, this task is challenging due to the limited availability of labeled data and motion artifacts in cardiac imaging. In this work, we present an iterative segmentation algorithm for LV delineation. By coupling deep learning with a novel dynamics-based labeling scheme, we present a new methodology in which a policy model is learned to guide an agent travelling over the image, tracing out the boundary of the ROI and using the magnitude difference of the Poincaré map as a stopping criterion. Our method is evaluated on two datasets, the Sunnybrook Cardiac Dataset (SCD) and data from the STACOM 2011 LV segmentation challenge, and outperforms previous work on many metrics. To demonstrate the transferability of our method, we present encouraging results on the STACOM 2011 data when using a model trained on the SCD dataset.
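The control flow of such an agent, follow a learned direction field step by step and stop when the trajectory returns to where it started, can be sketched generically. The `step_fn` (standing in for the learned policy) and the return-distance tolerance are illustrative assumptions; the crude "came back near the start" test is only a loose analogue of the Poincaré-map stopping criterion:

```python
import math

def trace_boundary(step_fn, start, max_steps=1000, tol=0.2):
    """Trace a closed contour by repeatedly following a direction field
    (step_fn stands in for the learned policy), stopping once the
    trajectory returns close to its start point. A generic sketch of
    iterative boundary tracing, not the paper's exact criterion."""
    path = [start]
    p = start
    for i in range(max_steps):
        p = step_fn(p)                 # one agent step along the boundary
        path.append(p)
        # Ignore the first few steps so we don't stop before leaving the start.
        if i > 2 and math.dist(p, start) < tol:
            break
    return path
```

With a `step_fn` that walks around a circle, the loop closes after roughly one full revolution, which is the behaviour the stopping criterion is meant to detect.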

    Machine Learning Methods for Medical and Biological Image Computing

    Medical and biological imaging technologies provide valuable visualization of the structure and function of an organ, from the level of individual molecules to the organ as a whole. The brain is the most complex organ in the body, and it attracts increasingly intense research attention with the rapid development of medical and biological imaging technologies. The massive amount of high-dimensional brain imaging data being generated creates a strong demand for computational methods that can analyze those images efficiently. Current computational methods based on hand-crafted features do not scale with the increasing number of brain images, hindering the pace of scientific discovery in neuroscience. In this thesis, I propose computational methods using high-level features for automated analysis of brain images at different levels. At the brain function level, I develop a deep learning based framework for completing and integrating multi-modality neuroimaging data, which increases diagnosis accuracy for Alzheimer's disease. At the cellular level, I propose three-dimensional convolutional neural networks (CNNs) for segmenting volumetric neuronal images, which improves the performance of digital reconstruction of neuron structures. I design a novel CNN architecture such that model training and test-image prediction can be implemented in an end-to-end manner. At the molecular level, I build a voxel CNN classifier to capture discriminative features of the input along three spatial dimensions, which facilitates the identification of secondary structures of proteins from electron microscopy images. To classify genes specifically expressed in different brain cell types, I propose invariant image feature descriptors that capture local gene expression information from cellular-resolution in situ hybridization images.
    I build image-level representations by applying regularized learning and vector quantization to the generated image descriptors. The computational methods developed in this dissertation are evaluated on images from medical and biological experiments, in comparison with baseline methods. Experimental results demonstrate that the developed representations, formulations, and algorithms are effective and efficient in learning from brain imaging data.
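The 3D CNNs mentioned for volumetric neuron segmentation rest on one core operation: volumetric convolution. A minimal pure-Python sketch of a valid-mode 3D convolution (strictly, cross-correlation, as in most deep learning frameworks) makes the operation concrete; it is for clarity, not speed:

```python
def conv3d(vol, ker):
    """Valid-mode 3D cross-correlation of a dense volume with a kernel,
    the core op behind 3D CNNs for volumetric segmentation.
    Pure-Python sketch; real frameworks vectorize this heavily."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    kz, ky, kx = len(ker), len(ker[0]), len(ker[0][0])
    # Each output voxel is the sum of an elementwise product between the
    # kernel and the kernel-sized sub-volume anchored at (z, y, x).
    return [[[sum(vol[z + a][y + b][x + c] * ker[a][b][c]
                  for a in range(kz) for b in range(ky) for c in range(kx))
              for x in range(X - kx + 1)]
             for y in range(Y - ky + 1)]
            for z in range(Z - kz + 1)]
```

Stacking such layers (with nonlinearities) over a voxel neighbourhood is what lets a voxel classifier see context along all three spatial dimensions at once, instead of slice by slice.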

    DETECTION OF GRANULATION TISSUE FOR HEALING ASSESSMENT OF CHRONIC ULCERS

    Wounds that fail to heal within an expected period develop into ulcers that cause severe pain and put patients at risk of limb amputation. Ulcer appearance changes gradually as ulcer tissues evolve throughout the healing process. Dermatologists assess the progression of ulcer healing by visual inspection of ulcer tissues, which is inconsistent and subjective. The ability to measure the early stages of ulcer healing objectively is important for improving clinical decisions and enhancing the effectiveness of treatment. Ulcer healing is indicated by the growth of granulation tissue, which contains the pigment haemoglobin that gives the tissue its red colour. This study investigates an approach that uses haemoglobin content as an image marker to detect regions of granulation tissue on the ulcer surface in colour images of chronic ulcers, and develops it into a system that performs this detection automatically.
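As a toy illustration of colour-based tissue detection, one can flag pixels whose red channel dominates green and blue by some margin, a crude proxy for the haemoglobin-driven redness of granulation tissue. The pure-RGB rule and the margin value are illustrative assumptions; the study's actual marker is derived from haemoglobin content, not a raw channel comparison:

```python
def granulation_mask(image, margin=30):
    """Per-pixel redness test: True where the red channel exceeds both
    green and blue by `margin`. A toy stand-in for haemoglobin-based
    granulation tissue detection; thresholds are illustrative."""
    return [[(r > g + margin and r > b + margin) for (r, g, b) in row]
            for row in image]
```

A real system would work in a colour space that separates pigment contributions and clean up the mask with spatial filtering, but the per-pixel classification step has this shape.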

    Segmentation of Tubular Structures Using Iterative Training with Tailored Samples

    We propose a minimal path method to simultaneously compute segmentation masks and extract centerlines of tubular structures with line topology. Minimal path methods are commonly used for the segmentation of tubular structures in a wide variety of applications. Recent methods use features extracted by CNNs and often outperform methods using hand-tuned features. However, for CNN-based methods, the samples used for training may be generated inappropriately, so that they can be very different from the samples encountered during inference. We address this discrepancy by introducing a novel iterative training scheme, which generates better training samples specifically tailored to minimal path methods without changing existing annotations. In our method, segmentation masks and centerlines are not determined one after the other by post-processing, but obtained in the same steps. Our method requires only very few annotated training images. Comparison with seven previous approaches on three public datasets, including satellite images and medical images, shows that our method achieves state-of-the-art results for both segmentation masks and centerlines.
    Comment: Accepted to IEEE/CVF International Conference on Computer Vision (ICCV), Paris, 202
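At the heart of any minimal path method is a shortest-path computation over a pixel cost map (in the paper, costs come from CNN features; here a hand-made cost grid stands in). A minimal Dijkstra sketch of centerline extraction between two endpoints:

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra minimal path on a pixel cost map: the backbone of minimal
    path centerline extraction. `cost` is a 2D grid of per-pixel costs;
    start/goal are (row, col). Returns the pixel chain start -> goal."""
    H, W = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    seen = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            break
        y, x = node
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nd = d + cost[ny][nx]   # accumulate the entered pixel's cost
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

The paper's contribution sits upstream of this step: iteratively regenerating the training samples for the CNN that produces the cost map, so that the minimal paths extracted at inference match what the network saw during training.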