2,935 research outputs found

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
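
    As a rough illustration of the stereo-imaging step such a system builds on, the sketch below (OpenCV-based, with assumed file names, SGBM parameters and calibration matrix, not the authors' implementation) turns one rectified stereo pair into a 3D point cloud; the paper's structured light texturing, multi-view registration and phenotype measurement operate around this step.

```python
# Minimal sketch of a stereo-pair -> 3D point cloud step such a pipeline relies on.
# File names, the SGBM parameters and the reprojection matrix "Q.npy" are assumptions,
# not the authors' setup.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view (assumed)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view (assumed)

# Semi-global block matching; disparity is returned as 16x fixed point, hence the /16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# 4x4 reprojection matrix from stereo calibration (assumed to be saved beforehand).
Q = np.load("Q.npy")
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# Keep only pixels with a valid disparity; one such cloud per viewing angle would then
# be merged by a point cloud registration step into a whole-plant model.
cloud = points_3d[disparity > 0]
```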

    A Projected Gradient Descent Method for CRF Inference allowing End-To-End Training of Arbitrary Pairwise Potentials

    Are we using the right potential functions in the Conditional Random Field models that are popular in the Vision community? Semantic segmentation and other pixel-level labelling tasks have made significant progress recently due to the deep learning paradigm. However, most state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors, label consistencies and feature-based image conditioning. In this paper, we challenge this view by developing a new inference and learning framework which can learn pairwise CRF potentials restricted only by their dependence on the image pixel values and the size of the support. Both standard spatial and high-dimensional bilateral kernels are considered. Our framework is based on the observation that CRF inference can be achieved via projected gradient descent and, consequently, can easily be integrated into deep neural networks to allow for end-to-end training. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential. In addition, we compare our inference method to the commonly used mean-field algorithm. Our framework is evaluated on several public benchmarks for semantic segmentation with improved performance compared to previous state-of-the-art CNN+CRF models. Comment: Presented at the EMMCVPR 2017 conference.
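
    As a rough, non-authoritative illustration of the inference scheme described above, the sketch below runs projected gradient descent on per-pixel label marginals with a single spatial kernel and a label-compatibility matrix; because every operation is differentiable, both could in principle be learned end-to-end, but the shapes, step size and kernel form here are assumptions rather than the paper's learned potentials.

```python
# A rough sketch of CRF inference by projected gradient descent over per-pixel label
# marginals. The single spatial kernel and label-compatibility matrix stand in for the
# paper's learned pairwise potentials; shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def project_simplex(v):
    """Euclidean projection of each row of v (..., L) onto the probability simplex."""
    L = v.shape[-1]
    u, _ = torch.sort(v, dim=-1, descending=True)
    css = torch.cumsum(u, dim=-1) - 1.0
    idx = torch.arange(1, L + 1, device=v.device, dtype=v.dtype)
    rho = ((u - css / idx) > 0).sum(dim=-1, keepdim=True) - 1  # last index meeting the condition
    theta = css.gather(-1, rho) / (rho + 1).to(v.dtype)
    return torch.clamp(v - theta, min=0.0)

def crf_pgd(unary, spatial_kernel, compat, n_iters=10, step=0.5):
    """unary: (B, L, H, W) scores from a CNN; spatial_kernel: (1, 1, k, k); compat: (L, L).
    Every step is differentiable, so all three could be trained end-to-end."""
    B, L, H, W = unary.shape
    q = F.softmax(-unary, dim=1)                      # initial marginals from the unaries
    pad = spatial_kernel.shape[-1] // 2
    for _ in range(n_iters):
        # Message passing: filter each label plane spatially, then mix labels via compat.
        filtered = F.conv2d(q.reshape(B * L, 1, H, W), spatial_kernel, padding=pad)
        msg = torch.einsum('lm,bmhw->blhw', compat, filtered.reshape(B, L, H, W))
        grad = unary + 2.0 * msg                      # gradient of a symmetric pairwise energy
        q = q - step * grad                           # gradient step ...
        q = project_simplex(q.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # ... then projection
    return q
```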

    Retinal vessel segmentation: An efficient graph cut approach with Retinex and local phase

    Our application concerns the automated detection of vessels in retinal images to improve understanding of the disease mechanism, diagnosis and treatment of retinal and a number of systemic diseases. We propose a new framework for segmenting retinal vasculatures with much improved accuracy and efficiency. The proposed framework consists of three technical components: Retinex-based image inhomogeneity correction, local phase-based vessel enhancement and graph cut-based active contour segmentation. These procedures are applied in the following order. Underpinned by the Retinex theory, the inhomogeneity correction step aims to address challenges presented by the image intensity inhomogeneities, and the relatively low contrast of thin vessels compared to the background. The local phase enhancement technique is employed to enhance vessels for its superiority in preserving the vessel edges. The graph cut-based active contour method is used for its efficiency and effectiveness in segmenting the vessels from the enhanced images using the local phase filter. We have demonstrated its performance by applying it to four public retinal image datasets (3 datasets of color fundus photography and 1 of fluorescein angiography). Statistical analysis demonstrates that each component of the framework can provide the level of performance expected. The proposed framework is compared with widely used unsupervised and supervised methods, showing that the overall framework outperforms its competitors. For example, the achieved sensitivity (0.744), specificity (0.978) and accuracy (0.953) for the DRIVE dataset are very close to those of the manual annotations obtained by the second observer.
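
    The Retinex-based correction at the front of this framework can be approximated by a single-scale Retinex, i.e. dividing out (in the log domain) an illumination estimate obtained by heavy Gaussian blurring; the sketch below shows only that idea, under assumed parameters, and omits the local phase enhancement and graph cut stages.

```python
# A sketch of the Retinex-style inhomogeneity correction idea only: subtract a log-domain
# illumination estimate obtained by heavy Gaussian blurring. Sigma, the file name and the
# choice of the green channel are assumptions, not the paper's exact correction step.
import cv2
import numpy as np

def single_scale_retinex(channel, sigma=60.0):
    img = channel.astype(np.float32) + 1.0               # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)  # smooth illumination estimate
    r = np.log(img) - np.log(illumination)               # reflectance in the log domain
    r = (r - r.min()) / (r.max() - r.min() + 1e-8)       # rescale for later stages
    return (255 * r).astype(np.uint8)

green = cv2.imread("fundus.png")[:, :, 1]                # green channel: best vessel contrast
corrected = single_scale_retinex(green)
```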

    Superpixel Convolutional Networks using Bilateral Inceptions

    In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new 'bilateral inception' module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (1x1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while remaining competitive in runtime. Comment: European Conference on Computer Vision (ECCV), 2016.
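
    The core operation of such a module, bilateral filtering between superpixels in a guide feature space, can be sketched as a Gaussian-weighted average over per-superpixel feature vectors; the snippet below is an illustrative simplification with assumed shapes and normalisation, not the paper's bilateral inception module.

```python
# Illustrative simplification of bilateral filtering between superpixel feature vectors,
# weighted by similarity in a guide feature space. The softmax normalisation, fixed
# bandwidth and random example tensors are assumptions; in the paper the filtering
# feature space itself is learned end-to-end.
import torch

def superpixel_bilateral(features, guidance, theta=1.0):
    # features: (S, C) per-superpixel CNN features; guidance: (S, D) guide features
    d2 = torch.cdist(guidance, guidance) ** 2          # pairwise squared guide distances
    w = torch.softmax(-d2 / (2 * theta ** 2), dim=1)   # row-normalised Gaussian weights
    return w @ features                                # each superpixel averages similar ones

feats = torch.randn(200, 64)   # e.g. 200 superpixels with 64-channel features
guide = torch.randn(200, 5)    # e.g. mean (r, g, b, x, y) per superpixel
smoothed = superpixel_bilateral(feats, guide)
```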

    Image Segmentation Techniques: A Survey

    Segmenting an image using diverse strategies is a primary technique of image processing. It is broadly used in medical image processing, face recognition, pedestrian detection, and so on, since the various objects in an image can be recognized using image segmentation methods. Researchers have proposed many segmentation methods for effective analysis. This paper presents a survey that summarizes the design process of the essential image segmentation methods in broad use, along with their advantages and weaknesses.

    Improved 3D Heart Segmentation Using Surface Parameterization for Volumetric Heart Data

    Imaging modalities such as CT, MRI, and SPECT have had a tremendous impact on diagnosis and treatment planning. These imaging techniques have given doctors the capability to visualize the 3D anatomical structures of the human body and soft tissues while being non-invasive. Unfortunately, the 3D images produced by these modalities often have boundaries between the organs and soft tissues that are difficult to delineate due to low signal-to-noise ratios and other factors. Image segmentation is employed as a method for differentiating Regions of Interest in these images by creating artificial contours or boundaries in the images. There are many different techniques for performing segmentation, and automating these methods is an active area of research, but currently there are no generalized methods for automatic segmentation due to the complexity of the problem. Therefore, hand-segmentation is still widely used in the medical community and is the "Gold standard" by which all other segmentation methods are measured. However, existing manual segmentation techniques have several drawbacks: they are time consuming, introduce slice-interpolation errors when segmenting slice-by-slice, and are generally not reproducible. In this thesis, we present a novel semi-automated method for 3D hand-segmentation that uses mesh extraction and surface parameterization to project several 3D meshes onto a 2D plane. We hypothesize that allowing the user to better view the relationships between neighboring voxels will aid in delineating Regions of Interest, reducing segmentation time, alleviating slice-interpolation artifacts, and improving reproducibility.
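
    The mesh extraction stage named above can be sketched with scikit-image's marching cubes, as below; the iso-level and input file are assumptions, and the surface parameterization onto a 2D plane and the interactive contouring that the thesis contributes are not shown.

```python
# A sketch of the mesh extraction step such a pipeline starts from: an isosurface mesh is
# pulled out of the volume with marching cubes. The file name and iso-level are assumptions.
import numpy as np
from skimage import measure

volume = np.load("heart_volume.npy")   # (Z, Y, X) intensity volume, assumed to exist
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"extracted {len(verts)} vertices and {len(faces)} triangles")
```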