
    DeepWheat: Estimating Phenotypic Traits from Crop Images with Deep Learning

    In this paper, we investigate estimating emergence and biomass traits from color images and elevation maps of wheat field plots. We employ a state-of-the-art deconvolutional network for segmentation, and convolutional architectures with residual and Inception-like layers to estimate traits via high-dimensional nonlinear regression. Evaluation was performed on two different species of wheat, grown in field plots for an experimental plant breeding study. Our framework achieves satisfactory performance, with a mean and standard deviation of the absolute difference of 1.05 and 1.40 counts for emergence and 1.45 and 2.05 for biomass estimation. Our results for counting wheat plants from field images are better than the accuracy reported for the similar, but arguably less difficult, task of counting leaves from indoor images of rosette plants. Our results for biomass estimation, even with a very small dataset, improve upon all previously proposed approaches in the literature. Comment: WACV 2018 (Code repository: https://github.com/p2irc/deepwheat_WACV-2018)
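    The reported numbers are the mean and standard deviation of the absolute difference between predicted and ground-truth trait values. Below is a minimal sketch of how that metric is computed; the plot counts are invented for illustration and are not taken from the DeepWheat dataset.

```python
import numpy as np

# Hypothetical ground-truth and predicted emergence counts for a few field
# plots; the values are made up purely to illustrate the metric.
true_counts = np.array([52, 47, 60, 55, 49], dtype=float)
pred_counts = np.array([53, 45, 61, 54, 51], dtype=float)

abs_diff = np.abs(pred_counts - true_counts)
print(f"mean abs. difference: {abs_diff.mean():.2f}")  # reported as 1.05 counts for emergence
print(f"std  abs. difference: {abs_diff.std():.2f}")   # reported as 1.40 counts for emergence
```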

    Robust unsupervised small area change detection from SAR imagery using deep learning

    Small area change detection using synthetic aperture radar (SAR) imagery is a highly challenging task, due to speckle noise and the imbalance between the changed and unchanged classes. In this paper, a robust unsupervised approach is proposed for small area change detection using deep learning techniques. First, a multi-scale superpixel reconstruction method is developed to generate a difference image (DI), which suppresses speckle noise effectively and enhances edges by exploiting local, spatially homogeneous information. Second, a two-stage centre-constrained fuzzy c-means clustering algorithm is proposed to divide the pixels of the DI into changed, unchanged and intermediate classes with a parallel clustering strategy. Image patches belonging to the first two classes are then constructed as pseudo-label training samples, and image patches of the intermediate class are treated as testing samples. Finally, a convolutional wavelet neural network (CWNN) is designed and trained to classify the testing samples as changed or unchanged, coupled with a deep convolutional generative adversarial network (DCGAN) that increases the number of changed-class samples within the pseudo-label training set. Numerical experiments on four real SAR datasets demonstrate the validity and robustness of the proposed approach, achieving up to 99.61% accuracy for small area change detection.
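    A rough sketch of the pseudo-labelling step is given below. A plain (unconstrained) fuzzy c-means stands in for the paper's two-stage centre-constrained variant, and a synthetic difference image replaces real SAR data; the patch extraction, CWNN classifier and DCGAN augmentation are not reproduced.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a 1-D array of DI intensities (a stand-in for
    the paper's two-stage centre-constrained clustering)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                     # membership-weighted means
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12  # avoid divide-by-zero
        u = dist ** (-2.0 / (m - 1))
        u /= u.sum(axis=0, keepdims=True)
    return centers, u

# Synthetic difference image: mostly unchanged background plus one small change.
rng = np.random.default_rng(1)
di = rng.rayleigh(0.2, size=(64, 64))
di[28:34, 30:36] += 1.5

centers, u = fuzzy_c_means(di.ravel(), c=3)
rank = np.argsort(np.argsort(centers))          # 0 = lowest centre, 2 = highest
labels = rank[np.argmax(u, axis=0)].reshape(di.shape)

# 0 -> unchanged, 2 -> changed (pseudo-label training pixels),
# 1 -> intermediate (classified by the CWNN in the full pipeline)
for name, k in (("unchanged", 0), ("intermediate", 1), ("changed", 2)):
    print(f"{name}: {(labels == k).sum()} pixels")
```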

    Weakly supervised conditional random fields model for semantic segmentation with image patches

    Image semantic segmentation (ISS) is used to segment an image into regions with different semantic category labels. Most existing ISS methods are based on fully supervised learning, which requires pixel-level labels for training. As a result, labeling is often very time-consuming and labor-intensive, yet still subject to manual errors and subjective inconsistency. To tackle these difficulties, a weakly supervised ISS approach is proposed, in which the challenging problem of inferring pixel-level labels from image-level labels is addressed using image patches and conditional random fields (CRF). An improved simple linear iterative clustering (SLIC) algorithm is employed to extract superpixels for image segmentation. Specifically, it adapts the number of superpixels to each image, which guides image patch extraction based on the image-level label information. Based on the extracted image patches, a CRF model is constructed to infer semantic class labels, using a potential energy function to map image-level labels to pixel-level labels. Finally, the patch-based CRF (PBCRF) model is used to accomplish weakly supervised ISS. Experiments conducted on two publicly available benchmark datasets, MSRC and PASCAL VOC 2012, have demonstrated that our proposed algorithm can yield very promising results compared to several state-of-the-art ISS methods, including some deep learning-based models.
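    A small sketch of the superpixel-driven patch extraction step is shown below, using the standard SLIC implementation in scikit-image and a stock test image; the improved SLIC variant, the image-level label guidance and the CRF inference from the paper are not reproduced here.

```python
import numpy as np
from skimage import data, measure, segmentation
from skimage.util import img_as_float

image = img_as_float(data.astronaut())            # stand-in RGB image
segments = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)

# One fixed-size patch per superpixel, centred on the superpixel centroid.
patch_size, half = 32, 16
patches = []
for region in measure.regionprops(segments):
    r, c = (int(round(v)) for v in region.centroid)
    r0 = min(max(r - half, 0), image.shape[0] - patch_size)
    c0 = min(max(c - half, 0), image.shape[1] - patch_size)
    patches.append(image[r0:r0 + patch_size, c0:c0 + patch_size])

print(f"{segments.max()} superpixels -> {len(patches)} candidate patches")
```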

    PolSAR Ship Detection Based on Neighborhood Polarimetric Covariance Matrix

    The detection of small ships in polarimetric synthetic aperture radar (PolSAR) images is still a topic for further investigation. Recently, patch detection techniques, such as superpixel-level detection, have stimulated wide interest because they can exploit the information contained in similarities among neighboring pixels. In this article, we propose a novel neighborhood polarimetric covariance matrix (NPCM) to detect small ships in PolSAR images, leading to a significant improvement in the separability between ship targets and sea clutter. The NPCM utilizes the spatial correlation between neighborhood pixels and maps the representation of a given pixel into a high-dimensional covariance matrix by embedding spatial and polarization information. Using the NPCM formalism, we apply a standard whitening filter, similar to the polarimetric whitening filter (PWF). We show how the inclusion of neighborhood information improves performance compared with the traditional polarimetric covariance matrix, although at the expense of a higher computational cost. The theory is validated on simulated and measured data under different sea states and from different radar platforms.
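    A toy sketch of the idea on synthetic data is given below: the scattering vectors of each 3x3 neighbourhood are stacked into one long vector per pixel (one possible reading of the NPCM construction), the clutter covariance is estimated from pixels assumed to be sea, and a PWF-style statistic k^H C^-1 k is computed. The exact NPCM definition and detector used in the paper may differ.

```python
import numpy as np

def neighborhood_vectors(S, win=3):
    """Stack each pixel's win x win neighbourhood of scattering vectors into
    one long vector, embedding spatial and polarimetric information."""
    H, W, p = S.shape
    r = win // 2
    pad = np.pad(S, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = np.empty((H, W, p * win * win), dtype=complex)
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + win, j:j + win].reshape(-1)
    return out

def pwf_statistic(K, clutter_mask):
    """Whitening-filter style statistic k^H C^-1 k, with the clutter
    covariance C estimated from pixels flagged as sea clutter."""
    k_c = K[clutter_mask]                                # (N, d) clutter samples
    C = k_c.conj().T @ k_c / k_c.shape[0]                # sample covariance
    C_inv = np.linalg.inv(C + 1e-6 * np.eye(C.shape[0]))
    flat = K.reshape(-1, K.shape[-1])
    stat = np.einsum("nd,de,ne->n", flat.conj(), C_inv, flat).real
    return stat.reshape(K.shape[:2])

# Synthetic scene: complex Gaussian sea clutter in three polarimetric channels
# with one brighter block standing in for a small ship (values are made up).
rng = np.random.default_rng(0)
H = W = 32
S = (rng.normal(size=(H, W, 3)) + 1j * rng.normal(size=(H, W, 3))) / np.sqrt(2)
S[15:17, 15:17] *= 4.0

K = neighborhood_vectors(S)                              # 27-dimensional stacked vectors
clutter = np.ones((H, W), dtype=bool)
clutter[12:20, 12:20] = False                            # keep the target out of the clutter estimate
stat = pwf_statistic(K, clutter)
print("peak detection statistic at:", np.unravel_index(stat.argmax(), stat.shape))
```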