    Exploring forest structural complexity by multi-scale segmentation of VHR imagery

    Forests are complex ecological systems characterised by multiple-scale structural and dynamical patterns that cannot be inferred from a system description spanning only a narrow window of resolution; this makes their investigation difficult with standard field sampling protocols. We segment a QuickBird image covering a beech forest in an initial stage of old-growthness (showing, accordingly, a good degree of structural complexity) into three segmentation levels. We apply field-based diversity indices of tree size, spacing, and species assemblage to quantify structural heterogeneity amongst the forest regions delineated by segmentation. The aim of the study is to evaluate, on a statistical basis, the relationships between spectrally delineated image segments and observed spatial heterogeneity in forest structure, including gaps in the outer canopy. Results show that: some 45% of the segments generated at the coarsest segmentation scale (level 1) are surrounded by structurally different neighbours; level 2 segments distinguish spatial heterogeneity in forest structure in about 63% of level 1 segments; and level 3 segments detect canopy gaps better than differences in the spatial pattern of the investigated structural indices. Results also support the idea of a mixture of macro- and micro-structural heterogeneity within the beech forest: large populations of trees that are homogeneous for the examined structural indices at the coarser segmentation level turn out to be internally heterogeneous when analysed at a finer scale, and vice versa. Findings from this study demonstrate that multiresolution segmentation is able to delineate scale-dependent patterns of forest structural heterogeneity, even in an initial stage of old-growth structural differentiation.
    The tool therefore has the potential to improve the sampling design of field surveys aimed at characterising forest structural complexity across multiple spatio-temporal scales. The article is available on the publisher's website, www.sciencedirect.co
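    The abstract does not specify how the field-based diversity indices were computed; purely as an illustration of quantifying species-assemblage heterogeneity per image segment, here is a minimal sketch using a Shannon diversity index (segment names and species counts are invented):

    ```python
    import math
    from collections import Counter

    def shannon_diversity(species):
        """Shannon index H' = -sum(p_i * ln p_i) over species proportions."""
        counts = Counter(species)
        n = sum(counts.values())
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    # Hypothetical field plots grouped by the image segment they fall in.
    segments = {
        "seg_A": ["beech"] * 18 + ["maple"] * 2,               # near-monospecific
        "seg_B": ["beech"] * 8 + ["maple"] * 6 + ["ash"] * 6,  # mixed
    }
    for seg, plots in segments.items():
        print(seg, round(shannon_diversity(plots), 3))
    ```

    Comparing such an index across a segment and its neighbours is one simple way to test whether spectrally delineated segments also differ in field-measured structure.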

    Foveation for Segmentation of Mega-Pixel Histology Images

    Segmenting histology images is challenging because of the sheer size of the images, with millions or even billions of pixels. Typical solutions pre-process each histology image by dividing it into patches of fixed size and/or down-sampling it to meet memory constraints. Such operations incur information loss in the field-of-view (FoV, i.e., spatial coverage) and the image resolution; the impact on segmentation performance, however, is as yet understudied. In this work, we first show, under typical memory constraints (e.g., 10 GB of GPU memory), that the trade-off between FoV and resolution considerably affects segmentation performance on histology images, and that its influence also varies spatially according to local patterns in different areas (see Fig. 1). Based on this insight, we then introduce a foveation module, a learnable “dataloader” which, for a given histology image, adaptively chooses the appropriate configuration (FoV/resolution trade-off) of the input patch to feed to the downstream segmentation model at each spatial location (Fig. 1). The foveation module is jointly trained with the segmentation network to maximise task performance. We demonstrate, on the Gleason2019 challenge dataset for histopathology segmentation, that the foveation module improves segmentation performance over models trained with patches of a fixed FoV/resolution trade-off. Moreover, our model achieves better segmentation accuracy for the two most clinically important and ambiguous classes (Gleason Grades 3 and 4) than the top performers in the challenge, by 13.1% and 7.5%, and improves on the average performance of six human experts by 6.5% and 7.5%.
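    The foveation module itself is a learned network; purely to illustrate the underlying FoV/resolution trade-off under a fixed memory budget, here is a minimal NumPy sketch (the function name and average-pooling scheme are mine, not the paper's, and it assumes `fov` is a multiple of `out_size` and the crop lies fully inside the image):

    ```python
    import numpy as np

    def extract_patch(image, center, fov, out_size):
        """Crop a (fov x fov) window around `center`, then average-pool it
        down to (out_size x out_size): a larger FoV means lower effective
        resolution, but the memory footprint of the result stays constant."""
        y, x = center
        half = fov // 2
        crop = image[y - half:y + half, x - half:x + half]
        k = fov // out_size  # pooling factor; assumes fov % out_size == 0
        return crop.reshape(out_size, k, out_size, k).mean(axis=(1, 3))

    img = np.arange(256 * 256, dtype=float).reshape(256, 256)
    wide   = extract_patch(img, (128, 128), fov=128, out_size=32)  # wide FoV, coarse
    narrow = extract_patch(img, (128, 128), fov=32,  out_size=32)  # narrow FoV, full res
    print(wide.shape, narrow.shape)  # both (32, 32): same memory budget
    ```

    The paper's contribution is to let a trained module pick this trade-off per spatial location rather than fixing it globally.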

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI vary across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-)automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease.
    Comment: 24 pages, 10 figures
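    The proposed model is a full Bayesian spatial regression fit by MCMC; as a point of reference, here is a minimal sketch of the per-voxel majority-voting baseline that such models generalise (array shapes and the binary-mask setting are illustrative assumptions):

    ```python
    import numpy as np

    def majority_vote_fusion(atlas_labels):
        """Fuse propagated binary atlas segmentations by per-voxel majority
        vote: a voxel is assigned to the ROI if more than half the atlases
        agree. This is the simple voting baseline; it ignores spatial
        structure and covariates, and yields no posterior uncertainty."""
        stacked = np.stack(atlas_labels)  # (n_atlases, *image_shape)
        return (stacked.mean(axis=0) > 0.5).astype(np.uint8)

    # Three toy 2x3 atlas masks propagated onto the same target image.
    a1 = np.array([[1, 1, 0], [0, 0, 0]])
    a2 = np.array([[1, 0, 0], [1, 0, 0]])
    a3 = np.array([[1, 1, 0], [0, 0, 1]])
    fused = majority_vote_fusion([a1, a2, a3])
    print(fused)  # [[1 1 0] [0 0 0]]
    ```

    In contrast to this hard vote, the Bayesian spatial model returns a full posterior over each voxel's label, so uncertainty in the fused segmentation can be quantified directly.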