
    Developing Allometric Equations for Teak Plantations Located in the Coastal Region of Ecuador from Terrestrial Laser Scanning Data

    Traditional studies aimed at developing allometric models to estimate dry above-ground biomass (AGB) and other tree-level variables, such as tree stem commercial volume (TSCV) or tree stem volume (TSV), usually involve cutting down the trees. Although this method has low uncertainty, it is costly and inefficient, since it requires very time-consuming field work. To assist in data collection and processing, remote sensing enables non-destructive sampling methods such as those based on terrestrial laser scanning (TLS). In this work, TLS-derived point clouds were used to digitally reconstruct the stems of a set of teak trees (Tectona grandis Linn. F.) from 58 circular reference plots of 18 m radius belonging to three different plantations located in the Coastal Region of Ecuador. After manually selecting the appropriate trees from the entire sample, semi-automatic data processing was performed to provide measurements of TSCV and TSV, together with estimates of AGB at tree level. These observed values were used to develop allometric models, based on diameter at breast height (DBH), total tree height (h), or the metric DBH² × h, by applying a robust regression method to remove likely outliers. Results showed that the developed allometric models performed reasonably well, especially those based on the metric DBH² × h, providing low-bias estimates and relative RMSE values of 21.60% and 16.41% for TSCV and TSV, respectively. Allometric models based only on tree height were derived by replacing DBH with h in the expression DBH² × h, according to adjusted expressions depending on DBH classes (ranges of DBH). This finding can facilitate obtaining variables such as AGB (carbon stock) and commercial wood volume over teak plantations in the Coastal Region of Ecuador from tree height alone, constituting a promising method for large-scale monitoring of teak plantations using canopy height models derived from digital aerial stereophotogrammetry.
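The robust fitting step described in the abstract above can be sketched generically. The snippet below fits a power-law allometric model y = a·(DBH²·h)^b by Huber-weighted iteratively reweighted least squares on the log-log linearised form; the function name, the IRLS scheme, and the synthetic data are illustrative assumptions, not the paper's actual procedure or data.

```python
import numpy as np

def fit_allometric(dbh, h, y, n_iter=20, k=1.345):
    """Fit y = a * (DBH^2 * h)^b by Huber-weighted IRLS on the
    linearised model log(y) = log(a) + b*log(DBH^2 * h).
    Illustrative sketch only, not the paper's exact method."""
    x = np.log(dbh**2 * h)
    z = np.log(y)
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(x)
    for _ in range(n_iter):
        # weighted least-squares step (rows scaled by sqrt of weight)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)
        r = z - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
    a, b = np.exp(beta[0]), beta[1]
    return a, b

# synthetic check: volume = 0.4*(DBH^2*h)^0.95 with one gross outlier
rng = np.random.default_rng(0)
dbh = rng.uniform(0.1, 0.5, 40)      # stem diameters (m), made up
h = rng.uniform(8, 25, 40)           # tree heights (m), made up
vol = 0.4 * (dbh**2 * h)**0.95
vol[0] *= 5.0                        # contaminate one observation
a, b = fit_allometric(dbh, h, vol)
```

The Huber downweighting plays the role of the "robust regression method to remove likely outliers": the contaminated observation receives a near-zero weight, so the recovered coefficients match the clean power law.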

    Robust Detection of Non-overlapping Ellipses from Points with Applications to Circular Target Extraction in Images and Cylinder Detection in Point Clouds

    This manuscript provides a collection of new methods for the automated detection of non-overlapping ellipses from edge points. The methods introduce new developments in: (i) robust Monte Carlo-based ellipse fitting to 2-dimensional (2D) points in the presence of outliers; (ii) detection of non-overlapping ellipses from 2D edge points; and (iii) extraction of cylinders from 3D point clouds. The proposed methods were thoroughly compared with established state-of-the-art methods, using simulated and real-world datasets, through the design of four sets of original experiments. The proposed robust ellipse detection was found to be superior to four reliable robust methods, including the popular least median of squares, on both simulated and real-world datasets. The proposed process for detecting non-overlapping ellipses achieved an F-measure of 99.3% on real images, compared with F-measures of 42.4%, 65.6%, and 59.2% obtained using the methods of Fornaciari, Patraucean, and Panagiotakis, respectively. The proposed cylinder extraction method identified all detectable mechanical pipes in two real-world point clouds, obtained under laboratory and industrial construction site conditions. These results show promise for the application of the proposed methods to the automatic extraction of circular targets from images and pipes from point clouds.
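A generic Monte Carlo (RANSAC-style) ellipse fit of the kind referenced in point (i) can be sketched as follows. The sampling scheme, the algebraic-distance inlier test, and all function names are illustrative assumptions, not the manuscript's specific method.

```python
import numpy as np

def conic_from_points(pts):
    """Conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 through >= 5
    points, taken as the SVD null vector of the design matrix."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

def ransac_ellipse(pts, n_trials=500, tol=1e-2, rng=None):
    """Monte Carlo ellipse fit in the presence of outliers: repeatedly
    fit a conic to 5 random points, keep ellipses only (b^2 - 4ac < 0),
    and score by inliers under an approximate algebraic distance."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    best_inliers = None
    for _ in range(n_trials):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        c = conic_from_points(sample)
        if c[1]**2 - 4*c[0]*c[2] >= 0:          # not an ellipse
            continue
        r = np.abs(A @ c) / np.linalg.norm(c[:5])
        inliers = r < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set
    return conic_from_points(pts[best_inliers]), best_inliers

# synthetic check: 80 points on an ellipse plus 20 scattered outliers
rng = np.random.default_rng(1)
t = rng.uniform(0, 2*np.pi, 80)
ellipse_pts = np.column_stack([2*np.cos(t), np.sin(t)])
outlier_pts = rng.uniform(-3, 3, (20, 2))
pts = np.vstack([ellipse_pts, outlier_pts])
conic, inliers = ransac_ellipse(pts)
```

With 20% contamination, a 5-point sample is drawn entirely from the ellipse in roughly a third of the trials, so 500 trials recover the true conic with near certainty; the final refit then uses the whole consensus set.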

    The JCMT Gould Belt Survey: a quantitative comparison between SCUBA-2 data reduction methods

    Performing ground-based submillimetre observations is a difficult task, as the measurements are subject to absorption and emission from water vapour in the Earth's atmosphere and to time variation in weather and instrument stability. Removing these features and other artefacts from the data is a vital process that affects the characteristics of the recovered astronomical structure we seek to study. In this paper, we explore two data reduction methods for data taken with the Submillimetre Common-User Bolometer Array-2 (SCUBA-2) at the James Clerk Maxwell Telescope (JCMT). The JCMT Legacy Reduction 1 (JCMT LR1) and the Gould Belt Legacy Survey Legacy Release 1 (GBS LR1) reductions both use the same software (Starlink) but differ in their choice of data reduction parameters. We find that the JCMT LR1 reduction is suitable for determining whether or not compact emission is present in a given region, while the GBS LR1 reduction is tuned in a robust way to uncover more extended emission, which better serves more in-depth physical analyses of star-forming regions. Using the GBS LR1 method, we find that compact sources are recovered well, even at a peak brightness of only three times the noise, whereas the reconstruction of larger objects requires much care when drawing boundaries around the expected astronomical signal in the data reduction process. Incorrect boundaries can lead to false structure identification or cause structure to be missed. In the JCMT LR1 reduction, the extent of the true structure of objects larger than a point source is never fully recovered.
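The compact-source recovery criterion mentioned above (peaks at only three times the noise) can be illustrated with a toy detector that flags local maxima above a 3-sigma threshold in a map. This is a deliberately simplified stand-in, not the Starlink map-making or source-extraction pipeline.

```python
import numpy as np

def detect_compact_sources(image, n_sigma=3.0):
    """Flag pixels that exceed n_sigma times a robust (MAD-based) noise
    estimate and are the maximum of their 3x3 neighbourhood. A toy
    stand-in for compact-source recovery, not the SCUBA-2 pipeline."""
    sigma = 1.4826 * np.median(np.abs(image - np.median(image)))
    thresh = n_sigma * sigma
    peaks = []
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            patch = image[i-1:i+2, j-1:j+2]
            if image[i, j] > thresh and image[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

# toy map: unit Gaussian noise plus one injected 10-sigma point source
rng = np.random.default_rng(2)
image = rng.normal(0.0, 1.0, (64, 64))
image[20, 30] += 10.0
peaks = detect_compact_sources(image)
```

The MAD-based noise estimate keeps the threshold stable even when bright sources are present, which is why the injected source at (20, 30) is recovered regardless of its amplitude.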

    The impact of priors and observables on parameter inferences in the Constrained MSSM

    We use a newly released version of the SuperBayeS code to analyze the impact of the choice of priors and the influence of various constraints on the statistical conclusions for the preferred values of the parameters of the Constrained MSSM. We assess the effect in a Bayesian framework and compare it with an alternative likelihood-based measure, the profile likelihood. We employ a new scanning algorithm (MultiNest) which increases the computational efficiency by a factor of ~200 with respect to previously used techniques. We demonstrate that the currently available data are not yet sufficiently constraining to allow one to determine the preferred values of CMSSM parameters in a way that is completely independent of the choice of priors and statistical measures. While b->s gamma generally favors large m_0, this is in some contrast with the preference for low values of m_0 and m_1/2 that is almost entirely a consequence of a combination of prior effects and a single constraint coming from the anomalous magnetic moment of the muon, which remains somewhat controversial. Using an information-theoretical measure, we find that the cosmological dark matter abundance determination provides at least 80% of the total constraining power of all available observables. Despite the remaining uncertainties, prospects for direct detection in the CMSSM remain excellent, with the spin-independent neutralino-proton cross section almost guaranteed to lie above sigma_SI ~ 10^{-10} pb, independently of the choice of priors or statistics. Likewise, prospects for gluino and lightest Higgs discovery at the LHC remain highly encouraging. While in this work we have used the CMSSM as the particle physics model, our formalism and scanning technique can be readily applied to a wider class of models with several free parameters.
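The contrast between prior-dependent Bayesian measures and the profile likelihood can be illustrated on a toy two-parameter grid: marginalising integrates over parameter volume, so a broad, low ridge can dominate the posterior, while profiling tracks the best-fit point. This is a generic illustration with made-up parameters, not the SuperBayeS computation or the CMSSM likelihood.

```python
import numpy as np

def toy_posterior_vs_profile():
    """Return the preferred m0 value under (a) the marginal posterior
    with a flat prior and (b) the profile likelihood, for a toy
    likelihood with a sharp peak at m0=1 and a broad ridge at m0=3."""
    m0 = np.linspace(0, 4, 200)
    m12 = np.linspace(0, 4, 200)
    M0, M12 = np.meshgrid(m0, m12, indexing="ij")
    # likelihood: narrow high peak plus a broad, lower ridge
    L = (np.exp(-((M0 - 1)**2 + (M12 - 1)**2) / (2 * 0.05**2))
         + 0.5 * np.exp(-(M0 - 3)**2 / (2 * 0.5**2))
               * np.exp(-(M12 - 2)**2 / (2 * 1.0**2)))
    marginal = L.sum(axis=1)   # integrates volume over m12 (flat prior)
    profile = L.max(axis=1)    # maximises over m12
    return m0[marginal.argmax()], m0[profile.argmax()]

best_marginal, best_profile = toy_posterior_vs_profile()
```

The broad ridge wins under marginalisation because its integrated volume exceeds that of the narrow peak, while profiling still prefers the sharp best-fit point: the same volume effect by which priors can pull posterior and profile-likelihood inferences apart in the CMSSM scans.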