Space-Time Structure of Loop Quantum Black Hole
In this paper we have improved the semiclassical analysis of the loop quantum
black hole (LQBH) in the conservative approach of a constant polymeric parameter.
In particular we have focused our attention on the space-time structure. We
have introduced a very simple modification of the spherically symmetric
Hamiltonian constraint in its holonomic version. The new quantum constraint
reduces to the classical constraint when the polymeric parameter goes to
zero. Using this modification we have obtained a large class of semiclassical
solutions parametrized by a generic function of the polymeric parameter. We
have found that only a particular choice of this function reproduces the black
hole solution with the correct asymptotically flat limit. At r=0 the
semiclassical metric is regular and the Kretschmann invariant has a maximum
peaked at the Planck length. The radial position of the peak depends on neither
the black hole mass nor the polymeric parameter. The semiclassical solution is
very similar to the Reissner-Nordström metric. We have constructed the Carter-Penrose diagrams
explicitly, giving a causal description of the space-time and its maximal
extension. The LQBH metric interpolates between two asymptotically flat
regions, the r → ∞ region and the r → 0 region. We have studied the
thermodynamics of the semiclassical solution. The temperature, entropy and the
evaporation process are regular and can be defined independently of the
polymeric parameter. We have studied the particular metric obtained when the
polymeric parameter goes to zero. This metric is regular at r=0 and has only
one event horizon at r = 2m. The maximum of the Kretschmann invariant depends
only on the Planck length. The polymeric parameter does not play any role in
the resolution of the black hole singularity. The thermodynamics is the same.
Comment: 17 pages, 19 figures
Limitations of estimating branch volume from terrestrial laser scanning
Quantitative structural models (QSMs) are frequently used to simplify single-tree point clouds obtained by terrestrial laser scanning (TLS). QSMs use geometric primitives to derive topological and volumetric information about trees. Previous studies have shown a high agreement between TLS-QSM total volume estimates and field-measured data for whole trees. Although already broadly applied, the uncertainties of the combination of TLS and QSM modelling are still largely unexplored. In our study, we investigated the effect of scanning distance on length and volume estimates of branches when deriving QSMs from TLS data. We scanned ten European beech (Fagus sylvatica L.) branches with an average length of 2.6 m. The branches were scanned from distances ranging from 5 to 45 m at step intervals of 5 m, from three scan positions each. Twelve close-range scans were performed as a benchmark. For each distance and branch, QSMs were derived. We found that with increasing distance, the point cloud density and the cumulative length of the reconstructed branches decreased, whereas individual volumes increased. Depending on the QSM hyperparameters, at a scanning distance of 45 m, cumulative branch length was on average underestimated by −75%, while branch volume was overestimated by up to +539%. We assume that the high deviations are related to point cloud quality. As the scanning distance increases, the size of the individual laser footprints and the distances between them increase, making it more difficult to fully capture small branches and to fit suitable QSMs.
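The growth of the laser footprint with range, which the abstract identifies as a likely cause of the deviations, can be sketched with a simple linear beam-divergence model. The instrument parameters below (exit diameter, divergence) are illustrative assumptions, not the values of the scanner used in the study:

```python
def footprint_diameter(range_m, exit_diameter_m=0.007, divergence_rad=0.00035):
    """Approximate laser footprint diameter at a given range.

    Assumes a linearly diverging beam: footprint = exit diameter
    + range * divergence. The instrument parameters are illustrative
    defaults, not those of the scanner used in the study.
    """
    return exit_diameter_m + range_m * divergence_rad

# Footprint size at the near, middle and far scanning distances of the study.
for r in (5, 25, 45):
    print(f"{r:2d} m -> {footprint_diameter(r) * 1000:.1f} mm")
```

At 45 m the footprint under these assumptions is roughly three times its 5 m size, illustrating why thin branches become hard to resolve at distance.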
Like trainer, like bot? Inheritance of bias in algorithmic content moderation
The internet has become a central medium through which 'networked publics'
express their opinions and engage in debate. Offensive comments and personal
attacks can inhibit participation in these spaces. Automated content moderation
aims to overcome this problem using machine learning classifiers trained on
large corpora of texts manually annotated for offence. While such systems could
help encourage more civil debate, they must navigate inherently normatively
contestable boundaries, and are subject to the idiosyncratic norms of the human
raters who provide the training data. An important objective for platforms
implementing such measures might be to ensure that they are not unduly biased
towards or against particular norms of offence. This paper provides some
exploratory methods by which the normative biases of algorithmic content
moderation systems can be measured, by way of a case study using an existing
dataset of comments labelled for offence. We train classifiers on comments
labelled by different demographic subsets (men and women) to understand how
differences in conceptions of offence between these groups might affect the
performance of the resulting models on various test sets. We conclude by
discussing some of the ethical choices facing the implementers of algorithmic
moderation systems, given various desired levels of diversity of viewpoints
amongst discussion participants.
Comment: 12 pages, 3 figures, 9th International Conference on Social
Informatics (SocInfo 2017), Oxford, UK, 13-15 September 2017 (forthcoming in
Springer Lecture Notes in Computer Science)
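The experimental design described above can be sketched minimally: train one classifier per annotator subgroup and compare their predictions on a shared test set to surface normative disagreement. The data, features and classifier here are invented toy stand-ins, not the paper's dataset or models:

```python
from collections import Counter

def train(comments):
    """Count how often each token appears in offensive (1) vs. clean (0) comments."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in comments:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Label 1 if the tokens were seen more often in offensive training text."""
    score = sum(model[1][t] - model[0][t] for t in text.split())
    return 1 if score > 0 else 0

# Toy comments labelled separately by two hypothetical rater groups,
# who disagree on whether "shut up idiot" is offensive.
group_a = [("you are an idiot", 1), ("nice point well made", 0), ("shut up idiot", 1)]
group_b = [("you are an idiot", 1), ("nice point well made", 0), ("shut up idiot", 0)]

model_a, model_b = train(group_a), train(group_b)

test_set = ["shut up", "well made point"]
disagree = sum(predict(model_a, t) != predict(model_b, t) for t in test_set)
print(f"disagreement rate: {disagree / len(test_set):.2f}")
```

The disagreement rate between the two models is one crude way to measure how rater norms are inherited by the trained classifiers.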
Assessment of Bias in Pan-Tropical Biomass Predictions
Above-ground biomass (AGB) is an essential descriptor of forests, of use in ecological and climate-related research. At tree and stand scale, destructive but direct measurements of AGB are replaced with predictions from allometric models characterizing the correlational relationship between AGB and predictor variables including stem diameter, tree height and wood density. These models are constructed from harvested calibration data, usually via linear regression. Here, we assess systematic error in out-of-sample predictions of AGB introduced during measurement, compilation and modeling of in-sample calibration data. Various conventional bivariate and multivariate models are constructed from open-access data of tropical forests. Metadata analysis, fit diagnostics and cross-validation results suggest several model misspecifications: chiefly, unaccounted-for inconsistent measurement error in predictor variables between in- and out-of-sample data. Simulations demonstrate that even conservative inconsistencies can introduce significant bias into tree- and stand-scale AGB predictions. When tree height and wood density are included as predictors, models should be modified to correct for bias. Finally, we explore a fundamental assumption of conventional allometry: that model parameters are independent of tree size, that is, that the same model can provide predictions of consistent trueness irrespective of size class. Most observations in current calibration datasets are from smaller trees, meaning the existence of a size dependency would bias predictions for larger trees. We determine that detecting the absence or presence of a size dependency is currently prevented by model misspecifications and calibration data imbalances. We call for the collection of additional harvest data, specifically of under-represented larger trees.
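A sketch of the conventional bivariate fit the abstract describes: regress ln(AGB) on ln(D) and back-transform with the standard Baskerville correction exp(σ²/2). The calibration numbers below are invented for illustration, not taken from the open-access data used in the study:

```python
import math

def fit_loglog(diam_cm, agb_kg):
    """Ordinary least squares of ln(AGB) on ln(D): ln(AGB) = a + b*ln(D)."""
    x = [math.log(d) for d in diam_cm]
    y = [math.log(m) for m in agb_kg]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    return a, b, sigma2

def predict_agb(a, b, sigma2, diam_cm):
    """Back-transform with the Baskerville correction exp(sigma2 / 2);
    without it, exp(a + b*ln(D)) estimates the median AGB, not the mean."""
    return math.exp(a + b * math.log(diam_cm) + sigma2 / 2)

# Invented calibration data (stem diameter in cm, AGB in kg).
d = [10, 15, 20, 30, 45, 60]
m = [35, 110, 250, 800, 2600, 5600]
a, b, s2 = fit_loglog(d, m)
print(f"ln(AGB) = {a:.2f} + {b:.2f} ln(D)")
```

Because most calibration trees are small, the fitted slope is dominated by the small-diameter range, which is exactly why a size dependency in the parameters would bias out-of-sample predictions for large trees.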
Benchmarking airborne laser scanning tree segmentation algorithms in broadleaf forests shows high accuracy only for canopy trees
Individual tree segmentation from airborne laser scanning data is a longstanding and important challenge in forest remote sensing. Tree segmentation algorithms are widely available, but robust intercomparison studies are rare due to the difficulty of obtaining reliable reference data. Here we provide a benchmark data set for temperate and tropical broadleaf forests generated from labelled terrestrial laser scanning data. We compared the performance of four widely used tree segmentation algorithms against this benchmark data set. All algorithms performed reasonably well on the canopy trees. The point-cloud-based algorithm AMS3D (Adaptive Mean Shift 3D) had the highest overall accuracy, closely followed by the 2D raster-based region-growing algorithm Dalponte2016+. However, all algorithms failed to accurately segment the understory trees. This result was consistent across both forest types. This study emphasises the need to assess tree segmentation algorithms directly using benchmark data, rather than comparing with forest indices such as biomass or the number and size distribution of trees. We provide the first openly available benchmark data set for tropical forests and we hope future studies will extend this work to other regions.
Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules
Association rules are among the most widely employed data analysis methods in
the field of Data Mining. An association rule is a form of partial implication
between two sets of binary variables. In the most common approach, association
rules are parameterized by a lower bound on their confidence, which is the
empirical conditional probability of their consequent given the antecedent,
and/or by some other parameter bounds such as "support" or deviation from
independence. We study here notions of redundancy among association rules from
a fundamental perspective. We see each transaction in a dataset as an
interpretation (or model) in the propositional logic sense, and consider
existing notions of redundancy, that is, of logical entailment, among
association rules, of the form "any dataset in which this first rule holds must
obey also that second rule, therefore the second is redundant". We discuss
several existing alternative definitions of redundancy between association
rules and provide new characterizations and relationships among them. We show
that the main alternatives we discuss correspond actually to just two variants,
which differ in the treatment of full-confidence implications. For each of
these two notions of redundancy, we provide a sound and complete deduction
calculus, and we show how to construct complete bases (that is,
axiomatizations) of absolutely minimum size in terms of the number of rules. We
finally explore an approach to redundancy with respect to several association
rules, and fully characterize its simplest case of two partial premises.
Comment: LMCS accepted paper
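The basic quantities above, and one standard sufficient condition for redundancy, can be sketched directly. Transactions are modelled as sets, and the `covers` check below implements only the simple set-inclusion criterion (X ⊆ X' and X'∪Y' ⊆ X∪Y), under which the first rule entails the second in every dataset; it is a sketch of the setting, not the paper's full deduction calculus:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Empirical conditional probability of the consequent given the antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

def covers(rule, other):
    """Sufficient condition for redundancy: X -> Y entails X' -> Y'
    whenever X is a subset of X' and X' u Y' is a subset of X u Y,
    so the second rule holds in every dataset where the first holds."""
    (x, y), (x2, y2) = rule, other
    return x <= x2 and (x2 | y2) <= (x | y)

# Toy dataset of four transactions over items a, b, c.
data = [frozenset(t) for t in ({"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"})]
r1 = (frozenset({"a"}), frozenset({"b", "c"}))      # a -> bc
r2 = (frozenset({"a", "b"}), frozenset({"c"}))      # ab -> c
print(confidence(*r1, data), covers(r1, r2))
```

Here `a -> bc` makes `ab -> c` redundant: in the toy data its confidence (1/3) is indeed a lower bound on that of `ab -> c` (1/2).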
Finite element analysis of trees in the wind based on terrestrial laser scanning data
Wind damage is an important driver of forest structure and dynamics, but it is poorly understood in natural broadleaf forests. This paper presents a new approach in the study of wind damage: combining terrestrial laser scanning (TLS) data and finite element analysis. Recent advances in tree reconstruction from TLS data allowed us to accurately represent the 3D geometry of a tree in a mechanical simulation, without the need for arduous manual mapping or simplifying assumptions about tree shape. We used this simulation to predict the mechanical strains produced on the trunks of 21 trees in Wytham Woods, UK, and validated it using strain data measured on these same trees. For a subset of five trees near the anemometer, the model predicted a five-minute time series of strain with a mean cross-correlation coefficient of 0.71 when forced by the locally measured wind speed data. Additionally, the maximum strain associated with a 5 m s−1 or 15 m s−1 wind speed was well predicted by the model (N = 17, R2 = 0.81 and R2 = 0.79, respectively). We also predicted the critical wind speed at which the trees will break from both the field data and the models, and find good overall agreement (N = 17, R2 = 0.40). Finally, the model predicted the correct trend in the fundamental frequencies of the trees (N = 20, R2 = 0.38), although there was a systematic underprediction, possibly due to the simplified treatment of material properties in the model. The current approach relies on local wind data, so it must be combined with wind flow modelling to be applicable at the landscape scale or over complex terrain. This approach is applicable at the plot level and could also be applied to open-grown trees, such as in cities or parks.
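The validation statistic reported above, a cross-correlation coefficient between measured and modelled strain time series, reduces at zero lag to the Pearson correlation. A minimal sketch, with synthetic stand-ins for the five-minute strain records rather than the study's data:

```python
import math

def pearson(xs, ys):
    """Zero-lag cross-correlation (Pearson) coefficient between two
    equal-length time series, e.g. measured vs. modelled trunk strain."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic stand-ins for a short strain record (not real data).
measured = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
modelled = [0.1, 0.9, 1.8, 1.1, 0.2, -0.8, -1.9, -1.2]
print(f"r = {pearson(measured, modelled):.2f}")
```

In practice one would also scan over lags, since the model response can be slightly offset in time from the measured strain.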
Leaf and wood classification framework for terrestrial LiDAR point clouds
Methods in Ecology and Evolution, published by John Wiley & Sons Ltd on behalf of the British Ecological Society. Leaf and wood separation is a key step to allow a new range of estimates from terrestrial LiDAR data, such as quantifying above-ground biomass, leaf and wood area, and their 3D spatial distributions. We present a new method to separate leaf and wood from single-tree point clouds automatically. Our approach combines unsupervised classification of geometric features and shortest-path analysis. The automated separation algorithm and its intermediate steps are presented and validated. Validation consisted of using a testing framework with synthetic point clouds, simulated using ray tracing and 3D tree models, and 10 field-scanned tree point clouds. To evaluate results we calculated accuracy, the kappa coefficient and the F-score. Validation using simulated data resulted in an overall accuracy of 0.83, ranging from 0.71 to 0.94. Per-tree average accuracy from synthetic data ranged from 0.77 to 0.89. Field data results presented an overall average accuracy of 0.89. Analysis of each step showed accuracy ranging from 0.75 to 0.98. F-scores from both simulated and field data were similar, with scores for leaf usually higher than for wood. Our separation method showed results similar to others in the literature, albeit from a completely automated workflow. Analysis of each separation step suggests that the addition of path analysis improved the robustness of our algorithm. Accuracy can be improved with per-tree parameter optimization. The library containing our separation script can be easily installed and applied to single-tree point clouds. Average processing times are below 10 min for each tree.
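The three evaluation metrics the abstract lists can all be computed from a 2x2 leaf/wood confusion matrix. The counts below are made up for illustration, not taken from the paper's validation:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, Cohen's kappa and F-score from a 2x2 confusion matrix
    (here: leaf = positive class, wood = negative class)."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    # Expected agreement under chance, for Cohen's kappa.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - pe) / (1 - pe)
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, kappa, f1

# Illustrative per-point counts (leaf vs. wood), not from the paper.
acc, kappa, f1 = metrics(tp=850, fp=60, fn=90, tn=500)
print(f"accuracy={acc:.2f} kappa={kappa:.2f} F-score={f1:.2f}")
```

Kappa is the more conservative of the three because it discounts the agreement expected by chance, which matters when leaf points heavily outnumber wood points.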
Realistic forest stand reconstruction from terrestrial LiDAR for radiative transfer modelling
Forest biophysical variables derived from remote sensing observations are vital for climate research. The combination of structurally and radiometrically accurate 3D "virtual" forests with radiative transfer (RT) models creates a powerful tool to facilitate the calibration and validation of remote sensing data and derived biophysical products, by helping us understand the assumptions made in data processing algorithms. We present a workflow that uses highly detailed 3D terrestrial laser scanning (TLS) data to generate virtual forests for RT model simulations. Our approach to forest stand reconstruction from a co-registered point cloud is unique as it models each tree individually. Our approach follows three steps: (1) tree segmentation; (2) tree structure modelling; and (3) leaf addition. To demonstrate this approach, we present the measurement and construction of a one-hectare model of the deciduous forest in Wytham Woods (Oxford, UK). The model contains 559 individual trees. We matched the TLS data with traditional census data to determine the species of each individual tree and allocate species-specific radiometric properties. Our modelling framework is generic, highly transferable and adjustable to data collected with other TLS instruments and different ecosystems. The Wytham Woods virtual forest is made publicly available through an online repository.