1,791 research outputs found
Land degradation: links to agricultural output and profitability
To understand land degradation and assess policy responses, knowledge is needed of the biophysical causes, the economic effects on farms, and the incentives farmers face to avoid or ameliorate the degradation. This article presents an empirical study of land degradation in the Australian state of New South Wales. The results suggest that farmers have incentives to co-exist with certain forms of degradation, while they also have incentives to avoid other forms.
Discussion of: Brownian distance covariance
Discussion on "Brownian distance covariance" by G\'abor J. Sz\'ekely and
Maria L. Rizzo [arXiv:1010.0297]Comment: Published in at http://dx.doi.org/10.1214/09-AOAS312F the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org
Discussion of: Brownian distance covariance
Discussion on "Brownian distance covariance" by G\'{a}bor J. Sz\'{e}kely and
Maria L. Rizzo [arXiv:1010.0297]Comment: Published in at http://dx.doi.org/10.1214/09-AOAS312E the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org
A Kernel Independence Test for Random Processes
A new nonparametric approach to the problem of testing the independence of two random processes is developed. The test statistic is the Hilbert-Schmidt Independence Criterion (HSIC), which was previously used to test independence for i.i.d. pairs of variables. The asymptotic behaviour of HSIC is established when it is computed from samples drawn from random processes. It is shown that earlier bootstrap procedures which worked in the i.i.d. case fail for random processes, and an alternative consistent estimate of the p-values is proposed. Tests on artificial data and real-world Forex data indicate that the new test procedure discovers dependence that is missed by linear approaches, while the earlier bootstrap procedure returns an elevated number of false positives. The code is available online: https://github.com/kacperChwialkowski/HSIC .
Comment: In Proceedings of The 31st International Conference on Machine Learning
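As a pointer to what the statistic looks like in practice, the following Python sketch (function names and the Gaussian kernel bandwidth are illustrative choices, not taken from the paper's code) computes the standard biased empirical HSIC and a naive permutation bootstrap p-value; as the abstract stresses, that bootstrap is only valid for i.i.d. pairs and fails for dependent processes, and the paper's alternative procedure for random processes is not reproduced here.

```python
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix; x is (n,) or (n, d).
    x = x.reshape(len(x), -1)
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic_biased(x, y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / n^2, with H the centering matrix.
    n = len(x)
    K, L = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

def permutation_p_value(x, y, n_perm=500, sigma=1.0, seed=0):
    # Naive permutation bootstrap: valid for i.i.d. pairs only,
    # NOT for random processes (the failure mode discussed in the paper).
    rng = np.random.default_rng(seed)
    stat = hsic_biased(x, y, sigma)
    null = [hsic_biased(x, rng.permutation(y), sigma) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)
```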
A maximum-mean-discrepancy goodness-of-fit test for censored data
We introduce a kernel-based goodness-of-fit test for censored data, where observations may be missing in random time intervals: a common occurrence in clinical trials and industrial life-testing. The test statistic is straightforward to compute, as is the test threshold, and we establish consistency under the null. Unlike earlier approaches such as the log-rank test, we make no assumptions as to how the data distribution might differ from the null, and our test has power against a very rich class of alternatives. In experiments, our test outperforms competing approaches for periodic and Weibull hazard functions (where risks are time dependent), and does not show the failure modes of tests that rely on user-defined features. Moreover, in cases where classical tests are provably most powerful, our test performs almost as well, while being more general.
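For orientation only, here is a Python sketch of the standard unbiased quadratic-time estimate of MMD^2 between two samples; it covers the uncensored case, with illustrative function names, and the paper's censored-data statistic and test threshold are more involved and not reproduced here.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Gaussian kernel between all pairs of rows of a (m, d) and b (n, d).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of the squared maximum mean discrepancy between
    # samples x (m, d) and y (n, d); zero in expectation when the two
    # distributions coincide.
    m, n = len(x), len(y)
    Kxx, Kyy, Kxy = rbf(x, x, sigma), rbf(y, y, sigma), rbf(x, y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```

In a goodness-of-fit setting, one natural baseline is to compare the observed sample against draws simulated from the null model; the censored-data test described in the abstract replaces this plain estimator with one that accounts for the randomly missing intervals.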
Interpretable Distribution Features with Maximum Testing Power
Two semimetrics on probability distributions are proposed, given as the sum of differences of expectations of analytic functions evaluated at spatial or frequency locations (i.e., features). The features are chosen so as to maximize the distinguishability of the distributions, by optimizing a lower bound on test power for a statistical test using these features. The result is a parsimonious and interpretable indication of how and where two distributions differ locally. An empirical estimate of the test power criterion converges with increasing sample size, ensuring the quality of the returned features. In real-world benchmarks on high-dimensional text and image data, linear-time tests using the proposed semimetrics achieve comparable performance to the state-of-the-art quadratic-time maximum mean discrepancy test, while returning human-interpretable features that explain the test results.
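A minimal sketch of the kind of statistic such features feed into, assuming Gaussian-kernel spatial features at fixed test locations (Python; the paper's key contribution, optimizing the locations and kernel to maximize a lower bound on test power on held-out data, is omitted here, and all names are illustrative):

```python
import numpy as np
from scipy import stats

def gaussian_features(x, locations, sigma=1.0):
    # k(x_i, v_j) for each sample x_i (rows of x) and test location v_j.
    d2 = np.sum((x[:, None, :] - locations[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def me_test(x, y, locations, sigma=1.0, reg=1e-5):
    # Mean-embedding statistic at J fixed locations:
    #   z_i[j] = k(x_i, v_j) - k(y_i, v_j)
    #   lambda = n * zbar^T (S + reg I)^{-1} zbar
    # which is approximately chi-squared with J degrees of freedom under H0.
    n = min(len(x), len(y))
    z = gaussian_features(x[:n], locations, sigma) - gaussian_features(y[:n], locations, sigma)
    zbar = z.mean(axis=0)
    S = np.cov(z, rowvar=False) + reg * np.eye(z.shape[1])
    stat = float(n * zbar @ np.linalg.solve(S, zbar))
    p_value = stats.chi2.sf(stat, df=z.shape[1])
    return stat, p_value
```

Locations at which the per-feature mean difference is large relative to its variance are the interpretable "where the distributions differ" output described in the abstract.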
