    On choosing and bounding probability metrics

    When studying convergence of measures, an important issue is the choice of probability metric. In this review, we provide a summary and some new results concerning bounds among ten important probability metrics/distances that are used by statisticians and probabilists. We focus on these metrics because they are either well known, commonly used, or admit practical bounding techniques. We summarize these relationships in a handy reference diagram, and also give examples to show how rates of convergence can depend on the metric chosen.
    Comment: To appear, International Statistical Review. Related work at http://www.math.hmc.edu/~su/papers.htm
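    The kind of bound the review catalogues is easy to check numerically. Below is a minimal Python sketch (illustrative only, not taken from the paper's diagram): total variation and Hellinger distance between two discrete distributions, using common discrete conventions, together with one standard Le Cam-type inequality relating them.

```python
import math

def tv_distance(p, q):
    # Total variation distance between discrete distributions: half the L1 norm
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    # Hellinger distance, normalized so that H(p, q) lies in [0, 1]
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
tv, h = tv_distance(p, q), hellinger(p, q)
# A standard Le Cam-type bound relating the two: H^2 <= TV <= H * sqrt(2 - H^2)
assert h ** 2 <= tv <= h * math.sqrt(2 - h ** 2)
```

    Such numerical checks are a useful sanity test when deciding which metric to bound in a given convergence argument.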

    Sets of bounded discrepancy for multi-dimensional irrational rotation

    We study bounded remainder sets with respect to an irrational rotation of the $d$-dimensional torus. The subject goes back to Hecke, Ostrowski and Kesten, who characterized the intervals with bounded remainder in dimension one. First we extend the Hecke-Ostrowski result to several dimensions by constructing a class of $d$-dimensional parallelepipeds of bounded remainder. Then we characterize the Riemann measurable bounded remainder sets in terms of "equidecomposability" to such a parallelepiped. By constructing invariants with respect to this equidecomposition, we derive explicit conditions for a polytope to be a bounded remainder set. In particular this yields a characterization of the convex bounded remainder polygons in two dimensions. The approach is used to obtain several other results as well.
    Comment: To appear in Geometric and Functional Analysis
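    The one-dimensional phenomenon behind the Hecke-Ostrowski-Kesten characterization can be observed numerically. In the sketch below (illustrative; the choice of rotation number and intervals is ours, not the paper's), an interval whose length lies in Z + alpha*Z keeps a bounded discrepancy, while a generic interval does not.

```python
import math

def discrepancy(alpha, a, b, N):
    # D_N = #{1 <= n <= N : {n*alpha} in [a, b)} - N*(b - a)
    count = sum(1 for n in range(1, N + 1) if a <= (n * alpha) % 1.0 < b)
    return count - N * (b - a)

alpha = math.sqrt(2) - 1  # irrational rotation number

# Interval of length alpha (which lies in Z + alpha*Z): bounded remainder.
bounded = max(abs(discrepancy(alpha, 0.0, alpha, N)) for N in range(1, 2001))

# A generic interval such as [0, 1/2): the discrepancy is unbounded in N.
generic = max(abs(discrepancy(alpha, 0.0, 0.5, N)) for N in range(1, 2001))
```

    For the interval [0, alpha) one can check directly that the discrepancy equals floor(N*alpha) - N*alpha, so it stays in (-1, 0] for all N, matching the bounded-remainder property.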

    Unsupervised edge map scoring: a statistical complexity approach

    We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an Equilibrium index $\mathcal{E}$, obtained by projecting the edge map onto a family of edge patterns, and an Entropy index $\mathcal{H}$, defined as a function of the Kolmogorov-Smirnov (KS) statistic. This new measure can be used for performance characterization, which includes: (i)~the specific evaluation of an algorithm (intra-technique process) in order to identify its best parameters, and (ii)~the comparison of different algorithms (inter-technique process) in order to classify them according to their quality. Results on images of the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the reference-based edge map evaluation standard, as it takes more features into account in its evaluation.
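    The KS-statistic ingredient of the entropy index can be sketched in a few lines. Below is a generic two-sample Kolmogorov-Smirnov statistic in Python (a minimal sketch of that building block only, not the paper's projection onto edge patterns or its full SCM):

```python
import bisect

def ks_statistic(x, y):
    # Two-sample KS statistic: sup_t |F_x(t) - F_y(t)| over empirical CDFs
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for t in xs + ys:  # the sup is attained at a sample point
        fx = bisect.bisect_right(xs, t) / len(xs)
        fy = bisect.bisect_right(ys, t) / len(ys)
        d = max(d, abs(fx - fy))
    return d
```

    The statistic is 0 for identical samples and approaches 1 for samples with disjoint supports, which is what makes it usable as a normalized index.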

    A function space framework for structural total variation regularization with applications in inverse problems

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods, motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable total variation type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted total variation for a wide range of weights. Since an integral characterization of the relaxation in function space is not always available, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem one can equivalently solve a saddle-point problem in which no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples in which we solve the saddle-point problem for weighted TV denoising as well as for MR-guided PET image reconstruction.
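    The discrete idea behind weighted TV denoising can be illustrated in one dimension. The following is a plain subgradient-descent sketch under our own illustrative signal, weights, and parameters (it is not the paper's saddle-point algorithm, and the function and variable names are ours):

```python
import math

def weighted_tv_denoise_1d(f, w, lam, step=0.02, iters=3000):
    # Subgradient descent on 0.5*||u - f||^2 + lam * sum_i w[i]*|u[i+1] - u[i]|
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]      # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            s = math.copysign(1.0, d) if d != 0 else 0.0
            g[i] -= lam * w[i] * s               # subgradient of the weighted TV term
            g[i + 1] += lam * w[i] * s
        u = [u[i] - step * g[i] for i in range(n)]
    return u

noisy = [0.0, 0.9, 0.1, 1.0, 0.0, 1.1, 1.0, 0.9]
weights = [1.0] * (len(noisy) - 1)  # spatially varying w is what makes the TV "structural"
smooth = weighted_tv_denoise_1d(noisy, weights, lam=0.5)
```

    Lowering a weight w[i] makes jumps at position i cheaper, which is how structural prior information (e.g., edges from an MR image guiding a PET reconstruction) enters the regularizer.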

    Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

    The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in $\textbf{UP} \cap \textbf{coUP}$ and $\textbf{PPAD}$. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to M\'emoli. As an additional contribution, we show that M\'emoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary $\omega$-regular properties, expressed, e.g., as LTL formulas.
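    To fix intuitions about the target of the reduction, here is a toy value-iteration sketch for a simple stochastic game with maximizer, minimizer, and average (coin-flip) vertices (the tiny game, vertex names, and value iteration itself are our own illustration; the paper's algorithm is Condon's simple policy iteration, which this does not implement):

```python
def ssg_values(max_v, min_v, avg, sinks, iters=1000):
    # Value iteration on a simple stochastic game:
    #   max_v: vertex -> successors (maximizer picks the best one)
    #   min_v: vertex -> successors (minimizer picks the worst one)
    #   avg:   vertex -> successors (uniformly random move)
    #   sinks: vertex -> payoff in [0, 1]
    vals = {v: 0.0 for v in list(max_v) + list(min_v) + list(avg)}
    vals.update(sinks)
    for _ in range(iters):
        new = dict(vals)
        for v, succ in max_v.items():
            new[v] = max(vals[s] for s in succ)
        for v, succ in min_v.items():
            new[v] = min(vals[s] for s in succ)
        for v, succ in avg.items():
            new[v] = sum(vals[s] for s in succ) / len(succ)
        vals = new
    return vals

vals = ssg_values(max_v={'m': ['c', 'lose']},
                  min_v={'n': ['m', 'win']},
                  avg={'c': ['win', 'lose']},
                  sinks={'win': 1.0, 'lose': 0.0})
```

    In a stopping game, value iteration converges to the unique game value; the paper's contribution is proving that policy iteration still terminates on the possibly non-stopping games arising from bisimilarity distances.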