
    Improved Chebyshev inequality: new probability bounds with known supremum of PDF

    In this paper, we derive new probability bounds for Chebyshev's inequality when the supremum of the probability density function is known. This result holds for one-dimensional or multivariate continuous probability distributions with finite mean and variance (covariance matrix). We also show that a similar result holds for specific discrete probability distributions. Comment: 7 pages, 2 figures
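    The classical one-dimensional Chebyshev bound, P(|X − μ| ≥ kσ) ≤ 1/k², is the baseline that the paper sharpens when the supremum M of the density is also known. A minimal sketch, checking the classical bound empirically on a uniform distribution (whose density is bounded by M = 1/(b − a)); the improved bound itself is derived in the paper and is not reproduced here:

    ```python
    import random
    import math

    # Empirical check of the classical Chebyshev bound
    # P(|X - mu| >= k*sigma) <= 1/k^2 on Uniform(a, b).
    random.seed(0)

    a, b = 0.0, 1.0
    mu = (a + b) / 2                 # mean of Uniform(a, b)
    sigma = (b - a) / math.sqrt(12)  # standard deviation of Uniform(a, b)

    n = 100_000
    samples = [random.uniform(a, b) for _ in range(n)]

    for k in (1.5, 2.0, 3.0):
        tail = sum(abs(x - mu) >= k * sigma for x in samples) / n
        bound = 1 / k**2
        assert tail <= bound  # classical Chebyshev bound holds
        print(f"k={k}: empirical tail {tail:.4f} <= Chebyshev bound {bound:.4f}")
    ```

    For the uniform case the true tail is far below 1/k² (for k ≥ 2 it is exactly zero, since kσ exceeds the half-width of the support), which is the kind of slack that knowledge of M allows the paper's improved bound to exploit.
    
    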

    A mean value theorem for systems of integrals

    More than a century ago, G. Kowalewski stated that for any n continuous functions on a compact interval [a,b], there exists an n-point quadrature rule (with respect to Lebesgue measure on [a,b]) that is exact for the given functions. Here we generalize this result to continuous functions with an arbitrary positive and finite measure on an arbitrary interval. The proof relies on a version of Carathéodory's convex hull theorem for a continuous curve, which we also prove in the paper. As applications, we give a representation of the covariance of two continuous functions of a random variable, and a most general version of Grüss' inequality. Comment: 7 pages
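    For n = 1, Kowalewski's statement reduces to the classical mean value theorem for integrals: there is a node ξ in [a,b] with f(ξ)(b − a) = ∫ₐᵇ f. A minimal sketch of this n = 1 case, locating ξ by bisection (the function name and tolerances are illustrative, not from the paper):

    ```python
    import math

    def one_point_node(f, a, b, m=100_000, tol=1e-10):
        """Locate xi in [a, b] with f(xi)*(b - a) = integral of f over [a, b]:
        the n = 1 case of the quadrature rule in Kowalewski's statement
        (the classical mean value theorem for integrals). Sketch only:
        assumes f is continuous; the integral is approximated by the
        midpoint rule with m subintervals."""
        h = (b - a) / m
        integral = sum(f(a + (i + 0.5) * h) for i in range(m)) * h
        avg = integral / (b - a)
        # Bracket the average value: f(lo) <= avg <= f(hi) at the grid
        # minimizer and maximizer; continuity then guarantees a point
        # between them where f equals avg, found by bisection.
        xs = [a + i * (b - a) / 1000 for i in range(1001)]
        lo = min(xs, key=f)
        hi = max(xs, key=f)
        while abs(hi - lo) > tol:
            mid = (lo + hi) / 2
            if f(mid) < avg:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    xi = one_point_node(math.exp, 0.0, 1.0)
    # exp(xi) should equal the average value of exp on [0, 1], i.e. e - 1
    print(xi, math.exp(xi))
    ```

    The general n-point case replaces this scalar bracketing argument with the Carathéodory-type convex hull theorem the abstract mentions, which is what makes a single exact rule possible for n functions simultaneously.
    
    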

    Transductive-Inductive Cluster Approximation Via Multivariate Chebyshev Inequality

    Approximating an adequate number of clusters in multidimensional data, given a level of compromise on the quality of acceptable results, is an open area of research. The manuscript addresses the issue by formulating a transductive-inductive learning algorithm that uses the multivariate Chebyshev inequality. Considering the clustering problem in imaging, theoretical proofs for a particular level of compromise are derived to show the convergence of the reconstruction error to a finite value with an increasing (a) number of unseen examples and (b) number of clusters, respectively. Upper bounds for these error rates are also proved. Non-parametric estimates of these errors from a random sample of sequences empirically point to a stable number of clusters. Lastly, the generalized algorithm can be applied to multidimensional data sets from different fields. Comment: 16 pages, 5 figures
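    The multivariate Chebyshev inequality underlying the algorithm states that for a random vector X in R^d with mean μ and covariance Σ, P((X − μ)ᵀ Σ⁻¹ (X − μ) ≥ t²) ≤ d/t². A minimal empirical check in d = 2 with independent Gaussian coordinates (so Σ is diagonal and the quadratic form separates per coordinate); this illustrates only the inequality, not the paper's clustering algorithm:

    ```python
    import random

    # Empirical check of the multivariate Chebyshev bound
    # P((X - mu)^T Sigma^{-1} (X - mu) >= t^2) <= d / t^2 in d = 2.
    random.seed(1)

    d = 2
    mu = (0.0, 0.0)
    var = (1.0, 4.0)  # diagonal of Sigma (independent coordinates)

    n = 50_000
    def mahalanobis_sq():
        x = [random.gauss(mu[i], var[i] ** 0.5) for i in range(d)]
        return sum((x[i] - mu[i]) ** 2 / var[i] for i in range(d))

    samples = [mahalanobis_sq() for _ in range(n)]
    for t in (2.0, 3.0, 4.0):
        tail = sum(s >= t * t for s in samples) / n
        bound = d / (t * t)
        assert tail <= bound
        print(f"t={t}: empirical tail {tail:.4f} <= d/t^2 = {bound:.4f}")
    ```

    Because the bound is distribution-free, it gives a conservative test of whether a point plausibly belongs to a cluster with estimated mean and covariance, which is the kind of guarantee the transductive-inductive error analysis builds on.
    
    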