
    Checking for orthant orderings between discrete multivariate distributions: An algorithm

    We consider four orthant stochastic orderings between random vectors X and Y that have discrete probability distributions with finite support in ℝ^k. For each of the orderings, conditions have been developed that are necessary and sufficient for dominance of Y over X. We present an algorithm that checks these conditions efficiently by operating on a semilattice generated by the support of the two distributions. In particular, the algorithm can be used to compute multivariate Smirnov statistics.
    Keywords: multivariate stochastic orders; decision under risk; comparison of empirical distribution functions
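    A minimal sketch of the Smirnov computation the abstract alludes to (our own illustration, not the authors' algorithm): for the lower-orthant ECDFs of two samples in ℝ², the maximum of |F_n − G_m| is searched over pairwise coordinatewise maxima of the pooled supports, a finite part of the semilattice mentioned above. Function names are made up for the example.

```r
# Two-sample multivariate Smirnov statistic max_t |F_n(t) - G_m(t)|,
# evaluated on the coordinatewise-max semilattice of the pooled supports.
orthant_ecdf <- function(T, X)   # F(t) = fraction of rows of X with X <= t
  apply(T, 1, function(t) mean(apply(X, 1, function(x) all(x <= t))))

smirnov <- function(X, Y) {
  P <- rbind(X, Y)
  ij <- expand.grid(i = seq_len(nrow(P)), j = seq_len(nrow(P)))
  T <- unique(pmax(P[ij$i, ], P[ij$j, ]))   # pairwise coordinatewise maxima
  max(abs(orthant_ecdf(T, X) - orthant_ecdf(T, Y)))
}

set.seed(1)
smirnov(matrix(rnorm(60), ncol = 2), matrix(rnorm(80, mean = 0.5), ncol = 2))
```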

    Depth and Depth-Based Classification with R Package ddalpha

    Following the seminal idea of Tukey (1975), data depth is a function that measures how close an arbitrary point of the space lies to an implicitly defined center of a data cloud. Having undergone theoretical and computational developments, it is now employed in numerous applications, with classification being the most popular one. The R package ddalpha is software that fuses the experience of the applied user with recent achievements in the area of data depth and depth-based classification. ddalpha provides an implementation for exact and approximate computation of the most reasonable and widely applied notions of data depth. These can further be used in the depth-based multivariate and functional classifiers implemented in the package, with the DDα-procedure as the main focus. The package is extendable with user-defined custom depth methods and separators. The implemented functions for depth visualization and the built-in benchmark procedures may also serve to provide insights into the geometry of the data and the quality of pattern recognition.
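    A minimal usage sketch, assuming the interfaces described in the package documentation (depth.zonoid, ddalpha.train with a formula, ddalpha.classify); the depth and separator choices are illustrative only.

```r
library(ddalpha)
data(iris)

# Exact zonoid depth of one observation w.r.t. the setosa cloud
depth.zonoid(as.matrix(iris[1, 1:4]),
             as.matrix(iris[iris$Species == "setosa", 1:4]))

# DD-alpha classification: depth space plus alpha-procedure separator
model <- ddalpha.train(Species ~ ., data = iris,
                       depth = "zonoid", separator = "alpha")
pred <- ddalpha.classify(model, iris[, 1:4])
mean(unlist(pred) == iris$Species)   # apparent (training) accuracy
```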

    Choquet-Erwartungsnutzen und antizipierter Nutzen: ein Beitrag zur Entscheidungstheorie bei einem und mehreren Attributen [Choquet expected utility and anticipated utility: a contribution to decision theory with one and with several attributes]

    No full text online. Available from: Bibliothek des Instituts für Weltwirtschaft (ZBW), Düsternbrook Weg 120, D-24105 Kiel (A 201525); FIZ Fachinformationszentrum Karlsruhe; TIB Technische Informationsbibliothek. SIGLE record; in German.

    Weighted-mean trimming of multivariate data

    A general notion of trimmed regions for empirical distributions in d-space is introduced. The regions are called weighted-mean trimmed regions. They are continuous in the data as well as in the trimming parameter. Furthermore, these trimmed regions have many other attractive properties. In particular, they are subadditive and monotone, which makes it possible to construct multivariate measures of risk based on them. Special cases include zonoid trimming and ECH (expected convex hull) trimming. The regions can be calculated exactly in any dimension. Finally, the notion of weighted-mean trimmed regions extends to probability distributions in d-space, and a law of large numbers applies.
    Keywords: central regions; continuous trimming; data depth; lift zonoid regions; expected convex hull; law of large numbers
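    As an illustration of the zonoid special case, a sketch of our own (not taken from the paper): the boundary of the zonoid α-trimmed region of a bivariate sample can be approximated by averaging, for many directions, the ⌈αn⌉ points with the largest projections on each direction; function and parameter names are invented for the example.

```r
# Approximate zonoid alpha-trimmed region: a weighted-mean trimmed
# region whose weights put mass 1/(alpha*n) on the top alpha-fraction.
zonoid_region <- function(X, alpha, n_dir = 360) {
  k <- ceiling(alpha * nrow(X))
  angles <- seq(0, 2 * pi, length.out = n_dir)
  pts <- t(sapply(angles, function(a) {
    u <- c(cos(a), sin(a))
    top <- order(X %*% u, decreasing = TRUE)[1:k]
    colMeans(X[top, , drop = FALSE])   # boundary point in direction u
  }))
  pts[chull(pts), ]                    # vertices of the approximating hull
}

set.seed(2)
X <- matrix(rnorm(400), ncol = 2)
plot(X, col = "grey", asp = 1)
polygon(zonoid_region(X, alpha = 0.2), border = "red")
```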

    Approximate computation of projection depths

    Data depth is a concept in multivariate statistics that measures the centrality of a point in a given data cloud in ℝ^d. If the depth of a point can be represented as the minimum of its depths with respect to all one-dimensional projections of the data, the depth is said to satisfy the projection property. Such depths form an important class that includes many of the depths proposed in the literature. For depths that satisfy the projection property, an approximate algorithm is easily constructed, since taking the minimum of the depths with respect to only a finite number of one-dimensional projections yields an upper bound for the depth with respect to the multivariate data. Such an algorithm is particularly useful if no exact algorithm exists or if the exact algorithm has high computational complexity, as is the case with the halfspace depth or the projection depth. To compute these depths in high dimensions, an approximate algorithm with better complexity is clearly preferable. Instead of focusing on a single method, we provide a comprehensive and fair comparison of several methods, both already described in the literature and original.
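    The upper-bound construction described above is easy to state in code; a minimal sketch for the halfspace depth (names are ours):

```r
# Random-direction approximation of the halfspace (Tukey) depth of z
# w.r.t. the rows of X: the minimum of the univariate halfspace depths
# over finitely many projections is an upper bound for the true depth.
approx_halfspace_depth <- function(z, X, n_dir = 1000) {
  d <- ncol(X)
  U <- matrix(rnorm(n_dir * d), n_dir, d)
  U <- U / sqrt(rowSums(U^2))            # directions uniform on the sphere
  min(apply(U, 1, function(u) {
    p <- X %*% u; z1 <- sum(z * u)
    min(mean(p <= z1), mean(p >= z1))    # univariate halfspace depth
  }))
}

set.seed(3)
X <- matrix(rnorm(1000), ncol = 5)
approx_halfspace_depth(colMeans(X), X)
```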

    Confidence Regions for Multivariate Quantiles

    Multivariate quantiles are of increasing importance in hydrological applications. This calls for reliable methods to evaluate the precision of the estimated quantile sets. We therefore focus on two recently developed approaches for estimating confidence regions for level sets and extend them to provide confidence regions for multivariate quantiles based on copulas. In a simulation study, we check the coverage probabilities of the employed approaches, focusing in particular on small sample sizes. One approach shows reasonable coverage probabilities, while the second yields mixed results. Not only the bounded copula domain but also the additional estimation of the quantile level poses problems. A small-sample application gives further insight into the employed techniques.
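    For orientation only (this is not the authors' estimator), a bivariate quantile set of the kind studied here can be read off as a level curve of the empirical copula:

```r
# p-level quantile curve {(u, v) : C(u, v) = p} of an empirical copula
set.seed(4)
x <- rnorm(200); y <- 0.6 * x + 0.8 * rnorm(200)
U <- rank(x) / 201; V <- rank(y) / 201       # pseudo-observations
Cn <- function(u, v) mean(U <= u & V <= v)   # empirical copula
g <- seq(0.01, 0.99, length.out = 99)
contour(g, g, outer(g, g, Vectorize(Cn)), levels = 0.5,
        xlab = "u", ylab = "v")              # the 0.5-quantile curve
```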

    Uniform convergence rates for the approximated halfspace and projection depth

    The computational complexity of some depths that satisfy the projection property, such as the halfspace depth or the projection depth, is known to be high, especially for data of higher dimensionality. In such scenarios, the exact depth is frequently approximated using a randomized approach: the data are projected onto a finite number of directions uniformly distributed on the unit sphere, and the minimal depth over these univariate projections is used to approximate the true depth. We provide a theoretical background for this approximation procedure. Several uniform consistency results are established, and the corresponding uniform convergence rates are provided. For elliptically symmetric distributions and the halfspace depth, the obtained uniform convergence rates are shown to be sharp. In particular, guidelines are given for the choice of the number of random projections needed to achieve a given precision of the depths.
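    A small simulation of our own illustrates the behaviour these rates describe: for a Gaussian sample, the random-projection upper bound for the halfspace depth of a fixed point decreases towards the true depth as the number of directions K grows.

```r
set.seed(5)
X <- matrix(rnorm(2000), ncol = 4)   # n = 500 points in R^4
z <- rep(0.5, 4)
ud <- function(u) {                  # univariate halfspace depth of z on u
  p <- X %*% u; z1 <- sum(z * u)
  min(mean(p <= z1), mean(p >= z1))
}
for (K in c(10, 100, 1000, 10000)) {
  U <- matrix(rnorm(K * 4), K, 4)
  U <- U / sqrt(rowSums(U^2))        # K uniform directions on the sphere
  cat(K, "directions:", min(apply(U, 1, ud)), "\n")
}
```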
