
    Nonparametric estimation of multivariate extreme-value copulas

    Extreme-value copulas arise in the asymptotic theory for componentwise maxima of independent random samples. An extreme-value copula is determined by its Pickands dependence function, a function on the unit simplex subject to shape constraints that arise from an integral transform of an underlying measure called the spectral measure. Multivariate extensions of certain rank-based nonparametric estimators of the Pickands dependence function are provided. The shape constraint that the estimator should itself be a Pickands dependence function is enforced by replacing an initial estimator with its best least-squares approximation in the set of Pickands dependence functions having a discrete spectral measure supported on a sufficiently fine grid. Weak convergence of the standardized estimators is demonstrated, and the finite-sample performance of the estimators is investigated by means of a simulation experiment.

    Comment: 26 pages; submitted; Université catholique de Louvain, Institut de statistique, biostatistique et sciences actuarielles
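    To make the objects in this abstract concrete, here is a minimal sketch of the classical bivariate Pickands estimator built on rank-based pseudo-observations, which the paper generalizes to higher dimensions. The function names are illustrative, and the `enforce_shape` step uses a simple clipping correction in place of the paper's least-squares projection onto Pickands functions with discrete spectral measure.

    ```python
    import numpy as np

    def pseudo_obs(x):
        # Rank-based pseudo-observations in (0, 1): rank_i / (n + 1).
        ranks = x.argsort().argsort() + 1
        return ranks / (len(x) + 1.0)

    def pickands_estimator(u, v, t):
        # Classical Pickands estimator of the dependence function A(t), t in (0, 1):
        #   1 / A_hat(t) = mean of xi_i(t),
        #   xi_i(t) = min(-log(u_i) / (1 - t), -log(v_i) / t).
        xi = np.minimum(-np.log(u) / (1.0 - t), -np.log(v) / t)
        return 1.0 / xi.mean()

    def enforce_shape(a_hat, t):
        # A valid Pickands dependence function satisfies
        # max(t, 1 - t) <= A(t) <= 1; clip the raw estimate into that band.
        # (The paper instead projects onto the full constrained set.)
        return float(np.clip(a_hat, max(t, 1.0 - t), 1.0))
    ```

    For independent margins the true dependence function is A(t) = 1 for all t, so on a large independent sample the corrected estimate at t = 0.5 should sit near 1; clipping alone does not guarantee convexity of the estimate across t, which is why the paper's projection is needed.
    
    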

    Complete Issue 24, 2001


    Explainability and Causality in Machine Learning through Shapley values

    Explainability and causality are becoming increasingly relevant in Machine Learning research. On the one hand, given the growing use of models in decision-making processes, the way in which they make predictions needs to be more thoroughly understood. On the other hand, there is rising interest in formalising the causal relationships present in the real world and introducing them into those same models. This work addresses both aspects through the use of Shapley values, the concept at the origin of SHAP, one of the most popular explainability techniques. Different methods for calculating Shapley values to explain predictions are introduced that take into account the dependence structure and the causal structure of the data. These methods are illustrated and compared through a series of experiments on a dataset whose causal structure is known, showing that the resulting explanations differ when causality is taken into account.

    Universidad de Sevilla. Doble Grado en Matemáticas y Estadística
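    The different attribution methods the abstract compares all share the same underlying formula, differing only in the value function assigned to feature coalitions. As a minimal sketch, the exact Shapley formula can be computed by enumerating coalitions; the `value` argument and names here are illustrative — in SHAP-style explanations, `value(S)` is the model's expected prediction with only the features in S revealed, and the dependence-aware and causal variants of the paper differ in how that expectation marginalizes over the withheld features.

    ```python
    import itertools
    import math

    def shapley_values(value, players):
        # Exact Shapley values: for each player i, sum over all coalitions S
        # not containing i, weighted by |S|! (n - |S| - 1)! / n!, of the
        # marginal contribution value(S ∪ {i}) - value(S).
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for r in range(n):
                for S in itertools.combinations(others, r):
                    w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                    total += w * (value(frozenset(S) | {i}) - value(frozenset(S)))
            phi[i] = total
        return phi
    ```

    For an additive value function each feature's Shapley value is exactly its own contribution, and by the efficiency property the values always sum to value(all players) minus value(empty set). Enumeration is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific shortcuts.
    
    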