Dense classes of multivariate extreme value distributions
In this paper, we explore tail dependence modelling in multivariate extreme value distributions. The measure of dependence chosen is the scale function, which allows distributions to be combined in a very flexible way. The correspondences between the scale function and the spectral measure or the stable tail dependence function are given. By combining scale functions through simple operations, three parametric classes of laws are (re)constructed and analyzed, and the resulting nested and structured models are discussed. Finally, the denseness of each of these classes is shown.
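For context, the two objects the scale function is related to have standard textbook forms (this is background notation, not the paper's own "scale function" construction): a d-variate extreme value distribution G with unit Fréchet margins satisfies

```latex
% MEV distribution with unit Frechet margins, stable tail dependence
% function \ell, and spectral (angular) measure H on the unit simplex:
G(z_1,\dots,z_d) = \exp\{-\ell(1/z_1,\dots,1/z_d)\},
\qquad
\ell(x_1,\dots,x_d) = \int_{S_{d-1}} \max_{1 \le j \le d} (w_j x_j)\, \mathrm{d}H(w),
```

where H obeys the moment constraints \(\int_{S_{d-1}} w_j \, \mathrm{d}H(w) = 1\) for each margin j.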
Characterizations of bivariate conic, extreme value, and Archimax copulas
Using a general construction method based on bivariate ultramodular copulas, we construct, for particular settings, special bivariate conic, extreme value, and Archimax copulas. We also show that the sets of copulas obtained in this way are dense in the sets of all conic, extreme value, and Archimax copulas, respectively.
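As a concrete instance of the bivariate extreme value and Archimax families discussed here, the Gumbel-Hougaard copula is a standard textbook example; the sketch below (illustrative, not from the paper) checks its boundary conditions and max-stability numerically.

```python
import numpy as np

def gumbel_ev_copula(u, v, theta=2.0):
    """Gumbel-Hougaard copula (theta >= 1), the classic example that is
    simultaneously Archimedean, extreme value, and hence Archimax:
    C(u, v) = exp(-((-log u)^theta + (-log v)^theta)^(1/theta))."""
    a, b = -np.log(u), -np.log(v)
    return np.exp(-((a ** theta + b ** theta) ** (1.0 / theta)))

# Boundary conditions that any copula must satisfy: C(u, 1) = u, C(1, v) = v.
# Max-stability, the defining extreme value property:
#   C(u**t, v**t) == C(u, v)**t  for all t > 0.
```

For theta = 1 this reduces to the independence copula; increasing theta strengthens upper tail dependence.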
Identifying Mixtures of Mixtures Using Bayesian Estimation
The use of a finite mixture of normal distributions in model-based clustering
allows non-Gaussian data clusters to be captured. However, identifying the clusters
from the normal components is challenging and is generally achieved either by
imposing constraints on the model or by using post-processing procedures.
Within the Bayesian framework we propose a different approach based on sparse
finite mixtures to achieve identifiability. We specify a hierarchical prior
whose hyperparameters are carefully selected to reflect the cluster structure
aimed at. In addition, this prior allows the model to be estimated
using standard MCMC sampling methods. In combination with a
post-processing approach that resolves the label switching issue and results
in an identified model, our approach makes it possible to simultaneously (1) determine the
number of clusters, (2) flexibly approximate the cluster distributions in a
semi-parametric way using finite mixtures of normals, and (3) identify
cluster-specific parameters and classify observations. The proposed approach is
illustrated in two simulation studies and on benchmark data sets.
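The idea of modelling each cluster as itself a finite mixture of normals (so that non-Gaussian cluster shapes are captured) can be sketched as follows; the two-cluster specification and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mixture of mixtures": two clusters, each itself a mixture of two
# normal subcomponents. Cluster 0 is bimodal, cluster 1 is skewed --
# shapes a single Gaussian per cluster could not capture.
clusters = {
    0: [(0.5, -1.0, 0.5), (0.5, 1.0, 0.5)],   # (weight, mean, sd) triples
    1: [(0.7, 6.0, 1.0), (0.3, 8.0, 1.0)],
}

def sample_cluster(spec, n, rng):
    """Draw n points from a univariate normal mixture given (w, mu, sd) triples."""
    w = np.array([s[0] for s in spec])
    comp = rng.choice(len(spec), size=n, p=w)   # latent subcomponent labels
    mu = np.array([s[1] for s in spec])[comp]
    sd = np.array([s[2] for s in spec])[comp]
    return rng.normal(mu, sd)

def mixture_mean(spec):
    """Analytic mean of the mixture: sum_k w_k * mu_k."""
    return sum(w * mu for w, mu, _ in spec)

x0 = sample_cluster(clusters[0], 10_000, rng)
x1 = sample_cluster(clusters[1], 10_000, rng)
```

The identifiability problem the abstract describes is visible even here: the four normal components do not map one-to-one onto the two substantive clusters, so component labels alone cannot recover the clustering.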
Fast Genome-Wide QTL Association Mapping on Pedigree and Population Data
Since most analysis software for genome-wide association studies (GWAS)
currently handles only unrelated individuals, there is a need for efficient
applications that can handle general pedigree data or mixtures of both
population and pedigree data. Even data sets thought to consist of only
unrelated individuals may include cryptic relationships that can lead to false
positives if not discovered and controlled for. In addition, family designs
possess compelling advantages. They are better equipped to detect rare
variants, control for population stratification, and facilitate the study of
parent-of-origin effects. Pedigrees selected for extreme trait values often
segregate a single gene with strong effect. Finally, many pedigrees are
available as an important legacy from the era of linkage analysis.
Unfortunately, pedigree likelihoods are notoriously hard to compute. In this
paper we re-examine the computational bottlenecks and implement ultra-fast
pedigree-based GWAS analysis. Kinship coefficients can either be based on
explicitly provided pedigrees or automatically estimated from dense markers.
Our strategy (a) works for random sample data, pedigree data, or a mix of both;
(b) entails no loss of power; (c) allows for any number of covariate
adjustments, including correction for population stratification; (d) allows for
testing SNPs under additive, dominant, and recessive models; and (e)
accommodates both univariate and multivariate quantitative traits. On a typical
personal computer (6 CPU cores at 2.67 GHz), analyzing a univariate HDL
(high-density lipoprotein) trait from the San Antonio Family Heart Study
(935,392 SNPs on 1357 individuals in 124 pedigrees) takes less than 2 minutes
and 1.5 GB of memory. Complete multivariate QTL analysis of the three
time-points of the longitudinal HDL multivariate trait takes less than 5
minutes and 1.5 GB of memory.
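The marker-based kinship estimation mentioned above is commonly done via a genomic relationship matrix; the sketch below uses one standard method-of-moments form on simulated genotypes (the estimator actually implemented in the paper's software may differ).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genotype matrix: n individuals x m SNPs coded 0/1/2
# (minor-allele counts). Simulated unrelated individuals, not real data.
n, m = 50, 1000
p_true = rng.uniform(0.1, 0.5, size=m)            # per-SNP allele frequencies
G = rng.binomial(2, p_true, size=(n, m)).astype(float)

def kinship_from_markers(G):
    """One standard method-of-moments genomic relationship estimate:
    Phi = Z Z^T / m, with Z the column-standardized genotype matrix.
    Kinship coefficients are Phi / 2."""
    p_hat = G.mean(axis=0) / 2.0                  # estimated allele frequencies
    keep = (p_hat > 0.0) & (p_hat < 1.0)          # drop monomorphic SNPs
    Z = (G[:, keep] - 2.0 * p_hat[keep]) / np.sqrt(
        2.0 * p_hat[keep] * (1.0 - p_hat[keep]))
    return Z @ Z.T / Z.shape[1]

Phi = kinship_from_markers(G)
# For simulated unrelated individuals, diagonal entries sit near 1 and
# off-diagonal entries near 0; cryptic relatedness would show up as
# elevated off-diagonal values.
```

This is the "automatically estimated from dense markers" route; when an explicit pedigree is provided, the same matrix can instead be filled with theoretical kinship coefficients.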
Robust improper maximum likelihood: tuning, computation, and a comparison with other methods for robust Gaussian clustering
The two main topics of this paper are the introduction of the "optimally
tuned improper maximum likelihood estimator" (OTRIMLE) for robust clustering
based on the multivariate Gaussian model for clusters, and a comprehensive
simulation study comparing the OTRIMLE to Maximum Likelihood in Gaussian
mixtures with and without noise component, mixtures of t-distributions, and the
TCLUST approach for trimmed clustering. The OTRIMLE uses an improper constant
density for modelling outliers and noise. This can be chosen optimally so that
the non-noise part of the data looks as close to a Gaussian mixture as
possible. Some deviation from Gaussianity can be traded in for lowering the
estimated noise proportion. Covariance matrix constraints and computation of
the OTRIMLE are also treated. In the simulation study, all methods are
confronted with setups in which their model assumptions are not exactly
fulfilled, and in order to evaluate the experiments in a standardized way by
misclassification rates, a new model-based definition of "true clusters" is
introduced that deviates from the usual identification of mixture components
with clusters. In the study, every method turns out to be superior for one or
more setups, but the OTRIMLE achieves the most satisfactory overall
performance. The methods are also applied to two real datasets, one without and
one with known "true" clusters.
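A minimal 1-D sketch of the improper-constant-density idea: an EM algorithm for a Gaussian mixture plus a fixed constant noise density c. Note that OTRIMLE additionally tunes c optimally, which is not done here; the constant, the data, and all defaults below are illustrative.

```python
import numpy as np
from math import sqrt, pi as PI

rng = np.random.default_rng(2)

# Two Gaussian clusters plus scattered uniform noise / outliers.
x = np.concatenate([
    rng.normal(0.0, 1.0, 200),
    rng.normal(8.0, 1.0, 200),
    rng.uniform(-20.0, 30.0, 40),
])

def norm_pdf(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * sqrt(2.0 * PI))

def rimle_em(x, k=2, c=0.005, n_iter=200):
    """EM for k Gaussian components plus an improper constant density c
    that absorbs points fitting no cluster (the RIMLE idea)."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))   # crude initialization
    sd = np.full(k, x.std())
    w = np.full(k + 1, 1.0 / (k + 1))                 # last weight = noise
    for _ in range(n_iter):
        dens = np.column_stack(
            [norm_pdf(x, mu[j], sd[j]) for j in range(k)]
            + [np.full(len(x), c)]                    # improper noise "density"
        )
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)             # E-step: responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)                               # M-step updates
        for j in range(k):
            mu[j] = (r[:, j] * x).sum() / nk[j]
            sd[j] = sqrt((r[:, j] * (x - mu[j]) ** 2).sum() / nk[j])
    return mu, sd, w

mu_hat, sd_hat, w_hat = rimle_em(x)
```

Because c is constant over the whole line, it is not a proper density, which is exactly why the likelihood is called improper; points far from every cluster get high noise responsibility and stop distorting the Gaussian parameter estimates.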
Visual and interactive exploration of point data
Point data, such as Unit Postcodes (UPC), can provide very detailed information at fine
scales of resolution. For instance, socio-economic attributes are commonly assigned to
UPC. Hence, they can be represented as points and observable at the postcode level.
Using UPC as a common field allows the concatenation of variables from disparate data
sources that can potentially support sophisticated spatial analysis. However, visualising
UPC in urban areas has at least three limitations. First, at small scales UPC occurrences
can be very dense making their visualisation as points difficult. On the other hand,
patterns in the associated attribute values are often hardly recognisable at large scales.
Secondly, UPC can be used as a common field to allow the concatenation of highly
multivariate data sets with an associated postcode. Finally, socio-economic variables
assigned to UPC (such as the ones used here) can be non-Normal in their distributions
as a result of a large presence of zero values and high variances which constrain their
analysis using traditional statistics.
This paper discusses a Point Visualisation Tool (PVT), a proof-of-concept system
developed to visually explore point data. Various well-known visualisation techniques
were implemented to enable their interactive and dynamic interrogation. PVT provides
multiple representations of point data to facilitate the understanding of the relations
between attributes or variables as well as their spatial characteristics. Brushing between
alternative views is used to link several representations of a single attribute, as well as
to simultaneously explore more than one variable. PVT’s functionality shows how
visual techniques embedded in an interactive environment enable the exploration
of large amounts of multivariate point data.
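A standard first step behind visualising very dense point data, as described above, is aggregating points onto a grid before rendering. The sketch below (hypothetical zero-inflated data, not the UPC attributes used in the paper) computes per-cell counts and attribute means with NumPy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical postcode-like points: coordinates plus one zero-inflated
# attribute (many exact zeros, heavy right tail), mimicking the
# non-Normal socio-economic variables described in the abstract.
n = 5000
xy = rng.uniform(0.0, 10.0, size=(n, 2))
value = np.where(rng.random(n) < 0.6, 0.0, rng.lognormal(0.0, 1.0, n))

def grid_aggregate(xy, value, nbins=20, extent=(0.0, 10.0)):
    """Aggregate dense points onto a grid: point counts and mean
    attribute per cell -- the usual precursor to a density or
    choropleth-style view when individual points overplot."""
    lo, hi = extent
    rng2d = [[lo, hi], [lo, hi]]
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=nbins, range=rng2d)
    sums, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=nbins, range=rng2d,
                                weights=value)
    # Empty cells get NaN rather than a misleading zero mean.
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return counts, means

counts, means = grid_aggregate(xy, value)
```

In a linked-view tool like the PVT described here, the `counts` surface would answer the density problem at small scales, while `means` would carry the attribute pattern, with brushing connecting cells back to the underlying points.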