Identification and separation of DNA mixtures using peak area information (Updated version of Statistical Research Paper No. 25)
We introduce a new methodology, based upon probabilistic expert systems, for analysing forensic identification problems involving DNA mixture traces using quantitative peak area information. Peak area is modelled with conditional Gaussian distributions. The expert system can be used not only to ascertain whether individuals whose profiles have been measured have contributed to the mixture, but also to predict the DNA profiles of unknown contributors by separating the mixture into its individual components. The potential of our probabilistic methodology is illustrated on case data examples and compared with alternative approaches. The advantages are that identification and separation issues can be handled in a unified way within a single probabilistic model, and that the uncertainty associated with the analysis is quantified. Further work, required to bring the methodology to a point where it could be applied to the routine analysis of casework, is discussed.
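As a rough illustration of the conditional Gaussian idea (a simplified form assumed here for exposition, not the paper's exact specification), the peak area W_a for allele a could be modelled, given the total number of copies n_a of that allele contributed to the mixture, as

\[ W_a \mid n_a \sim \mathcal{N}(\mu\, n_a,\ \sigma^2 n_a), \]

so that the expected peak area grows with the allele count while its variability also scales; the probabilistic expert system then propagates uncertainty about the unknown genotypes through such conditional distributions.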
Identification and separation of DNA mixtures using peak area information
Probabilistic expert systems for handling artifacts in complex DNA mixtures
This paper presents a coherent probabilistic framework for taking account of allelic dropout, stutter bands and silent alleles when interpreting STR DNA profiles from a mixture sample, using peak size information arising from a PCR analysis. This information can be exploited for evaluating the evidential strength of a hypothesis that DNA from a particular person is present in the mixture. It extends an earlier Bayesian network approach that ignored such artifacts. We illustrate the use of the extended network on a published casework example.
Maximum Likelihood Estimation in Gaussian Chain Graph Models under the Alternative Markov Property
The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportional fitting into an iterative algorithm. In an appendix, we give useful convergence results for iterative partial maximization algorithms that apply in particular to the described algorithm.
Comment: 15 pages; article to appear in the Scandinavian Journal of Statistics
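To make the iterative proportional fitting ingredient concrete, here is a minimal sketch of iterative proportional scaling for a Gaussian graphical model with a fixed clique list; this is my own illustration of that one component, not the paper's combined GLS-and-IPF algorithm for AMP chain graphs, and names such as fit_ips and cliques are assumptions of the example.

import numpy as np

def fit_ips(S, cliques, tol=1e-8, max_iter=500):
    """Iterative proportional scaling for a Gaussian graphical model.

    S       : sample covariance matrix (p x p)
    cliques : list of index lists covering the graph's cliques
    Returns the estimated concentration (inverse covariance) matrix K.
    """
    p = S.shape[0]
    K = np.eye(p)  # start from the identity concentration matrix
    for _ in range(max_iter):
        K_old = K.copy()
        for c in cliques:
            r = [i for i in range(p) if i not in c]
            # Match the sample covariance on the clique while keeping the
            # conditional distribution of the remaining variables fixed.
            if r:
                K_cc = (np.linalg.inv(S[np.ix_(c, c)])
                        + K[np.ix_(c, r)] @ np.linalg.inv(K[np.ix_(r, r)]) @ K[np.ix_(r, c)])
            else:
                K_cc = np.linalg.inv(S[np.ix_(c, c)])
            K[np.ix_(c, c)] = K_cc
        if np.max(np.abs(K - K_old)) < tol:
            break
    return K

# Toy usage: undirected chain 0 - 1 - 2 (cliques {0,1} and {1,2})
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
S = np.cov(X, rowvar=False)
K_hat = fit_ips(S, cliques=[[0, 1], [1, 2]])

Each clique update increases the likelihood while leaving the rest of the fit untouched, which is the kind of partial-maximization step the convergence results in the appendix address.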
Practical Bayesian Modeling and Inference for Massive Spatial Datasets On Modest Computing Environments
With continued advances in Geographic Information Systems and related computational technologies, statisticians are often required to analyze very large spatial datasets. This has generated substantial interest over the last decade, already too vast to be summarized here, in scalable methodologies for analyzing large spatial datasets. Scalable spatial process models have been found especially attractive due to their richness and flexibility and, particularly so in the Bayesian paradigm, due to their presence in hierarchical model settings. However, the vast majority of research articles in this domain have been geared toward innovative theory or more complex model development. Very limited attention has been accorded to approaches for easily implementable scalable hierarchical models for the practicing scientist or spatial analyst. This article is submitted to the Practice section of the journal with the aim of developing massively scalable Bayesian approaches that can rapidly deliver Bayesian inference on spatial processes that is practically indistinguishable from inference obtained using more expensive alternatives. A key emphasis is on implementation within very standard (modest) computing environments (e.g., a standard desktop or laptop) using easily available statistical software packages, without requiring message-passing interfaces or parallel programming paradigms. Key insights are offered regarding assumptions and approximations concerning practical efficiency.
Comment: 20 pages, 4 figures, 2 tables
Structurally Tractable Uncertain Data
Many data management applications must deal with data which is uncertain, incomplete, or noisy. However, on existing uncertain data representations, we cannot tractably perform the important query evaluation tasks of determining query possibility, certainty, or probability: these problems are hard on arbitrary uncertain input instances. We thus ask whether we could restrict the structure of uncertain data so as to guarantee the tractability of exact query evaluation. We present our tractability results for tree and tree-like uncertain data, and a vision for probabilistic rule reasoning. We also study uncertainty about order, proposing a suitable representation, and study uncertain data conditioned by additional observations.
Comment: 11 pages, 1 figure, 1 table. To appear in SIGMOD/PODS PhD Symposium 201
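As a toy illustration of the query-probability task (under a simple tuple-independent assumption, which is my simplification and not the paper's tree-shaped representations), the probability that a query returns at least one answer factorizes over the tuples that could satisfy it:

# Toy tuple-independent relation: each row carries its marginal probability.
people = [
    ("alice", 30, 0.9),   # (name, age, probability the tuple exists)
    ("bob",   42, 0.4),
    ("carol", 35, 0.7),
]

def prob_query_nonempty(rows, predicate):
    """P(at least one satisfying tuple exists), assuming independent tuples."""
    p_none = 1.0
    for name, age, p in rows:
        if predicate(name, age):
            p_none *= (1.0 - p)
    return 1.0 - p_none

# Probability that "there exists a person older than 33" holds:
print(prob_query_nonempty(people, lambda name, age: age > 33))  # 1 - 0.6*0.3 = 0.82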
Analysis of forensic DNA mixtures with artefacts
DNA is now routinely used in criminal investigations and court cases, although DNA samples taken at crime scenes are of varying quality and therefore present challenging problems for their interpretation. We present a statistical model for the quantitative peak information obtained from an electropherogram of a forensic DNA sample and illustrate its potential use for the analysis of criminal cases. In contrast with most previously used methods, we directly model the peak height information and incorporate important artefacts that are associated with the production of the electropherogram. Our model has a number of unknown parameters, and we show that these can be estimated by the method of maximum likelihood in the presence of multiple unknown individuals contributing to the sample, and that their approximate standard errors can be calculated; the computations exploit a Bayesian network representation of the model. A case example from a UK trial, as reported in the literature, is used to illustrate the efficacy and use of the model, both in finding likelihood ratios to quantify the strength of evidence and in the deconvolution of mixtures for finding likely profiles of the individuals contributing to the sample. Our model is readily extended to the simultaneous analysis of more than one mixture, as illustrated in a case example. We show that the combination of evidence from several samples may give an evidential strength which is close to that of a single-source trace, and thus modelling of peak height information provides a potentially very efficient mixture analysis.
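The likelihood ratios mentioned here take the standard weight-of-evidence form; stated for orientation rather than as a detail specific to this model, with peak-height evidence E, prosecution hypothesis H_p and defence hypothesis H_d,

\[ \mathrm{LR} = \frac{P(E \mid H_p)}{P(E \mid H_d)}, \qquad \text{weight of evidence} = \log_{10} \mathrm{LR}\ \text{(ban)}, \]

and when several samples are conditionally independent given the hypotheses their likelihood ratios multiply (their logarithms add), which gives a sense of why combining evidence from several samples can approach the strength of a single-source trace.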
Transfer Entropy as a Log-likelihood Ratio
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic chi-squared distribution is established for the transfer entropy estimator. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
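As a concrete special case of this result, in the linear-Gaussian setting transfer entropy reduces to half the Granger log-variance ratio, and the nested-regression log-likelihood ratio statistic is 2N times the transfer entropy estimate. The sketch below (my own illustration, not code from the paper) checks this on simulated data:

import numpy as np

rng = np.random.default_rng(1)
N = 5000

# Simulate a pair of AR(1) processes in which Y drives X.
x = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.7 * y[t-1] + rng.standard_normal()
    x[t] = 0.5 * x[t-1] + 0.4 * y[t-1] + rng.standard_normal()

def rss(target, regressors):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    resid = target - regressors @ beta
    return resid @ resid

X_t, X_lag, Y_lag = x[1:], x[:-1], y[:-1]
ones = np.ones_like(X_lag)

rss_restricted = rss(X_t, np.column_stack([ones, X_lag]))          # X's own past only
rss_full       = rss(X_t, np.column_stack([ones, X_lag, Y_lag]))   # plus Y's past

te_hat = 0.5 * np.log(rss_restricted / rss_full)   # transfer entropy estimate (nats)
lr_stat = 2 * len(X_t) * te_hat                    # log-likelihood ratio statistic;
                                                   # chi-squared with 1 d.o.f. under
                                                   # the null of zero transfer entropy
print(te_hat, lr_stat)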
Network Inference via the Time-Varying Graphical Lasso
Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
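As a sketch of the kind of optimization problem involved (written from the description above and the standard graphical-lasso objective, so the exact penalty structure should be checked against the paper), with empirical covariances S_1, ..., S_T and precision matrices Theta_1, ..., Theta_T:

\[ \min_{\Theta_1,\dots,\Theta_T \succ 0} \; \sum_{t=1}^{T} \Big( \operatorname{tr}(S_t \Theta_t) - \log\det \Theta_t + \lambda \lVert \Theta_t \rVert_{\mathrm{od},1} \Big) \;+\; \beta \sum_{t=2}^{T} \psi(\Theta_t - \Theta_{t-1}), \]

where the first sum is a per-time-step sparse inverse covariance (graphical lasso) fit and \psi is a convex penalty discouraging abrupt changes between consecutive networks; ADMM splits the problem into these per-time-step and temporal-coupling pieces, which is what makes a scalable message-passing scheme possible.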
Hierarchical Models for Independence Structures of Networks
We introduce a new family of network models, called hierarchical network models, that allow us to represent in an explicit manner the stochastic dependence among the dyads (random ties) of the network. In particular, each member of this family can be associated with a graphical model defining conditional independence clauses among the dyads of the network, called the dependency graph. Every network model with a dyadic independence assumption can be generalized to construct members of this new family. Using this new framework, we generalize the Erdős–Rényi and beta-models to create hierarchical Erdős–Rényi and beta-models. We describe various methods for parameter estimation as well as simulation studies for models with sparse dependency graphs.
Comment: 19 pages, 7 figures
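For orientation on the dyadic-independence models being generalized (standard background, not taken from the paper): in the Erdős–Rényi model every dyad Y_ij is an independent Bernoulli(p) variable, while the beta-model keeps dyads independent given node parameters,

\[ \operatorname{logit} P(Y_{ij} = 1) = \beta_i + \beta_j ; \]

the hierarchical variants instead use the dependency graph to specify which dyads are allowed to be conditionally dependent on one another.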