Estimation of extended mixed models using latent classes and latent processes: the R package lcmm
The R package lcmm provides a series of functions to estimate statistical
models based on linear mixed model theory. It includes the estimation of mixed
models and latent class mixed models for Gaussian longitudinal outcomes (hlme),
curvilinear and ordinal univariate longitudinal outcomes (lcmm) and curvilinear
multivariate outcomes (multlcmm), as well as joint latent class mixed models
(Jointlcmm) for a (Gaussian or curvilinear) longitudinal outcome and a
time-to-event that may be left-truncated, right-censored, and defined in
a competing risks setting. Maximum likelihood estimators are obtained using a modified
Marquardt algorithm with strict convergence criteria based on the parameters
and likelihood stability, and on the negativity of the second derivatives. The
package also provides various post-fit functions including goodness-of-fit
analyses, classification, plots, predicted trajectories, individual dynamic
prediction of the event and predictive accuracy assessment. This paper
constitutes a companion paper to the package, introducing each family of
models, the estimation technique, and some implementation details, and giving
examples based on a dataset on cognitive aging.
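The modified Marquardt (Levenberg-Marquardt) iteration with a dual convergence check on parameter and likelihood stability can be sketched generically. The following Python sketch is an illustration of that optimizer style on a simple Gaussian likelihood, not the lcmm package's own implementation; all function names and tolerances are illustrative choices:

```python
import numpy as np

def neg_loglik(theta, y):
    """Negative log-likelihood of an i.i.d. Gaussian sample.
    theta = (mu, log_sigma); the log parametrisation keeps sigma > 0."""
    mu, log_s = theta
    s2 = np.exp(2.0 * log_s)
    n = y.size
    return 0.5 * n * np.log(2.0 * np.pi * s2) + 0.5 * np.sum((y - mu) ** 2) / s2

def grad_hess(f, theta, eps=1e-5):
    """Central finite differences for the gradient and Hessian of f."""
    p = theta.size
    g = np.zeros(p)
    H = np.zeros((p, p))
    for i in range(p):
        ei = np.zeros(p); ei[i] = eps
        g[i] = (f(theta + ei) - f(theta - ei)) / (2.0 * eps)
        for j in range(p):
            ej = np.zeros(p); ej[j] = eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * eps ** 2)
    return g, H

def marquardt(f, theta0, tol=1e-7, max_iter=500):
    """Marquardt-damped Newton descent; declared converged only when
    both the parameter step and the objective change fall below tol
    (parameter and likelihood stability, as in the abstract)."""
    theta = theta0.astype(float).copy()
    lam, fval = 1e-3, f(theta)
    for _ in range(max_iter):
        g, H = grad_hess(f, theta)
        D = np.diag(np.diag(H)) + 1e-12 * np.eye(theta.size)
        step = np.linalg.solve(H + lam * D, -g)
        fnew = f(theta + step)
        if fnew < fval:                       # accept step, relax damping
            converged = np.max(np.abs(step)) < tol and fval - fnew < tol
            theta, fval, lam = theta + step, fnew, lam * 0.5
            if converged:
                break
        else:                                 # reject step, increase damping
            lam *= 10.0
    return theta, fval

# Usage: fit mean and standard deviation of a synthetic sample.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, size=500)
theta_hat, _ = marquardt(lambda t: neg_loglik(t, y), np.array([0.0, 0.0]))
```

At convergence `theta_hat[0]` matches the sample mean and `np.exp(theta_hat[1])` the (biased) sample standard deviation, i.e. the Gaussian maximum-likelihood estimators.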
An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation
In this work we design a receiver that iteratively passes soft information
between the channel estimation and data decoding stages. The receiver
incorporates sparsity-based parametric channel estimation. State-of-the-art
sparsity-based iterative receivers simplify the channel estimation problem by
restricting the multipath delays to a grid. Our receiver does not impose such a
restriction. As a result it does not suffer from the leakage effect, which
destroys sparsity. Communication at near capacity rates in high SNR requires a
large modulation order. Due to the close proximity of modulation symbols in
such systems, the grid-based approximation is of insufficient accuracy. We show
numerically that a state-of-the-art iterative receiver with grid-based sparse
channel estimation exhibits a bit-error-rate floor in the high SNR regime. In
contrast, our receiver performs very close to the perfect channel state
information bound for all SNR values. We also demonstrate both theoretically
and numerically that parametric channel estimation works well in dense
channels, i.e., when the number of multipath components is large and each
individual component cannot be resolved.
Comment: Major revision, accepted for IEEE Transactions on Signal Processing.
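The leakage effect the abstract refers to is easy to reproduce numerically. In the following sketch (toy subcarrier count and delays, not the paper's receiver), a single propagation path whose delay falls between grid points spreads across every tap of the uniform delay grid, while an on-grid delay stays perfectly sparse:

```python
import numpy as np

N = 64                              # number of OFDM subcarriers (toy value)
k = np.arange(N)                    # subcarrier indices

def freq_response(delay):
    """Frequency response of a unit-gain single-path channel;
    `delay` is measured in sampling periods."""
    return np.exp(-2j * np.pi * k * delay / N)

def grid_taps(H):
    """Project the response onto the uniform delay grid (inverse DFT)."""
    return np.fft.ifft(H)

def n_significant(h, thresh=1e-3):
    """Number of grid taps with non-negligible magnitude."""
    return int(np.sum(np.abs(h) > thresh))

on_grid = n_significant(grid_taps(freq_response(5.0)))    # -> 1 tap
off_grid = n_significant(grid_taps(freq_response(5.5)))   # -> all 64 taps
```

A half-sample delay offset turns one physical path into a Dirichlet kernel spanning the whole grid (every tap has magnitude at least 1/N here), which is precisely why grid-restricted sparse estimators lose the sparsity that an off-grid parametric estimator preserves.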
Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization
We address the inverse problem of cosmic large-scale structure reconstruction
from a Bayesian perspective. For a linear data model, a number of known and
novel reconstruction schemes, which differ in terms of the underlying signal
prior, data likelihood, and numerical inverse extra-regularization schemes are
derived and classified. The Bayesian methodology presented in this paper tries
to unify and extend the following methods: Wiener-filtering, Tikhonov
regularization, Ridge regression, Maximum Entropy, and inverse regularization
techniques. The inverse techniques considered here are the asymptotic
regularization, the Jacobi, Steepest Descent, Newton-Raphson,
Landweber-Fridman, and both linear and non-linear Krylov methods based on
Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel Conjugate Gradients. The
structures of the up-to-date highest-performing algorithms are presented, based
on an operator scheme, which permits one to exploit the power of fast Fourier
transforms. Using such an implementation of the generalized Wiener-filter in
the novel ARGO-software package, the different numerical schemes are
benchmarked with 1-, 2-, and 3-dimensional problems including structured white
and Poissonian noise, data windowing and blurring effects. A novel numerical
Krylov scheme is shown to be superior in terms of performance and fidelity.
These fast inverse methods ultimately will enable the application of sampling
techniques to explore complex joint posterior distributions. We outline how the
space of the dark-matter density field, the peculiar velocity field, and the
power spectrum can jointly be investigated by a Gibbs-sampling process. Such a
method can be applied for the redshift distortions correction of the observed
galaxies and for time-reversal reconstructions of the initial density field.
Comment: 40 pages, 11 figures.
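The operator scheme mentioned above, solving the Wiener-filter equation (S^-1 + N^-1) s = N^-1 d without ever forming a dense matrix, can be sketched on a 1-D toy problem (Python/NumPy here; the paper's ARGO package is a separate implementation, and the power spectrum and noise level below are illustrative). The signal covariance S is diagonal in Fourier space and the noise covariance N in pixel space, so each application of the operator costs two FFTs:

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)

# Assumed toy power spectrum: S is diagonal in Fourier space.
kfreq = np.fft.fftfreq(n)
P = 1.0 / (1.0 + (kfreq / 0.05) ** 2)
noise_var = 0.1 * np.ones(n)            # N is diagonal in pixel space

def apply_A(x):
    """Apply A = S^-1 + N^-1; S^-1 is evaluated in Fourier space via
    FFTs, so no matrix is ever built (the 'operator scheme')."""
    Sinv_x = np.real(np.fft.ifft(np.fft.fft(x) / P))
    return Sinv_x + x / noise_var

def conjugate_gradient(b, tol=1e-10, max_iter=2000):
    """Plain linear conjugate-gradient solver for A x = b."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Draw a signal with spectrum P, observe it in noise, reconstruct.
signal = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * np.sqrt(P)))
data = signal + rng.normal(0.0, np.sqrt(noise_var))
wiener = conjugate_gradient(data / noise_var)   # solves (S^-1 + N^-1) s = N^-1 d
```

Because A is symmetric positive definite and well conditioned for this spectrum, CG converges in a handful of iterations, and the reconstruction has lower error than the raw noisy data.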
AutoBayes: A System for Generating Data Analysis Programs from Statistical Models
Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis system for the generation of data analysis programs from statistical models. A statistical model specifies the properties for each problem variable (i.e., observation or parameter) and its dependencies in the form of a probability distribution. It is a fully declarative problem description, similar in spirit to a set of differential equations. From such a model, AutoBayes generates optimized and fully commented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Code is produced by a schema-guided deductive synthesis process. A schema consists of a code template and applicability constraints which are checked against the model during synthesis using theorem proving technology. AutoBayes augments schema-guided synthesis by symbolic-algebraic computation and can thus derive closed-form solutions for many problems. It is well-suited for tasks like estimating best-fitting model parameters for the given data. Here, we describe AutoBayes's system architecture, in particular the schema-guided synthesis kernel. Its capabilities are illustrated by a number of advanced textbook examples and benchmarks.
Maximum-entropy moment-closure for stochastic systems on networks
Moment-closure methods are popular tools to simplify the mathematical
analysis of stochastic models defined on networks, in which high dimensional
joint distributions are approximated (often by some heuristic argument) as
functions of lower dimensional distributions. Whilst undoubtedly useful,
several such methods suffer from issues of non-uniqueness and inconsistency.
These problems are solved by an approach based on the maximisation of entropy,
which is motivated, derived and implemented in this article. A series of
numerical experiments are also presented, detailing the application of the
method to the Susceptible-Infective-Recovered model of epidemics, as well as
cautionary examples showing the sensitivity of moment-closure techniques in
general.
Comment: 20 pages, 7 figures.
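The entropy-maximisation step at the heart of such a closure can be sketched in a few lines. Assuming a discrete state space {0, ..., K} and the first two moments as the retained information (toy choices for illustration, not the article's network-specific construction), the closure distribution has exponential-family form and its multipliers are found from the convex dual of the entropy programme:

```python
import numpy as np
from scipy.optimize import minimize

def maxent_closure(m1, m2, K):
    """Maximum-entropy distribution on {0, ..., K} matching the moments
    E[X] = m1 and E[X^2] = m2.  The max-ent solution has the form
    p(x) ~ exp(l1*x + l2*x^2); the multipliers (l1, l2) minimise the
    dual objective: log-partition function minus the target moments."""
    x = np.arange(K + 1, dtype=float)

    def dual(lam):
        logits = lam[0] * x + lam[1] * x ** 2
        shift = logits.max()                   # numerical stability
        logz = shift + np.log(np.exp(logits - shift).sum())
        return logz - lam[0] * m1 - lam[1] * m2

    lam = minimize(dual, np.zeros(2), method="BFGS").x
    logits = lam[0] * x + lam[1] * x ** 2
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Usage: close at moments E[X] = 2, E[X^2] = 5 on states 0..10.
p = maxent_closure(2.0, 5.0, 10)
```

Because the dual is smooth and convex, its gradient is exactly the moment mismatch, so any standard quasi-Newton solver drives the constraints to numerical precision; this uniqueness is what removes the ambiguity of ad-hoc closures.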