Graphical continuous Lyapunov models
The linear Lyapunov equation of a covariance matrix parametrizes the
equilibrium covariance matrix of a stochastic process. This parametrization can
be interpreted as a new graphical model class, and we show how the model class
behaves under marginalization and introduce a method for structure learning via
$\ell_1$-penalized loss minimization. Our proposed method is demonstrated to
outperform alternative structure learning algorithms in a simulation study, and
we illustrate its application for protein phosphorylation network
reconstruction.
Comment: 10 pages, 5 figures
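To make the parametrization concrete, here is a minimal sketch (not the authors' code) that computes the equilibrium covariance of an Ornstein-Uhlenbeck process from the continuous Lyapunov equation; the drift matrix M and diffusion C are illustrative assumptions, and SciPy's generic solver stands in for any model-specific routine.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable drift matrix M; its zero pattern plays the role of the graph.
M = np.array([[-1.0,  0.0,  0.0],
              [ 0.5, -1.0,  0.0],
              [ 0.0,  0.5, -1.0]])
C = np.eye(3)  # diffusion matrix D @ D.T of the process

# SciPy solves M X + X M^T = Q, so passing Q = -C yields the equilibrium
# covariance Sigma satisfying M Sigma + Sigma M^T + C = 0.
Sigma = solve_continuous_lyapunov(M, -C)
print(np.allclose(M @ Sigma + Sigma @ M.T + C, 0.0))  # True
```

Structure learning then amounts to recovering the zero pattern of M from an estimate of Sigma, which the paper approaches via the $\ell_1$-penalized loss.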
Estimation and variable selection in a joint model of survival times and longitudinal outcomes with random effects
This paper considers a joint survival and mixed-effects model to explain the
survival time from longitudinal data and high-dimensional covariates. The
longitudinal data is modeled using a nonlinear mixed-effects model, where the
regression function serves as a link function incorporated into a Cox model as
a covariate. In this way, the longitudinal data is related to the survival time
at any given time point. Additionally, the Cox model allows for the inclusion
of high-dimensional covariates. The main objectives of this research are
two-fold: first, to identify the relevant covariates that contribute to
explaining survival time, and second, to estimate all unknown parameters of the
joint model. For that purpose, we consider the maximization of a Lasso
penalized likelihood. To tackle the optimization problem, we implement a
pre-conditioned stochastic gradient to handle the latent variables of the
nonlinear mixed-effects model, combined with a proximal operator to manage the
non-differentiability of the penalty. We provide relevant simulations that
showcase the performance of the proposed variable selection and parameter
estimation method in the joint modeling of a Cox model and a logistic model.
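As a rough illustration of the optimization ingredients (not the authors' preconditioned algorithm, which also integrates over the latent variables of the mixed-effects model), the sketch below pairs a stochastic gradient step on a logistic minibatch loss with the proximal operator of the Lasso penalty; all sizes and tuning constants are toy assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1; handles the non-differentiable penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_sgd_step(beta, X_batch, y_batch, lam, step):
    """One stochastic proximal-gradient step on a logistic minibatch loss."""
    p = 1.0 / (1.0 + np.exp(-X_batch @ beta))       # predicted probabilities
    grad = X_batch.T @ (p - y_batch) / len(y_batch)
    return soft_threshold(beta - step * grad, step * lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta_true = np.zeros(50)
beta_true[:3] = 1.0                                  # three relevant covariates
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

beta = np.zeros(50)
for _ in range(2000):
    idx = rng.choice(200, size=20, replace=False)    # random minibatch
    beta = prox_sgd_step(beta, X[idx], y[idx], lam=0.05, step=0.1)
print(np.nonzero(beta)[0])                           # selected covariates
```

The nonzero entries of the final iterate are the selected covariates, which is how the penalty performs variable selection.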
Model Consistency for Learning with Mirror-Stratifiable Regularizers
Low-complexity non-smooth convex regularizers are routinely used to impose
some structure (such as sparsity or low-rank) on the coefficients for linear
predictors in supervised learning. Model consistency then amounts to selecting
the correct structure (for instance support or rank) by regularized empirical
risk minimization.
It is known that model consistency holds under appropriate non-degeneracy
conditions. However, such conditions typically fail for highly correlated
designs and it is observed that regularization methods tend to select larger
models.
In this work, we provide the theoretical underpinning of this behavior using
the notion of mirror-stratifiable regularizers. This class of regularizers
encompasses the most well-known in the literature, including the $\ell_1$ or
trace norms. It brings into play a pair of primal-dual models, which in turn
allows one to locate the structure of the solution using a specific dual
certificate.
We also show how this analysis is applicable to optimal solutions of the
learning problem, and also to the iterates computed by a certain class of
stochastic proximal-gradient algorithms.
Comment: 14 pages, 4 figures
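For a flavor of how a dual certificate locates the structure of the solution, consider the Lasso as a toy instance (an illustrative assumption, not the paper's general setting): after fitting, the certificate eta = X^T(y - X beta)/(n lam) satisfies |eta_i| = 1 on the selected support and, under the non-degeneracy condition, |eta_i| < 1 off it.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d = 100, 20
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[[0, 5]] = [2.0, -1.5]                 # true support {0, 5}
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam = 0.1
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
eta = X.T @ (y - X @ beta_hat) / (n * lam)      # dual certificate

support = np.nonzero(beta_hat)[0]
print("support:", support)                                 # surviving coefficients
print("|eta| on support:", np.abs(eta[support]))           # approximately 1
print("max |eta| off support:", np.abs(np.delete(eta, support)).max())  # < 1
```

For highly correlated designs the off-support entries of eta can approach 1, which is the degeneracy behind the larger selected models mentioned above.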