The generalized Lasso with non-linear observations
We study the problem of signal estimation from non-linear observations when
the signal belongs to a low-dimensional set buried in a high-dimensional space.
A rough heuristic often used in practice postulates that non-linear
observations may be treated as noisy linear observations, and thus the signal
may be estimated using the generalized Lasso. This is appealing because of the
abundance of efficient, specialized solvers for this program. Just as noise may
be diminished by projecting onto the lower-dimensional space, the error from
modeling non-linear observations as linear ones is greatly reduced when the
signal structure is exploited in the reconstruction. We allow general
signal structure, only assuming that the signal belongs to some set K in R^n.
We consider the single-index model of non-linearity. Our theory allows the
non-linearity to be discontinuous, not one-to-one and even unknown. We assume a
random Gaussian model for the measurement matrix, but allow the rows to have an
unknown covariance matrix. As special cases of our results, we recover
near-optimal theory for noisy linear observations, and also give the first
theoretical accuracy guarantee for 1-bit compressed sensing with unknown
covariance matrix of the measurement vectors.
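The heuristic described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's exact estimator: 1-bit observations y = sign(Ax) are treated as if they were noisy linear observations, and a Lasso program is solved by plain ISTA. All parameter choices (regularization level, step size, iteration count) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 50, 400, 3                      # ambient dim, measurements, sparsity
x = np.zeros(n)
x[:s] = 1.0
x /= np.linalg.norm(x)                    # true unit-norm s-sparse signal
A = rng.standard_normal((m, n))           # Gaussian measurement matrix
y = np.sign(A @ x)                        # 1-bit (non-linear) observations

# Pretend y = A x + noise and solve the Lasso
#   min_w 0.5 * ||y - A w||^2 + lam * ||w||_1
# by ISTA (proximal gradient descent).
lam = 3.0 * np.sqrt(m)                    # illustrative regularization level
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
w = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ w - y)
    v = w - step * grad
    w = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

# The unknown non-linearity destroys the scale, so compare directions only.
cos_sim = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
```

Despite the grossly misspecified linear model, the direction of the sparse signal is recovered well, which is the phenomenon the paper quantifies.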
The OSCAR for Generalized Linear Models
The Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) proposed by Bondell and Reich (2008) has the attractive feature that highly correlated predictors can obtain exactly the same coefficient, yielding clustering of predictors. Estimation methods are available for linear regression models. It is shown how the OSCAR penalty can be used within the framework of generalized linear models. An algorithm that solves the corresponding maximization problem is given. The estimation method is investigated in a simulation study and its usefulness is demonstrated by an example from water engineering.
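The OSCAR penalty combines an L1 term with a pairwise L∞ term, and the latter is what pushes correlated predictors toward identical coefficient magnitudes. A small sketch of the penalty value in its pairwise form from Bondell and Reich (2008); the function name and parameter values are ours, for illustration only:

```python
from itertools import combinations

def oscar_penalty(beta, lam, c):
    """OSCAR penalty: lam * ( sum_j |beta_j| + c * sum_{j<k} max(|beta_j|, |beta_k|) ).

    The pairwise max (L-infinity) term is non-differentiable whenever two
    coefficients have equal absolute value, which is what makes exact
    clustering of coefficients possible.
    """
    a = [abs(b) for b in beta]
    l1 = sum(a)
    pairwise = sum(max(u, v) for u, v in combinations(a, 2))
    return lam * (l1 + c * pairwise)

# |beta| = (2, 2, 1): L1 part = 5, pairwise maxes = 2 + 2 + 2 = 6.
value = oscar_penalty([2.0, -2.0, 1.0], lam=1.0, c=0.5)  # 5 + 0.5 * 6 = 8.0
```

Note that the penalty depends only on absolute values, so predictors whose coefficients merge to a common magnitude sit exactly at a kink of the penalty surface.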
Sparse Regression with Multi-type Regularized Feature Modeling
Within the statistical and machine learning literature, regularization
techniques are often used to construct sparse (predictive) models. Most
regularization strategies only work for data where all predictors are treated
identically, such as Lasso regression for (continuous) predictors treated as
linear effects. However, many predictive problems involve different types of
predictors and require a tailored regularization term. We propose a multi-type
Lasso penalty that acts on the objective function as a sum of subpenalties, one
for each type of predictor. As such, we allow for predictor selection and level
fusion within a predictor in a data-driven way, simultaneous with the parameter
estimation process. We develop a new estimation strategy for convex predictive
models with this multi-type penalty. Using the theory of proximal operators,
our estimation procedure is computationally efficient, partitioning the overall
optimization problem into easier to solve subproblems, specific for each
predictor type and its associated penalty. Earlier research applies
approximations to non-differentiable penalties to solve the optimization
problem. The proposed SMuRF algorithm removes the need for approximations and
achieves a higher accuracy and computational efficiency. This is demonstrated
with an extensive simulation study and the analysis of a case study on
insurance pricing analytics.
A General Family of Penalties for Combining Differing Types of Penalties in Generalized Structured Models
Penalized estimation has become an established tool for regularization and model selection in regression models.
A variety of penalties with specific features are available
and effective algorithms for specific penalties have been proposed.
However, few methods are available to fit models that call for a combination of different penalties.
When modeling rent data, which will be considered as an example, various types of predictors call for a combination of a Ridge, a grouped Lasso and a Lasso-type penalty within one model.
Algorithms that can deal with such problems are in demand.
We propose to approximate penalties that are (semi-)norms of scalar linear transformations of the coefficient vector in generalized structured models.
The penalty is very general such that the Lasso, the fused Lasso, the Ridge, the smoothly clipped absolute deviation penalty (SCAD), the elastic net and many more penalties are embedded.
The approximation makes it possible to combine all these penalties within one model.
The computation is based on conventional penalized iteratively re-weighted least squares (PIRLS) algorithms and is hence easy to implement.
Moreover, new penalties can be incorporated quickly.
The approach is also extended to penalties with vector-based arguments; that is, to penalties with norms of linear transformations of the coefficient vector.
Some illustrative examples and the model for the Munich rent data show promising results.
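One common flavor of the approximation idea, sketched here under our own assumptions rather than as the paper's exact scheme, is the local quadratic approximation: a non-differentiable term such as |beta| is replaced near the current estimate by a quadratic, so each PIRLS cycle reduces to a weighted ridge solve.

```python
import numpy as np

def quad_approx_abs(beta, beta_hat, eps=1e-8):
    # Local quadratic approximation of |beta| around the current estimate
    # beta_hat:  |beta| ~= beta^2 / (2*|beta_hat|) + |beta_hat| / 2.
    a = abs(beta_hat) + eps
    return beta ** 2 / (2.0 * a) + a / 2.0

# The approximation matches |beta| exactly at the expansion point.
exact_at_point = quad_approx_abs(3.0, 3.0)   # ~= 3.0

# With this approximation, a Lasso-type fit becomes an iteratively
# re-weighted ridge regression (illustrative data and tuning values):
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)

lam, eps = 5.0, 1e-8
beta = np.linalg.solve(X.T @ X + np.eye(5), X.T @ y)   # ridge warm start
for _ in range(50):
    W = np.diag(lam / (np.abs(beta) + eps))            # penalty weights from current estimate
    beta = np.linalg.solve(X.T @ X + W, X.T @ y)       # weighted ridge step
# Coefficients that are truly zero are driven numerically to zero.
```

Because each cycle only changes the diagonal weight matrix, swapping in a different (semi-)norm penalty amounts to changing how the weights are computed, which is why new penalties can be incorporated quickly in such a framework.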