Probability density estimation with tunable kernels using orthogonal forward regression
A generalized, or tunable-kernel, model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unit-sum constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model, which restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. At the same time, it does not optimize all the model parameters together and thus avoids the high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct accurate and very compact density estimates.
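As a rough sketch of the kind of estimate this abstract describes — a sparse mixture of Gaussian kernels, each with its own centre vector and diagonal covariance, combined with nonnegative weights summing to one — the following illustrative Python evaluates such a density (the function name and array layout are our assumptions, not the authors' code; the orthogonal forward regression and MNQP steps that fit these parameters are not shown):

```python
import numpy as np

def kernel_density(x, centers, diag_vars, weights):
    """Evaluate a sparse Gaussian-mixture density estimate.

    x         : (N, d) query points
    centers   : (M, d) kernel centre vectors
    diag_vars : (M, d) per-kernel diagonal covariance entries
    weights   : (M,)   nonnegative mixing weights summing to one
    """
    x = np.atleast_2d(x)
    # (N, M, d) differences between each query point and each centre
    diff = x[:, None, :] - centers[None, :, :]
    # Quadratic form for a diagonal covariance
    expo = -0.5 * np.sum(diff**2 / diag_vars[None, :, :], axis=2)
    # Gaussian normalization constant per kernel
    d = centers.shape[1]
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.prod(diag_vars, axis=1))
    # Weighted sum of the M kernel responses, one value per query point
    return (np.exp(expo) / norm[None, :]) @ weights
```

With a single unit-variance kernel in one dimension, the density at the centre is the standard normal peak 1/sqrt(2*pi).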
Bayesian Fused Lasso regression for dynamic binary networks
We propose a multinomial logistic regression model for link prediction in a
time series of directed binary networks. To account for the dynamic nature of
the data we employ a dynamic model for the model parameters that is strongly
connected with the fused lasso penalty. In addition to promoting sparseness,
this prior allows us to explore the presence of change points in the structure
of the network. We introduce fast computational algorithms for estimation and
prediction using both optimization and Bayesian approaches. The performance of
the model is illustrated using simulated data and data from a financial trading
network in the NYMEX natural gas futures market. Supplementary material
containing the trading network data set and code to implement the algorithms is
available online.
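The fused lasso penalty mentioned above combines an l1 sparsity term on the time-varying coefficients with an l1 penalty on their successive differences, which favours piecewise-constant coefficient paths whose jumps flag candidate change points. A minimal illustrative sketch of the penalty (names and array shapes are assumptions, not the paper's code, which works with a Bayesian prior of this form):

```python
import numpy as np

def fused_lasso_penalty(beta, lam1, lam2):
    """beta : (T, p) regression coefficients over T time points.

    lam1 * sum|beta_t|              promotes sparsity;
    lam2 * sum|beta_t - beta_{t-1}| promotes piecewise-constant
    coefficient paths, so large jumps indicate change points.
    """
    sparsity = lam1 * np.abs(beta).sum()
    fusion = lam2 * np.abs(np.diff(beta, axis=0)).sum()
    return sparsity + fusion
```

For example, two time points with coefficients (1, 0) and (1, 2) give a sparsity term of 4 and a fusion term of 2 at unit penalties.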
A data driven equivariant approach to constrained Gaussian mixture modeling
Maximum likelihood estimation of Gaussian mixture models with different
class-specific covariance matrices is known to be problematic. This is due to
the unboundedness of the likelihood, together with the presence of spurious
maximizers. Existing methods to bypass this obstacle are based on the fact that
unboundedness is avoided if the eigenvalues of the covariance matrices are
bounded away from zero. This can be done by imposing constraints on the
covariance matrices, i.e. by incorporating a priori information on the
covariance structure of the mixture components. The present work introduces a
constrained equivariant approach, where the class conditional covariance
matrices are shrunk towards a pre-specified matrix Psi. Data-driven choices of
the matrix Psi, when a priori information is not available, and the optimal
amount of shrinkage are investigated. The effectiveness of the proposal is
evaluated on the basis of a simulation study and an empirical example.
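The shrinkage described above can be illustrated by a convex combination of a class-conditional covariance estimate with the target matrix Psi; whether the paper's estimator takes exactly this linear form is an assumption here, and the data-driven choices of Psi and of the shrinkage amount are not shown:

```python
import numpy as np

def shrink_covariance(S_k, Psi, gamma):
    """Shrink a class-specific covariance estimate S_k towards a
    pre-specified target Psi.

    gamma in [0, 1] controls the amount of shrinkage:
    gamma = 0 returns S_k unchanged, gamma = 1 returns Psi.
    Shrinking towards a well-conditioned Psi keeps eigenvalues
    bounded away from zero, avoiding likelihood unboundedness.
    """
    return (1.0 - gamma) * S_k + gamma * Psi
```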
Application of response surface methodology to stiffened panel optimization
In a multilevel optimization framework, the use of surrogate models to approximate optimization constraints allows great time savings. Among the available metamodelling techniques, we chose neural networks to perform regression of static mechanical criteria, namely the buckling and collapse reserve factors of a stiffened panel, which are constraints of our subsystem optimization problem. Due to the highly nonlinear behaviour of these functions with respect to the loading and design variables, we encountered some difficulties in obtaining an approximation of sufficient quality over the whole design space. In particular, the variations of the approximated function can differ greatly depending on the values of the loading variables. We show how prior knowledge of the influence of the variables allows us to build an efficient Mixture of Experts model, leading to a good approximation of the constraints. Optimization benchmarks are run to measure the time savings, and the effects on optimum feasibility and objective value, of using the surrogate models as constraints. Finally, we see that, while efficient, this Mixture of Experts model could still be improved by some additional learning techniques.
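A Mixture of Experts combines several local models through a gating function that weights each expert's prediction per input. A minimal generic sketch (the gate and experts here are placeholders, not the stiffened-panel neural networks the abstract describes):

```python
import numpy as np

def moe_predict(x, gate, experts):
    """Mixture-of-Experts prediction.

    gate(x)  returns one nonnegative weight per expert, summing to one
             (e.g. driven by the loading variables);
    experts  is a list of callables, each a local regressor trained on
             its own region of the design space.
    The output is the gate-weighted combination of expert predictions.
    """
    w = np.asarray(gate(x))
    preds = np.array([expert(x) for expert in experts])
    return float(w @ preds)
```

For instance, with two experts predicting 2.0 and 4.0 and an even gate, the mixture predicts 3.0.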