
    Kernel density construction using orthogonal forward regression

    An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample Parzen window density estimate.
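    The full-sample Parzen window estimate that serves as the accuracy baseline above can be sketched as follows; the function name, Gaussian kernel choice, and bandwidth handling are illustrative assumptions, not the paper's code.

```python
import numpy as np

def parzen_window(x_train, x_eval, h):
    """Full-sample Parzen window (Gaussian kernel) density estimate.

    Places one kernel of width h on every training point and averages;
    a hypothetical helper for illustration only.
    """
    diffs = (x_eval[:, None] - x_train[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(x_train) * h * np.sqrt(2 * np.pi))
```

    The sparse estimates discussed in the abstract keep only a small subset of these kernels while matching this baseline's accuracy.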

    Sparse kernel density construction using orthogonal forward regression with leave-one-out test score and local regularization

    The paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favourably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
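    The orthogonal forward regression idea — greedily adding one candidate kernel at a time and orthogonalizing the remaining candidates against it — can be sketched as below. This simplification ranks candidates by a plain error-reduction ratio rather than the paper's leave-one-out test score with local regularization; all names are illustrative.

```python
import numpy as np

def ofr_select(P, d, n_terms):
    """Greedy orthogonal forward regression sketch.

    P: candidate regressor matrix (one kernel per column).
    d: target vector. At each step the column with the largest
    error-reduction ratio is chosen, then the residual and the
    remaining candidates are orthogonalized against it
    (modified Gram-Schmidt).
    """
    P = P.astype(float).copy()
    residual = d.astype(float).copy()
    selected, errs = [], []
    for _ in range(n_terms):
        scores = (P.T @ residual) ** 2 / (np.einsum('ij,ij->j', P, P) + 1e-12)
        scores[selected] = -np.inf          # never re-pick a chosen term
        k = int(np.argmax(scores))
        selected.append(k)
        q = P[:, k] / (np.linalg.norm(P[:, k]) + 1e-12)
        residual -= (q @ residual) * q      # remove explained component
        P -= np.outer(q, q @ P)             # orthogonalize remaining candidates
        errs.append(np.linalg.norm(residual))
    return selected, errs
```

    The paper's LOO test score gives an automatic stopping point; this sketch just runs for a fixed number of terms.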

    Probability density estimation with tunable kernels using orthogonal forward regression

    A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact and accurate density estimate.
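    The multiplicative nonnegative quadratic programming step for the mixing weights can be sketched roughly as follows. This is an illustrative variant, not the paper's exact update: it applies the standard multiplicative rule for minimising 0.5 w'Bw - v'w with elementwise-nonnegative B and v (true for Gaussian kernel Gram matrices), then renormalises to enforce the unity constraint.

```python
import numpy as np

def mnqp_weights(B, v, n_iter=200):
    """Multiplicative nonnegative QP sketch for kernel mixing weights.

    Minimises 0.5 * w'Bw - v'w subject to w >= 0 and sum(w) = 1.
    The multiplicative update preserves nonnegativity; the final
    renormalisation projects back onto the probability simplex.
    """
    n = len(v)
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        w = w * v / (B @ w + 1e-12)   # multiplicative update keeps w >= 0
        w = w / w.sum()               # enforce the unity constraint
    return w
```

    In the paper this update also drives many weights toward zero, shrinking the model further.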

    Sparse kernel density estimation technique based on zero-norm constraint

    A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality-based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
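    A common smooth surrogate for the zero-norm, which makes the constraint mathematically tractable, can be sketched as below; the specific exponential form and the parameter alpha are illustrative assumptions, and the paper's exact approximation may differ.

```python
import numpy as np

def zero_norm_approx(w, alpha=10.0):
    """Smooth surrogate for the zero-norm ||w||_0.

    Uses sum_i (1 - exp(-alpha * |w_i|)): each term is ~0 for a zero
    weight and approaches 1 for any clearly nonzero weight, so the sum
    approximates the count of nonzero weights as alpha grows.
    """
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))
```

    Penalising this surrogate instead of the true (discontinuous) zero-norm lets standard continuous optimisation drive most kernel weights to zero.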

    Elastic net prefiltering for two class classification

    A two-stage linear-in-the-parameter model construction algorithm is proposed for noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model’s generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of the approach for noisy classification problems.
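    The "analytic LOO without splitting the data" idea can be illustrated for the l2 (ridge) part of the elastic net, where the SVD makes the hat-matrix diagonal cheap and the exact identity e_loo_i = e_i / (1 - h_ii) holds. This is a sketch of the principle only; the paper applies it to the elastic-net prefiltered signal and the misclassification rate, which this simplification omits.

```python
import numpy as np

def ridge_loo_errors(X, y, lam):
    """Exact leave-one-out residuals for ridge regression via SVD.

    With H = X (X'X + lam*I)^{-1} X' = U diag(s^2/(s^2+lam)) U',
    each LOO residual is (y_i - yhat_i) / (1 - H_ii) -- no refitting
    and no data splitting needed.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s**2 / (s**2 + lam)
    h = np.einsum('ij,j,ij->i', U, d, U)   # hat-matrix diagonal H_ii
    y_hat = U @ (d * (U.T @ y))            # in-sample predictions H y
    return (y - y_hat) / (1 - h)
```

    The upper-level search over the two regularization parameters then only has to re-evaluate this cheap formula, not refit the model n times.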

    Diffusion Maps Kalman Filter for a Class of Systems with Gradient Flows

    In this paper, we propose a non-parametric method for state estimation of high-dimensional nonlinear stochastic dynamical systems, which evolve according to gradient flows with isotropic diffusion. We combine diffusion maps, a manifold learning technique, with a linear Kalman filter and with concepts from Koopman operator theory. More concretely, using diffusion maps, we construct data-driven virtual state coordinates, which linearize the system model. Based on these coordinates, we devise a data-driven framework for state estimation using the Kalman filter. We demonstrate the strengths of our method with respect to both parametric and non-parametric algorithms in three tracking problems. In particular, applying the approach to actual recordings of hippocampal neural activity in rodents directly yields a representation of the position of the animals. We show that the proposed method outperforms competing non-parametric algorithms in the examined stochastic problem formulations. Additionally, we obtain results comparable to classical parametric algorithms, which, in contrast to our method, are equipped with model knowledge.
    Comment: 15 pages, 12 figures, submitted to IEEE TS
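    The diffusion-maps construction of "virtual state coordinates" can be sketched minimally as follows: Gaussian affinities, a row-stochastic Markov matrix, and its leading non-trivial eigenvectors as coordinates. This omits the density normalisation variants and the entire Kalman filtering stage of the paper; names and the bandwidth eps are illustrative.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion-maps sketch.

    X: (n, d) data matrix. Builds Gaussian affinities with bandwidth
    eps, row-normalises into a Markov transition matrix, and returns
    the leading non-trivial eigenvalues and eigenvectors, which serve
    as data-driven coordinates.
    """
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    keep = order[1:n_coords + 1]
    return vals.real[keep], vecs.real[:, keep]
```

    In the paper, a linear Kalman filter is then run in these learned coordinates, where the dynamics are approximately linear.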

    Projection Pursuit through Φ-Divergence Minimisation

    Consider a density defined on a set of very large dimension. It is quite difficult to estimate this density from a data set, but the problem can be solved through a projection pursuit methodology. Touboul's article "Projection Pursuit Through Relative Entropy Minimization" (2009) demonstrates the merit of the author's method in a very simple case: the factorization of a density into an elliptical component and some residual density, obtained by minimizing relative entropy. In the present article, our proposal aims at extending this methodology to the Φ-divergence. Furthermore, we also consider the case when the density to be factorized is estimated from an i.i.d. sample, and we then propose a test for the factorization of the estimated density. Applications include a new test of fit pertaining to elliptical copulas.
    Comment: 32 pages, 4 figures, 5 tables, elsarticle clas
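    For discrete densities, the Φ-divergence family that the paper generalises to can be written down in a few lines; the relative entropy of Touboul's original method is the special case phi(t) = t log t. The interface below is an illustrative assumption.

```python
import numpy as np

def phi_divergence(p, q, phi):
    """Discrete Phi-divergence D_phi(p||q) = sum_i q_i * phi(p_i / q_i).

    phi must be convex with phi(1) = 0, so the divergence is zero
    exactly when p == q. phi(t) = t*log(t) recovers relative entropy
    (Kullback-Leibler divergence).
    """
    r = p / q
    return float(np.sum(q * phi(r)))
```

    Swapping in a different convex phi (e.g. for the Hellinger or chi-squared divergences) changes the estimator's robustness properties, which is the point of the generalisation.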

    Model term selection for spatio-temporal system identification using mutual information

    A new mutual-information-based algorithm is introduced for term selection in spatio-temporal models. A generalised cross-validation procedure is also introduced for model length determination, and examples based on cellular automata, coupled map lattices and partial differential equations are described.
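    The mutual-information score that such a term-selection loop would rank candidate model terms by can be sketched with a simple histogram plug-in estimate; the binning scheme and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of the mutual information I(X; Y).

    Discretises the two samples into a 2-D histogram, then computes the
    KL divergence between the joint distribution and the product of its
    marginals. Always nonnegative; larger values mean the candidate
    term y carries more information about x.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0                       # 0 * log 0 contributes nothing
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

    A greedy term-selection loop would repeatedly add the candidate term with the highest score against the system output, stopping via the generalised cross-validation criterion mentioned above.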