7,013 research outputs found

    Estimating Local Function Complexity via Mixture of Gaussian Processes

    Full text link
    Real-world data often exhibit inhomogeneity: for example, the noise level, the sampling distribution, or the complexity of the target function may change over the input space. In this paper, we isolate local function complexity in a practical, robust way. This is achieved by first estimating the locally optimal kernel bandwidth as a function of the input. Specifically, we propose Spatially Adaptive Bandwidth Estimation in Regression (SABER), which employs a mixture of experts consisting of multinomial kernel logistic regression as a gate and Gaussian process regression models as experts. From the locally optimal kernel bandwidths, we derive an estimate of the local function complexity by drawing parallels to the theory of locally linear smoothing. We demonstrate the usefulness of local function complexity for model interpretation and active learning in quantum chemistry experiments and fluid dynamics simulations. Comment: 19 pages, 16 figures
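
    As a rough illustration of the approach described above, the sketch below wires together off-the-shelf components: GP experts with fixed candidate bandwidths and a logistic-regression gate. It is not the SABER implementation; the gate is plain rather than kernelized, and experts and gate are trained with hard assignments instead of jointly, both simplifications. The data, bandwidth grid, and noise level are illustrative assumptions.

```python
# Sketch only: mixture of fixed-bandwidth GP experts with a logistic gate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(300, 1))
# Inhomogeneous target: flat for x < 0, oscillatory for x > 0.
y = np.sin(6.0 * X[:, 0]) * (X[:, 0] > 0) + 0.05 * rng.standard_normal(300)

bandwidths = [0.05, 0.2, 0.8]  # candidate kernel bandwidths (assumed grid)
experts = [
    GaussianProcessRegressor(kernel=RBF(h) + WhiteKernel(0.01),
                             optimizer=None).fit(X, y)
    for h in bandwidths
]

# Assign each point to the expert with the smallest training residual,
# then train a multinomial logistic gate on those assignments.
resid = np.stack([np.abs(e.predict(X) - y) for e in experts], axis=1)
gate = LogisticRegression(max_iter=1000).fit(X, resid.argmin(axis=1))

# Locally optimal bandwidth = the gate's expected bandwidth; small values
# flag regions of high local function complexity.
local_h = gate.predict_proba(X) @ np.array(bandwidths)[gate.classes_]
```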

    NARX-based nonlinear system identification using orthogonal least squares basis hunting

    No full text
    An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting, selecting regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
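
    The orthogonal forward selection step at the core of this method can be sketched as follows. This is the classical OLS error-reduction-ratio selection, not the authors' code; the per-regressor tuning of centers and covariances (the "hunting" step) is replaced here by a fixed candidate pool of (center, width) pairs, an assumption made for brevity.

```python
# Sketch only: OLS forward selection by error-reduction ratio over a fixed
# pool of candidate RBF regressors. OLS-BH additionally tunes each
# candidate's center and diagonal covariance; that tuning is omitted here.
import numpy as np

def rbf(X, center, width):
    """Gaussian regressor with a common scalar width (a simplification)."""
    return np.exp(-np.sum((X - center) ** 2, axis=1) / (2.0 * width ** 2))

def ols_select(X, y, centers, widths, n_terms):
    # One candidate column per (center, width) pair.
    P = np.column_stack([rbf(X, c, w) for c in centers for w in widths])
    selected, basis = [], []
    for _ in range(n_terms):
        best_j, best_err, best_w = -1, -np.inf, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for q in basis:  # orthogonalize against already-chosen terms
                w -= (q @ P[:, j]) / (q @ q) * q
            g = (w @ y) / (w @ w)
            err = g ** 2 * (w @ w) / (y @ y)  # error-reduction ratio
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        selected.append(best_j)
        basis.append(best_w)
    return selected  # indices of chosen regressors, in selection order
```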

    Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization

    Full text link
    As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that image contents can vary significantly across different images, or across different patches within a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; for a given patch to be processed, one set of bases is then adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches, and the AR models that best fit a given patch are adaptively selected to regularize the local image structures. Second, the non-local self-similarity of the image is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception. Comment: 35 pages. This paper is under review in IEEE TI
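
    A minimal sketch of the adaptive sparse-domain selection step, under the assumption that the sub-dictionaries are PCA bases learned from clustered example patches: each incoming patch is routed to the sub-dictionary of its nearest cluster. The cluster count, atom count, patch size, and the use of scikit-learn's KMeans/PCA are illustrative choices, not the paper's implementation; the AR and non-local self-similarity regularizers are not shown.

```python
# Sketch only: adaptive sparse-domain selection via clustered PCA
# sub-dictionaries; regularization terms from the paper are omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_sub_dictionaries(example_patches, n_clusters=8, n_atoms=16):
    """Cluster example patches and fit one PCA basis per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(example_patches)
    dicts = [PCA(n_components=n_atoms).fit(example_patches[km.labels_ == k])
             for k in range(n_clusters)]
    return km, dicts

def select_and_code(patch, km, dicts):
    """Route a patch to its nearest cluster's basis and code it there."""
    k = int(km.predict(patch[None, :])[0])     # adaptive domain selection
    coeffs = dicts[k].transform(patch[None, :])[0]
    return k, coeffs

# Usage with random stand-in data (each row is a flattened 8x8 patch):
patches = np.random.default_rng(0).standard_normal((2000, 64))
km, dicts = learn_sub_dictionaries(patches)
cluster, coeffs = select_and_code(patches[0], km, dicts)
```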