
    Stochastic Low-Rank Kernel Learning for Regression

    We present a novel approach to learning a kernel-based regression function. It is based on the use of conical combinations of data-based parameterized kernels and on a new stochastic convex optimization procedure for which we establish convergence guarantees. The overall learning procedure has the nice properties that (a) the learned conical combination is automatically tailored to the regression task at hand and (b) the updates required by the optimization procedure are quite inexpensive. In order to shed light on the appositeness of our learning strategy, we present empirical results from experiments conducted on various benchmark datasets.
    Comment: International Conference on Machine Learning (ICML'11), Bellevue, Washington, United States (2011)
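    The two ingredients highlighted above can be illustrated with a toy sketch (not the authors' algorithm): a few base RBF kernels are each fit by kernel ridge regression, and a conical (nonnegative) combination of the resulting predictors is then learned by cheap projected stochastic gradient updates. The bandwidths, step sizes, and the simplification of combining fitted predictors rather than the kernels themselves are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # toy 1-D regression data
    X = np.linspace(-3, 3, 80)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

    def rbf(A, B, gamma):
        """Gaussian (RBF) kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # fit one kernel ridge regressor per base kernel (bandwidths are illustrative)
    lam = 1e-1
    preds = []
    for gamma in (0.3, 1.0, 3.0):
        K = rbf(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        preds.append(K @ alpha)
    P = np.column_stack(preds)              # column j: predictions of base model j

    # learn a conical combination by projected stochastic gradient:
    # one sampled point per update, then projection onto the cone mu >= 0
    mu = np.zeros(P.shape[1])
    for t in range(2000):
        i = rng.integers(len(y))
        g = (P[i] @ mu - y[i]) * P[i]       # gradient of 0.5*(P[i]@mu - y[i])^2
        mu = np.maximum(mu - 0.05 * g, 0.0)

    mse = np.mean((P @ mu - y) ** 2)
    ```

    Each update touches a single data point and a length-3 weight vector, which mirrors the "quite inexpensive updates" property claimed in the abstract.
    
    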

    Data Filtering for Cluster Analysis by $\ell_0$-Norm Regularization

    A data filtering method for cluster analysis is proposed, based on minimizing a least squares function with a weighted $\ell_0$-norm penalty. To overcome the discontinuity of the objective function, smooth non-convex functions are employed to approximate the $\ell_0$-norm. The convergence of the global minimum points of the approximating problems towards global minimum points of the original problem is stated. The proposed method also exploits a suitable technique to choose the penalty parameter. Numerical results on synthetic and real data sets are finally provided, showing how some existing clustering methods can benefit from the proposed filtering strategy.
    Comment: Optimization Letters (2017)
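    A minimal sketch of the key idea, assuming the common Gaussian-type surrogate $\phi_\sigma(t) = 1 - \exp(-t^2/(2\sigma^2))$ (the paper's exact smooth approximations and penalty-parameter rule may differ): as $\sigma$ shrinks the surrogate approaches the exact count of nonzeros, and gradient descent on the penalized least-squares objective shrinks small entries toward zero while leaving large ones nearly untouched.

    ```python
    import numpy as np

    def phi(t, sigma):
        """Smooth non-convex surrogate for the l0 penalty; -> 1{t != 0} as sigma -> 0."""
        return 1.0 - np.exp(-t ** 2 / (2.0 * sigma ** 2))

    x = np.array([0.0, 0.0, 1.5, -2.0, 0.0, 0.3])   # three nonzero entries

    # surrogate count approaches the true l0 "norm" (= 3) as sigma shrinks
    approx = [phi(x, s).sum() for s in (1.0, 0.1, 0.01)]

    # filtering flavor: minimize ||z - x||^2 + lam * sum_i phi(z_i) by gradient descent
    # (lam, sigma, and the step size are illustrative choices)
    lam, sigma = 0.5, 0.2
    z = x.copy()
    for _ in range(500):
        grad = 2.0 * (z - x) + lam * (z / sigma ** 2) * np.exp(-z ** 2 / (2.0 * sigma ** 2))
        z -= 0.05 * grad
    ```

    After the loop, the large entries (1.5 and -2.0) are essentially unchanged while the small 0.3 entry is strongly shrunk, which is the filtering effect the penalty is meant to produce.
    
    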

    On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models. The results indicate the feasibility of the neural network approach.
    Keywords: Markov chain Monte Carlo; Bayesian inference; credible sets; importance sampling; instrumental variables; neural networks; reduced rank
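    The role of the candidate density can be sketched with self-normalized importance sampling on a bimodal stand-in for a non-elliptical posterior. Here a single wide Gaussian plays the candidate role that the paper assigns to a neural-network-based density; all densities and constants below are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def norm_pdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    def target(x):
        """Bimodal stand-in for a non-elliptical posterior; true mean = 0.8."""
        return 0.3 * norm_pdf(x, -2.0, 0.5) + 0.7 * norm_pdf(x, 2.0, 1.0)

    n = 200_000
    draws = rng.normal(0.0, 3.0, n)                 # candidate q = N(0, 3^2)
    w = target(draws) / norm_pdf(draws, 0.0, 3.0)   # importance weights p/q

    post_mean = np.sum(w * draws) / np.sum(w)       # self-normalized IS estimate
    ess = np.sum(w) ** 2 / np.sum(w ** 2)           # effective sample size
    ```

    A candidate that is 'close' to the target keeps the weights well behaved and the effective sample size high; a poorly matched candidate would concentrate the weight on a few draws and degrade both speed and accuracy, which is exactly the failure mode the neural-network construction is meant to avoid.
    
    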

    Productivity Dynamics and Structural Change in the U.S. Manufacturing Sector

    The paper investigates structural change among the four-digit (SIC) industries of the U.S. manufacturing sector during 1958-96 within a distribution dynamics framework. Focus is on the transition density of the Markov process that characterizes the value added shares of the industries. This transition density is estimated nonparametrically as well as by maximum likelihood, in which case the functional form of the density is derived from a search theoretic model. The nonparametric and the maximum likelihood fits show striking similarities. The relation of structural change to a relative measure of total factor productivity change is tested by an application of quantile regression and is found to be significantly positive throughout.
    Keywords: structural change, productivity, manufacturing, quantile regression
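    The quantile-regression step can be sketched on synthetic data, assuming a linear conditional quantile and minimizing the check (pinball) loss by full-batch subgradient descent. The data-generating parameters, the 0.9 quantile level, and the step size are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    x = rng.uniform(0.0, 1.0, n)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true 0.9-quantile line: ~1.64 + 2x

    tau = 0.9                                      # conditional quantile level
    a, b = 0.0, 0.0                                # intercept, slope
    lr = 0.5
    for _ in range(5000):
        r = y - (a + b * x)
        # subgradient of the check loss rho_tau(r) with respect to the prediction
        g = np.where(r > 0, -tau, 1.0 - tau)
        a -= lr * np.mean(g)
        b -= lr * np.mean(g * x)
    ```

    At the minimizer roughly a fraction tau of the observations lies below the fitted line, so the slope b measures how the upper tail of the response shifts with the regressor; in the paper the analogous slope on productivity change is the quantity found to be significantly positive.
    
    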