Stochastic Low-Rank Kernel Learning for Regression
We present a novel approach to learning a kernel-based regression function. It
is based on the use of conical combinations of data-based parameterized kernels
and on a new stochastic convex optimization procedure for which we establish
convergence guarantees. The overall learning procedure has the nice properties
that a) the learned conical combination is automatically designed to perform
the regression task at hand and b) the updates required by the optimization
procedure are quite inexpensive. In order to shed light on the appositeness of
our learning strategy, we present empirical results from experiments conducted
on various benchmark datasets.

Comment: International Conference on Machine Learning (ICML'11), Bellevue
(Washington), United States (2011)
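The paper's contribution is learning the conical weights with a stochastic procedure; as a rough illustration of the underlying model class only, here is a minimal kernel ridge regressor built on a fixed nonnegative (conical) combination of RBF kernels. The widths `gammas` and weights `mu` are made-up values for the sketch, not weights learned as in the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conical_kernel(X, Y, gammas, mu):
    """Nonnegative (conical) combination of base RBF kernels."""
    return sum(m * rbf_kernel(X, Y, g) for g, m in zip(gammas, mu))

def kernel_ridge_fit(X, y, gammas, mu, lam=1e-3):
    """Solve (K + lam I) alpha = y for the dual coefficients."""
    K = conical_kernel(X, X, gammas, mu)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, X_test, gammas, mu, alpha):
    """Predict at X_test from the fitted dual coefficients."""
    return conical_kernel(X_test, X_train, gammas, mu) @ alpha
```

A conical (rather than arbitrary linear) combination keeps the combined kernel positive semidefinite, which is what makes the ridge system above well-posed.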
Data Filtering for Cluster Analysis by ℓ0-Norm Regularization
A data filtering method for cluster analysis is proposed, based on minimizing
a least squares function with a weighted ℓ0-norm penalty. To overcome the
discontinuity of the objective function, smooth non-convex functions are
employed to approximate the ℓ0-norm. The convergence of the global
minimum points of the approximating problems towards the global minimum points of
the original problem is established. The proposed method also exploits a suitable
technique for choosing the penalty parameter. Numerical results on synthetic and
real data sets are finally provided, showing how some existing clustering
methods can take advantage of the proposed filtering strategy.

Comment: Optimization Letters (2017)
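To make the approximation idea concrete: the ℓ0 "norm" counts nonzero entries and is discontinuous at zero, so it is replaced by a smooth non-convex surrogate that tends to the count as a scale parameter shrinks. The surrogate below (a Gaussian-type bump) and the choice of penalizing consecutive differences are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def l0_surrogate(z, sigma=0.1):
    """Smooth non-convex approximation of card(z) (the l0 'norm'):
    each coordinate contributes ~1 when |z_i| >> sigma and ~0 near zero,
    removing the discontinuity of the exact count at the origin."""
    return np.sum(1.0 - np.exp(-(z ** 2) / (2.0 * sigma ** 2)))

def filtering_objective(x, data, lam, sigma=0.1):
    """Least-squares data fit plus a weighted l0-type penalty.
    Penalizing differences of consecutive entries is one illustrative
    choice of what to sparsify, assumed here for the sketch."""
    return 0.5 * np.sum((x - data) ** 2) + lam * l0_surrogate(np.diff(x), sigma)
```

As sigma decreases, the surrogate approaches the exact count, which is the sense in which minimizers of the approximating problems can converge to minimizers of the original problem.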
On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks
Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models. The results indicate the feasibility of the neural network approach.

Keywords: Markov chain Monte Carlo; Bayesian inference; credible sets; importance sampling; instrumental variables; neural networks; reduced rank
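The importance-sampling mechanics the abstract relies on can be sketched independently of the candidate's form. Below, a plain Gaussian candidate stands in for the paper's neural-network candidate density, purely to show self-normalized importance sampling; both log densities may be unnormalized:

```python
import numpy as np

def snis_mean(log_target, draw, log_candidate, n=20000, seed=0):
    """Self-normalized importance sampling estimate of the target mean.
    draw(rng, n) samples from the candidate; log_target and
    log_candidate may be unnormalized log densities."""
    rng = np.random.default_rng(seed)
    x = draw(rng, n)
    logw = log_target(x) - log_candidate(x)
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()                    # self-normalize the weights
    return float(np.sum(w * x))
```

The quality of the estimate hinges on the candidate covering the target, which is exactly why the paper wants a flexible, easy-to-sample candidate 'close' to a non-elliptical posterior.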
Semiparametric estimation for a class of time-inhomogeneous diffusion processes
Copyright @ 2009 Institute of Statistical Science, Academia Sinica

We develop two likelihood-based approaches to semiparametrically estimate a class of time-inhomogeneous diffusion processes: log penalized splines (P-splines) and the local log-linear method. Positive volatility is naturally embedded, a positivity that is not guaranteed in most existing diffusion models. We investigate different smoothing parameter selections, using separate bandwidths for drift and volatility estimation. In the log P-splines approach, different smoothness for different time-varying coefficients is feasible by assigning different penalty parameters. We also provide theorems for both approaches and report statistical inference results. Finally, we present a case study using the weekly three-month Treasury bill data from 1954 to 2004. We find that the log P-splines approach seems to capture the volatility dip in the mid-1960s best. We also present an application to calculate a financial market risk measure called Value at Risk (VaR) using statistical estimates from log P-splines.
Productivity Dynamics and Structural Change in the U.S. Manufacturing Sector
The paper investigates structural change among the four-digit (SIC) industries of the U.S. manufacturing sector during 1958-96 within a distribution dynamics framework. Focus is on the transition density of the Markov process that characterizes the value added shares of the industries. This transition density is estimated nonparametrically as well as by maximum likelihood, in which case the functional form of the density is derived from a search theoretic model. The nonparametric and the maximum likelihood fits show striking similarities. The relation of structural change to a relative measure of total factor productivity change is tested by an application of quantile regression and is found to be significantly positive throughout.

Keywords: structural change, productivity, manufacturing, quantile regression
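As a generic illustration of the quantile-regression tool used in that test (not the paper's estimation or data), here is a minimal linear quantile regression fitted by subgradient descent on the pinball (check) loss, with synthetic inputs:

```python
import numpy as np

def quantile_regression(X, y, tau, lr=0.5, iters=5000):
    """Linear quantile regression for quantile level tau, fitted by
    subgradient descent on the pinball loss rho_tau(r) = r*(tau - 1{r<0})."""
    Xb = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    beta = np.zeros(Xb.shape[1])
    for t in range(iters):
        r = y - Xb @ beta
        # subgradient of the mean pinball loss with respect to beta
        g = -Xb.T @ np.where(r >= 0, tau, tau - 1.0) / len(y)
        beta -= (lr / np.sqrt(t + 1.0)) * g      # diminishing step size
    return beta
```

Setting tau = 0.5 recovers median regression; other tau values trace out how the conditional distribution, not just its mean, shifts with the regressor, which is what makes the tool natural for distribution-dynamics questions.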