
    Stochastic Low-Rank Kernel Learning for Regression

    We present a novel approach to learning a kernel-based regression function. It is based on the use of conical combinations of data-based parameterized kernels and on a new stochastic convex optimization procedure for which we establish convergence guarantees. The overall learning procedure has the nice properties that a) the learned conical combination is automatically tailored to the regression task at hand and b) the updates entailed by the optimization procedure are quite inexpensive. To shed light on the appositeness of our learning strategy, we present empirical results from experiments conducted on various benchmark datasets.
    Comment: International Conference on Machine Learning (ICML'11), Bellevue (Washington), United States (2011)
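    The core idea of the abstract, fitting a regressor with a conical (nonnegative-weighted) combination of parameterized kernels, can be illustrated with a small sketch. This is not the paper's stochastic low-rank procedure: the RBF bandwidths, the fixed weights `mu`, and the plain kernel ridge solve below are all illustrative assumptions.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma):
        # Gaussian (RBF) kernel on squared Euclidean distances
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))                     # toy data (assumed)
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=50)

    gammas = [0.1, 1.0, 10.0]                        # parameterized kernel family (assumed)
    mu = np.array([0.5, 0.3, 0.2])                   # conical weights: nonnegative, here fixed by hand

    # Conical combination of PSD kernels is itself PSD
    K = sum(m * rbf_kernel(X, X, g) for m, g in zip(mu, gammas))

    # Plain kernel ridge regression with the combined kernel
    alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)
    pred = K @ alpha
    ```

    In the paper, the weights `mu` would themselves be learned by the stochastic convex optimization procedure rather than fixed a priori.
    
    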

    Comparative Studies of the Noise Robustness of the 'Full Combination' Approach and Its Approximation

    Recently, a new method for sub-band based ASR has been introduced which approximates the ideal 'full combination' (FC) approach. We show how this ideal approach can be set up as a nonlinear combination function and how it can be approximated by estimating the posteriors for each sub-band combination as a function of the posteriors from the individual sub-bands alone [Hagen98:SBS, Morris99:TFC]. Sub-band based ASR is especially promising in stationary and time-varying band-limited noise, as long as the remaining clean sub-bands supply sufficiently reliable information. For the first time, we present experiments comparing the FC approach and its approximation under these noise conditions. They were carried out on PLP [Hermansky90:PLP] and J-RASTA-PLP [Hermansky94:RPO] features in the framework of hybrid HMM/ANN systems.
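    The approximation described in the abstract, estimating the posterior for each sub-band combination from the individual sub-bands' posteriors alone, can be sketched roughly as follows. The uniform subset weights and the normalized-product rule for a subset's posterior are simplifying assumptions, not the cited papers' exact formulation (which uses data-driven reliability weights).

    ```python
    import itertools
    import numpy as np

    def fc_posterior(subband_posteriors):
        """Approximate full-combination posterior: a weighted sum over all
        non-empty sub-band subsets, where each subset's posterior is
        approximated by the normalized product of its members' posteriors.
        Uniform subset weights stand in for reliability weights (assumed)."""
        n = len(subband_posteriors)
        w = 1.0 / (2 ** n - 1)                # uniform weight per subset (assumed)
        total = np.zeros_like(subband_posteriors[0])
        for r in range(1, n + 1):
            for subset in itertools.combinations(range(n), r):
                p = np.prod([subband_posteriors[i] for i in subset], axis=0)
                p = p / p.sum()               # renormalize over classes
                total += w * p
        return total / total.sum()

    # Two sub-bands, three phone classes (toy numbers)
    pa = np.array([0.7, 0.2, 0.1])  # posteriors from a clean sub-band
    pb = np.array([0.1, 0.1, 0.8])  # a noisy sub-band that disagrees
    combined = fc_posterior([pa, pb])
    ```

    Band-limited noise corrupts only some subsets; in the real approach, down-weighting the combinations that contain noisy bands is what makes the method robust.
    
    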