697 research outputs found
High-order convergent deferred correction schemes based on parameterized Runge-Kutta-Nyström methods for second-order boundary value problems
Abstract: Iterated deferred correction is a widely used approach to the numerical solution of first-order systems of nonlinear two-point boundary value problems. Normally, the orders of accuracy of the various methods used in a deferred correction scheme differ by 2, so each application of deferred correction increases the order of the overall scheme by at most 2. In [16], however, it has been shown that there exist schemes based on parameterized Runge–Kutta methods which allow a larger increase of the overall order. A first example of such a high-order convergent scheme, allowing an increase of 4 orders per deferred correction, was based on two mono-implicit Runge–Kutta methods. In the present paper, we investigate the possibility of high-order convergence of schemes for the numerical solution of second-order nonlinear two-point boundary value problems not containing the first derivative. Two examples of such high-order convergent schemes, based on parameterized Runge–Kutta–Nyström methods of orders 4 and 8, are analysed and discussed.
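As a concrete illustration of the classical deferred-correction mechanism (not of the paper's parameterized Runge–Kutta–Nyström schemes), the sketch below solves a model problem y'' = f(x) with a second-order central difference, estimates the leading truncation error term (h²/12)·y'''' from the data, and re-solves with a corrected right-hand side, gaining the usual two orders per correction. The test problem and grid sizes are illustrative assumptions.

```python
import numpy as np

def solve_tridiag(n, h, rhs, ya, yb):
    """Solve (Y[i-1] - 2*Y[i] + Y[i+1]) / h^2 = rhs[i] on the interior grid."""
    A = np.zeros((n - 1, n - 1))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)        # subdiagonal
    np.fill_diagonal(A[:, 1:], 1.0)     # superdiagonal
    b = h * h * rhs.copy()
    b[0] -= ya                           # fold boundary values into the RHS
    b[-1] -= yb
    return np.linalg.solve(A, b)

# Model problem: y'' = -pi^2 sin(pi x), y(0) = y(1) = 0, exact y = sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
exact = lambda x: np.sin(np.pi * x)

for n in (10, 20, 40):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    xi = x[1:-1]

    # Step 1: basic second-order solve.
    Y = solve_tridiag(n, h, f(xi), 0.0, 0.0)

    # Step 2: estimate the leading truncation error tau_i = (h^2/12) y''''(x_i).
    # Here y'''' = f'' (since y'' = f and f does not depend on y), so we can
    # difference f directly; in the nonlinear case the defect would be
    # estimated from the computed solution Y instead.
    f4 = (f(xi - h) - 2.0 * f(xi) + f(xi + h)) / h**2
    tau = h**2 / 12.0 * f4

    # Step 3: re-solve with the corrected right-hand side -> fourth order.
    Z = solve_tridiag(n, h, f(xi) + tau, 0.0, 0.0)

    err2 = np.max(np.abs(Y - exact(xi)))
    err4 = np.max(np.abs(Z - exact(xi)))
    print(f"n={n:3d}  base error={err2:.2e}  corrected error={err4:.2e}")
```

Halving h should reduce the base error by roughly a factor of 4 and the corrected error by roughly a factor of 16, which is the order-raising effect that the paper's parameterized schemes push further (4 orders per correction instead of 2).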
Learning relevant eye movement feature spaces across users
In this paper we predict the relevance of images based on a low-dimensional feature space found using several users' eye movements. Each user is given an image-based search task, during which their eye movements are recorded with a Tobii eye tracker. The users also provide explicit feedback regarding the relevance of images. We demonstrate that by using a greedy Nyström algorithm on the eye movement features of different users, we can find a suitable low-dimensional feature space for learning. We validate the suitability of this feature space by projecting the eye movement features of a new user into this space, training an online learning algorithm using these features, and showing that the number of mistakes (regret over time) made in predicting relevant images is lower than when using the original eye movement features. We also plot recall-precision and ROC curves, and use a sign test to verify the statistical significance of our results.
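The abstract does not spell out the greedy criterion used. A plausible minimal sketch of a greedy Nyström feature map, using pivoted incomplete-Cholesky-style landmark selection (the RBF kernel, the value of gamma and the selection rule are assumptions for illustration, not the paper's method), could look like this:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_nystrom(X, m, gamma=1.0):
    """Greedily pick m landmark rows of X (largest remaining diagonal of the
    kernel residual, i.e. pivoted incomplete Cholesky) and return the landmark
    indices plus a map to m-dimensional Nystrom features."""
    n = X.shape[0]
    diag = np.ones(n)                # k(x, x) = 1 for the RBF kernel
    G = np.zeros((n, m))             # partial Cholesky factor, K ~ G @ G.T
    landmarks = []
    for j in range(m):
        i = int(np.argmax(diag))     # greedy pivot: largest residual variance
        landmarks.append(i)
        k_i = rbf_kernel(X, X[i:i + 1], gamma).ravel()
        r = k_i - G[:, :j] @ G[i, :j]          # residual kernel column
        G[:, j] = r / np.sqrt(max(r[i], 1e-12))
        diag = np.clip(diag - G[:, j] ** 2, 0.0, None)
    # Feature map for new points: phi(z) = K(z, landmarks) @ Kmm^{-1/2}
    L = X[landmarks]
    Kmm = rbf_kernel(L, L, gamma)
    w, U = np.linalg.eigh(Kmm)
    Kmm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ U.T
    def features(Z):
        return rbf_kernel(Z, L, gamma) @ Kmm_inv_sqrt
    return landmarks, features

# Toy usage with stand-in data for eye-movement feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
landmarks, features = greedy_nystrom(X, m=10)
Phi = features(X)                    # 10-dimensional features for learning
```

A new user's eye-movement features could then be passed through features(...) and fed to any online learner (for example a perceptron), counting prediction mistakes over time as in the paper's regret comparison.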
Spectral Dimensionality Reduction
In this paper, we study and put under a common framework a number of non-linear dimensionality reduction methods, such as Locally Linear Embedding, Isomap, Laplacian Eigenmaps and kernel PCA, which are based on performing an eigen-decomposition (hence the name 'spectral'). That framework also includes classical methods such as PCA and metric multidimensional scaling (MDS), as well as the data transformation step used in spectral clustering. We show that in all of these cases the learning algorithm estimates the principal eigenfunctions of an operator that depends on the unknown data density and on a kernel that is not necessarily positive semi-definite. This helps to generalize some of these algorithms so as to predict an embedding for out-of-sample examples without having to retrain the model. It also makes more transparent what these algorithms are minimizing on the empirical data and gives a corresponding notion of generalization error. Keywords: non-parametric models, non-linear dimensionality reduction, kernel models.
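For the kernel PCA instance of this framework, a minimal sketch of the out-of-sample extension (embedding a new point through the estimated eigenfunctions, i.e. a Nyström-type formula) might look as follows; the RBF kernel, its width and the dimensionalities are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca_fit(X, d, gamma=0.5):
    """Kernel PCA: eigendecompose the doubly-centred kernel matrix."""
    K = rbf(X, X, gamma)
    col_mean = K.mean(axis=0)
    grand_mean = K.mean()
    Kc = K - col_mean[None, :] - col_mean[:, None] + grand_mean
    lam, V = np.linalg.eigh(Kc)
    lam = np.clip(lam[::-1][:d], 1e-12, None)   # top-d eigenvalues
    V = V[:, ::-1][:, :d]                        # matching eigenvectors
    return dict(X=X, gamma=gamma, col_mean=col_mean,
                grand_mean=grand_mean, lam=lam, V=V)

def kernel_pca_embed(model, Z):
    """Out-of-sample embedding: y_k(z) = sum_i V[i,k] * Kc(z, x_i) / sqrt(lam_k)."""
    Kz = rbf(Z, model["X"], model["gamma"])
    # centre the out-of-sample kernel rows consistently with training
    Kzc = (Kz - Kz.mean(axis=1, keepdims=True)
              - model["col_mean"][None, :] + model["grand_mean"])
    return Kzc @ model["V"] / np.sqrt(model["lam"])

# Example: embed training data and held-out points into 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
model = kernel_pca_fit(X, d=2)
Y_train = kernel_pca_embed(model, X)                  # reproduces in-sample coordinates
Y_new = kernel_pca_embed(model, rng.normal(size=(10, 5)))
```

Passing the training points back through kernel_pca_embed recovers the usual in-sample kernel PCA coordinates (sqrt(lam_k) times the eigenvector entries), which is the consistency property that makes the out-of-sample formula a natural extension of the eigenfunction estimate.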
- …