
    Constructive Approximation and Learning by Greedy Algorithms

    This thesis develops several kernel-based greedy algorithms for different machine learning problems and analyzes their theoretical and empirical properties. Greedy approaches have been used extensively in the past for tackling problems in combinatorial optimization where finding even a feasible solution can be computationally hard (i.e., not solvable in polynomial time). A key feature of greedy algorithms is that a solution is constructed recursively from the smallest constituent parts. In each step of the constructive process, a component is added to the partial solution from the previous step, thereby reducing the size of the remaining optimization problem. The selected components are given by optimization problems that are simpler and easier to solve than the original problem. As such schemes are typically fast at constructing a solution, they can be very effective on complex optimization problems where finding an optimal or good solution has a high computational cost. Moreover, greedy solutions are rather intuitive, and the schemes themselves are simple to design and easy to implement. There is a large class of problems for which greedy schemes generate an optimal solution or a good approximation of the optimum. In the first part of the thesis, we develop two deterministic greedy algorithms for optimization problems in which a solution is given by a set of functions mapping an instance space to the reals. The first of the two approaches facilitates data understanding through interactive visualization by providing a means for experts to incorporate their domain knowledge into otherwise static kernel principal component analysis. This is achieved by greedily constructing embedding directions that maximize the variance at data points left unexplained by the previously constructed embedding directions, while adhering to the specified domain-knowledge constraints. The second deterministic greedy approach is a supervised feature construction method capable of addressing the problem of kernel choice. The goal of the approach is to construct a feature representation for which a set of linear hypotheses is of sufficient capacity: large enough to contain a satisfactory solution to the considered problem and small enough to allow good generalization from a small number of training examples. The approach mimics functional gradient descent and constructs features by fitting squared-error residuals. We show that the constructive process is consistent and provide conditions under which it converges to the optimal solution. In the second part of the thesis, we investigate two problems for which deterministic greedy schemes can fail to find an optimal solution or a good approximation of the optimum. This happens when a sequence of choices accounts only for the immediate reward without considering the consequences for future decisions. To address this shortcoming of deterministic greedy schemes, we propose two efficient randomized greedy algorithms that are guaranteed to find effective solutions to the corresponding problems. In the first of the two approaches, we provide a means to scale kernel methods to problems with millions of instances. An approach frequently used in practice for this type of problem is the Nyström method for low-rank approximation of kernel matrices. A crucial step in this method is the choice of landmarks, which determines the quality of the approximation. We tackle this problem with a randomized greedy algorithm based on the K-means++ cluster seeding scheme and provide a theoretical and empirical study of its effectiveness. In the second problem, for which a deterministic strategy can also fail to find a good solution, the goal is to find a set of objects from a structured space that are likely to exhibit an unknown target property. This discrete optimization problem is of significant interest to cyclic discovery processes such as de novo drug design. We propose to address it with an adaptive Metropolis–Hastings approach that samples candidates from the posterior distribution of structures conditioned on having the target property. The proposed constructive scheme defines a consistent random process, and our empirical evaluation demonstrates its effectiveness across several application domains.
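
To make the Nyström step concrete, below is a minimal sketch of K-means++-style landmark sampling followed by the standard Nyström factorization, assuming a Gaussian kernel. The function names and parameters are illustrative; this is a generic reference sketch, not the thesis's exact algorithm or its theoretical guarantees.

```python
import numpy as np

def kmeans_pp_landmarks(X, m, rng=None):
    """Sample m landmark indices with K-means++-style seeding: each new
    landmark is drawn with probability proportional to its squared
    distance to the closest landmark chosen so far."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    landmarks = [rng.integers(n)]
    d2 = np.sum((X - X[landmarks[0]]) ** 2, axis=1)
    for _ in range(1, m):
        idx = rng.choice(n, p=d2 / d2.sum())
        landmarks.append(idx)
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(landmarks)

def nystrom_approximation(X, landmarks, gamma=1.0):
    """Rank-m Nystrom factor L such that K ~ L @ L.T for a Gaussian kernel."""
    def gauss(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    C = gauss(X, X[landmarks])          # n x m cross-kernel
    W = C[landmarks]                    # m x m kernel between landmarks
    vals, vecs = np.linalg.eigh(W)      # symmetric pseudo-inverse square root
    vals = np.clip(vals, 1e-12, None)
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    lm = kmeans_pp_landmarks(X, m=30, rng=0)
    L = nystrom_approximation(X, lm, gamma=0.5)
    print(L.shape)  # (500, 30)
```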

    A machine learning approach to the unsupervised segmentation of mitochondria in subcellular electron microscopy data

    Recent advances in cellular and subcellular microscopy have demonstrated its potential for unravelling the mechanisms of various diseases at the molecular level. The biggest challenge in both human- and computer-based visual analysis of micrographs is the variety of nanostructures and mitochondrial morphologies. The state of the art is, however, dominated by supervised manual data annotation, and early attempts to automate the segmentation process were based on supervised machine learning techniques, which require large datasets for training. Given a minimal number of training sequences, or none at all, unsupervised machine learning formulations such as spectral dimensionality reduction are known to be superior at detecting salient image structures. This thesis presents three major contributions developed around the spectral clustering framework, which has been shown to capture perceptual organization features. First, we approach the problem of mitochondria localization. We propose a novel grouping method for the extracted line segments that describe the normal mitochondrial morphology. Experimental findings show that the obtained clusters successfully model the inner mitochondrial membrane folding and can therefore be used as markers for subsequent segmentation approaches. Second, we develop an unsupervised mitochondria segmentation framework. This method mimics the evolved ability of human vision to extrapolate salient membrane structures in a micrograph. Furthermore, we design robust non-parametric similarity models based on the Gestalt laws of visual segregation. Experiments demonstrate that such models automatically adapt to the statistical structure of the biological domain and deliver optimal performance in pixel classification tasks under a wide variety of distributional assumptions. The last major contribution addresses the computational complexity of spectral clustering. Here, we introduce a new anticorrelation-based spectral clustering formulation with the objective of improving both the speed and the quality of segmentation. The experimental findings show that our dimensionality reduction algorithm is applicable to very large-scale problems as well as to asymmetric, dense, and non-Euclidean datasets.
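
For reference, the generic spectral clustering pipeline that the thesis builds on can be sketched as follows: Gaussian affinities, the symmetric normalized Laplacian, and k-means on the spectral embedding. This is the textbook formulation with illustrative parameters, not the grouping, Gestalt-based, or anticorrelation-based variants developed in the thesis.

```python
import numpy as np

def spectral_clustering(X, k, gamma=1.0, n_iter=50, rng=0):
    """Textbook normalized spectral clustering: Gaussian affinity ->
    symmetric normalized Laplacian -> k smallest eigenvectors ->
    k-means on the row-normalized spectral embedding."""
    rng = np.random.default_rng(rng)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq)
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    D_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(X)) - D_inv_sqrt[:, None] * W * D_inv_sqrt[None, :]
    # spectral embedding: eigenvectors of the k smallest eigenvalues
    vals, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # plain k-means (Lloyd iterations) on the embedded points
    centers = U[rng.choice(len(U), k, replace=False)]
    for _ in range(n_iter):
        labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(0)
    return labels
```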

    Scalable large margin pairwise learning algorithms

    Classification is a major task in machine learning and data mining applications. Many of these applications involve building a classification model from a large volume of imbalanced data. In such an imbalanced learning scenario, the area under the ROC curve (AUC) has proven to be a reliable performance measure for evaluating a classifier. Therefore, it is desirable to develop scalable learning algorithms that maximize the AUC metric directly. Kernelized AUC maximization machines have established superior generalization ability compared to linear AUC machines; however, their computational cost hinders their scalability. To address this problem, we propose a large-scale nonlinear AUC maximization algorithm that learns a batch linear classifier on an approximate feature space computed via the k-means Nyström method. The proposed algorithm is shown empirically to achieve AUC classification performance comparable to, or even better than, that of kernel AUC machines, while training several orders of magnitude faster. However, the computational complexity of the linear batch model compromises its scalability when training on sizable datasets. Hence, we develop second-order online AUC maximization algorithms based on a confidence-weighted model. The proposed algorithms exploit second-order information to improve the convergence rate and implement a fixed-size buffer to address the multivariate nature of the AUC objective function. We also extend our online linear algorithms to use an approximate feature map constructed with random Fourier features in an online setting. The results show that our proposed algorithms outperform, or are at least comparable to, the competing online AUC maximization methods. Despite their scalability, we notice that online first- and second-order AUC maximization methods are prone to suboptimal convergence, which can be attributed to the limitation of the hypothesis space. A potential improvement can be attained by learning stochastic online variants; however, vanilla stochastic methods also suffer from slow convergence because of the high variance introduced by the stochastic process. We address the problem of slow convergence by developing a stochastic AUC maximization algorithm with fast convergence. The proposed stochastic algorithm is accelerated using a unique combination of scheduled regularization updates and scheduled averaging. The experimental results show that the proposed algorithm performs better than state-of-the-art online and stochastic AUC maximization methods in terms of AUC classification accuracy. Moreover, we develop a proximal variant of our accelerated stochastic AUC maximization algorithm. The proposed method applies the proximal operator to the hinge loss function and therefore evaluates the gradient of the loss function at the approximated weight vector. Experiments on several benchmark datasets show that our proximal algorithm converges to the optimal solution faster than the previous AUC maximization algorithms.
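
As a concrete reference for the ingredients named above, here is a minimal sketch combining random Fourier features with stochastic gradient descent on the pairwise hinge surrogate of the AUC, plus the empirical AUC itself (the fraction of correctly ordered positive/negative pairs). It is a generic baseline under an assumed RBF kernel, not the buffered, confidence-weighted, or accelerated algorithms developed in the thesis; names and parameters are illustrative.

```python
import numpy as np

def rff_map(X, n_features=200, gamma=1.0, rng=0):
    """Random Fourier features approximating an RBF kernel exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(rng)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def auc_sgd(Z, y, epochs=20, lr=0.1, lam=1e-4, rng=0):
    """SGD on the pairwise hinge surrogate of the AUC:
    loss = max(0, 1 - w.(z_pos - z_neg)) over positive/negative pairs."""
    rng = np.random.default_rng(rng)
    pos, neg = Z[y == 1], Z[y != 1]
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        for _ in range(len(Z)):
            zp = pos[rng.integers(len(pos))]
            zn = neg[rng.integers(len(neg))]
            margin = w @ (zp - zn)
            grad = lam * w - (zp - zn) * (margin < 1.0)
            w -= lr * grad
    return w

def auc_score(scores, y):
    """Empirical AUC: fraction of correctly ordered positive/negative pairs."""
    s_pos, s_neg = scores[y == 1], scores[y != 1]
    diff = s_pos[:, None] - s_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```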

    Large-scale Machine Learning in High-dimensional Datasets


    Advances in knowledge discovery and data mining Part II

    19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II

    Diffusion, methods and applications

    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: June 2014. Big Data, an important problem nowadays, can be understood in terms of a very large number of patterns, a very large pattern dimension or, often, both. In this thesis, we will concentrate on the high-dimensionality issue, applying manifold learning techniques for visualizing and analyzing such patterns. The core technique will be Diffusion Maps (DM) and its Anisotropic Diffusion (AD) version, introduced by Ronald R. Coifman and his school at Yale University, of which we will give a complete, systematic, compact and self-contained treatment. This will be done after a brief survey of previous manifold learning methods. The algorithmic contributions of the thesis are centered on two computational challenges of diffusion methods: the potentially high cost of the similarity-matrix eigenanalysis needed to define the diffusion embedding coordinates, and the difficulty of computing this embedding over new patterns not available for the initial eigenanalysis. With respect to the first issue, we will show how the AD setup can be used to skip the eigenanalysis when looking for local models. In this case, local patterns will be selected through a k-Nearest Neighbors search using a properly defined local Mahalanobis distance that enables neighbors to be found over the latent-variable space underlying the AD model while working directly with the observable patterns, thus avoiding the potentially costly similarity-matrix eigenanalysis. The second proposed algorithm, which we will call Auto-adaptative Laplacian Pyramids (ALP), focuses on the out-of-sample embedding extension and consists of a modification of the classical Laplacian Pyramids (LP) method. In this new algorithm, the LP iterations are combined with an estimate of the leave-one-out cross-validation error, which makes it possible to define, directly during training, a criterion to estimate the optimal stopping point of this iterative algorithm. This thesis also presents several application contributions to important problems in renewable energy and medical imaging. More precisely, we will show that DM is a good method for dimensionality reduction of meteorological weather predictions, providing tools to visualize and describe these data, as well as to cluster them in order to define local models. In turn, we will apply our AD-based localized search method first to find the location in the human body of CT scan images and then to predict wind energy ramps on both individual farms and over the whole of Spain. We will see that, in both cases, our results improve on the current state-of-the-art methods. Finally, we will compare our ALP proposal with the well-known Nyström method as well as with LP on two high-dimensional problems: the time compression of meteorological data and the analysis of meteorological variables relevant to daily radiation forecasts. In both cases we will show that ALP compares favorably with the other approaches for out-of-sample extension problems.
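
As background for the DM/AD machinery described above, here is a minimal diffusion maps sketch in its standard formulation: Gaussian kernel, density normalization, Markov transition matrix, and eigendecomposition via the symmetric conjugate. It is a plain reference implementation with illustrative parameters, not the thesis's AD-based local models or the ALP out-of-sample extension.

```python
import numpy as np

def diffusion_maps(X, n_components=2, epsilon=1.0, t=1, alpha=0.0):
    """Standard diffusion maps embedding.
    alpha=0 gives the classical normalization; alpha=1 approximates the
    Laplace-Beltrami operator (density-invariant normalization)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / epsilon)
    # density normalization
    q = K.sum(1)
    K_alpha = K / (np.outer(q, q) ** alpha)
    # Markov transition matrix P = D^{-1} K_alpha, diagonalized through
    # its symmetric conjugate S = D^{-1/2} K_alpha D^{-1/2}
    d = K_alpha.sum(1)
    S = K_alpha / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # right eigenvectors of P, normalized so the trivial one is constant
    psi = vecs / vecs[:, [0]]
    # drop the trivial eigenvector, scale by eigenvalue^t (diffusion time t)
    return psi[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)
```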

    Manifold Learning in Atomistic Simulations: A Conceptual Review

    Analyzing large volumes of high-dimensional data requires dimensionality reduction: finding meaningful low-dimensional structures hidden in high-dimensional observations. Such a practice is needed in atomistic simulations of complex systems, where thousands of degrees of freedom are sampled. The abundance of such data makes gaining insight into a specific physical problem strenuous. Our primary aim in this review is to focus on unsupervised machine learning methods that can be used on simulation data to find a low-dimensional manifold providing a collective and informative characterization of the studied process. Such manifolds can be used for sampling long-timescale processes and free-energy estimation. We describe methods that can work on datasets from standard and enhanced sampling atomistic simulations. Unlike recent reviews on manifold learning for atomistic simulations, we consider only methods that construct low-dimensional manifolds based on Markov transition probabilities between high-dimensional samples. We discuss these techniques from a conceptual point of view, including their underlying theoretical frameworks and possible limitations.
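
The free-energy use mentioned above rests on the standard relation F(s) = -k_B T ln p(s) along a low-dimensional collective variable. Below is a minimal, purely illustrative histogram estimator under the assumption of unbiased (non-enhanced-sampling) samples of a one-dimensional collective variable; it is not taken from the review.

```python
import numpy as np

def free_energy_profile(cv_samples, n_bins=50, kT=2.494):
    """Histogram-based free-energy estimate along a 1D collective variable:
    F(s) = -kT * ln p(s), shifted so the minimum is zero.
    kT defaults to ~2.494 kJ/mol, i.e. k_B * 300 K."""
    hist, edges = np.histogram(cv_samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(divide="ignore"):
        F = -kT * np.log(hist)          # empty bins become +inf
    F -= F[np.isfinite(F)].min()
    return centers, F
```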

    Modèles structurés pour la reconnaissance d'actions dans des vidéos réalistes

    This dissertation introduces novel models to recognize broad action categories, like "opening a door" and "running", in real-world video data such as movies and internet videos. In particular, we investigate how an action can be decomposed, what its discriminative structure is, and how to use this information to accurately represent video content. The main challenge we address lies in how to build models of actions that are simultaneously information-rich, in order to correctly differentiate between action categories, and robust to the large variations in actors, actions, and videos present in real-world data. We design three robust models capturing both the content of and the relations between action parts. Our approach consists in structuring collections of robust local features, such as spatio-temporal interest points and short-term point trajectories. We also propose efficient kernels to compare our structured action representations. Although they share the same principles, our methods differ in the type of problem they address and the structure information they rely on. First, we propose to model a simple action as a sequence of meaningful atomic temporal parts. We show how to learn a flexible model of the temporal structure and how to use it for the problem of action localization in long unsegmented videos. Extending our ideas to the spatio-temporal structure of more complex activities, we then describe a large-scale unsupervised learning algorithm used to hierarchically decompose the motion content of videos. We leverage the resulting tree-structured decompositions to build hierarchical action models and provide an action kernel between unordered binary trees of arbitrary sizes. Instead of structuring action models, we finally explore another route: directly comparing models of the structure. We view short-duration actions as high-dimensional time series and investigate how an action's temporal dynamics can complement the state-of-the-art unstructured models for action classification. We propose an efficient kernel to compare the temporal dependencies between two actions and show that it provides useful complementary information to the traditional bag-of-features approach. In all three cases, we conducted thorough experiments on some of the most challenging benchmarks used by the action recognition community and show that each of our methods significantly outperforms the related state of the art, thus highlighting the importance of structure information for accurate and robust action recognition in real-world videos.
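
As a point of comparison for the structured models above, the "traditional bag-of-features approach" they are measured against can be sketched as follows: a k-means vocabulary over local spatio-temporal descriptors, per-video histograms, and an exponentiated chi-square kernel, which is a common choice for comparing such histograms. This is a generic, hypothetical baseline sketch, not the kernels or structured representations proposed in the dissertation.

```python
import numpy as np

def build_vocabulary(descriptors, k=100, n_iter=30, rng=0):
    """k-means codebook over local descriptors (e.g. extracted around
    space-time interest points); plain Lloyd iterations for illustration."""
    rng = np.random.default_rng(rng)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(n_iter):
        d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(0)
    return centers

def bag_of_features(video_descriptors, centers):
    """L1-normalized histogram of visual-word assignments for one video."""
    d2 = ((video_descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(1), minlength=len(centers)).astype(float)
    return hist / max(hist.sum(), 1.0)

def chi2_kernel(h1, h2, gamma=1.0):
    """Exponentiated chi-square kernel between two histograms."""
    num = (h1 - h2) ** 2
    den = h1 + h2 + 1e-12
    return np.exp(-gamma * 0.5 * np.sum(num / den))
```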

    Large Scale Kernel Methods for Fun and Profit

    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, and fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state of the art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize the available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that, under stringent but realistic assumptions on the separation between classes, a wide set of algorithms needs far fewer data points to reach the same accuracy than in the more general setting without assumptions on class separation. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning, which are often deemed hard to beat. The second part investigates this question on two main applications, chosen because an efficient algorithm is of paramount importance in both. First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. Second, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves, and investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
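
Falkon itself combines Nyström projections with a preconditioned iterative solver and GPU kernels; as a much-simplified illustration of the underlying idea (and explicitly not the actual Falkon implementation), here is a plain Nyström kernel ridge regression sketch that restricts the solution to m randomly sampled landmarks and solves the reduced m x m system. Names and parameters are illustrative.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_krr_fit(X, y, m=100, lam=1e-3, gamma=1.0, rng=0):
    """Nystrom kernel ridge regression: restrict the solution to the span
    of m randomly sampled landmarks and solve the reduced linear system
        (Knm.T @ Knm + lam * n * Kmm) alpha = Knm.T @ y."""
    rng = np.random.default_rng(rng)
    n = len(X)
    centers = X[rng.choice(n, size=min(m, n), replace=False)]
    Knm = gaussian_kernel(X, centers, gamma)
    Kmm = gaussian_kernel(centers, centers, gamma)
    A = Knm.T @ Knm + lam * n * Kmm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(centers)), Knm.T @ y)
    return centers, alpha

def nystrom_krr_predict(X_new, centers, alpha, gamma=1.0):
    return gaussian_kernel(X_new, centers, gamma) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(2000, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)
    centers, alpha = nystrom_krr_fit(X, y, m=50, gamma=2.0)
    pred = nystrom_krr_predict(X, centers, alpha, gamma=2.0)
    print(np.mean((pred - y) ** 2))  # training mean squared error
```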