20 research outputs found

    Hyperparameter optimization with approximate gradient

    Most models in machine learning contain at least one hyperparameter to control for model complexity. Choosing an appropriate set of hyperparameters is both crucial in terms of model accuracy and computationally challenging. In this work we propose an algorithm for the optimization of continuous hyperparameters using inexact gradient information. An advantage of this method is that hyperparameters can be updated before model parameters have fully converged. We also give sufficient conditions for the global convergence of this method, based on regularity conditions of the involved functions and summability of errors. Finally, we validate the empirical performance of this method on the estimation of regularization constants of L2-regularized logistic regression and kernel Ridge regression. Empirical benchmarks indicate that our approach is highly competitive with respect to state-of-the-art methods. Comment: Proceedings of the International Conference on Machine Learning (ICML).
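    The hypergradient at the heart of this method can be written explicitly for the L2-regularized examples mentioned in the abstract. Below is a minimal sketch for ridge regression using implicit differentiation, where the linear system is solved only approximately and the hyperparameter is updated before the inner problem has fully converged; the step sizes, iteration counts, and conjugate-gradient truncation are illustrative assumptions, not the paper's actual schedule.

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    def approx_hypergradient(X_tr, y_tr, X_val, y_val, lam, theta):
        """Approximate d(validation loss)/d(lam) around an inexact inner solution."""
        d = X_tr.shape[1]
        # Hessian of the inner objective 0.5*||X_tr t - y_tr||^2 + 0.5*lam*||t||^2
        H = LinearOperator((d, d), matvec=lambda v: X_tr.T @ (X_tr @ v) + lam * v)
        # Gradient of the validation loss with respect to the model parameters
        g_theta = X_val.T @ (X_val @ theta - y_val)
        # Solve H q = g_theta only approximately: the truncated solve is the
        # "inexact gradient information" the abstract refers to
        q, _ = cg(H, g_theta, maxiter=20)
        # Cross derivative of the inner objective w.r.t. (theta, lam) is theta
        return -theta @ q

    rng = np.random.default_rng(0)
    X_tr, y_tr = rng.normal(size=(200, 20)), rng.normal(size=200)
    X_val, y_val = rng.normal(size=(100, 20)), rng.normal(size=100)

    lam, theta, lr = 1.0, np.zeros(20), 1e-2
    for _ in range(50):
        # Inexact inner solve: a few gradient steps instead of full convergence
        for _ in range(20):
            theta -= 1e-3 * (X_tr.T @ (X_tr @ theta - y_tr) + lam * theta)
        # Hyperparameter update from the approximate hypergradient
        lam = max(1e-6, lam - lr * approx_hypergradient(X_tr, y_tr, X_val, y_val, lam, theta))
    print("estimated regularization constant:", lam)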

    PBIL AutoEns: uma Ferramenta de Aprendizado de Máquina Automatizado Integrada à Plataforma Weka / PBIL AutoEns: an Automated Machine Learning Tool integrated to the Weka ML Platform

    Machine Learning (ML) has become popular in recent years as an efficient approach to solving problems. There are currently hundreds of classification methods, for example, which makes it practically impossible to analyze all possible results, since, in addition to the many methods, each of them admits many configurations. This problem gave rise to the concept of Automated Machine Learning (AutoML), a technique that searches among many candidate solutions for the best possible one for a given problem, without the need for human intervention. This work presents PBIL AutoEns, an AutoML tool that uses the WEKA platform API to search for solutions (models) within a large set of possibilities. PBIL AutoEns was compared with Random Forest and XGBoost (ensemble methods) and MLP (a base classifier). In this comparison, we used a strong measure of predictive performance (F-measure) to analyze the classification performance of all four methods on 21 datasets.
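    To make the search concrete, the sketch below runs a PBIL-style (Population-Based Incremental Learning) selection over a tiny space of classifier configurations, in the spirit of what PBIL AutoEns does. The original tool explores WEKA models and ensembles through the WEKA API; here scikit-learn classifiers stand in for them, and the 3-bit encoding, population size, learning rate, and dataset are illustrative assumptions rather than the tool's actual search space.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB

    def decode(bits):
        """Map a 3-bit chromosome to a classifier configuration (hypothetical encoding)."""
        models = [LogisticRegression(max_iter=2000), DecisionTreeClassifier(),
                  KNeighborsClassifier(), GaussianNB()]
        model = models[bits[0] * 2 + bits[1]]
        if bits[2] and hasattr(model, "n_neighbors"):  # one hyperparameter toggle
            model.set_params(n_neighbors=15)
        return model

    def fitness(bits, X, y):
        """Macro F-measure estimated by cross-validation, the metric used in the comparison."""
        return cross_val_score(decode(bits), X, y, cv=3, scoring="f1_macro").mean()

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    p = np.full(3, 0.5)                         # PBIL probability vector over the genes
    for generation in range(10):
        population = (rng.random((8, 3)) < p).astype(int)
        scores = [fitness(ind, X, y) for ind in population]
        best = population[int(np.argmax(scores))]
        p = 0.9 * p + 0.1 * best                # shift probabilities toward the best individual
    print("final probability vector:", p, "best F-measure:", max(scores))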

    Agnostic Bayes

    Honour roll of the Faculté des études supérieures et postdoctorales, 2014-2015.
    Machine learning is the science of learning from examples. Algorithms based on this approach are now ubiquitous. While there has been significant progress, this field presents important challenges. Namely, simply selecting the function that best fits the observed data was shown to have no statistical guarantee on the examples that have not yet been observed. There are a few learning theories that suggest how to address this problem. Among these, we present the Bayesian modeling of machine learning and the PAC-Bayesian approach to machine learning in a unified view to highlight important similarities. The outcome of this analysis suggests that model averaging is one of the key elements to obtain good generalization performance. Specifically, one should perform predictions based on the outcome of every model instead of simply the one that best fits the observed data. Unfortunately, this approach comes with a high computational cost, and finding good approximations is the subject of active research. In this thesis, we present an innovative approach that can be applied with a low computational cost on a wide range of machine learning setups. In order to achieve this, we apply Bayes' theory in a different way than is conventionally done for machine learning. Specifically, instead of searching for the true model at the origin of the observed data, we search for the best model according to a given metric. While the difference seems subtle, in this approach we do not assume that the true model belongs to the set of explored models. Hence, we say that we are agnostic. An extensive experimental setup shows a significant generalization performance gain when using this model-averaging approach during the cross-validation phase. Moreover, this simple algorithm does not add a significant computational cost to the conventional search of hyperparameters. Finally, this probabilistic tool can also be used as a statistical significance test to evaluate the quality of learning algorithms on multiple datasets.
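    As an illustration of the agnostic model-averaging idea described above, the sketch below weights a grid of candidate models by a bootstrap estimate of the probability that each one is the risk minimizer on held-out data, then predicts with a weighted vote. The ridge-classifier grid, the bootstrap size, and the data splits are assumptions made for this sketch and are not claimed to match the thesis' exact procedure.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import RidgeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tmp, X_te, y_tmp, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

    # Candidate models: one per value of the regularization hyperparameter
    models = [RidgeClassifier(alpha=a).fit(X_tr, y_tr) for a in [0.01, 0.1, 1.0, 10.0, 100.0]]
    # Per-example validation losses (0/1 loss), one row per model
    losses = np.array([(m.predict(X_va) != y_va).astype(float) for m in models])

    # Bootstrap the validation set and count how often each model attains the
    # lowest empirical risk; the frequencies approximate P(model is the best one).
    rng = np.random.default_rng(0)
    wins = np.zeros(len(models))
    for _ in range(1000):
        idx = rng.integers(0, losses.shape[1], losses.shape[1])
        wins[np.argmin(losses[:, idx].mean(axis=1))] += 1
    weights = wins / wins.sum()

    # Prediction = weighted vote over all candidate models instead of the single best
    votes = np.array([m.predict(X_te) for m in models])
    ensemble = (weights @ votes > 0.5).astype(int)
    print("model weights:", weights, "ensemble test accuracy:", (ensemble == y_te).mean())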
