
    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Decomposition and duality based approaches to stochastic integer programming

    Stochastic Integer Programming is a variant of Linear Programming which incorporates integer and stochastic properties (i.e. some variables are discrete, and some properties of the problem are randomly determined after the first-stage decision). A Stochastic Integer Program may be rewritten as an equivalent Integer Program with a characteristic structure, but this equivalent program is often too large to solve directly. In this thesis we develop new algorithms which exploit convex duality and scenario-wise decomposition of the equivalent Integer Program to obtain better dual bounds and to find optimal solutions faster. A major attraction of this approach is that these algorithms are amenable to parallel computation.
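
    To make the decomposition concrete, here is a generic two-stage formulation of the kind the abstract alludes to, together with the scenario-wise Lagrangian relaxation that yields dual bounds; the notation is standard, not necessarily the thesis's own.

```latex
% Extensive form of a two-stage stochastic integer program over scenarios
% s = 1..S with probabilities p_s (generic notation, not the thesis's own).
\begin{align*}
  \min_{x,\,y_1,\dots,y_S}\quad & c^\top x + \sum_{s=1}^{S} p_s\, q_s^\top y_s\\
  \text{s.t.}\quad & T_s x + W_s y_s \ge h_s, \quad y_s \in \mathbb{Z}_+^{n_2},
      \qquad s = 1,\dots,S,\\
  & x \in \mathbb{Z}_+^{n_1}.
\end{align*}
% Scenario-wise decomposition copies x into x_1,...,x_S, adds the
% non-anticipativity constraints x_s = x, and relaxes them with multipliers
% \lambda_s; the relaxed problem then splits into S independent scenario
% subproblems whose combined optimal value is a dual bound on the original.
```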

    Bundle methods for regularized risk minimization with applications to robust learning

    Supervised learning in general, and regularized risk minimization in particular, is about solving an optimization problem jointly defined by a performance measure and a set of labeled training examples. The outcome of learning, a model, is then used mainly for predicting the labels of unlabeled examples in the testing environment. In real-world scenarios, a typical learning process often involves solving a sequence of similar problems with different parameters before a final model is identified. For learning to be successful, the final model must be produced in a timely manner, and it should be robust to (mild) irregularities in the testing environment. The purpose of this thesis is to investigate ways to speed up the learning process and improve the robustness of the learned model. We first develop a batch convex optimization solver specialized to regularized risk minimization, based on standard bundle methods. The solver inherits two main properties of standard bundle methods. First, it can solve both differentiable and non-differentiable problems, so its implementation can be reused for different tasks with minimal modification. Second, the optimization is easily amenable to parallel and distributed computation, which makes the solver highly scalable in the number of training examples. Unlike standard bundle methods, however, the solver has no extra parameters that need careful tuning, and we prove that it has a faster convergence rate. The solver is also very efficient at computing approximate regularization paths and at model selection. We then present a convex risk formulation for incorporating invariances and prior knowledge into the learning problem. This formulation generalizes many existing approaches to robust learning in the setting of insufficient or noisy training examples and covariate shift. Lastly, we extend a non-convex risk formulation for binary classification to structured prediction. Empirical results show that the model obtained with this risk formulation is robust to outliers in the training examples.
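
    To make the solver concrete, the following is a minimal BMRM-style bundle method for L2-regularized hinge loss: each iteration adds a cutting plane (a linear lower bound on the empirical risk from a subgradient) and minimizes the regularized piecewise-linear model via its dual. This is a sketch of the standard scheme only, not the thesis's implementation; all names and the toy data are illustrative.

```python
# Minimal BMRM-style bundle method for L2-regularized hinge loss.
# A sketch of the standard scheme only; names and data are illustrative.
import numpy as np
from scipy.optimize import minimize

def risk_and_subgrad(w, X, y):
    """Empirical hinge risk (1/n) sum max(0, 1 - y_i <x_i, w>) and a subgradient."""
    margins = y * (X @ w)
    active = margins < 1.0
    risk = np.maximum(0.0, 1.0 - margins).mean()
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    return risk, grad

def bmrm(X, y, lam=0.1, tol=1e-6, max_iter=100):
    w = np.zeros(X.shape[1])
    A, B = [], []                     # cutting planes: R(w) >= a.w + b
    for _ in range(max_iter):
        risk, a = risk_and_subgrad(w, X, y)
        A.append(a)
        B.append(risk - a @ w)
        Am, b = np.array(A), np.array(B)
        # Dual of  min_w  lam/2 |w|^2 + max_i (a_i.w + b_i)  over the simplex:
        #   max_alpha  -1/(2 lam) |Am^T alpha|^2 + b.alpha
        def neg_dual(alpha):
            v = Am.T @ alpha
            return (v @ v) / (2.0 * lam) - b @ alpha
        res = minimize(neg_dual, np.ones(len(B)) / len(B), method="SLSQP",
                       bounds=[(0.0, 1.0)] * len(B),
                       constraints={"type": "eq",
                                    "fun": lambda alpha: alpha.sum() - 1.0})
        w = -(Am.T @ res.x) / lam
        # The gap between the true objective and the model's optimum bounds
        # the suboptimality of the current iterate.
        J = lam / 2.0 * (w @ w) + risk_and_subgrad(w, X, y)[0]
        model = lam / 2.0 * (w @ w) + np.max(Am @ w + b)
        if J - model < tol:
            break
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200))
print("trained w:", np.round(bmrm(X, y), 3))
```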

    Large-scale learning and applications

    This thesis presents my main research activities in statistical machine learning after my PhD, starting from my post-doc at UC Berkeley to my present research position at Inria Grenoble. The first chapter introduces the context and a summary of my scientific contributions, and emphasizes the importance of pluri-disciplinary research. For instance, mathematical optimization has become central in machine learning, and the interplay between signal processing, statistics, bioinformatics, and computer vision is stronger than ever. With many scientific and industrial fields producing massive amounts of data, the impact of machine learning is potentially huge and diverse. However, dealing with massive data also raises many challenges. In this context, the manuscript presents different contributions, which are organized in three main topics.

    Chapter 2 is devoted to large-scale optimization in machine learning with a focus on algorithmic methods. We start with majorization-minimization algorithms for structured problems, including block-coordinate, incremental, and stochastic variants. These algorithms are analyzed in terms of convergence rates for convex problems and in terms of convergence to stationary points for non-convex ones. We also introduce fast schemes for minimizing large sums of convex functions and principles to accelerate gradient-based approaches, based on Nesterov's acceleration and on quasi-Newton approaches.

    Chapter 3 presents the paradigm of deep kernel machines, an alliance between kernel methods and multilayer neural networks. In the context of visual recognition, we introduce a new invariant image model called convolutional kernel networks, a new type of convolutional neural network with a reproducing kernel interpretation. The network comes with simple and effective principles for unsupervised learning, and is compatible with supervised learning via backpropagation rules.

    Chapter 4 is devoted to sparse estimation, that is, the automatic selection of model variables for explaining observed data; in particular, this chapter presents the result of pluri-disciplinary collaborations in bioinformatics and neuroscience where the sparsity principle is key to building interpretable predictive models.

    Finally, the last chapter concludes the manuscript and suggests future perspectives.
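
    As a concrete instance of the majorization-minimization principle analyzed in Chapter 2, the sketch below minimizes a least-squares objective through the quadratic surrogate f(x_k) + <grad f(x_k), x - x_k> + L/2 ||x - x_k||^2, whose exact minimizer is a gradient step, and contrasts it with its Nesterov-accelerated variant. The example problem and names are mine, not the manuscript's.

```python
# Quadratic-majorizer MM and its Nesterov-accelerated variant on least squares.
# A sketch of the generic scheme only; the toy problem and names are mine.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
b = rng.normal(size=100)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

def mm(x, iters=200):
    # Each step minimizes the quadratic surrogate
    #   f(xk) + <grad f(xk), x - xk> + L/2 ||x - xk||^2  >=  f(x),
    # tight at xk; its exact minimizer is a gradient step of size 1/L.
    for _ in range(iters):
        x = x - grad(x) / L
    return x

def mm_nesterov(x, iters=200):
    # Same surrogate, minimized at an extrapolated point (Nesterov momentum).
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

x0 = np.zeros(20)
print("plain MM   :", f(mm(x0)))
print("accelerated:", f(mm_nesterov(x0)))
```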

    Graphical Models: Modeling, Optimization, and Hilbert Space Embedding

    Over the past two decades, graphical models have been widely used as a powerful tool for compactly representing distributions. Kernel methods, on the other hand, have been used extensively to obtain rich representations. This thesis aims to combine graphical models with kernels to produce compact models with rich representational abilities. Our focus is on the following four areas.

    1. Conditional random fields for multi-agent reinforcement learning. Conditional random fields (CRFs) are graphical models for modeling the probability of labels given the observations. They have traditionally assumed that, conditioned on the training data, the label sequences of different training examples are independent and identically distributed (iid). We extend the use of CRFs to a class of temporal learning algorithms, namely policy-gradient reinforcement learning (RL). The labels are no longer iid: they are actions that update the environment and affect the next observation. From an RL point of view, CRFs provide a natural way to model joint actions in a decentralized Markov decision process. Using tree sampling for inference, our experiments show that RL methods employing CRFs clearly outperform those which do not model the proper joint policy.

    2. Bayesian online multi-label classification. Gaussian density filtering provides fast and effective inference for graphical models (Maybeck, 1982). Based on it, we propose a Bayesian online multi-label classification (BOMC) framework which learns a probabilistic model of the linear classifier. The training labels are incorporated to update the posterior of the classifiers via a graphical model similar to TrueSkill (Herbrich et al., 2007). Using samples from the posterior, we label the test data by maximizing the expected F1-score. In our experiments, BOMC delivers significantly higher macro-averaged F1-scores than state-of-the-art online maximum-margin learners.

    3. Hilbert space embedding of distributions. Graphical models are also an essential tool in kernel measures of independence for non-iid data. Traditional information-theoretic measures often require density estimation, which makes them ill-suited for statistical estimation. Motivated by the fact that distributions often appear in machine learning via expectations, we can characterize the distance between distributions in terms of distances between means, in particular means in reproducing kernel Hilbert spaces, which are called kernel embeddings. Under this framework, undirected graphical models further allow us to factorize the kernel embeddings over cliques, which yields efficient measures of independence for non-iid data (Zhang et al., 2009).

    4. Optimization in maximum-margin models for structured data. Maximum-margin estimation for structured data is an important task in which graphical models also play a key role. Such problems are special cases of regularized risk minimization, for which bundle methods (BMRM; Teo et al., 2007) are a state-of-the-art general-purpose solver. Smola et al. (2007) proved that BMRM requires O(1/epsilon) iterations to converge to an epsilon-accurate solution, and we further show that this rate matches the lower bound. Motivated by Nesterov (2003, 2005), we utilize the composite structure of the objective function and devise an algorithm for the structured loss which converges to an epsilon-accurate solution in O(1/sqrt(epsilon)) iterations.
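
    To illustrate the kernel-embedding idea of topic 3 above, the sketch below estimates the squared RKHS distance between the empirical mean embeddings of two sample sets (the maximum mean discrepancy); the toy data and names are mine, not the thesis's.

```python
# Squared maximum mean discrepancy (MMD^2) between two sample sets: the
# RKHS distance between their empirical kernel mean embeddings.
# A minimal sketch with toy data; names are illustrative, not the thesis's.
import numpy as np

def rbf(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-|x - y|^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of |mu_P - mu_Q|^2 in the RKHS."""
    return rbf(X, X, sigma).mean() - 2 * rbf(X, Y, sigma).mean() \
        + rbf(Y, Y, sigma).mean()

rng = np.random.default_rng(2)
P = rng.normal(0.0, 1.0, size=(500, 2))       # samples from P
Q_same = rng.normal(0.0, 1.0, size=(500, 2))  # same distribution as P
Q_diff = rng.normal(1.0, 1.0, size=(500, 2))  # shifted distribution
print("MMD^2 same :", mmd2(P, Q_same))   # close to zero
print("MMD^2 diff :", mmd2(P, Q_diff))   # clearly positive
```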