
    Bandit-based Estimation of Distribution Algorithms for Noisy Optimization: Rigorous Runtime Analysis

    Get PDF
    We show complexity bounds for noisy optimization in frameworks in which the noise is stronger than in previously published papers [19]. We also propose an algorithm based on bandits (variants of [16]) that reaches the bound within logarithmic factors. We emphasize the differences with previously published empirically derived algorithms.

    Handling Expensive Optimization with Large Noise

    Get PDF
    This paper exhibits lower and upper bounds on runtimes for expensive noisy optimization problems. Runtimes are expressed in terms of the number of fitness evaluations. The fitnesses considered are monotonic transformations of the sphere function. The analysis focuses on the common case of fitness functions quadratic in the distance to the optimum in the neighborhood of this optimum; it is nonetheless also valid for any monotonic polynomial of degree p > 2. Upper bounds are derived via a bandit-based estimation of distribution algorithm, called R-EDA, that relies on Bernstein races. The algorithm is known to be consistent even in non-differentiable cases. Here we show that: (i) if the variance of the noise decreases to 0 around the optimum, R-EDA can perform optimally for quadratic transformations of the norm to the optimum; (ii) otherwise, it provides a slower convergence rate than the one exhibited empirically by an algorithm called Quadratic Logistic Regression (QLR), based on surrogate models, although QLR requires a probabilistic prior on the fitness class.
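The Bernstein-race step at the heart of R-EDA can be illustrated with a small sketch: sample two candidate points until one empirical-Bernstein confidence interval lies entirely below the other. The function names, constants, and the particular form of the empirical-Bernstein bound below are illustrative assumptions, not the paper's exact procedure.

```python
import math
import random

def bernstein_radius(samples, width, delta):
    """Empirical-Bernstein confidence radius for i.i.d. samples lying in an
    interval of the given width; shrinks with the observed variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return math.sqrt(2 * var * math.log(3 / delta) / n) \
        + 3 * width * math.log(3 / delta) / n

def bernstein_race(noisy_f, x_a, x_b, width, delta=0.05, max_pairs=10000):
    """Sample both candidates until one confidence interval lies entirely
    below the other; return the candidate that wins the race."""
    sa, sb = [], []
    for _ in range(max_pairs):
        sa.append(noisy_f(x_a))
        sb.append(noisy_f(x_b))
        if len(sa) < 2:
            continue
        ma, mb = sum(sa) / len(sa), sum(sb) / len(sb)
        ra = bernstein_radius(sa, width, delta)
        rb = bernstein_radius(sb, width, delta)
        if ma + ra < mb - rb:
            return x_a  # x_a better with high probability
        if mb + rb < ma - ra:
            return x_b  # x_b better with high probability
    return x_a if ma <= mb else x_b  # budget exhausted: empirical best

random.seed(0)
# Noisy sphere: f(x) = x^2 plus bounded uniform noise of width 0.2
noisy_sphere = lambda x: x * x + random.uniform(-0.1, 0.1)
winner = bernstein_race(noisy_sphere, 0.1, 0.9, width=0.2)
```

Because the gap between the two fitness values (about 0.8) is large compared to the noise, the race here terminates after only a handful of evaluation pairs.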

    FROM’MIR: Developing prediction and advisory tools to control the cheese-making properties of milk intended for traditional Franche-Comté cheeses

    Get PDF
    This volume brings together the texts produced under the 2014 Casdar "Innovation et Partenariat" and "Recherche finalisée et innovation" programmes. It was prepared under the aegis of the GIS Relance Agronomique.
    Mid-infrared spectroscopy prediction equations of the cheese-making properties of milk, established in the Franche-Comté PDO/PGI context, exist for the first time in France. Laboratory curd yield in dry matter was consistent with the yields observed in mini-manufactures of soft and pressed cooked cheeses, and it is the best-predicted parameter. Under our conditions, some coagulation properties, such as curd firmness, could be estimated. The acidification properties, which heavily depend on the microbiological component of milk, are poorly estimated. The best prediction performances were obtained on individual cow milks. The performances were poorer on the scale of bulk milks (herd tank milk, but especially dairy vat milk). The study of variation factors made it possible to highlight the important weight of genetics, with a high level of heritability and strong effects of the genome regions involved. The quality and quantity of fodder and the distribution of calvings were influential in the context studied. In this same context, few factors of variation were identified at the scale of dairy vat milks, as the practices were very much governed by the PDO specifications. At the end of this project, an observatory, from the quality of the milk to the quality of the cheese, will be set up in Franche-Comté. Studies will also be carried out at the national level to consolidate and improve the equations in other contexts.

    Elements for the Learning and Optimization of Expensive Functions

    No full text
    This work focuses on learning and optimizing expensive functions, that is, constructing algorithms that learn to identify a concept, to approximate a function, or to find an optimum based on examples of this concept (respectively, points of the function). The motivating application is the learning and optimization of simplified models in numerical engineering, for industrial problems in which obtaining examples is expensive; it is then necessary to use as few examples as possible for learning (respectively, optimizing). The first contribution is the conception and development of a new approach to active learning based on reinforcement learning. Theoretical foundations for this approach were established, and a learning algorithm based on it, BAAL, was implemented and used for experimental validation (published at CAP'09 and ECML'09). The approach, originally focused on machine learning, was also extended to expensive optimization (published at GECCO 2009). The second contribution concerns the potential and limits of both active learning and expensive optimization from a theoretical point of view. Sample complexity bounds were derived (i) for batch active learning (published at ECML 2010) and (ii) for noisy optimization, on the minimal number of examples needed to find an optimum (published at LION 2010 and EvoSTAR 2010).

    Adaptive Noisy Optimization

    Get PDF
    In this paper, adaptive noisy optimization on variants of the noisy sphere model is considered, i.e. optimization in which the same algorithm is able to adapt to several frameworks, including some for which no bound had ever been derived. Incidentally, the bounds derived in [16] for noise quickly decreasing to zero around the optimum are extended to the more general case of a positively lower-bounded noise, thanks to a careful use of Bernstein bounds (using empirical estimates of the variance) instead of Chernoff-like variants.
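The advantage of Bernstein over Chernoff-type bounds can be made concrete with a hedged sketch: when the noise has a wide a-priori range but a small actual variance, as happens for fitness values sampled near the optimum, the empirical-Bernstein radius is much tighter than the range-only Hoeffding/Chernoff radius. The constants and the particular bound forms below are illustrative assumptions, not the paper's analysis.

```python
import math
import random

def chernoff_radius(n, width, delta):
    # Hoeffding/Chernoff-type radius: depends only on the noise range
    return width * math.sqrt(math.log(2 / delta) / (2 * n))

def bernstein_radius(samples, width, delta):
    # Empirical-Bernstein radius: shrinks with the *observed* variance
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return math.sqrt(2 * var * math.log(3 / delta) / n) \
        + 3 * width * math.log(3 / delta) / n

random.seed(0)
# Noise whose a-priori range is 1, but whose actual spread is tiny
samples = [0.5 + random.uniform(-0.02, 0.02) for _ in range(10000)]
r_chernoff = chernoff_radius(len(samples), 1.0, 0.05)
r_bernstein = bernstein_radius(samples, 1.0, 0.05)
# Here r_bernstein is roughly an order of magnitude smaller than r_chernoff
```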

    Optimal robust expensive optimization is tractable

    Get PDF
    Following a number of recent papers investigating the possibility of optimal comparison-based optimization algorithms for a given probability distribution on fitness functions, we (i) discuss the comparison-based constraints, (ii) choose a setting in which tight theoretical bounds are known, (iii) develop a careful implementation using billiard algorithms and Upper Confidence trees, and (iv) experimentally test the tractability of the approach. The results, on still very simple cases, show that the approach, while still preliminary, could be tested successfully up to dimension 10 and a horizon of 50 iterations within a few hours on a standard computer, with a convergence rate far better than that of the best existing algorithms.
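The Upper Confidence trees used in the implementation build on the UCB1 selection rule for bandits: play the arm maximizing empirical mean plus an optimism bonus. A minimal bandit-level sketch follows; the Bernoulli arms, horizon, and exploration constant are hypothetical and unrelated to the paper's experiments.

```python
import math
import random

def ucb1_index(mean, n_arm, t, c=math.sqrt(2)):
    """UCB1 score: empirical mean plus an optimism-in-the-face-of-
    uncertainty bonus that shrinks as the arm is played more."""
    return mean + c * math.sqrt(math.log(t) / n_arm)

random.seed(1)
true_p = [0.2, 0.5, 0.8]  # hypothetical Bernoulli reward probabilities
counts = [0] * 3
sums = [0.0] * 3
for t in range(1, 2001):
    untried = [i for i in range(3) if counts[i] == 0]
    if untried:
        arm = untried[0]  # play each arm once before using the index
    else:
        arm = max(range(3),
                  key=lambda i: ucb1_index(sums[i] / counts[i], counts[i], t))
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward
# The best arm (index 2) ends up played far more often than the others
```

Upper Confidence trees apply this same rule recursively at every node of a search tree, which is what makes the optimal comparison-based planning in the paper computationally tractable.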
