
    One-Exact Approximate Pareto Sets

    Papadimitriou and Yannakakis show that the polynomial-time solvability of a certain single-objective problem determines the class of multiobjective optimization problems that admit a polynomial-time computable $(1+\varepsilon, \dots, 1+\varepsilon)$-approximate Pareto set (also called an $\varepsilon$-Pareto set). Similarly, in this article, we characterize the class of problems having a polynomial-time computable approximate $\varepsilon$-Pareto set that is exact in one objective by the efficient solvability of an appropriate single-objective problem. This class includes important problems such as multiobjective shortest path and spanning tree, and the approximation guarantee we provide is, in general, best possible. Furthermore, for biobjective problems from this class, we provide an algorithm that computes a one-exact $\varepsilon$-Pareto set of cardinality at most twice the cardinality of a smallest such set and show that this factor of 2 is best possible. For three or more objective functions, however, we prove that no constant-factor approximation on the size of the set can be obtained efficiently.
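    For reference, the underlying notions can be stated as follows for $p$ minimization objectives $f_1, \dots, f_p$ (a standard formalization written in our own notation, not quoted from the article): a set $P_\varepsilon$ of feasible solutions is an $\varepsilon$-Pareto set if
        \[
            \forall x \text{ feasible} \;\; \exists\, y \in P_\varepsilon : \quad f_i(y) \le (1+\varepsilon)\, f_i(x) \quad \text{for all } i = 1, \dots, p,
        \]
    and it is one-exact (exact in, say, the first objective) if $y$ can in addition always be chosen with $f_1(y) \le f_1(x)$.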

    Generalized Lorenz-Mie theory : application to scattering and resonances of photonic complexes

    Complex photonic media mold the flow of light at the wavelength scale using multiple scattering and interference effects. This functionality at the nano-scale level paves the way for various applications, ranging from optical communications to biosensing. This thesis is mainly concerned with the numerical modeling of photonic complexes based on two-dimensional arrays of cylindrical scatterers. Two applications are considered, namely the use of photonic-crystal-like devices for the design of integrated beam shaping elements, as well as active photonic molecules for the realization of compact laser sources. These photonic structures can be readily analyzed using the 2D Generalized Lorenz-Mie theory (2D-GLMT), a numerical scheme which exploits the symmetry of the underlying cylindrical structures. We begin this thesis by presenting the electromagnetic theory behind 2D-GLMT. Other useful frameworks are also presented, including a recently formulated stationary version of the Maxwell-Bloch equations called steady-state ab initio laser theory (SALT). Metaheuristics, optimization algorithms based on empirical rules for exploring large solution spaces, are also discussed. After laying down the theoretical content, we proceed to the design and optimization of beam shaping devices based on engineered photonic-crystal-like structures. The combinatorial optimization problem associated with beam shaping is tackled using the genetic algorithm (GA) as well as tabu search (TS). Our results show that integrated beam shapers can be designed and tailored for the control of the amplitude, phase and polarization profile of the output beam. A theoretical and numerical study of the lasing characteristics of photonic molecules – composed of a few coupled optically active cylinders – is also presented. Using a combination of 2D-GLMT and SALT, it is shown that the physical properties of photonic molecule lasers, specifically their threshold, spectrum and emission profile, can be significantly affected by the underlying gain medium parameters. These findings are out of reach of the established approach of computing the meta-stable states of the Helmholtz equation and their quality factor. This dissertation is concluded with a research outlook concerning the modeling of disordered photonic media.
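    As an illustration of the kind of metaheuristic loop mentioned above (a genetic algorithm searching over binary on/off configurations of candidate cylinders), here is a minimal Python sketch. It is our own illustration: the encoding, the population parameters and the placeholder beam_shaping_fitness function are assumptions and do not reproduce the thesis code or its 2D-GLMT solver.

        # Minimal genetic-algorithm sketch for a binary cylinder-placement problem.
        # beam_shaping_fitness is a placeholder for an electromagnetic figure of merit
        # (e.g. overlap with a target beam profile) that would be computed with a
        # solver such as 2D-GLMT in the actual application.
        import random

        N_SITES = 64          # number of candidate cylinder positions (assumed)
        POP_SIZE = 40
        GENERATIONS = 200
        MUTATION_RATE = 0.02

        def beam_shaping_fitness(bits):
            # Placeholder objective; replace with a call to the field solver.
            return sum(bits) / N_SITES

        def crossover(a, b):
            cut = random.randrange(1, N_SITES)
            return a[:cut] + b[cut:]

        def mutate(bits):
            return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

        population = [[random.randint(0, 1) for _ in range(N_SITES)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            ranked = sorted(population, key=beam_shaping_fitness, reverse=True)
            parents = ranked[: POP_SIZE // 2]          # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        best = max(population, key=beam_shaping_fitness)

    Tabu search would replace the population update with a single-solution neighbourhood move plus a memory of recently visited configurations.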

    Cost-sensitive feature selection for support vector machines

    Feature Selection (FS) is a crucial procedure in Data Science tasks such as Classification, since it identifies the relevant variables, thus making the classification procedures more interpretable and more effective by reducing noise and overfitting. The relevance of features in a classification procedure is linked to the fact that misclassification costs are frequently asymmetric, since false positive and false negative cases may have very different consequences. However, off-the-shelf FS procedures seldom take such cost-sensitivity of errors into account. In this paper we propose a mathematical-optimization-based FS procedure embedded in one of the most popular classification procedures, namely Support Vector Machines (SVM), accommodating asymmetric misclassification costs. The key idea is to replace the traditional margin maximization with the minimization of the number of features selected, while imposing upper bounds on the false positive and false negative rates. The problem is written as an integer linear problem plus a quadratic convex problem for SVM with both linear and radial kernels. The reported numerical experience demonstrates the usefulness of the proposed FS procedure. Indeed, our results on benchmark data sets show that a substantial decrease in the number of features is obtained, whilst the desired trade-off between false positive and false negative rates is achieved.
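    The following schematic formulation conveys the key idea described above; it is our reading of the abstract, not the exact model in the paper. Binary variables $z_j \in \{0,1\}$ indicate whether feature $j$ enters the classifier, $M$ is a big-M constant, and $\alpha$, $\beta$ are the user-specified bounds on the false positive and false negative rates:
        \[
            \min_{w,\,b,\,\xi,\,z} \; \sum_{j=1}^{d} z_j
            \quad \text{s.t.} \quad
            y_i\Big(\sum_{j=1}^{d} w_j x_{ij} + b\Big) \ge 1 - \xi_i, \;\; \xi_i \ge 0,
            \;\; |w_j| \le M z_j,
            \;\; \widehat{\mathrm{FPR}}(w,b) \le \alpha,
            \;\; \widehat{\mathrm{FNR}}(w,b) \le \beta,
            \;\; z_j \in \{0,1\},
        \]
    where $\widehat{\mathrm{FPR}}$ and $\widehat{\mathrm{FNR}}$ denote the empirical false positive and false negative rates on the training (or validation) data.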

    Feature Selection via Chaotic Antlion Optimization

    Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the most informative markers while performing high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the fit to the training data) while minimizing the number of features used. This work was partially supported by the IPROCOM Marie Curie initial training network, funded through the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement No. 316555, and by the Romanian National Authority for Scientific Research, CNDI-UEFISCDI, project number PN-II-PT-PCCA-2011-3.2-0917. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
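    A common way to encode such a biobjective wrapper objective is a weighted aggregation of classification error and the fraction of features kept, which a population-based optimizer such as antlion optimization then minimizes over binary feature masks. The sketch below is a hypothetical illustration: the 0.99 weight, the k-NN classifier and the cross-validation setup are our assumptions, not details taken from the paper.

        # Hypothetical wrapper fitness for a binary feature-selection mask:
        # a weighted sum of classification error and the ratio of selected features.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        def fs_fitness(mask, X, y, alpha=0.99):
            mask = np.asarray(mask, dtype=bool)
            if not mask.any():                      # an empty subset is infeasible
                return np.inf
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                                  X[:, mask], y, cv=5).mean()
            error = 1.0 - acc
            ratio = mask.sum() / mask.size          # fraction of features retained
            return alpha * error + (1.0 - alpha) * ratio   # lower is better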

    A Practical Guide to Multi-Objective Reinforcement Learning and Planning

    Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods and who wish to adopt a multi-objective perspective on their research, as well as at practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.
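    The "simple linear combination" referred to above is usually written as a linear scalarization (notation ours): given a vector of per-objective value functions $\mathbf{V}^{\pi}(s) = (V_1^{\pi}(s), \dots, V_m^{\pi}(s))$ and fixed weights $w_i \ge 0$, one optimizes the single objective
        \[
            V_{\mathbf{w}}^{\pi}(s) = \mathbf{w}^{\top} \mathbf{V}^{\pi}(s) = \sum_{i=1}^{m} w_i\, V_i^{\pi}(s).
        \]
    Only policies lying on the convex part of the Pareto front can be recovered this way, which is one reason such a reduction can produce suboptimal results when the user's utility is nonlinear or not known in advance.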

    SETH-Based Lower Bounds for Subset Sum and Bicriteria Path

    Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon} \cdot 2^{o(n)}$ for any $\varepsilon > 0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(NT)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria $s,t$-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
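    For context, Bellman's dynamic program referenced above is the classical $O(nT)$ (i.e. $O^{*}(T)$) algorithm; a minimal Python version of it, given here as our own illustration rather than an artifact of the paper:

        # Bellman-style dynamic program for Subset-Sum: O(n*T) time, O(T) space.
        def subset_sum(numbers, target):
            reachable = [False] * (target + 1)
            reachable[0] = True                      # the empty subset sums to 0
            for a in numbers:
                # sweep sums downwards so each number is used at most once
                for s in range(target, a - 1, -1):
                    if reachable[s - a]:
                        reachable[s] = True
            return reachable[target]

        # Example: is there a subset of {3, 34, 4, 12, 5, 2} summing to 9?  -> True
        print(subset_sum([3, 34, 4, 12, 5, 2], 9))

    The SETH-based lower bound says that, on dense instances, no algorithm can improve this to $T^{1-\varepsilon} \cdot 2^{o(n)}$ for any $\varepsilon > 0$.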

    Multi-Objective Trust-Region Filter Method for Nonlinear Constraints using Inexact Gradients

    In this article, we build on previous work to present an optimization algorithm for nonlinearly constrained multi-objective optimization problems. The algorithm combines a surrogate-assisted derivative-free trust-region approach with the filter method known from single-objective optimization. Instead of the true objective and constraint functions, so-called fully linear models are employed, and we show how to deal with the gradient inexactness in the composite-step setting, which is likewise adapted from single-objective optimization. Under standard assumptions, we prove convergence of a subset of iterates to a quasi-stationary point, and if constraint qualifications hold, the limit point is also a KKT point of the multi-objective problem.
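    For reference, the first-order (KKT) conditions at such a limit point can be stated as follows for a problem $\min\, (f_1(x), \dots, f_k(x))$ subject to $g_j(x) \le 0$ (a standard formulation, in our notation): there exist multipliers $\lambda_i \ge 0$ with $\sum_{i=1}^{k} \lambda_i = 1$ and $\mu_j \ge 0$ such that
        \[
            \sum_{i=1}^{k} \lambda_i \nabla f_i(x^{*}) + \sum_{j} \mu_j \nabla g_j(x^{*}) = 0,
            \qquad \mu_j\, g_j(x^{*}) = 0 \;\; \text{for all } j.
        \]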