18 research outputs found

    Polynomial algorithms for p-dispersion problems in a 2d Pareto Front

    Since bi-objective optimization problems may have many best-compromise solutions, this paper studies p-dispersion problems to select p >= 2 representative points in the Pareto front (PF). Four standard variants of p-dispersion are considered. A novel variant, denoted Max-Sum-Neighbor p-dispersion, is introduced for the specific case of a 2D PF. Firstly, it is proven that the 2-dispersion and 3-dispersion problems are solvable in O(n) time in a 2D PF. Secondly, dynamic programming algorithms are designed for three p-dispersion variants, proving polynomial complexities in a 2D PF. The Max-Min p-dispersion problem is proven solvable in O(pn log n) time and O(n) memory space. The Max-Sum-Min p-dispersion problem is proven solvable in O(pn^3) time and O(pn^2) space. The Max-Sum-Neighbor p-dispersion problem is proven solvable in O(pn^2) time and O(pn) space. Complexity results and parallelization issues are discussed with regard to practical implementation.
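    As an illustration of why a 2D PF makes these problems tractable, the sketch below solves the Max-Min variant with a straightforward O(pn^2) dynamic program; it is only an illustration under stated assumptions, not the paper's O(pn log n) algorithm. It relies on the fact that, for points on a monotone 2D Pareto front sorted by the first objective, the minimum pairwise distance within any selected subset is attained by two consecutive selected points, so only consecutive gaps need to be tracked.

        import math

        def max_min_p_dispersion_2d_pf(points, p):
            # Illustrative O(p n^2) dynamic program for Max-Min p-dispersion on a
            # 2D Pareto front; a sketch, not the paper's O(p n log n) algorithm.
            pts = sorted(points)  # on a 2D PF, f2 decreases as f1 increases
            n = len(pts)
            dist = lambda a, b: math.dist(pts[a], pts[b])
            # best[i][k]: largest achievable minimum gap when k points are chosen
            # among pts[0..i] with pts[i] selected last.
            best = [[float("-inf")] * (p + 1) for _ in range(n)]
            for i in range(n):
                best[i][1] = float("inf")  # a single selected point imposes no gap
            for k in range(2, p + 1):
                for i in range(k - 1, n):
                    for j in range(k - 2, i):
                        best[i][k] = max(best[i][k], min(best[j][k - 1], dist(j, i)))
            return max(best[i][p] for i in range(p - 1, n))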

    Co-evolutionary Hybrid Bi-level Optimization

    Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as ``bi-level optimization'', occurs when two decision makers each control only part of the decision variables but impact each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the ``upper level'') is constrained by another one (the ``lower level''). The nested structure is a real obstacle that can be highly time-consuming when the lower level is \mathcal{NP}-hard. Consequently, classical nested optimization should be avoided. Some surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) in order to reduce the number of times the lower level is globally optimized. Unfortunately, such a methodology is not applicable to large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems which can benefit from a bi-level reformulation are investigated. A first contribution of this work is a novel bi-level clustering approach. Extending the well-known ``uncapacitated k-median problem'', it has been shown that clustering can be easily modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem, offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables us to tackle it with classical nested approaches: its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson's, Alzheimer's). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments produce new knowledge compared to classical clustering techniques that combine distance metrics in a classical manner. The previous bi-level clustering model has the advantage that its lower level can be solved in polynomial time, although the global problem is by definition \mathcal{NP}-hard. Therefore, further investigations have been undertaken to tackle more general bi-level problems in which the lower-level problem does not present any specific advantageous properties. Since the lower-level problem can be very expensive to solve, the focus has been turned to surrogate-based approaches and hyper-parameter optimization techniques, with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. Adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the expensive lower-level optimizations have been dramatically reduced while obtaining very accurate solutions. The resulting solutions and the number of spared lower-level optimizations have been compared to the results of the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version required fewer lower-level objective function calls.
However, surrogate-based approaches are restricted to small-scale and continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study has been performed to apply machine learning strategies. Instead of approximating the lower-level solution value, new approximation algorithms have been designed for the discrete/combinatorial case. Using the principle employed in GP hyper-heuristics, heuristics are trained in order to tackle efficiently the \mathcal{NP}-hard lower level of bi-level problems. This automatic generation of heuristics makes it possible to break the nested structure into two separate phases: \emph{training lower-level heuristics} and \emph{solving the upper-level problem with the new heuristics}. In this context, a second modeling contribution has been introduced through a novel large-scale and mixed-integer bi-level problem dealing with pricing in the cloud, i.e., the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consisted of training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the obtained results are compared to the ``cooperative coevolutionary algorithm for bi-level optimization'' (COBRA). Although training heuristics makes it possible to \emph{break the nested structure}, a two-phase optimization is still required. Therefore, the emphasis has been put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme used by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to get to it can cope with these epistatic link issues. The ``CARBON'' algorithm developed in this thesis is a competitive and hybrid co-evolutionary algorithm designed for this purpose. In order to validate the potential of CARBON, numerical experiments have been designed and the results have been compared to state-of-the-art algorithms. These results demonstrate that ``CARBON'' makes it possible to address nested optimization efficiently.
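    To make the nested structure discussed above concrete, the following minimal Python sketch shows a plain nested scheme in which every upper-level candidate triggers a full lower-level optimization. The helper names (upper_candidates, lower_solve, upper_objective) are hypothetical placeholders, and the sketch is not the CARBON, COBRA or BLEAQ algorithms compared in the thesis; it only illustrates the costly step that surrogate models, trained heuristics and co-evolution aim to avoid.

        import math

        def nested_bilevel_solve(upper_candidates, lower_solve, upper_objective, budget=100):
            # Plain nested scheme: for each upper-level decision x, the lower level
            # is fully optimized to obtain the follower's response y*(x) before the
            # leader's objective F(x, y*(x)) can be evaluated. The repeated
            # lower-level optimization is the expensive part when it is NP-hard.
            best_x, best_val = None, -math.inf
            for _ in range(budget):
                x = upper_candidates()           # sample an upper-level decision (hypothetical helper)
                y_star = lower_solve(x)          # exact lower-level optimization (costly)
                val = upper_objective(x, y_star)
                if val > best_val:
                    best_x, best_val = x, val
            return best_x, best_val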

    Interactive optimization for supporting multicriteria decisions in urban and energy system planning

    Climate change and growing urban populations are increasingly putting pressure on cities to reduce their carbon emissions and transition towards efficient and renewable energy systems. This challenges in particular urban planners, who are expected to integrate technical energy aspects and balance them with the conflicting and often elusive needs of other urban actors. This thesis explores how multicriteria decision analysis, and in particular multiobjective optimization techniques, can support this task. While multiobjective optimization is particularly suited for generating efficient and original alternatives, it presents two shortcomings when targeted at large, intractable problems. First, the problem size prevents a complete identification of all solutions. Second, the preferences required to narrow the problem size are difficult to know and formulate precisely before seeing the possible alternatives. Interactive optimization addresses both of these gaps by involving the human decision maker in the calculation process, incorporating their preferences at the same time as the generated alternatives enrich their understanding of acceptable tradeoffs and important criteria. For interactive optimization methods to be adopted in practice, computational frameworks are required which can handle and visualize many objectives simultaneously and provide optimal solutions quickly and representatively, all while remaining simple and intuitive for practitioners to use and understand. Accordingly, the main objective of this thesis is to develop a decision support methodology which enables the integration of energy issues in the early stages of urban planning. The proposed response and main contribution is SAGESSE (Systematic Analysis, Generation, Exploration, Steering and Synthesis Experience), an interactive multiobjective optimization decision support methodology, which addresses the practical and technical shortcomings above. Its innovative aspect resides in the combination of (i) parallel coordinates as a means to simultaneously explore and steer the alternative-generation process, (ii) a quasi-random sampling technique to efficiently explore the solution space in areas specified by the decision maker, and (iii) the integration of multiattribute decision analysis, cluster analysis and linked data visualization techniques to facilitate the interpretation of the Pareto front in real time. Developed in collaboration with urban and energy planning practitioners, the methodology was applied to two Swiss urban planning case studies: one greenfield project, in which all buildings and energy technologies are conceived ex nihilo, and one brownfield project, in which an existing urban neighborhood is redeveloped. These applications led to the progressive development of computational methods based on mathematical programming and data modeling (in the context of another thesis) which, applied with SAGESSE, form the planning support system URBio. Results indicate that the methodology is effective in exploring hundreds of plans and revealing tradeoffs and synergies between multiple objectives. The concrete outcomes of the calculations provide inputs for specifying political targets and deriving urban master plans.
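    The abstract only names a ``quasi-random sampling technique''; as a hedged illustration of that ingredient, the Python sketch below uses a Sobol sequence (via scipy.stats.qmc) to generate weight vectors inside a box the decision maker is steering towards, each of which could parameterize one scalarized optimization run. The helper name and the box-steering detail are assumptions for illustration, not SAGESSE's documented implementation.

        import numpy as np
        from scipy.stats import qmc

        def sobol_steered_weights(n_objectives, n_samples, lower=None, upper=None, seed=0):
            # Quasi-random exploration sketch: Sobol points cover the unit hypercube
            # more evenly than pseudo-random ones, so few samples already spread
            # well over the region of interest.
            sampler = qmc.Sobol(d=n_objectives, scramble=True, seed=seed)
            w = sampler.random(n_samples)                  # points in [0, 1)^d
            if lower is not None and upper is not None:    # decision-maker's steering box
                keep = np.all((w >= lower) & (w <= upper), axis=1)
                w = w[keep]
            return w / w.sum(axis=1, keepdims=True)        # normalized weights, one per run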

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises from the relaxation of more general mixed-integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
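    For reference, the classical single-node fixed-charge network flow set with a constant capacity C is usually written as below; this is only the standard textbook formulation, and the exact variant studied in the paper may differ.

        % Standard single-node fixed-charge flow set with constant capacities
        % (a reference formulation; the paper studies a variant of this set).
        X = \Bigl\{ (x, y) \in \mathbb{R}_{+}^{n} \times \{0,1\}^{n} :
              \sum_{j=1}^{n} x_j \le b, \;\; x_j \le C\, y_j, \; j = 1, \dots, n \Bigr\}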

    Exact enumeration of local minima for kmedoids clustering in a 2D Pareto Front

    K-medoids clustering is solvable by dynamic programming in O(N^3) time for a 2D Pareto front (PF). A key element is the interval clustering optimality property. This paper proves that this property also holds for the local minima of k-medoids. It makes it possible to enumerate the local minima of k-medoids with the same complexity as the computation of the global optima for k=2 or k=3. A pseudo-polynomial enumeration scheme is designed for small values of k. This helps explain the results obtained by local search approaches in a 2D PF.
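    As a compact, unoptimized illustration of the interval clustering property quoted above, the Python sketch below cuts the sorted 2D PF into k intervals and picks the best medoid inside each; it is not the paper's exact algorithm, and reaching the O(N^3) bound would require a more careful computation of the interval costs.

        import math
        from functools import lru_cache

        def kmedoids_2d_pf(points, k):
            # Interval-clustering sketch: on a 2D Pareto front, each optimal cluster
            # is a set of consecutive points once the points are sorted along the
            # front, so k-medoids reduces to cutting the sequence into k intervals.
            pts = sorted(points)
            n = len(pts)
            dist = lambda a, b: math.dist(pts[a], pts[b])

            @lru_cache(maxsize=None)
            def interval_cost(i, j):
                # cost of one cluster spanning pts[i..j] with its best medoid
                return min(sum(dist(m, t) for t in range(i, j + 1)) for m in range(i, j + 1))

            @lru_cache(maxsize=None)
            def dp(j, c):
                # minimal cost of clustering pts[0..j] into c intervals
                if c == 1:
                    return interval_cost(0, j)
                return min(dp(i - 1, c - 1) + interval_cost(i, j) for i in range(c - 1, j + 1))

            return dp(n - 1, k)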

    Seventh Biennial Report : June 2003 - March 2005
