
    Knowledge revision in systems based on an informed tree search strategy: application to cartographic generalisation

    Many real-world problems can be expressed as optimisation problems. Solving such a problem means finding, among all possible solutions, the one that maximises an evaluation function. One approach is to use an informed search strategy, the principle of which is to exploit problem-specific knowledge, beyond the definition of the problem itself, to find solutions more efficiently than an uninformed strategy would. Such a strategy requires problem-specific knowledge (heuristics) to be defined, and the efficiency and effectiveness of systems based on it depend directly on the quality of that knowledge. Unfortunately, acquiring and maintaining such knowledge can be tedious. The objective of the work presented in this paper is to propose an automatic knowledge revision approach for systems based on an informed tree search strategy. Our approach consists of analysing the system's execution logs and revising the knowledge on the basis of those logs, modelling the revision problem as a knowledge space exploration problem. We present an experiment carried out in an application domain where informed search strategies are often used: cartographic generalisation.

    Comment: Knowledge revision; problem solving; informed tree search strategy; cartographic generalisation. Paris, France (2008).
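    The abstract does not give the algorithm itself, so here is a minimal sketch of what a generic informed (best-first) tree search looks like; `expand`, `evaluate`, and `heuristic` are hypothetical caller-supplied callables standing in for the problem definition and the revisable knowledge, not the paper's actual interface.

    ```python
    import heapq
    import itertools

    def best_first_search(initial_state, expand, evaluate, heuristic,
                          max_nodes=100_000):
        """Generic informed tree search for a maximisation problem."""
        tie = itertools.count()  # tie-breaker so states are never compared
        frontier = [(-heuristic(initial_state), next(tie), initial_state)]
        best_state, best_value = initial_state, evaluate(initial_state)
        expanded = 0
        while frontier and expanded < max_nodes:
            _, _, state = heapq.heappop(frontier)
            expanded += 1
            value = evaluate(state)
            if value > best_value:
                best_state, best_value = state, value
            for child in expand(state):
                # The heuristic is the problem-specific knowledge: it
                # orders the frontier so promising subtrees come first.
                heapq.heappush(frontier,
                               (-heuristic(child), next(tie), child))
        return best_state, best_value
    ```

    The quality of `heuristic` is exactly what the paper's revision approach targets: a poor heuristic makes this loop degenerate towards uninformed search.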

    Finding Near-Optimal Independent Sets at Scale

    The independent set problem is NP-hard and particularly difficult to solve in large sparse graphs. In this work, we develop an advanced evolutionary algorithm, which incorporates kernelization techniques to compute large independent sets in huge sparse networks. A recent exact algorithm has shown that large networks can be solved exactly by employing a branch-and-reduce technique that recursively kernelizes the graph and performs branching. However, one major drawback of that algorithm is that, for huge graphs, branching can still take exponential time. To avoid this problem, we recursively choose vertices that are likely to be in a large independent set (using an evolutionary approach), then further kernelize the graph. We show that identifying and removing vertices likely to be in large independent sets opens up the reduction space, which not only speeds up the computation of large independent sets drastically, but also enables us to compute high-quality independent sets on much larger instances than previously reported in the literature.

    Comment: 17 pages, 1 figure, 8 tables. arXiv admin note: text overlap with arXiv:1502.0168
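    To illustrate what kernelization means here, the sketch below applies two classic, provably safe reductions for maximum independent set (isolated vertices and degree-one "pendant" vertices). The paper's branch-and-reduce and evolutionary machinery uses a much richer reduction set, so this is only a toy illustration.

    ```python
    def kernelize(adj):
        """Shrink a graph with degree-0/degree-1 reductions.

        `adj` maps each vertex to the set of its neighbours. Returns
        (forced, adj): vertices safe to place in some maximum
        independent set, and the reduced residual graph.
        """
        adj = {v: set(ns) for v, ns in adj.items()}  # defensive copy
        forced, changed = set(), True
        while changed:
            changed = False
            for v in list(adj):
                if v not in adj:
                    continue  # already removed in this pass
                if not adj[v]:
                    # Isolated vertex: always in some maximum solution.
                    forced.add(v)
                    del adj[v]
                    changed = True
                elif len(adj[v]) == 1:
                    # Pendant vertex: taking v and discarding its only
                    # neighbour u never hurts optimality.
                    u = next(iter(adj[v]))
                    forced.add(v)
                    for w in adj[u]:
                        adj[w].discard(u)
                    del adj[u], adj[v]
                    changed = True
        return forced, adj
    ```

    For example, `kernelize({1: {2}, 2: {1, 3}, 3: {2}})` returns the forced set {1, 3} and an empty residual graph, which is exactly the maximum independent set of that path.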

    Dynamic Local Search for the Maximum Clique Problem

    In this paper, we introduce DLS-MC, a new stochastic local search algorithm for the maximum clique problem. DLS-MC alternates between phases of iterative improvement, during which suitable vertices are added to the current clique, and plateau search, during which vertices of the current clique are swapped with vertices not contained in it. The selection of vertices is based solely on vertex penalties that are dynamically adjusted during the search, and a perturbation mechanism is used to overcome search stagnation. The behaviour of DLS-MC is controlled by a single parameter, the penalty delay, which controls the frequency with which vertex penalties are reduced. We show empirically that DLS-MC achieves substantial performance improvements over state-of-the-art algorithms for the maximum clique problem on a large range of the commonly used DIMACS benchmark instances.
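    The following is a greatly simplified sketch of the penalty-driven add/swap/perturb loop the abstract describes. It is not the authors' implementation of DLS-MC; the perturbation rule and penalty bookkeeping here are generic stand-ins chosen only to make the control flow concrete.

    ```python
    import random

    def dls_mc_sketch(adj, steps=10_000, penalty_delay=2, seed=None):
        """Penalty-driven local search for maximum clique (toy version).

        `adj` maps each vertex to the set of its neighbours.
        """
        rng = random.Random(seed)
        vertices = list(adj)
        penalty = {v: 0 for v in vertices}
        clique = {rng.choice(vertices)}
        best = set(clique)
        for step in range(1, steps + 1):
            # Iterative improvement: vertices adjacent to the whole clique.
            cand = [v for v in vertices
                    if v not in clique and clique <= adj[v]]
            if cand:
                clique.add(min(cand, key=penalty.__getitem__))
            else:
                # Plateau search: swap u out for a v adjacent to the rest.
                swaps = [(u, v) for v in vertices if v not in clique
                         for u in clique if clique - {u} <= adj[v]]
                if swaps:
                    u, v = min(swaps, key=lambda s: penalty[s[1]])
                    clique.discard(u)
                    clique.add(v)
                else:
                    # Perturbation: penalise the stuck clique and restart.
                    for u in clique:
                        penalty[u] += 1
                    clique = {rng.choice(vertices)}
            if len(clique) > len(best):
                best = set(clique)
            # Penalty delay: periodically decay all penalties.
            if step % penalty_delay == 0:
                for v in penalty:
                    penalty[v] = max(0, penalty[v] - 1)
        return best
    ```

    The role of the single `penalty_delay` parameter is visible in the last block: smaller values let penalties fade quickly (greedier search), larger values keep the search away from recently visited vertices for longer.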

    Hybridising heuristics within an estimation distribution algorithm for examination timetabling

    This paper presents a hybrid hyper-heuristic approach based on estimation distribution algorithms. The main motivation is to raise the level of generality of search methodologies. The objective of the hyper-heuristic is to produce solutions of acceptable quality for a number of optimisation problems; in this work, we demonstrate this generality through experimental results on different variants of exam timetabling problems. The hyper-heuristic is an automated constructive method that searches for heuristic choices from a given set of low-level heuristics based only on non-domain-specific knowledge. The high-level search methodology is a simple estimation distribution algorithm capable of guiding the search to select appropriate heuristics in different problem-solving situations. The probability distribution of low-level heuristics at different stages of solution construction can be used to measure their effectiveness and may help to facilitate more intelligent hyper-heuristic search methods.
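    A minimal sketch of the core idea, learning one probability distribution over low-level heuristics per construction stage, is given below. `heuristics`, `construct`, and `quality` are hypothetical caller-supplied callables, and the elite-frequency update rule is a generic choice rather than the paper's exact method.

    ```python
    import random

    def eda_hyper_heuristic(heuristics, n_stages, construct, quality,
                            pop_size=20, elite=5, iters=50, alpha=0.3):
        """Estimation-of-distribution hyper-heuristic (toy version).

        Samples sequences of low-level heuristics, builds solutions with
        `construct(sequence)`, scores them with `quality(solution)`
        (higher is better), and re-estimates each stage's distribution
        from the elite sequences.
        """
        k = len(heuristics)
        # One categorical distribution per construction stage.
        probs = [[1.0 / k] * k for _ in range(n_stages)]
        best, best_q = None, float("-inf")
        for _ in range(iters):
            population = []
            for _ in range(pop_size):
                seq = [random.choices(range(k), weights=probs[s])[0]
                       for s in range(n_stages)]
                sol = construct([heuristics[i] for i in seq])
                population.append((quality(sol), seq, sol))
            population.sort(key=lambda t: t[0], reverse=True)
            if population[0][0] > best_q:
                best_q, best = population[0][0], population[0][2]
            # Smooth each stage's distribution towards elite frequencies.
            for s in range(n_stages):
                counts = [0] * k
                for _, seq, _ in population[:elite]:
                    counts[seq[s]] += 1
                for i in range(k):
                    probs[s][i] = ((1 - alpha) * probs[s][i]
                                   + alpha * counts[i] / elite)
        return best
    ```

    After training, `probs` itself is informative: as the abstract notes, the per-stage distribution shows which low-level heuristics were effective at which point of solution construction.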

    Global Optimisation for Energy Systems

    The goal of global optimisation is to find globally optimal solutions, avoiding local optima and other stationary points. The aim of this thesis is to provide more efficient global optimisation tools for energy systems planning and operation. Owing to the increasing complexity and decentralisation of power systems, the use of advanced mathematical techniques that produce reliable solutions becomes necessary. The task of developing such methods is complicated by the fact that most energy-related problems are nonconvex, due to the nonlinear Alternating Current power flow equations and the existence of discrete elements. In some cases, the computational challenges arising from the presence of non-convexities can be tackled by relaxing the definition of convexity and identifying classes of problems that can be solved to global optimality by polynomial-time algorithms. One such property, known as invexity, requires that every stationary point of a problem is a global optimum. This thesis investigates how the relation between the objective function and the structure of the feasible set is connected to invexity, and presents necessary conditions for invexity in the general case as well as necessary and sufficient conditions for problems with two degrees of freedom. However, nonconvex problems often do not possess any provable convenient properties, and specialised methods are necessary to provide global optimality guarantees. A widely used technique is to solve convex relaxations in order to bound the optimal solution. Semidefinite Programming relaxations can provide good-quality bounds, but they suffer from a lack of scalability; we tackle this issue by proposing an algorithm that combines decomposition and linearisation approaches. In addition to continuous non-convexities, many problems in energy systems model discrete decisions and are expressed as mixed-integer nonlinear programs (MINLPs). The formulation of a MINLP is of significant importance, since it affects the quality of dual bounds. In this thesis we investigate algebraic characterisations of on/off constraints and develop a strengthened version of the Quadratic Convex relaxation of the Optimal Transmission Switching problem. All presented methods were implemented in the mathematical modelling and optimisation frameworks PowerTools and Gravity.
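    For concreteness, here is the standard definition of invexity in the unconstrained differentiable case; the thesis works with constrained variants, so this captures only the core notion the abstract refers to.

    ```latex
    % A differentiable f : R^n -> R is invex if there exists a kernel
    % function \eta : R^n x R^n -> R^n such that
    \[
      f(x) - f(u) \;\ge\; \eta(x, u)^{\top} \nabla f(u)
      \qquad \text{for all } x, u \in \mathbb{R}^n .
    \]
    % At a stationary point, \nabla f(u) = 0, so the right-hand side
    % vanishes and f(x) \ge f(u) for every x: each stationary point is
    % a global minimum, which is the property quoted in the abstract.
    ```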

    A statistical learning based approach for parameter fine-tuning of metaheuristics

    Metaheuristics are approximation methods used to solve combinatorial optimisation problems. Their performance usually depends on a set of parameters that need to be adjusted. Selecting appropriate parameter values causes a loss of efficiency, as it requires time as well as advanced analytical and problem-specific skills. This paper provides an overview of the principal approaches to tackling the Parameter Setting Problem, focusing on the statistical procedures employed so far by the scientific community. In addition, a novel methodology is proposed and tested using an existing algorithm for solving the Multi-Depot Vehicle Routing Problem.
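    The abstract does not describe the proposed methodology in detail, so the sketch below shows only the generic shape of statistics-driven parameter fine-tuning: sample configurations, replicate runs to average out the metaheuristic's randomness, and rank configurations by mean performance. `run_metaheuristic` and `param_space` are hypothetical placeholders for the tuned algorithm's interface.

    ```python
    import random
    import statistics

    def tune(run_metaheuristic, param_space, budget=100, replications=5):
        """Random-search parameter tuning with replicated runs.

        `run_metaheuristic(params)` runs the target algorithm once and
        returns a solution cost (lower is better); `param_space` maps
        each parameter name to a list of candidate values.
        """
        results = []
        for _ in range(budget):
            # Sample a random configuration from the parameter space.
            params = {name: random.choice(values)
                      for name, values in param_space.items()}
            # Replicate runs so the ranking reflects mean behaviour,
            # not a single lucky (or unlucky) random seed.
            costs = [run_metaheuristic(params)
                     for _ in range(replications)]
            results.append((statistics.mean(costs),
                            statistics.stdev(costs), params))
        results.sort(key=lambda t: t[0])
        return results[0]  # (mean cost, std dev, best configuration)
    ```

    Reporting the standard deviation alongside the mean is the minimal statistical safeguard here; the survey discussed in the paper covers more principled procedures built on the same replicate-and-compare idea.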

    Visual and computational analysis of structure-activity relationships in high-throughput screening data

    Novel analytic methods are required to assimilate the large volumes of structural and bioassay data generated by combinatorial chemistry and high-throughput screening programmes in the pharmaceutical and agrochemical industries. This paper reviews recent work in visualisation and data mining that can be used to develop structure-activity relationships from such chemical/biological datasets.