
    Neural Networks for Predicting Algorithm Runtime Distributions

    Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
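
    As a concrete illustration of the joint-parameter idea, here is a minimal sketch (not the authors' implementation) of a network that outputs the parameters of an RTD and is trained by maximizing likelihood. The lognormal family and the feature matrix `X` / observed runtimes `T` are assumptions made for the example.

```python
# Sketch: jointly predict RTD parameters (mu, sigma) from instance features
# and train by minimizing the negative log-likelihood of observed runtimes.
# The lognormal RTD family is an assumption, not necessarily the paper's choice.
import torch
import torch.nn as nn

class RTDNet(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 2)  # joint head: (mu, log sigma)

    def forward(self, x):
        mu, log_sigma = self.head(self.body(x)).unbind(-1)
        return torch.distributions.LogNormal(mu, log_sigma.exp())

def nll(net, X, T):
    # Negative log-likelihood of runtimes T under the predicted RTDs.
    return -net(X).log_prob(T).mean()
```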

    Fine-grained Search Space Classification for Hard Enumeration Variants of Subset Problems

    We propose a simple, powerful, and flexible machine learning framework for (i) reducing the search space of computationally difficult enumeration variants of subset problems and (ii) augmenting existing state-of-the-art solvers with informative cues arising from the input distribution. We instantiate our framework for the problem of listing all maximum cliques in a graph, a central problem in network analysis, data mining, and computational biology. We demonstrate the practicality of our approach on real-world networks with millions of vertices and edges: we not only retain all optimal solutions, but also aggressively prune the input instance, resulting in severalfold speedups of state-of-the-art algorithms. Finally, we explore the limits of scalability and robustness of our proposed framework, suggesting that supervised learning is viable for tackling NP-hard problems in practice.
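
    A minimal sketch of the pruning idea for the maximum-clique instantiation, under assumptions: a hypothetical trained scorer `p_in_solution(G, v)` estimates how likely vertex `v` is to belong to some maximum clique, low-scoring vertices are pruned, and an exact enumerator runs on the reduced graph. In the paper's setting the pruning retains all optimal solutions; here the threshold is a free parameter.

```python
# Sketch: classifier-guided search-space reduction for maximum-clique
# enumeration. `p_in_solution` is a hypothetical trained model.
import networkx as nx

def prune_and_enumerate(G, p_in_solution, threshold=0.1):
    keep = [v for v in G.nodes if p_in_solution(G, v) >= threshold]
    H = G.subgraph(keep)
    cliques = list(nx.find_cliques(H))  # maximal cliques of the pruned graph
    if not cliques:
        return []
    best = max(len(c) for c in cliques)
    return [c for c in cliques if len(c) == best]  # the maximum cliques
```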

    Heuristic Optimisation in Financial Modelling

    There is a large number of optimisation problems in theoretical and applied finance that are difficult to solve as they exhibit multiple local optima or are not ‘well-behaved’ in other ways (e.g., discontinuities in the objective function). One way to deal with such problems is to adjust and simplify them, for instance by dropping constraints, until they can be solved with standard numerical methods. This paper argues that an alternative approach is the application of optimisation heuristics like Simulated Annealing or Genetic Algorithms. These methods have been shown to be capable of handling non-convex optimisation problems with all kinds of constraints. To motivate the use of such techniques in finance, the paper presents several actual problems where classical methods fail. Next, several well-known heuristic techniques that may be deployed in such cases are described. Since such presentations are quite general, the paper describes in some detail how a particular problem, portfolio selection, can be tackled by a particular heuristic method, Threshold Accepting. Finally, the stochastics of the solutions obtained from heuristics are discussed. It is shown, again for the example from portfolio selection, how this random character of the solutions can be exploited to inform the distribution of computations.
    Keywords: Optimisation heuristics, Financial Optimisation, Portfolio Optimisation
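
    Since the paper walks through Threshold Accepting for portfolio selection, a minimal sketch of the heuristic may help; `objective` (e.g. a risk measure to minimize) and `neighbour` (a small random reallocation respecting the portfolio constraints) are hypothetical helpers.

```python
# Sketch of Threshold Accepting: like Simulated Annealing, but a new solution
# is accepted whenever it is not worse than the current one by more than a
# deterministic, decreasing threshold.
def threshold_accepting(w0, objective, neighbour, thresholds, steps_per_round=1000):
    w, f = w0, objective(w0)
    for tau in thresholds:              # decreasing sequence, e.g. [0.02, 0.01, 0.0]
        for _ in range(steps_per_round):
            w_new = neighbour(w)
            f_new = objective(w_new)
            if f_new - f < tau:         # accept mild deteriorations early on
                w, f = w_new, f_new
    return w, f
```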

    Solving the Optimal Trading Trajectory Problem Using a Quantum Annealer

    We solve a multi-period portfolio optimization problem using D-Wave Systems' quantum annealer. We derive a formulation of the problem, discuss several possible integer encoding schemes, and present numerical examples that show high success rates. The formulation incorporates transaction costs (including permanent and temporary market impact), and, significantly, the solution does not require the inversion of a covariance matrix. The discrete multi-period portfolio optimization problem we solve is significantly harder than the continuous-variable problem. We present insight into how results may be improved using suitable software enhancements, and why current quantum annealing technology limits the size of problem that can be successfully solved today. The formulation presented is specifically designed to be scalable, with the expectation that as quantum annealing technology improves, larger problems will be solvable using the same techniques.
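
    One of the integer-encoding schemes the abstract alludes to can be illustrated with a binary expansion: a holding of up to K units is represented by a few binary variables whose weighted subset sums range over 0..K, so the multi-period problem becomes a QUBO an annealer can sample. The helper below is an illustrative sketch, not D-Wave's API.

```python
# Sketch: binary-encoding weights so that subset sums cover 0..K exactly,
# letting an integer holding be expressed with about log2(K) binary variables.
def binary_encoding_weights(K):
    weights, w = [], 1
    while sum(weights) + w <= K:
        weights.append(w)
        w *= 2
    remainder = K - sum(weights)
    if remainder > 0:
        weights.append(remainder)
    return weights

print(binary_encoding_weights(10))  # [1, 2, 4, 3]: every holding 0..10 is reachable
```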

    Heuristic optimisation in financial modelling

    There is a large number of optimisation problems in theoretical and applied finance that are difficult to solve as they exhibit multiple local optima or are not ‘well-behaved’ in other ways (e.g., discontinuities in the objective function). One way to deal with such problems is to adjust and simplify them, for instance by dropping constraints, until they can be solved with standard numerical methods. We argue that an alternative approach is the application of optimisation heuristics like Simulated Annealing or Genetic Algorithms. These methods have been shown to be capable of handling non-convex optimisation problems with all kinds of constraints. To motivate the use of such techniques in finance, we present several actual problems where classical methods fail. Next, several well-known heuristic techniques that may be deployed in such cases are described. Since such presentations are quite general, we then describe in some detail how a particular problem, portfolio selection, can be tackled by a particular heuristic method, Threshold Accepting. Finally, the stochastics of the solutions obtained from heuristics are discussed. We show, again for the example from portfolio selection, how this random character of the solutions can be exploited to inform the distribution of computations.
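
    The closing point, that the randomness of heuristic solutions can itself be exploited, admits a short sketch: repeated restarts yield an empirical distribution of objective values that can guide how much computation to allocate. `run_heuristic` is a hypothetical wrapper around, e.g., a single Threshold Accepting run.

```python
# Sketch: summarize the distribution of solution quality over restarts.
import statistics

def restart_distribution(run_heuristic, n_restarts=50):
    values = [run_heuristic() for _ in range(n_restarts)]
    return {
        "best": min(values),                      # best-of-n restarts
        "mean": statistics.mean(values),
        "deciles": statistics.quantiles(values, n=10),
    }
```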

    Optimal Communication Structures for Concurrent Computing

    This research focuses on communicative solvers that run concurrently and exchange information to improve performance. This “team of solvers” enables individual algorithms to communicate information regarding their progress and intermediate solutions, and allows them to synchronize memory structures with more “successful” counterparts. As a result, fewer nodes spend computational resources on “struggling” processes. The research centres on optimizing communication structures to maximize algorithmic efficiency, using the theoretical framework of Markov chains. Existing research addressing communication between cooperative solvers on parallel systems lacks generality: most studies consider a limited number of communication topologies and strategies, and the evaluation of different configurations is mostly limited to empirical testing. Currently, there is no theoretical framework for tuning communication between cooperative solvers to match the underlying hardware and software. Our goal is to provide such functionality by mapping solvers’ dynamics to Markov processes and formulating the automatic tuning of communication as a well-defined optimization problem whose objective is to maximize solvers’ performance metrics.
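
    The Markov-chain framing can be made concrete with a standard absorbing-chain computation: given a transition matrix over discretized solver states and a set of “success” states, the expected hitting time follows from a linear system, and comparing it across candidate communication topologies turns tuning into a well-defined optimization problem. A minimal sketch under those assumptions:

```python
# Sketch: expected number of steps to reach a success state in a Markov chain
# with transition matrix P, via (I - Q) t = 1 on the non-success block Q.
import numpy as np

def expected_hitting_times(P, success_states):
    n = P.shape[0]
    rest = [i for i in range(n) if i not in set(success_states)]
    Q = P[np.ix_(rest, rest)]        # transitions among non-success states
    t = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    return dict(zip(rest, t))        # expected steps to success, per start state
```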

    Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with a focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to uniform-at-random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm for MPE computation in BNs.
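
    The two knobs the paper studies, initialization and restart, slot into a generic stochastic local search loop like the sketch below; `initialize` stands in for the Viterbi-style explanation initialization, and `neighbours`/`score` are hypothetical problem-specific helpers. This is a generic skeleton, not the authors' Stochastic Greedy Search.

```python
# Sketch: stochastic local search with pluggable initialization and a
# stagnation-based restart rule.
import random

def sls(initialize, neighbours, score, p_noise=0.1,
        max_flips=10_000, restart_after=500):
    x = initialize()
    best_x, best_s, stagnant = x, score(x), 0
    for _ in range(max_flips):
        cand = neighbours(x)
        # mostly greedy steps, occasionally a random neighbour
        x = random.choice(cand) if random.random() < p_noise else max(cand, key=score)
        if score(x) > best_s:
            best_x, best_s, stagnant = x, score(x), 0
        else:
            stagnant += 1
        if stagnant >= restart_after:   # restart from a fresh initialization
            x, stagnant = initialize(), 0
    return best_x, best_s
```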

    Teams of global equilibrium search algorithms for solving the weighted MAXIMUM CUT problem in parallel

    In this paper, we investigate the impact of communication between optimization algorithms running in parallel. In particular, we focus on the weighted maximum cut (WMAXCUT) problem and compare different communication strategies between teams of GES algorithms running in parallel. The results obtained by the teams were significantly better than those of the algorithmic-portfolio (no-communication) approach; they encourage the further development of team algorithms and suggest that communication between algorithms running in parallel is a promising research direction.
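
    A minimal sketch of the team idea under stated assumptions: solvers run in rounds, and under a given communication topology each solver adopts a neighbour's incumbent solution when it is better. The `solvers` objects, with `.step()`, `.best` (a (value, solution) pair), and `.adopt()`, are hypothetical.

```python
# Sketch: round-based cooperation between parallel solvers over a topology
# given as {solver_index: [peer indices]}.
def run_team(solvers, topology, rounds=100):
    for _ in range(rounds):
        for s in solvers:
            s.step()                                  # one optimization round
        for i, peers in topology.items():             # e.g. a ring: {0:[1], 1:[2], 2:[0]}
            best_peer = max((solvers[j] for j in peers), key=lambda s: s.best[0])
            if best_peer.best[0] > solvers[i].best[0]:
                solvers[i].adopt(best_peer.best[1])   # synchronize with the better peer
    return max(solvers, key=lambda s: s.best[0]).best
```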

    Learning dynamic algorithm portfolios

    Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine-time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark, and with a set of solvers for the Auction Winner Determination problem.
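
    The mixing step the abstract describes can be sketched as follows; `model_shares` (the time allocators' model-based proposal) and the exploration schedule are assumptions for illustration, not the paper's exact rule.

```python
# Sketch: blend model-based machine-time shares with a uniform share,
# shifting weight toward the model as it is trained on more runs.
import numpy as np

def time_shares(model_shares, k_algorithms, t, horizon):
    eps = max(0.1, 1.0 - t / horizon)               # exploration decays over time
    uniform = np.full(k_algorithms, 1.0 / k_algorithms)
    shares = (1.0 - eps) * model_shares() + eps * uniform
    return shares / shares.sum()                    # fraction of machine time per solver
```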