    Computer-inspired Quantum Experiments

    Full text link
    The design of new devices and experiments in science and engineering has historically relied on the intuitions of human experts. This credo, however, has changed. In many disciplines, computer-inspired design processes, also known as inverse design, have augmented the capabilities of scientists. Here we visit different fields of physics in which computer-inspired designs are applied. We will meet vastly diverse computational approaches based on topological optimization, evolutionary strategies, deep learning, reinforcement learning or automated reasoning. We then turn our attention specifically to quantum physics. In the quest to design new quantum experiments, we face two challenges: first, quantum phenomena are unintuitive; second, the number of possible configurations of quantum experiments explodes combinatorially. To overcome these challenges, physicists began to use algorithms for computer-designed quantum experiments. We focus on the most mature and practical approaches that scientists have used to find new complex quantum experiments, which experimentalists subsequently realized in the laboratory. The underlying idea is a highly efficient topological search, which allows for scientific interpretability. In that way, some of the computer designs have led to the discovery of new scientific concepts and ideas, demonstrating how computer algorithms can genuinely contribute to science by providing unexpected inspiration. We discuss several extensions and alternatives based on optimization and machine learning techniques, with the potential to accelerate the discovery of practical computer-inspired experiments or concepts in the future. Finally, we discuss what we can learn from the different approaches in the fields of physics, and raise several fascinating possibilities for future research.
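
    The combinatorial explosion mentioned above is easy to make concrete. The following toy sketch searches a space of element sequences by random sampling and then greedily simplifies the best find; the toolbox, the setup encoding, and the score function are all invented stand-ins for a real quantum-optics simulator, not any published design algorithm.

```python
# Toy random search over "experiment configurations". With T toolbox
# elements and setups of length k there are T**k candidates, which is
# why naive enumeration fails. All names here are hypothetical.
import random

TOOLBOX = ["BS", "HWP", "QWP", "PBS", "DP"]  # hypothetical optical elements


def score(setup):
    """Stand-in for a quantum-optics simulator returning a figure of merit."""
    # Toy objective: reward element diversity, penalise setup length.
    return len(set(setup)) - 0.1 * len(setup)


def random_search(trials=10_000, max_len=8, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        setup = [rng.choice(TOOLBOX) for _ in range(rng.randint(1, max_len))]
        s = score(setup)
        if s > best_score:
            best, best_score = setup, s
    return best, best_score


def simplify(setup):
    """Greedy topological simplification: drop elements that don't hurt."""
    current = list(setup)
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            trial = current[:i] + current[i + 1:]
            if trial and score(trial) >= score(current):
                current, changed = trial, True
                break
    return current


if __name__ == "__main__":
    setup, s = random_search()
    print("best:", setup, "score:", round(s, 2))
    print("simplified:", simplify(setup))
```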

    Combined optimization algorithms applied to pattern classification

    Get PDF
    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers described in the literature use combinatorial optimization in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for a greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification accuracy. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth-versus-size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings contribute to a) the field of machine learning, as the proposed method is applicable to training feedforward neural networks, and b) the field of circuit complexity, by proposing an upper bound on the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded in terms of the input size of the problem, with an approximate upper bound of 8 + √(2^n/n) threshold gates being sufficient for a small error rate, where n := log |S_L| and S_L is the training set.
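
    As a rough illustration of the idea behind the LSA machine, the sketch below uses simulated annealing to search perceptron weight vectors directly, with the number of misclassified training samples as the energy to minimise; the toy data, cooling schedule, and neighbourhood move are illustrative guesses, not the thesis's settings.

```python
# Simulated annealing over the weights of a single perceptron
# (linear threshold unit), minimising training misclassifications.
import math
import random


def errors(w, b, data):
    """Count misclassifications of a linear threshold unit."""
    return sum(1 for x, y in data
               if (sum(wi * xi for wi, xi in zip(w, x)) + b >= 0) != y)


def anneal(data, dim, steps=5_000, t0=2.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(dim)]
    b = rng.uniform(-1, 1)
    e, t = errors(w, b, data), t0
    for _ in range(steps):
        i = rng.randrange(dim + 1)            # perturb one coordinate
        old = b if i == dim else w[i]
        new = old + rng.gauss(0, 0.3)
        if i == dim:
            b = new
        else:
            w[i] = new
        e_new = errors(w, b, data)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                         # accept the move
        elif i == dim:                        # reject: undo
            b = old
        else:
            w[i] = old
        t *= alpha                            # geometric cooling
    return w, b, e


if __name__ == "__main__":
    rng = random.Random(1)
    # Toy linearly separable data: label is True iff x0 + x1 > 1.
    data = []
    for _ in range(200):
        x = (rng.random(), rng.random())
        data.append((x, x[0] + x[1] > 1.0))
    w, b, e = anneal(data, dim=2)
    print("training errors:", e)
```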

    Adaptive Search and Constraint Optimisation in Engineering Design

    Get PDF
    The dissertation presents the investigation and development of novel adaptive computational techniques that provide a high level of performance when searching complex high-dimensional design spaces characterised by heavy non-linear constraint requirements. The objective is to develop a set of adaptive search engines that allow the successful negotiation of such spaces to provide the design engineer with feasible high-performance solutions. Constraint optimisation currently presents a major problem to the engineering designer, and many attempts to utilise adaptive search techniques while overcoming these problems are in evidence. The most widely used method (which is also the most general) is to incorporate the constraints into the objective function and then use methods for unconstrained search; the engineer must develop and adjust an appropriate penalty function. There is no general solution to this problem, either in classical numerical optimisation or in evolutionary computation. Some recent theoretical evidence suggests that the problem can only be solved by incorporating a priori knowledge into the search engine. It therefore becomes clear that there is a need to classify constrained optimisation problems according to the degree of available or utilised knowledge, and to develop search techniques applicable at each stage. The contribution of this thesis is to provide such a view of constrained optimisation, starting from problems that handle the constraints at the representation level, going through problems that have explicitly defined constraints (i.e., an easily computed closed form such as a solvable equation), and ending with heavily constrained problems with implicitly defined constraints (incorporated into a single simulation model). At each stage we develop applicable adaptive search techniques that optimally exploit the degree of available a priori knowledge, thus providing excellent quality of results and high performance. The proposed techniques are tested using both well-known test beds and real-world engineering design problems provided by industry.
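
    The penalty-function method described above can be stated concretely: a constrained problem min f(x) subject to g_i(x) <= 0 is replaced by the unconstrained F(x) = f(x) + r · Σ_i max(0, g_i(x))², and the difficulty is choosing the weight r. A minimal sketch, with an invented one-dimensional test problem and a crude grid search standing in for any real optimiser:

```python
# Quadratic penalty function: fold constraints into the objective so an
# unconstrained search method can be applied.

def penalised(f, gs, r):
    """Return F(x) = f(x) + r * sum(max(0, g_i(x))^2)."""
    def F(x):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in gs)
    return F


# Example: minimise (x - 3)^2 subject to x <= 1 (constrained optimum x = 1).
f = lambda x: (x - 3.0) ** 2
gs = [lambda x: x - 1.0]                    # g(x) <= 0  <=>  x <= 1

for r in (1.0, 10.0, 1000.0):
    F = penalised(f, gs, r)
    # A crude grid search stands in for any unconstrained optimiser.
    x_best = min((i / 1000.0 for i in range(-2000, 4001)), key=F)
    print(f"r = {r:6.1f}  ->  x* = {x_best:.3f}")  # approaches 1.0 as r grows
```

    The run shows the tuning problem directly: a small r leaves the constraint badly violated, while only a large r drives the minimiser toward the feasible optimum.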

    Evolutionary Computation

    Get PDF
    This book presents several recent advances in evolutionary computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. The book also covers interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. It therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.

    Machine learning into metaheuristics: A survey and taxonomy of data-driven metaheuristics

    Get PDF
    In recent years, research in applying machine learning (ML) to design efficient, effective and robust metaheuristics has become increasingly popular. Many of these data-driven metaheuristics have generated high quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, there is a lack of a comprehensive survey and taxonomy on this research topic. In this paper we investigate different opportunities for using ML in metaheuristics. We uniformly define the various ways in which such synergies might be achieved. A detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to incorporate ideas from ML into metaheuristics. We identify some open research issues in this topic that need further in-depth investigation.
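
    As one concrete instance of the kind of synergy the survey classifies (ML inside a low-level search component), the sketch below fits a cheap 1-nearest-neighbour surrogate on solutions evaluated so far and uses it to screen a neighbourhood before calling the expensive objective; the problem and all parameters are invented for illustration.

```python
# Surrogate-screened local search on bitstrings: a learned (here, 1-NN)
# model ranks candidate moves so the costly objective is called on only
# the most promising few.
import random


def expensive(x):                     # stand-in for a costly evaluation
    return sum(i * b for i, b in enumerate(x))   # toy objective (maximise)


def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))


def surrogate(x, archive):
    """Predict f(x) from the nearest already-evaluated solution."""
    return min(archive, key=lambda rec: hamming(rec[0], x))[1]


def surrogate_local_search(n=30, iters=50, screen=3, seed=0):
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    fx = expensive(x)
    archive = [(x, fx)]
    for _ in range(iters):
        neighbours = []
        for i in range(n):            # all one-bit flips
            y = list(x)
            y[i] ^= 1
            neighbours.append(tuple(y))
        # Screen with the surrogate; truly evaluate only the top few.
        ranked = sorted(neighbours, key=lambda y: surrogate(y, archive),
                        reverse=True)[:screen]
        for y in ranked:
            fy = expensive(y)
            archive.append((y, fy))
            if fy > fx:
                x, fx = y, fy
    return x, fx, len(archive)        # archive size = true evaluations used


if __name__ == "__main__":
    x, fx, evals = surrogate_local_search()
    print("best value:", fx, "after", evals, "true evaluations")
```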

    Automated Heuristic Generation By Intelligent Search

    Get PDF
    This thesis presents research that examines the effectiveness of several different program synthesis techniques when used to automate the creation of heuristics for a local search-based Boolean Satisfiability solver. Previous research on the automated creation of heuristics has almost exclusively relied on evolutionary computation techniques such as genetic programming to achieve its goal. In wider program synthesis research, there are many other techniques which can automate the creation of programs. However, little effort has been expended on utilising these alternative techniques in automated heuristic creation. In this thesis we analyse how three different program synthesis techniques perform when used to automatically create heuristics for our problem domain. These are genetic programming, exhaustive enumeration and a new technique called local search program synthesis. We show how genetic programming can create effective heuristics for our domain. By generating millions of heuristics, we demonstrate how exhaustive enumeration can create small, easily understandable and effective heuristics. Through an analysis of the memoized results from the exhaustive enumeration experiments, we then describe local search program synthesis, a program synthesis technique based on the minimum tree edit distance metric. Using the memoized results, we simulate local search program synthesis on our domain, and present evidence that suggests it is a viable technique for automatically creating heuristics. We then define the necessary algorithms required to use local search program synthesis without any reliance on memoized data. Through experimentation, we show how local search program synthesis can be used to create effective heuristics for our domain. We then identify examples of heuristics created that are of higher quality than those produced by other program synthesis methods. At certain points in this thesis, we perform a more detailed analysis of some of the heuristics created. Through this analysis, we show that, on certain problem instances, several of the heuristics have better performance than some state-of-the-art, hand-crafted heuristics.
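
    To make the exhaustive-enumeration idea concrete, the following sketch enumerates every small expression tree over a few per-variable features and checks which of them, used as variable-selection heuristics, agree with a GSAT-style make-minus-break criterion on sample data; the features, operators, and data are invented and far smaller than the thesis's actual search space.

```python
# Exhaustive enumeration of tiny heuristic expression trees for
# variable selection in a local search SAT solver (toy version).
import itertools
import operator

FEATURES = ["make", "break", "age"]       # hypothetical per-variable stats
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}


def all_heuristics():
    """Enumerate all depth-<=1 expression trees over the features."""
    for f in FEATURES:
        yield f, (lambda row, f=f: row[f])
    for a, b in itertools.product(FEATURES, repeat=2):
        for name, op in OPS.items():
            yield (f"({a} {name} {b})",
                   lambda row, a=a, b=b, op=op: op(row[a], row[b]))


# Three candidate variables described by their feature values.
candidates = [{"make": 3, "break": 1, "age": 5},
              {"make": 2, "break": 0, "age": 1},
              {"make": 4, "break": 4, "age": 9}]
# Ground truth: a good heuristic picks the variable maximising make - break.
target = max(candidates, key=lambda r: r["make"] - r["break"])

hits = [d for d, h in all_heuristics() if max(candidates, key=h) == target]
print(f"{len(hits)}/30 enumerated heuristics agree with the target pick")
print("'(make - break)' among them:", "(make - break)" in hits)
```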

    Metaheuristic and Multiobjective Approaches for Space Allocation

    Get PDF
    This thesis presents an investigation into the application of metaheuristic techniques to tackle the space allocation problem in academic institutions. This is a combinatorial optimisation problem which refers to the distribution of the available room space among a set of entities (staff, research students, computer rooms, etc.) in such a way that the space is utilised as efficiently as possible and the additional constraints are satisfied as much as possible. The literature on the application of optimisation techniques to approach the problem mentioned above is scarce. This thesis provides a description and formulation of the problem. It also proposes and compares a range of heuristics for the initialisation of solutions and for neighbourhood exploration. Four well-known metaheuristics (iterative improvement, simulated annealing, tabu search and genetic algorithms) are adapted and tuned for their application to the problem investigated here. The performance of these techniques is assessed and benchmark results are obtained. Also, hybrid approaches are designed that produce sets of high quality and diverse solutions in much shorter time than those required by space administrators who construct solutions manually. The hybrid approaches are also adapted to tackle the space allocation problem from a two-objective perspective. It is also revealed that the use of aggregating functions or relaxed dominance to evaluate solutions in Pareto optimisation can be more beneficial than the standard dominance relation in enhancing the performance of some multiobjective optimisers in some problem domains. A range of single-solution metaheuristics are extended to create hybrid evolutionary approaches based on the scheme of cooperative local search. This scheme promotes the cooperation of a population of local searchers by means of mechanisms to share the information gained during the search. This thesis also reports the best results known so far for a set of test instances of the space allocation problem in academic institutions. This thesis pioneers the application of metaheuristics to solve the space allocation problem. The major contributions are: a formulation of the problem together with test data sets; the best known results for these test instances; an investigation of the multiobjective nature of the problem; and a new form of hybridising metaheuristics.
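
    A minimal sketch of the underlying optimisation problem may help: entities with space requirements are assigned to rooms, the objective penalises wasted and (more heavily) overused space, and iterative improvement, the simplest of the four metaheuristics studied, moves one entity at a time. The room sizes, entities, and penalty weights below are invented for the example.

```python
# Toy space allocation: assign entities to rooms, minimising a misuse
# penalty, via first-improvement local search over single-entity moves.
import random

ROOMS = {"R1": 20.0, "R2": 15.0, "R3": 10.0}          # capacities (m^2)
ENTITIES = {"prof": 12.0, "postdoc": 9.0, "phd1": 6.0,
            "phd2": 6.0, "server": 8.0}               # space needs (m^2)


def misuse(assign):
    """Sum of wasted space plus (more heavily weighted) overused space."""
    total = 0.0
    for room, cap in ROOMS.items():
        used = sum(ENTITIES[e] for e, r in assign.items() if r == room)
        total += (cap - used) if used <= cap else 2.0 * (used - cap)
    return total


def improve(assign):
    """First-improvement local search: relocate one entity at a time."""
    improved = True
    while improved:
        improved = False
        for e in ENTITIES:
            for room in ROOMS:
                trial = dict(assign, **{e: room})
                if misuse(trial) < misuse(assign):
                    assign, improved = trial, True
    return assign


if __name__ == "__main__":
    rng = random.Random(0)
    start = {e: rng.choice(list(ROOMS)) for e in ENTITIES}
    final = improve(start)
    print("misuse:", misuse(start), "->", misuse(final))
    print(final)
```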

    Generalising weighted model counting

    Get PDF
    Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics. In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference. Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm. The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (that typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth—the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
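
    The core definition is small enough to work through by hand. In the toy instance below, a propositional formula over three variables is paired with literal weights, and brute-force enumeration over the 2³ assignments stands in for the compiled representations real WMC solvers use; the formula and weights are invented for illustration.

```python
# Weighted model counting by brute force: sum the weights of the
# satisfying assignments, where each assignment's weight is the product
# of its literal weights.
import itertools


# Formula: (a OR b) AND (NOT a OR c), over variables a, b, c.
def formula(a, b, c):
    return (a or b) and ((not a) or c)


# Literal weights; with W(x) + W(not x) = 1 they act like probabilities.
W = {("a", True): 0.3, ("a", False): 0.7,
     ("b", True): 0.6, ("b", False): 0.4,
     ("c", True): 0.5, ("c", False): 0.5}

wmc = 0.0
for a, b, c in itertools.product([True, False], repeat=3):
    if formula(a, b, c):
        wmc += W[("a", a)] * W[("b", b)] * W[("c", c)]
print(round(wmc, 4))   # 0.57 = Pr[(a or b) and (a -> c)] under the weights
```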