3,726 research outputs found

    Learning the structure of Bayesian Networks: A quantitative assessment of the effect of different algorithmic schemes

    One of the most challenging tasks when adopting Bayesian Networks (BNs) is learning their structure from data. This task is complicated by the huge search space of possible solutions and by the fact that the problem is NP-hard. Hence, full enumeration of all the possible solutions is not always feasible and approximations are often required. However, to the best of our knowledge, a quantitative analysis of the performance and characteristics of the different heuristics used to solve this problem has never been done before. For this reason, in this work we provide a detailed comparison of many state-of-the-art methods for structural learning on simulated data, considering BNs with both discrete and continuous variables and with different rates of noise in the data. In particular, we investigate the performance of different widespread scores and algorithmic approaches proposed for the inference, and the statistical pitfalls within them.
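    To make the score-based family of heuristics mentioned above concrete, the sketch below implements a plain greedy hill climber that adds one edge at a time and scores candidate structures with BIC on discrete data. It is a minimal illustration, not code from the paper, and all function names are ours.

```python
# Minimal sketch of score-based BN structure learning: greedy hill climbing
# over single-edge additions, scored with BIC on discrete data.
# Illustrative only; function names are ours, not the paper's.
import itertools
import math
from collections import Counter

def bic_family_score(data, child, parents):
    """BIC contribution of `child` given `parents`; `data` is a list of dicts
    mapping variable name -> discrete value."""
    child_vals = {row[child] for row in data}
    parent_configs = Counter(tuple(row[p] for p in parents) for row in data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    log_lik = sum(n * math.log(n / parent_configs[cfg]) for (cfg, _), n in joint.items())
    n_params = (len(child_vals) - 1) * len(parent_configs)
    return log_lik - 0.5 * n_params * math.log(len(data))

def has_cycle(nodes, edges):
    """Depth-first cycle check on the directed graph given by `edges`."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
    state = {v: 0 for v in nodes}  # 0 = unvisited, 1 = on stack, 2 = done
    def visit(v):
        if state[v] == 1:
            return True
        if state[v] == 2:
            return False
        state[v] = 1
        if any(visit(w) for w in adj[v]):
            return True
        state[v] = 2
        return False
    return any(visit(v) for v in nodes)

def hill_climb(data, nodes):
    """First-improvement greedy search over edge additions (no deletions or
    reversals, for brevity); stops at a local optimum of the BIC score."""
    def total_score(edge_set):
        parents = {v: [u for u, w in edge_set if w == v] for v in nodes}
        return sum(bic_family_score(data, v, parents[v]) for v in nodes)
    edges = set()
    current = total_score(edges)
    improved = True
    while improved:
        improved = False
        for u, v in itertools.permutations(nodes, 2):
            if (u, v) in edges or has_cycle(nodes, edges | {(u, v)}):
                continue
            candidate = total_score(edges | {(u, v)})
            if candidate > current:
                edges, current, improved = edges | {(u, v)}, candidate, True
                break
    return edges
```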

    New insights on neutral binary representations for evolutionary optimization

    This paper studies a family of redundant binary representations NNg(l, k), which are based on the mathematical formulation of error control codes, in particular on linear block codes, which are used to add redundancy and neutrality to the representations. An analysis of the uniformity, connectivity, synonymity, locality, and topology properties of the NNg(l, k) representations is presented, as well as the way a (1+1)-ES can be modeled using Markov chains and applied to NK fitness landscapes with adjacent neighborhoods. The results show that it is possible to design synonymously redundant representations that allow an increase of the connectivity between phenotypes. For easy problems, synonymously redundant NNg(l, k) representations with high locality, which do not need to present high values of connectivity, are the most suitable for an efficient evolutionary search. On the contrary, for difficult problems, NNg(l, k) representations with low locality, intermediate-to-high connectivity, and intermediate values of synonymity are the best ones. These results allow us to conclude that the NNg(l, k) representations with the best performance on NK fitness landscapes with adjacent neighborhoods do not exhibit extreme values of any of the properties commonly considered in the literature on evolutionary computation. This conclusion is contrary to what one would expect from the recommendations in the literature. It may help explain the current difficulty of formulating redundant representations that prove successful in evolutionary computation. (C) 2016 Elsevier B.V. All rights reserved.
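    As a point of reference for the setting studied above, the sketch below runs a (1+1)-style evolutionary search directly on bit strings over an NK fitness landscape with adjacent (wrap-around) neighborhoods. It does not implement the NNg(l, k) redundant encodings or the Markov-chain model from the paper; parameters and names are illustrative.

```python
# Minimal sketch of a (1+1)-style evolutionary search on an NK fitness
# landscape with adjacent (wrap-around) neighborhoods, operating directly on
# bit strings. Illustrative only; it does not implement the NNg(l, k)
# redundant representations or the Markov-chain model from the paper.
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """Random per-locus fitness tables: locus i contributes according to its
    own bit and its k adjacent neighbours (with wrap-around)."""
    rng = random.Random(seed)
    tables = [{cfg: rng.random() for cfg in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    def fitness(x):
        return sum(tables[i][tuple(x[(i + j) % n] for j in range(k + 1))]
                   for i in range(n)) / n
    return fitness

def one_plus_one(fitness, n, steps=10_000, seed=1):
    """(1+1) scheme: flip each bit with probability 1/n, keep the offspring
    if it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(steps):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

best, best_fitness = one_plus_one(make_nk_landscape(n=20, k=2), n=20)
```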

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure delivery of high quality software to the end user. To ensure high quality, the software must be tested; testing ensures that the software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as effective generation of test cases and prioritisation of test cases, which need to be tackled. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed for taking care of these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this research paper, we present a survey of GA approaches for addressing the various issues encountered during software testing. Comment: 13 Pages
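    As a toy illustration of GA-based test-data generation (not taken from the survey), the sketch below evolves an integer input that reaches a hard-to-hit branch of a hypothetical function under test, guided by a branch-distance-style fitness.

```python
# Hypothetical toy example of GA-based test-data generation: evolve an integer
# input that reaches a hard-to-hit branch of an (illustrative) function under
# test, guided by a branch-distance-style fitness (0 = branch covered).
import random

def branch_distance(x):
    """Distance from satisfying the target branch condition `x == 4242`."""
    return abs(x - 4242)

def genetic_search(pop_size=50, generations=200, domain=(-10_000, 10_000), seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(*domain) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:        # branch covered: stop early
            return pop[0]
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                # arithmetic crossover
            if rng.random() < 0.2:              # mutation: small random step
                child += rng.randint(-100, 100)
            children.append(child)
        pop = parents + children
    return min(pop, key=branch_distance)

print(genetic_search())  # typically converges on 4242, covering the branch
```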

    Inheritance-Based Diversity Measures for Explicit Convergence Control in Evolutionary Algorithms

    Diversity is an important factor in evolutionary algorithms for preventing premature convergence towards a single local optimum. Various means exist in the literature to maintain diversity throughout the process of evolution. We analyze approaches to diversity that (a) have an explicit and quantifiable influence on fitness at the individual level and (b) require no (or very little) additional domain knowledge, such as domain-specific distance functions. We also introduce the concept of genealogical diversity as part of a broader study. We show that employing these approaches can help evolutionary algorithms for global optimization in many cases. Comment: GECCO '18: Genetic and Evolutionary Computation Conference, 2018, Kyoto, Japan
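    The sketch below shows one hypothetical way a genealogy-based diversity measure could be computed without any domain-specific distance function: individuals that share fewer recent ancestors count as more diverse. This is our own illustration, not the measure defined in the paper.

```python
# Hypothetical illustration of a genealogy-based diversity measure (not the
# paper's definition): individuals sharing fewer recent ancestors count as
# more diverse, and no domain-specific distance function is needed.
import itertools

def ancestors(ind, depth=5):
    """Collect IDs of `ind`'s ancestors up to `depth` generations back.
    Each individual is a dict with an 'id' and a list of 'parents'."""
    seen, frontier = set(), [ind]
    for _ in range(depth):
        frontier = [p for x in frontier for p in x.get("parents", [])]
        seen.update(p["id"] for p in frontier)
    return seen

def genealogical_diversity(population, depth=5):
    """Mean pairwise Jaccard distance between ancestor sets."""
    pairs = list(itertools.combinations(population, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        anc_a, anc_b = ancestors(a, depth), ancestors(b, depth)
        union = anc_a | anc_b
        if union:
            total += 1.0 - len(anc_a & anc_b) / len(union)
    return total / len(pairs)
```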

    The Right Mutation Strength for Multi-Valued Decision Variables

    The most common representation in evolutionary computation is the bit string. This is ideal for modeling binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over $\Omega = \{0, 1, \dots, r-1\}^n$. More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector $z \in \Omega$. For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability $1/n$, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of $\Theta(nr \log n)$. If we change each selected position by either $+1$ or $-1$ (random choice), the optimization time reduces to $\Theta(nr + n \log n)$. If we use a random mutation strength $i \in \{0, 1, \ldots, r-1\}^n$ with probability inversely proportional to $i$ and change the selected position by either $+i$ or $-i$ (random choice), then the optimization time becomes $\Theta(n \log(r)(\log(n) + \log(r)))$, bringing down the dependence on $r$ from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem. Comment: an extended abstract of this work is to appear at GECCO 201
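    The three mutation variants compared above can be sketched as follows; this is an illustrative implementation of the general idea, not the authors' code, and the clamping at the domain boundary is our own assumption.

```python
# Illustrative implementation of the three mutation variants compared above,
# applied to vectors over {0, ..., r-1}. Not the authors' code; clamping at
# the domain boundary is our own assumption.
import random

def mutate(x, r, variant="harmonic", rng=random):
    """Select each position independently with probability 1/n, then change it
    according to the chosen variant."""
    n = len(x)
    y = list(x)
    for i in range(n):
        if rng.random() >= 1.0 / n:
            continue
        if variant == "uniform":
            # resample a different value: expected run time Theta(nr log n)
            y[i] = rng.choice([v for v in range(r) if v != x[i]])
        elif variant == "unit":
            # step by +1 or -1: Theta(nr + n log n)
            y[i] = min(r - 1, max(0, x[i] + rng.choice((-1, 1))))
        else:
            # mutation strength s drawn with probability proportional to 1/s,
            # then step by +s or -s: Theta(n log(r)(log n + log r))
            weights = [1.0 / s for s in range(1, r)]
            s = rng.choices(range(1, r), weights=weights)[0]
            y[i] = min(r - 1, max(0, x[i] + rng.choice((-1, 1)) * s))
    return y
```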

    Structural matching by discrete relaxation

    This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. First, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
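    For orientation, the sketch below shows a generic discrete relaxation labelling loop for relational graph matching, including a null label for clutter nodes. The local score is a simple edge-agreement count, not the Bayesian consistency measure developed in the paper.

```python
# Illustrative sketch of a discrete relaxation labelling loop for relational
# graph matching; the local score below is a simple edge-agreement count, not
# the Bayesian consistency measure developed in the paper.
import random

NULL = "null"   # explicit null label so clutter nodes can match nothing

def local_consistency(node, label, assignment, data_edges, model_edges):
    """Count the data-graph neighbours of `node` whose current labels form a
    model-graph edge with `label`."""
    if label == NULL:
        return 0
    score = 0
    for u, v in data_edges:
        if node not in (u, v):
            continue
        other = v if u == node else u
        m = assignment.get(other, NULL)
        if m != NULL and ((label, m) in model_edges or (m, label) in model_edges):
            score += 1
    return score

def discrete_relaxation(data_nodes, data_edges, model_nodes, model_edges,
                        iters=20, seed=0):
    """Iteratively re-label each data node with the model label (or NULL) of
    highest local consistency until the assignment stops changing."""
    rng = random.Random(seed)
    labels = list(model_nodes) + [NULL]
    assignment = {n: rng.choice(list(model_nodes)) for n in data_nodes}
    for _ in range(iters):
        changed = False
        for n in data_nodes:
            best = max(labels, key=lambda lab: local_consistency(
                n, lab, assignment, data_edges, model_edges))
            if best != assignment[n]:
                assignment[n], changed = best, True
        if not changed:
            break
    return assignment
```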