Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in the area continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time ("efficient") algorithms, but most are NP-hard, i.e. no polynomial-time algorithm is known for them. In practice, this means that an exact solution usually cannot be guaranteed within reasonable time, and one has to settle for an approximate solution, ideally with known performance guarantees. Indeed, the goal of approximate methods is to find "good" solutions (low error relative to the true optimum) "quickly" (in reasonable run-times) and with "high" probability. Over the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics within high-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces that mainly determine the behavior of a metaheuristic, intensification and diversification, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
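As a concrete illustration of those two forces, here is a minimal sketch of simulated annealing, one classic metaheuristic: a high temperature favors diversification by accepting worsening moves, and cooling gradually shifts the search toward intensification around good solutions. The cost function and neighborhood below are purely illustrative toy choices, not taken from the report.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: accept worsening moves with probability
    exp(-delta/T). High T = diversification; as T cools, the search
    intensifies around the best region found."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Toy instance: minimize a slightly "rugged" integer function.
cost = lambda x: (x - 7) ** 2 + 3 * (x % 3)
neighbor = lambda x, rng: x + rng.choice([-2, -1, 1, 2])
print(simulated_annealing(cost, neighbor, x0=50))
```

The cooling schedule is the knob that trades one force against the other: slower cooling means longer diversification before the search commits.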
Falcon Optimization Algorithm for Bayesian Networks Structure Learning
In machine learning, one of the useful scientific models for representing the structure of knowledge is the Bayesian network, which captures probabilistic dependency relationships between variables. Search and score is a standard method for learning the structure of a Bayesian network. The authors apply the Falcon Optimization Algorithm (FOA), which is based on the prey-search strategy of falcons, as a new approach to learning the structure of Bayesian networks, adapting it with Reversing, Deleting, Moving and Inserting operations to approach the optimal network structure. The proposed technique is compared with Pigeon Inspired Optimization, Greedy Search, and Simulated Annealing using the BDeu score function. The authors also examine the confusion-matrix performance of these techniques on several benchmark data sets. The evaluations show that the proposed method performs more reliably than the other algorithms, producing better scores and accuracy values.
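The Reversing, Deleting and Inserting operations named in the abstract can be sketched as neighborhood moves on a DAG's arc set, with any proposal that creates a cycle rejected. This is a hedged illustration only: the paper's actual FOA update rule and its Moving operator are not reproduced here.

```python
import random

def is_acyclic(n, edges):
    """Check a directed graph on nodes 0..n-1 (set of (u, v) arcs) for
    acyclicity using Kahn's algorithm (repeatedly remove indegree-0 nodes)."""
    indeg = [0] * n
    for _, v in edges:
        indeg[v] += 1
    queue = [u for u in range(n) if indeg[u] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n  # all nodes removed iff no cycle

def neighbor(n, edges, rng):
    """Propose one structural move of the kind the abstract names:
    reverse or delete an existing arc, or insert a new one.
    Returns the new arc set, or None if the move would create a cycle."""
    edges = set(edges)
    op = rng.choice(["reverse", "delete", "insert"])
    if op in ("reverse", "delete") and edges:
        e = rng.choice(sorted(edges))
        edges.discard(e)
        if op == "reverse":
            edges.add((e[1], e[0]))
    elif op == "insert":
        u, v = rng.sample(range(n), 2)
        edges.add((u, v))
    return edges if is_acyclic(n, edges) else None
```

A search-and-score method would call `neighbor` repeatedly, keeping proposals that improve the chosen scoring function (BDeu in the paper).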
Learning the structure of Bayesian Networks: A quantitative assessment of the effect of different algorithmic schemes
One of the most challenging tasks when adopting Bayesian Networks (BNs) is learning their structure from data. This task is complicated by the huge search space of possible solutions and by the fact that the problem is NP-hard. Hence, full enumeration of all possible solutions is not always feasible and approximations are often required. However, to the best of our knowledge, a quantitative analysis of the performance and characteristics of the different heuristics for this problem has never been carried out. For this reason, in this work we provide a detailed comparison of many state-of-the-art methods for structural learning on simulated data, considering BNs with both discrete and continuous variables and with different rates of noise in the data. In particular, we investigate the performance of different widespread scores and algorithmic approaches proposed for the inference, and the statistical pitfalls within them.
Problem dependent metaheuristic performance in Bayesian network structure learning.
Bayesian network (BN) structure learning from data has been an active research area in machine learning in recent decades. Much of the research has treated BN structure learning as an optimization problem. However, finding the optimal BN from data is NP-hard, which has driven the use of heuristic algorithms for this kind of problem. A major recent focus in BN structure learning is on search and score algorithms, in which a scoring function is introduced and a heuristic search algorithm evaluates each candidate network with respect to the training data; the network with the best score is returned. This thesis investigates a range of search and score algorithms to understand the relationship between technique performance and the structural features of the problems. The main contributions of this thesis are: (a) two novel Ant Colony Optimization based search and score algorithms for BN structure learning; (b) node juxtaposition distribution for studying the relationship between the best node ordering and the optimal BN structure; (c) fitness landscape analysis for investigating the different performances of the chain score function and the CH score function; (d) a classifier method constructed by combining receiver operating characteristic curves with the fitness landscape analysis results; and (e) a selective off-line hyperheuristic algorithm for unseen BN structure learning with search and score algorithms. We also construct a new algorithm for producing BN benchmark structures and apply our novel approaches to a range of benchmark problems and a real-world problem.
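The search and score idea common to these abstracts can be sketched as greedy hill climbing over candidate structures. The scoring function below is a toy stand-in that rewards arcs of a hypothetical "true" structure rather than a real data-driven score such as BDeu or CH, and acyclicity checks are omitted for brevity.

```python
TRUE_ARCS = {(0, 1), (0, 2)}  # hypothetical ground-truth structure, for illustration only

def score(arcs):
    """Toy stand-in for a scoring function: reward true arcs, penalize spurious ones."""
    return sum(1 if a in TRUE_ARCS else -1 for a in arcs)

def neighbors(arcs):
    """All structures reachable by toggling one arc among 3 nodes."""
    for u in range(3):
        for v in range(3):
            if u != v:
                yield arcs ^ {(u, v)}  # symmetric difference toggles the arc

def hill_climb(score, neighbors, start):
    """Search-and-score skeleton: move to the best-scoring neighbouring
    network until no neighbour improves on the current score."""
    current = start
    while True:
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current
        current = best

print(sorted(hill_climb(score, neighbors, frozenset())))  # → [(0, 1), (0, 2)]
```

Real search and score methods follow the same loop but plug in a decomposable data-driven score and larger move sets; the metaheuristics surveyed above exist precisely because this greedy loop can stall in local optima.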
On the use of local search heuristics to improve GES-based Bayesian network learning
Bayesian network learning is computationally expensive even when the optimality of the result is sacrificed. Many methods aim at obtaining quality solutions in affordable times. Most of them are based on local search algorithms, as these allow candidate networks to be evaluated very efficiently, and they can be further improved by using local search-based metaheuristics to avoid getting stuck in local optima. This approach has been successfully applied when searching for network structures in the space of directed acyclic graphs.
Other algorithms search for networks in the space of equivalence classes. The most important of these is GES (Greedy Equivalence Search), which guarantees obtaining the optimal network under certain conditions. However, it can also get stuck in local optima when learning from datasets of limited size. This article proposes the use of local search-based metaheuristics as a way to improve the behaviour of GES in such circumstances. These methods also guarantee asymptotic optimality, and the experiments show that they improve upon the score of the networks obtained with GES.