A review on Estimation of Distribution Algorithms in Permutation-based Combinatorial Optimization Problems
Estimation of Distribution Algorithms (EDAs) are a set of algorithms
that belong to the field of Evolutionary Computation. Characterized by the use of
probabilistic models to represent the solutions and the dependencies between the
variables of the problem, these algorithms have been applied to a wide set of academic
and real-world optimization problems, achieving competitive results in most
scenarios. Nevertheless, there are some optimization problems whose solutions can
be naturally represented as permutations, for which EDAs have not been extensively
developed. Although some work has been carried out in this direction, most
of the approaches are adaptations of EDAs designed for problems based on integer
or real domains, and only a few algorithms have been specifically designed to
deal with permutation-based problems. In order to set the basis for a development
of EDAs in permutation-based problems similar to that which occurred in other
optimization fields (integer and real-valued problems), in this paper we carry out a
thorough review of state-of-the-art EDAs applied to permutation-based problems.
Furthermore, we provide some ideas on probabilistic modeling over permutation
spaces that could inspire EDA researchers to design new approaches for
these kinds of problems.
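As context for the survey, the basic EDA loop alternates sampling solutions from a probabilistic model with re-estimating that model from the selected individuals. The sketch below is a minimal univariate EDA (UMDA-style) on the OneMax bitstring problem, purely illustrative since the survey concerns permutation spaces; all names and parameters are my own.

```python
import random

def umda_onemax(n=20, pop=50, top=10, iters=60, seed=0):
    """Minimal univariate EDA (UMDA-style) maximising OneMax.

    Illustrative sketch only; EDAs for permutation problems need
    probabilistic models over permutation spaces instead.
    """
    rng = random.Random(seed)
    p = [0.5] * n  # marginal probability of a 1 at each position
    best = 0
    for _ in range(iters):
        # Sample a population from the current univariate model
        popn = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popn.sort(key=sum, reverse=True)
        best = max(best, sum(popn[0]))
        elite = popn[:top]
        # Re-estimate the marginals from the selected individuals
        p = [sum(ind[i] for ind in elite) / top for i in range(n)]
        # Clamp to avoid premature convergence of the model
        p = [min(0.95, max(0.05, pi)) for pi in p]
    return best
```

The model here is fully factorised (no dependencies between variables); the survey's point is that richer models over permutations are needed for permutation-based problems.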
A random key based estimation of distribution algorithm for the permutation flowshop scheduling problem.
Random Keys (RKs) are an alternative representation for permutation problems that enables the application of techniques generally used for continuous optimisation. Although the benefit of RKs for permutation optimisation has been shown, their use within Estimation of Distribution Algorithms (EDAs) has been a challenge. Recent research proposing an RK-based EDA (RK-EDA) has shown that RKs can produce results competitive with state-of-the-art algorithms. Following promising results on the Permutation Flowshop Scheduling Problem, this paper presents an analysis of RK-EDA for optimising the total flow time. Experiments show that RK-EDA outperforms other permutation-based EDAs on instances of large dimensions. The difference in performance between RK-EDA and the state-of-the-art algorithms also decreases as the problem difficulty increases.
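The random-key idea itself is simple: a real-valued vector decodes to a permutation by ranking its entries, which lets continuous models generate permutations. A minimal sketch (the helper name is my own, not from the paper):

```python
def decode_random_keys(keys):
    """Decode a real-valued random-key vector into a permutation.

    Sorting the indices by key value yields the permutation, so any
    continuous sampler over [0, 1]^n induces a distribution over
    permutations. Illustrative helper, not code from the paper.
    """
    return sorted(range(len(keys)), key=lambda i: keys[i])

# e.g. a key vector sampled from a continuous model
print(decode_random_keys([0.42, 0.07, 0.93, 0.55]))  # [1, 0, 3, 2]
```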
Kernels of Mallows Models under the Hamming Distance for solving the Quadratic Assignment Problem
The Quadratic Assignment Problem (QAP) is a well-known permutation-based combinatorial optimization problem with real applications in industrial and logistics environments. Motivated by the challenge that this NP-hard problem represents, it has captured the attention of the optimization community for decades. As a result, a large number of algorithms have been proposed to tackle this problem. Among these, exact methods are only able to solve instances of limited size. To overcome this limitation, many metaheuristic methods have been applied to the QAP.
In this work, we follow this direction by approaching the QAP through Estimation of Distribution Algorithms (EDAs). Particularly, a non-parametric distance-based exponential probabilistic model is used. Based on an analysis of the characteristics of the QAP and on previous work in the area, we introduce Kernels of Mallows Models under the Hamming distance to the context of EDAs. The conducted experiments show that the performance of the proposed algorithm on the QAP is superior to (i) classical EDAs adapted to deal with the QAP and (ii) the EDAs specifically proposed in the literature to deal with permutation problems.
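To make the base model concrete: a Mallows model under the Hamming distance assigns each permutation a probability that decays exponentially with its Hamming distance from a central permutation. The brute-force normalisation below is only feasible for tiny n and sketches the base distribution, not the kernel-based EDA itself; names and parameters are illustrative.

```python
from math import exp
from itertools import permutations

def hamming(sigma, pi):
    """Hamming distance between permutations: number of mismatched positions."""
    return sum(s != p for s, p in zip(sigma, pi))

def mallows_pmf(sigma, center, theta, n):
    """P(sigma) proportional to exp(-theta * d_H(sigma, center)).

    Brute-force normalisation over all n! permutations; a sketch of
    the exponential distance-based model, feasible only for tiny n.
    """
    Z = sum(exp(-theta * hamming(p, center)) for p in permutations(range(n)))
    return exp(-theta * hamming(sigma, center)) / Z
```

A kernel variant places one such exponential component on each of several selected permutations instead of a single centre.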
A review on probabilistic graphical models in evolutionary computation
Thanks to their inherent properties, probabilistic graphical models are among the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities, such as representation, inference and learning, if used effectively, can greatly help to build intelligent systems that are able to act accordingly in different problem domains. Evolutionary computation is one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions in complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance in solving complex problems. Specifically, we give a survey of probabilistic model building-based evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling in these algorithms.
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the studies on
the Hyperlink graph use distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 201
RK-EDA: a novel random key based estimation of distribution algorithm.
The challenges of solving problems naturally represented as permutations with Estimation of Distribution Algorithms (EDAs) have been a recent focus of interest in the evolutionary computation community. One of the most common alternative representations for permutation-based problems is the Random Key (RK), which enables the use of continuous approaches in this problem domain. However, the use of RKs in EDAs has not produced competitive results to date, and more recent research on permutation-based EDAs has focused on creating superior algorithms with specially adapted representations. In this paper, we present RK-EDA, a novel RK-based EDA that uses a cooling scheme to balance the exploration and exploitation of a search space by controlling the variance in its probabilistic model. Unlike RK-based EDAs in general, RK-EDA is actually competitive with the best EDAs on common permutation test problems: the Flow Shop Scheduling, Linear Ordering, Quadratic Assignment, and Travelling Salesman Problems.
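The cooling idea described above can be sketched as annealing the variance of the per-position key distributions over the run. The schedule and helper names below are hypothetical illustrations of that principle, not the paper's exact scheme.

```python
import random

def cooled_sigma(sigma0, t, t_max):
    # Hypothetical linear cooling schedule: the sampling variance
    # shrinks as the run progresses, moving the search from
    # exploration towards exploitation.
    return sigma0 * (1.0 - t / t_max)

def sample_keys(mean, sigma, rng):
    # Draw one random-key vector from independent Gaussians
    # centred on the current per-position means.
    return [rng.gauss(m, sigma) for m in mean]

rng = random.Random(1)
mean = [0.5] * 4
keys = sample_keys(mean, cooled_sigma(0.2, 10, 100), rng)
```

Each sampled key vector is then decoded into a permutation by ranking its entries, as is standard for random-key representations.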
Block Crossings in Storyline Visualizations
Storyline visualizations help visualize encounters of the characters in a
story over time. Each character is represented by an x-monotone curve that goes
from left to right. A meeting is represented by having the characters that
participate in the meeting run close together for some time. In order to keep
the visual complexity low, rather than just minimizing pairwise crossings of
curves, we propose to count block crossings, that is, pairs of intersecting
bundles of lines.
Our main results are as follows. We show that minimizing the number of block
crossings is NP-hard, and we develop, for meetings of bounded size, a
constant-factor approximation. We also present two fixed-parameter algorithms
and, for meetings of size 2, a greedy heuristic that we evaluate
experimentally. Comment: Appears in the Proceedings of the 24th International Symposium on
Graph Drawing and Network Visualization (GD 2016).
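The baseline objective that block crossings generalise, counting pairwise crossings of curves between two consecutive vertical orderings of the characters, reduces to an inversion count. A minimal sketch (helper name is my own, not from the paper):

```python
def pairwise_crossings(order_left, order_right):
    """Count pairwise curve crossings between two vertical orderings
    of the same characters: two curves cross iff their relative order
    flips, so this is an inversion count. Illustrative of the baseline
    objective that block crossings generalise."""
    pos = {c: i for i, c in enumerate(order_right)}
    seq = [pos[c] for c in order_left]
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

# character c moves from bottom to top, crossing both a and b
print(pairwise_crossings(list("abc"), list("cab")))  # 2
```

A block crossing instead counts one unit when two whole bundles of adjacent curves swap, which is why minimising it is a different (and NP-hard) problem.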
Image Reconstruction from Bag-of-Visual-Words
The objective of this work is to reconstruct an original image from
Bag-of-Visual-Words (BoVW). Image reconstruction from features can be a means
of identifying the characteristics of features. Additionally, it enables us to
generate novel images via features. Although BoVW is the de facto standard
feature for image recognition and retrieval, successful image reconstruction
from BoVW has not been reported yet. What complicates this task is that BoVW
lacks the spatial information of the visual words it contains. As described in this
paper, to estimate an original arrangement, we propose an evaluation function
that incorporates the naturalness of local adjacency and the global position,
with a method to obtain related parameters using an external image database. To
evaluate the performance of our method, we reconstruct images of objects of 101
kinds. Additionally, we apply our method to analyze object classifiers and to
generate novel images via BoVW.
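As background for why reconstruction is hard: a BoVW descriptor is just a histogram of quantised local features, so the spatial arrangement of the words is discarded at encoding time and must be re-estimated. A minimal sketch of the encoding step, assuming a precomputed codebook; names are illustrative, not from the paper.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantise local descriptors against a visual-word codebook and
    return the normalised word-count histogram. The spatial layout of
    the words is lost here -- exactly the information that image
    reconstruction from BoVW must recover. Illustrative sketch."""
    # Squared Euclidean distance from every descriptor to every word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```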
False discovery rate analysis of brain diffusion direction maps
Diffusion tensor imaging (DTI) is a novel modality of magnetic resonance
imaging that allows noninvasive mapping of the brain's white matter. A
particular map derived from DTI measurements is a map of water principal
diffusion directions, which are proxies for neural fiber directions. We
consider a study in which diffusion direction maps were acquired for two groups
of subjects. The objective of the analysis is to find regions of the brain in
which the corresponding diffusion directions differ between the groups. This is
attained by first computing a test statistic for the difference in direction at
every brain location using a Watson model for directional data. Interesting
locations are subsequently selected with control of the false discovery rate.
More accurate modeling of the null distribution is obtained using an empirical
null density based on the empirical distribution of the test statistics across
the brain. Further, substantial improvements in power are achieved by local
spatial averaging of the test statistic map. Although the focus is on one
particular study and imaging technology, the proposed inference methods can be
applied to other large scale simultaneous hypothesis testing problems with a
continuous underlying spatial structure. Comment: Published in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/07-AOAS133.
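The false discovery rate control mentioned above is commonly the Benjamini-Hochberg step-up procedure; below is a minimal baseline sketch of that standard procedure only, without the paper's empirical-null or spatial-averaging refinements.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of rejected
    hypotheses at FDR level q. Reject the k smallest p-values, where k
    is the largest index with p_(k) <= q * k / m. Baseline sketch."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```

An empirical-null refinement would replace the theoretical null behind the p-values with one estimated from the bulk of the test statistics before applying such a selection rule.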