121 research outputs found
Self-adjusting offspring population sizes outperform fixed parameters on the cliff function
In the discrete domain, the self-adjustment of parameters of evolutionary algorithms (EAs) has emerged as a fruitful research area, with many runtime analyses showing that self-adjusting parameters can outperform the best fixed parameters. Most existing runtime analyses focus on elitist EAs on simple problems, for which only moderate performance gains were shown. Here we consider a much more challenging scenario: the multimodal function Cliff, defined as an example where a (1, λ) EA is effective, and for which the best known upper runtime bound for standard EAs is O(n^25). We prove that a (1, λ) EA self-adjusting the offspring population size λ using success-based rules optimises Cliff in O(n) expected generations and O(n log n) expected evaluations. Along the way, we prove tight upper and lower bounds on the runtime for fixed λ (up to a logarithmic factor) and identify the runtime for the best fixed λ as n^η for η ≈ 3.9767 (up to sub-polynomial factors). Hence, the self-adjusting (1, λ) EA outperforms the best fixed parameter by a factor of at least n^2.9767 (up to sub-polynomial factors).
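As a concrete illustration of the objects in this abstract (not the paper's exact experimental setup), the following sketch defines the Cliff function and a non-elitist (1, λ) EA with a fixed offspring population size; the cliff depth d, the instance size, and all constants are illustrative choices.

```python
import random

def cliff(x, d):
    """Cliff_d: equals the number of ones until n - d ones, then the
    fitness drops by d - 1/2 (a standard textbook definition)."""
    ones = sum(x)
    n = len(x)
    return ones if ones <= n - d else ones - d + 0.5

def one_comma_lambda_ea(n, lam, d, max_gens=10_000, rng=None):
    """Non-elitist (1, lambda) EA with standard bit mutation (rate 1/n).
    The parent is always replaced by the best of the lambda offspring,
    so the algorithm can jump down the cliff and climb the far slope."""
    rng = rng or random.Random(0)
    x = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        offspring = [[bit ^ (rng.random() < 1.0 / n) for bit in x]
                     for _ in range(lam)]
        x = max(offspring, key=lambda y: cliff(y, d))  # parent is discarded
        if sum(x) == n:
            return gen + 1  # generations until the all-ones optimum
    return None
```

On a tiny instance (say n = 10, d = 3, λ = 8) the local optimum sits at n − d ones; non-elitist selection lets the search occasionally accept a downhill jump past the cliff and reach the global optimum.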
Leveraging Wikidata's edit history in knowledge graph refinement tasks
Knowledge graphs have been adopted in many diverse fields for a variety of
purposes. Most of those applications rely on valid and complete data to deliver
their results, pressing the need to improve the quality of knowledge graphs. A
number of solutions have been proposed to that end, ranging from rule-based
approaches to the use of probabilistic methods, but there is an element that
has not been considered yet: the edit history of the graph. In the case of
collaborative knowledge graphs (e.g., Wikidata), those edits represent the
process in which the community reaches some kind of fuzzy and distributed
consensus over the information that best represents each entity, and can hold
potentially interesting information to be used by knowledge graph refinement
methods. In this paper, we explore the use of edit history information from
Wikidata to improve the performance of type prediction methods. To do that, we
have first built a JSON dataset containing the edit history of every instance
from the 100 most important classes in Wikidata. This edit history information
is then explored and analyzed, with a focus on its potential applicability in
knowledge graph refinement tasks. Finally, we propose and evaluate two new
methods to leverage this edit history information in knowledge graph embedding
models for type prediction tasks. Our results show an improvement in one of the
proposed methods against current approaches, showing the potential of using
edit information in knowledge graph refinement tasks and opening new promising
research lines within the field.
Comment: 18 pages, 7 figures. Submitted to the Journal of Web Semantics.
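To make the idea of exploiting edit history more tangible, here is a hypothetical sketch of turning an entity's revision log into simple features for a type-prediction model. The field name "property" and the feature choice (relative edit frequency per property) are illustrative assumptions, not the paper's actual schema or method.

```python
from collections import Counter

def edit_history_features(history):
    """history: list of revision dicts, each assumed to record which
    property a community edit touched (illustrative schema).
    Returns the relative edit frequency per property; heavily edited
    properties may signal contested or still-converging statements."""
    touched = Counter(rev["property"] for rev in history)
    total = sum(touched.values()) or 1
    return {prop: count / total for prop, count in touched.items()}
```

Features like these could then be concatenated with knowledge-graph-embedding scores when classifying an entity's type, which is one simple way edit dynamics can enter a refinement pipeline.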
Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter
Evolutionary algorithms (EAs) are general-purpose optimisers that come with
several parameters like the sizes of parent and offspring populations or the
mutation rate. It is well known that the performance of EAs may depend
drastically on these parameters. Recent theoretical studies have shown that
self-adjusting parameter control mechanisms that tune parameters during the
algorithm run can provably outperform the best static parameters in EAs on
discrete problems. However, the majority of these studies concerned elitist EAs
and we do not have a clear answer on whether the same mechanisms can be applied
for non-elitist EAs.
We study one of the best-known parameter control mechanisms, the one-fifth
success rule, to control the offspring population size λ in the
non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp
threshold with respect to the choice of λ where the expected runtime on
the benchmark function OneMax changes from polynomial to exponential time.
Hence, it is not clear whether parameter control mechanisms are able to find
and maintain suitable values of λ.
For OneMax we show that the answer crucially depends on the success rate s
(i.e. a one-(s+1)-th success rule). We prove that, if the success rate is
appropriately small, the self-adjusting (1, λ) EA optimises OneMax in
O(n) expected generations and O(n log n) expected evaluations, the best
possible runtime for any unary unbiased black-box algorithm. A small success
rate is crucial: we also show that if the success rate is too large, the
algorithm has an exponential runtime on OneMax and other functions with similar
characteristics.
Comment: This is an extended version of a paper that appeared in the
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO
2021).
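The success-based rule described above can be sketched as follows. This is a minimal illustration, assuming an update factor F and the standard scheme in which a successful generation (a strictly improving offspring) shrinks λ by F and a failure grows it by F^(1/s); the concrete values of s, F, and the instance size are illustrative.

```python
import random

def onemax(x):
    return sum(x)

def self_adjusting_one_comma_lambda(n, s=0.5, F=1.5, max_gens=100_000, rng=None):
    """Sketch of a self-adjusting non-elitist (1, lambda) EA using a
    one-(s+1)-th success rule: divide lambda by F on success, multiply
    it by F**(1/s) on failure. A small success rate s keeps lambda
    large enough to avoid losing fitness near the optimum."""
    rng = rng or random.Random(1)
    x = [rng.randint(0, 1) for _ in range(n)]
    lam, evals = 1.0, 0
    for gen in range(max_gens):
        k = max(1, round(lam))
        offspring = [[bit ^ (rng.random() < 1.0 / n) for bit in x]
                     for _ in range(k)]
        evals += k
        best = max(offspring, key=onemax)
        if onemax(best) > onemax(x):
            lam = max(1.0, lam / F)      # success: fewer offspring next time
        else:
            lam = lam * F ** (1.0 / s)   # failure: more offspring next time
        x = best                          # non-elitist: parent always replaced
        if onemax(x) == n:
            return gen + 1, evals
    return None
```

Running this on a small OneMax instance shows the intended behaviour: λ stays small on the easy early slope and grows near the optimum, where improvements become rare.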
On the Impossibility of Batch Update for Cryptographic Accumulators
A cryptographic accumulator is a scheme where a set of elements is represented by a single short value. This value, along with another value called a witness, allows one to prove membership in the set. In their survey on accumulators [FN02], Fazio and Nicolosi noted that Camenisch and Lysyanskaya's construction [CL02] was such that the time to update a witness after m changes to the accumulated value was proportional to m. They posed the question of whether batch update was possible, namely whether it was possible to build a cryptographic accumulator where the time to update witnesses is independent of the number of changes in the accumulated set.
Recently, Wang et al. answered positively by giving a construction for an accumulator with batch update in [WWP07, WWP08]. In this work we show that their construction is not secure by exhibiting an attack. Moreover, we prove it cannot be fixed: if the accumulated value has been updated m times, then the time to update a witness must be at least Ω(m) in the worst case.
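The linear update cost under discussion is easy to see in an RSA-style accumulator, sketched below with deliberately tiny, insecure demo parameters (a real scheme needs a large RSA modulus and elements mapped to primes). After m additions, a witness holder must perform one exponentiation per added element, i.e. work proportional to m.

```python
# Toy RSA-style accumulator; N and g are tiny demo values, not secure.
N = 3233  # 61 * 53
g = 2

def accumulate(elements):
    """Accumulated value: g raised to the product of all elements, mod N."""
    acc = g
    for e in elements:
        acc = pow(acc, e, N)
    return acc

def witness(elements, x):
    """Witness for x: accumulate every element except x, so that
    pow(witness, x, N) equals the accumulated value."""
    return accumulate([e for e in elements if e != x])

def update_witness(wit, added):
    """After m new elements are added, the witness holder raises the
    witness to each new element in turn: m exponentiations of work."""
    for e in added:
        wit = pow(wit, e, N)
    return wit
```

Membership is checked by verifying pow(wit, x, N) == acc, and the check still holds after both the accumulator and the witness are updated with the same additions.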
Runtime Analysis of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms on Multimodal Problems
Evolutionary algorithms are simple general-purpose optimisers often used to solve complex engineering and design problems. They mimic the process of natural evolution: they use a population of possible solutions to a problem that evolves by mutating and recombining solutions, identifying increasingly better solutions over time. Evolutionary algorithms have been applied to a broad range of problems in various disciplines with remarkable success. However, the reasons behind their success are often elusive: their performance often depends crucially, and unpredictably, on their parameter settings. It is, furthermore, well known that there are no globally good parameters, that is, the correct parameters for one problem may differ substantially from the parameters needed for another, making it harder to transfer previously successful parameter settings to new problems. Therefore, understanding how to properly select the parameters is an important but challenging task. This is commonly known as the parameter selection problem.
A promising solution to this problem is the use of automated dynamic parameter selection schemes (parameter control) that allow evolutionary algorithms to identify and continuously track optimal parameters throughout the course of evolution without human intervention. In recent years the study of parameter control mechanisms in evolutionary algorithms has emerged as a very fruitful research area. However, most existing runtime analyses focus on simple problems with benign characteristics, for which fixed parameter settings already run efficiently and only moderate performance gains were shown. The aim of this thesis is to
understand how parameter control mechanisms can be used on more complex and challenging problems with many local optima (multimodal problems) to speed up optimisation.
We use advanced methods from the analysis of algorithms and probability theory to evaluate the performance of evolutionary algorithms, estimating the expected time until an algorithm finds satisfactory solutions for illustrative and relevant optimisation problems as a vital stepping stone towards designing more efficient evolutionary algorithms. We first analyse current parameter control mechanisms on multimodal problems to understand their strengths and weaknesses. Subsequently we use this knowledge to design parameter control mechanisms that mitigate the weaknesses of current mechanisms while maintaining their strengths. Finally, we show with theoretical and empirical analyses that these enhanced parameter control mechanisms are able to outperform the best fixed parameter settings on multimodal optimisation problems.