115 research outputs found

    Leveraging Wikidata's edit history in knowledge graph refinement tasks

    Full text link
    Knowledge graphs have been adopted in many diverse fields for a variety of purposes. Most of those applications rely on valid and complete data to deliver their results, pressing the need to improve the quality of knowledge graphs. A number of solutions have been proposed to that end, ranging from rule-based approaches to the use of probabilistic methods, but there is an element that has not been considered yet: the edit history of the graph. In the case of collaborative knowledge graphs (e.g., Wikidata), those edits represent the process in which the community reaches some kind of fuzzy and distributed consensus over the information that best represents each entity, and can hold potentially interesting information to be used by knowledge graph refinement methods. In this paper, we explore the use of edit history information from Wikidata to improve the performance of type prediction methods. To do that, we first built a JSON dataset containing the edit history of every instance from the 100 most important classes in Wikidata. This edit history information is then explored and analyzed, with a focus on its potential applicability in knowledge graph refinement tasks. Finally, we propose and evaluate two new methods to leverage this edit history information in knowledge graph embedding models for type prediction tasks. Our results show an improvement in one of the proposed methods against current approaches, demonstrating the potential of using edit information in knowledge graph refinement tasks and opening new promising research lines within the field.
    Comment: 18 pages, 7 figures. Submitted to the Journal of Web Semantics.
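
    As an illustration of how such edit-history data can be collected, here is a minimal Python sketch that pulls revision metadata for one Wikidata item via the public MediaWiki API. The item Q42 and the requested revision fields are illustrative choices, not the authors' actual pipeline.

        import requests

        API = "https://www.wikidata.org/w/api.php"

        def fetch_revisions(item_id, limit=50):
            """Return basic revision metadata (ids, timestamp, user, comment) for one item."""
            params = {
                "action": "query",
                "prop": "revisions",
                "titles": item_id,  # e.g. "Q42" (Douglas Adams)
                "rvprop": "ids|timestamp|user|comment",
                "rvlimit": limit,
                "format": "json",
            }
            data = requests.get(API, params=params, timeout=30).json()
            # The API returns pages keyed by page id; take the single page requested.
            page = next(iter(data["query"]["pages"].values()))
            return page.get("revisions", [])

        for rev in fetch_revisions("Q42"):
            print(rev["timestamp"], rev.get("user", "?"), rev.get("comment", ""))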

    Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter

    Full text link
    Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate. It is well known that the performance of EAs may depend drastically on these parameters. Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best static parameters in EAs on discrete problems. However, the majority of these studies concerned elitist EAs and we do not have a clear answer on whether the same mechanisms can be applied for non-elitist EAs. We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1,λ) EA. It is known that the (1,λ) EA has a sharp threshold with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ. For OneMax we show that the answer crucially depends on the success rate s (i.e., a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1,λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations, the best possible runtime for any unary unbiased black-box algorithm. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax and other functions with similar characteristics.
    Comment: This is an extended version of a paper that appeared in the Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2021).
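
    A minimal Python sketch of the mechanism described above: a self-adjusting (1,λ) EA on OneMax with a one-(s+1)-th success rule. The update factor F and the value of s below are illustrative assumptions, not the paper's exact constants.

        import random

        def onemax(x):
            return sum(x)

        def self_adjusting_one_comma_lambda_ea(n=100, s=0.5, F=1.5, max_gens=10**6):
            """(1,λ) EA with success-based control of λ; returns generations to optimum."""
            parent = [random.randint(0, 1) for _ in range(n)]
            lam = 1.0
            for gen in range(1, max_gens + 1):
                # Create round(λ) offspring by standard bit mutation with rate 1/n.
                offspring = [
                    [1 - b if random.random() < 1 / n else b for b in parent]
                    for _ in range(max(1, round(lam)))
                ]
                best = max(offspring, key=onemax)
                if onemax(best) > onemax(parent):
                    lam = max(1.0, lam / F)                  # success: shrink λ
                else:
                    lam = min(float(n), lam * F ** (1 / s))  # failure: grow λ
                parent = best  # comma selection: best offspring always replaces parent
                if onemax(parent) == n:
                    return gen
            return None

    With an appropriately small s this tracks suitable values of λ on OneMax; making s much larger lets λ shrink too eagerly and the run stall, mirroring the threshold behaviour proved in the paper.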

    On the Impossibility of Batch Update for Cryptographic Accumulators

    Get PDF
    A cryptographic accumulator is a scheme where a set of elements is represented by a single short value. This value, along with another value called a witness, allows one to prove membership in the set. In their survey on accumulators [FN02], Fazio and Nicolosi noted that Camenisch and Lysyanskaya's construction [CL02] was such that the time to update a witness after m changes to the accumulated value was proportional to m. They posed the question of whether batch update was possible, namely whether one can build a cryptographic accumulator where the time to update witnesses is independent of the number of changes in the accumulated set. Recently, Wang et al. answered this question positively by giving a construction for an accumulator with batch update in [WWP07, WWP08]. In this work we show that the construction is not secure by exhibiting an attack. Moreover, we prove it cannot be fixed: if the accumulated value has been updated m times, then the time to update a witness must be at least Ω(m) in the worst case.
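
    To make the witness-update cost concrete, here is a toy RSA-style accumulator sketch in Python. The parameters are deliberately tiny and insecure (real constructions use an RSA modulus with unknown factorisation and hash elements to primes); it only illustrates why a witness update after m additions takes a loop of length m, the kind of Ω(m) cost this paper proves unavoidable.

        N = 3233  # toy modulus (61 * 53); insecure, for demonstration only
        G = 2     # base / empty-accumulator value

        def accumulate(elements):
            """Accumulator value: G raised to the product of all elements, mod N."""
            acc = G
            for e in elements:
                acc = pow(acc, e, N)
            return acc

        def witness(elements, x):
            """Witness for x: the accumulator over all elements except x."""
            return accumulate([e for e in elements if e != x])

        def verify(acc, wit, x):
            return pow(wit, x, N) == acc

        def update_witness(wit, added):
            # Each of the m added elements must be folded into the witness:
            # the loop length grows with m, matching the lower bound.
            for e in added:
                wit = pow(wit, e, N)
            return wit

        elems = [3, 5, 7]                # elements (primes in real schemes)
        acc = accumulate(elems)
        wit = witness(elems, 5)
        assert verify(acc, wit, 5)
        added = [11, 13]                 # m = 2 new elements
        acc = accumulate(elems + added)
        wit = update_witness(wit, added)
        assert verify(acc, wit, 5)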

    Runtime Analysis of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms on Multimodal Problems

    Get PDF
    Evolutionary algorithms are simple general-purpose optimisers often used to solve complex engineering and design problems. They mimic the process of natural evolution: they use a population of possible solutions to a problem that evolves by mutating and recombining solutions, identifying increasingly better solutions over time. Evolutionary algorithms have been applied to a broad range of problems in various disciplines with remarkable success.

    However, the reasons behind their success are often elusive: their performance often depends crucially, and unpredictably, on their parameter settings. It is, furthermore, well known that there are no globally good parameters; the correct parameters for one problem may differ substantially from those needed for another, making it hard to transfer previously successful parameter settings to new problems. Therefore, understanding how to properly select the parameters is an important but challenging task, commonly known as the parameter selection problem.

    A promising solution to this problem is the use of automated dynamic parameter selection schemes (parameter control) that allow evolutionary algorithms to identify and continuously track optimal parameters throughout the course of evolution without human intervention. In recent years the study of parameter control mechanisms in evolutionary algorithms has emerged as a very fruitful research area. However, most existing runtime analyses focus on simple problems with benign characteristics, for which fixed parameter settings already run efficiently and only moderate performance gains were shown.

    The aim of this thesis is to understand how parameter control mechanisms can be used on more complex and challenging problems with many local optima (multimodal problems) to speed up optimisation. We use advanced methods from the analysis of algorithms and probability theory to evaluate the performance of evolutionary algorithms, estimating the expected time until an algorithm finds satisfactory solutions for illustrative and relevant optimisation problems, as a vital stepping stone towards designing more efficient evolutionary algorithms.

    We first analyse current parameter control mechanisms on multimodal problems to understand their strengths and weaknesses. Subsequently, we use this knowledge to design parameter control mechanisms that mitigate the weaknesses of current mechanisms while maintaining their strengths. Finally, we show with theoretical and empirical analyses that these enhanced parameter control mechanisms are able to outperform the best fixed parameter settings on multimodal optimisation problems.
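
    As a concrete example of the multimodal landscapes studied in this line of work, here is a minimal Python sketch of the classic Jump_k benchmark, a standard multimodal function in runtime analysis; the choice of this particular benchmark and of k below is illustrative, not necessarily the thesis's exact setting.

        def jump(x, k=3):
            """Jump_k: fitness rises with the number of one-bits, except for a
            gap of size k just below the optimum (a local optimum at n - k ones
            that must be crossed by flipping k bits at once)."""
            n, ones = len(x), sum(x)
            if ones == n or ones <= n - k:
                return k + ones
            return n - ones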

    Economics of Wind Integration: An Acceptance Costs Approach

    Get PDF