
    Backtracking algorithms for constructing the Hamiltonian decomposition of a 4-regular multigraph

    We consider a Hamiltonian decomposition problem of partitioning a regular multigraph into edge-disjoint Hamiltonian cycles. It is known that verifying vertex nonadjacency in the 1-skeleton of the symmetric and asymmetric traveling salesperson polytopes is an NP-complete problem. On the other hand, a sufficient condition for two vertices to be nonadjacent can be formulated as a combinatorial problem of finding a Hamiltonian decomposition of a 4-regular multigraph. We present two backtracking algorithms for verifying vertex nonadjacency in the 1-skeleton of the traveling salesperson polytope and constructing a Hamiltonian decomposition: an algorithm based on simple path extension and an algorithm based on the chain edge fixing procedure. According to the results of computational experiments for undirected multigraphs, both backtracking algorithms lost to the known general variable neighborhood search heuristic. However, for directed multigraphs, the algorithm based on chain edge fixing showed results comparable with the heuristics on instances that have a solution, and better results on instances where a Hamiltonian decomposition does not exist. Comment: In Russian. Computational experiments are revised.
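    The path-extension idea can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the first Hamiltonian cycle is grown by backtracking over unused edges, and the second cycle is recovered by checking that the 2-regular complement is a single cycle.

    ```python
    from collections import defaultdict

    def hamiltonian_decomposition(n, edges):
        """Try to split a 4-regular multigraph on vertices 0..n-1 (edge list
        of (u, v) pairs) into two edge-disjoint Hamiltonian cycles by
        backtracking path extension.  Returns the first cycle as a vertex
        sequence, or None if no decomposition exists."""
        adj = defaultdict(list)              # vertex -> [(neighbor, edge id)]
        for i, (u, v) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))
        used = [False] * len(edges)          # edges claimed by the first cycle

        def complement_is_hamiltonian():
            # The unused edges form a 2-regular multigraph; it is a single
            # Hamiltonian cycle iff a walk from vertex 0 traverses n edges
            # and returns to 0.
            inc = defaultdict(list)
            for i, (u, v) in enumerate(edges):
                if not used[i]:
                    inc[u].append(i)
                    inc[v].append(i)
            cur, steps, walked = 0, 0, set()
            while steps < n:
                nxt = [i for i in inc[cur] if i not in walked]
                if not nxt:
                    return False
                e = nxt[0]
                walked.add(e)
                u, v = edges[e]
                cur = v if cur == u else u
                steps += 1
            return cur == 0

        def extend(path):
            v = path[-1]
            if len(path) == n:               # try to close the first cycle
                for w, i in adj[v]:
                    if not used[i] and w == 0:
                        used[i] = True
                        if complement_is_hamiltonian():
                            return path
                        used[i] = False
                return None
            for w, i in adj[v]:
                if not used[i] and w not in path:  # O(n) check, fine for a sketch
                    used[i] = True
                    result = extend(path + [w])
                    if result:
                        return result
                    used[i] = False
            return None

        return extend([0])
    ```

    On K5 (which decomposes into two 5-cycles) this finds a decomposition immediately; on a disconnected 4-regular multigraph it correctly exhausts the search and reports failure.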

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the TSP. It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and the Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, and it will be a vital tool for researchers and entry-level graduate students in the fields of applied Mathematics, Computing Science and Engineering.
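    As a baseline against which such metaheuristics are usually compared, a minimal nearest-neighbour construction followed by 2-opt local search (not one of the book's hybrid methods, just a standard reference point) might look like:

    ```python
    import math
    import random

    def tour_length(pts, tour):
        """Total length of the closed tour pts[tour[0]] -> pts[tour[1]] -> ... -> back."""
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def nearest_neighbor(pts):
        """Greedy construction: always travel to the closest unvisited city."""
        unvisited = set(range(1, len(pts)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    def two_opt(pts, tour):
        """Local search: keep reversing segments while that shortens the tour."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(pts, cand) < tour_length(pts, tour) - 1e-9:
                        tour, improved = cand, True
        return tour
    ```

    Evolutionary and hybrid approaches such as those in the book are typically judged by how much they improve on simple baselines like this one.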

    Backtracking algorithms for constructing the Hamiltonian decomposition of a 4-regular multigraph (in Russian)

    We consider a Hamiltonian decomposition problem of partitioning a regular graph into edge-disjoint Hamiltonian cycles. It is known that verifying vertex non-adjacency in the 1-skeleton of the symmetric and asymmetric traveling salesperson polytopes is an NP-complete problem. On the other hand, a sufficient condition for two vertices to be non-adjacent can be formulated as a combinatorial problem of finding a Hamiltonian decomposition of a 4-regular multigraph. We present two backtracking algorithms for verifying vertex non-adjacency in the 1-skeleton of the traveling salesperson polytope and constructing a Hamiltonian decomposition: an algorithm based on simple path extension and an algorithm based on the chain edge fixing procedure. Based on the results of the computational experiments for undirected multigraphs, both backtracking algorithms lost to the known heuristic general variable neighborhood search algorithm. However, for directed multigraphs, the algorithm based on chain edge fixing showed results comparable with the heuristics on instances that have a solution, and better results on instances where a Hamiltonian decomposition does not exist.

    Indistinguishable Proofs of Work or Knowledge

    We introduce a new class of protocols called Proofs of Work or Knowledge (PoWorKs). In a PoWorK, a prover can convince a verifier that she has either performed work or that she possesses knowledge of a witness to a public statement, without the verifier being able to distinguish which of the two has taken place. We formalise PoWorK in terms of three properties: completeness, f-soundness and indistinguishability (where f is a function that determines the tightness of the proof-of-work aspect), and present a construction that transforms 3-move HVZK protocols into 3-move public-coin PoWorKs. To formalise the work aspect in a PoWorK protocol, we define cryptographic puzzles that adhere to certain uniformity conditions, which may also be of independent interest. We instantiate our puzzles in the random oracle (RO) model as well as by constructing “dense” versions of suitably hard one-way functions. We then showcase PoWorK protocols by presenting a number of applications. We first show how non-interactive PoWorKs can be used to reduce spam e-mail by forcing users sending an e-mail to either prove to the mail server that they are approved contacts of the recipient or to perform computational work. As opposed to previous approaches that applied proofs of work to this problem, our proposal of using PoWorKs is privacy-preserving, as it hides the list of the receiver’s approved contacts from the mail server. Our second application shows how PoWorK can be used to compose cryptocurrencies that are based on proofs of work (“Bitcoin-like”) with cryptocurrencies that are based on knowledge relations (these include cryptocurrencies that are based on “proof of stake”, among others). The resulting PoWorK-based cryptocurrency inherits the robustness properties of the underlying two systems, while PoWorK-indistinguishability ensures a uniform population of miners.
    Finally, we show that PoWorK protocols imply straight-line quasi-polynomial simulatable arguments of knowledge, and based on our construction we obtain an efficient straight-line concurrent 3-move statistically quasi-polynomial simulatable argument of knowledge.
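    The "work" branch of such a protocol can be illustrated by a hash-based puzzle in the random-oracle style. This is a generic hashcash-like sketch, not the paper's puzzle construction (which additionally requires dense/uniform puzzle distributions):

    ```python
    import hashlib
    import itertools

    def solve_puzzle(statement: bytes, difficulty_bits: int) -> int:
        """Prover's 'work' branch: find a nonce such that SHA-256(statement || nonce)
        has `difficulty_bits` leading zero bits (expected 2**difficulty_bits hashes)."""
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify_puzzle(statement: bytes, nonce: int, difficulty_bits: int) -> bool:
        """Verification costs a single hash, regardless of the difficulty."""
        digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
    ```

    In a full PoWorK, this work branch is combined with an HVZK "knowledge" branch so that the verifier cannot tell which of the two the prover used.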

    Security and Fairness of Blockchain Consensus Protocols

    The increasing popularity of blockchain technology has created a need to study and understand consensus protocols, their properties, and security. As users seek alternatives to traditional intermediaries, such as banks, the challenge lies in establishing trust within a robust and secure system. This dissertation explores the landscape beyond cryptocurrencies, including consensus protocols and decentralized finance (DeFi). Cryptocurrencies, like Bitcoin and Ethereum, symbolize the global recognition of blockchain technology. At the core of every cryptocurrency lies a consensus protocol. Utilizing a proof-of-work consensus mechanism, Bitcoin ensures network security through energy-intensive mining. Ethereum, a representative of the proof-of-stake mechanism, enhances scalability and energy efficiency. Ripple, with its native XRP, utilizes a consensus algorithm based on voting for efficient cross-border transactions. The first part of the dissertation dives into Ripple's consensus protocol, analyzing its security. The Ripple network operates on a Byzantine fault-tolerant agreement protocol. Unlike traditional Byzantine protocols, Ripple lacks global knowledge of all participating nodes, relying on each node's trust for voting. This dissertation offers a detailed abstract description of the Ripple consensus protocol derived from the source code. Additionally, it highlights potential safety and liveness violations in the protocol during simple executions and relatively benign network assumptions. The second part of this thesis focuses on decentralized finance, a rapidly growing sector of the blockchain industry. DeFi applications aim to provide financial services without intermediaries, such as banks. However, the lack of regulation leaves space for different kinds of attacks. This dissertation focuses on the so-called front-running attacks. 
    Front-running is a transaction-ordering attack in which a malicious party exploits knowledge of pending transactions to gain an advantage. To mitigate this problem, recent efforts introduced order fairness for transactions as a safety property for consensus, enhancing the traditional agreement and liveness properties. Our work addresses limitations in existing formalizations and proposes a new differential order fairness property. The novel quick order-fair atomic broadcast (QOF) protocol ensures transaction delivery in a differentially fair order and proves more efficient than current protocols. It works optimally in asynchronous and eventually synchronous networks, tolerating corruption of up to one-third of the parties, an improvement over previous solutions that tolerate fewer faults. This work is further extended by presenting a modular implementation of the QOF protocol. Empirical evaluations compare QOF's performance to a fairness-lacking consensus protocol, revealing a marginal 5% throughput decrease and an approximately 50 ms latency increase. The study contributes to understanding the practical aspects of the QOF protocol, establishing connections with similar fairness-imposing protocols from the literature. The last part of this dissertation provides an overview of existing protocols designed to prevent transaction reordering within DeFi. These defense methods are systematically classified into four categories. The first category employs distributed cryptography to prevent side-information leaks to malicious insiders, ensuring a causal order on the consensus-generated transaction sequence. The second category, receive-order fairness, analyzes how individual parties participating in the consensus protocol receive transactions, imposing corresponding constraints on the resulting order. The third category, known as randomized order, aims to neutralize the influence of consensus-running parties on transaction order.
    The fourth category, architectural separation, proposes separating the task of ordering transactions and assigning it to a distinct service.
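    The intuition behind receive-order fairness can be sketched with a toy ranking that orders transactions by pairwise majority over the parties' local receive logs (a Copeland-style tie-break). This is only an illustration of the idea: real protocols such as the QOF protocol described above must additionally handle Byzantine parties, Condorcet cycles, and asynchrony.

    ```python
    from itertools import combinations

    def fair_order(logs):
        """Order transaction a before b when more parties received a first.
        Assumes every party's log contains every transaction id; ranking is
        by number of pairwise-majority wins, with ties broken by tx id."""
        txs = sorted({t for log in logs for t in log})
        before = {(a, b): 0 for a in txs for b in txs if a != b}
        for log in logs:
            pos = {t: i for i, t in enumerate(log)}
            for a, b in combinations(txs, 2):
                if pos[a] < pos[b]:
                    before[(a, b)] += 1
                else:
                    before[(b, a)] += 1
        wins = {t: sum(before[(t, o)] > before[(o, t)] for o in txs if o != t)
                for t in txs}
        return sorted(txs, key=lambda t: (-wins[t], t))
    ```

    A front-runner who injects a transaction late cannot move it ahead of a victim transaction that a majority of parties already received first.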

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear, sublinear, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project was organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
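    The flavour of a sublinear algorithm can be conveyed by a toy sampling estimator whose probe count is independent of the input size. This is a standard Hoeffding-bound argument, not an example taken from the book:

    ```python
    import math
    import random

    def approx_fraction_of_ones(data, eps, delta, rng=random):
        """Estimate the fraction of 1s in a 0/1 sequence to within +/- eps
        with probability >= 1 - delta, using ceil(log(2/delta) / (2*eps^2))
        random probes (Hoeffding bound).  The number of probes does not
        depend on len(data): even for a petabyte-scale input, the query
        cost stays constant."""
        m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        return sum(data[rng.randrange(len(data))] for _ in range(m)) / m
    ```

    A quadratic-time, or even linear-time, scan of the same data would grow with the input; the sampling estimator trades an exact answer for input-size-independent running time.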

    Big data-driven multimodal traffic management: trends and challenges


    On the k-Abelian Equivalence Relation of Finite Words

    This thesis is devoted to the so-called k-abelian equivalence relation of sequences of symbols, that is, words. This equivalence relation is a generalization of the abelian equivalence of words. Two words are abelian equivalent if one is a permutation of the other. For any positive integer k, two words are called k-abelian equivalent if each word of length at most k occurs equally many times as a factor in the two words. The k-abelian equivalence defines an equivalence relation, even a congruence, of finite words. A hierarchy of equivalence classes in between the equality relation and the abelian equivalence of words is thus obtained. Most of the literature on the k-abelian equivalence deals with infinite words. In this thesis we consider several aspects of the equivalence relations, the main objective being to build a fairly comprehensive picture of the structure of the k-abelian equivalence classes themselves. The main part of the thesis deals with the structural aspects of k-abelian equivalence classes. We also consider aspects of k-abelian equivalence in infinite words. We survey known characterizations of the k-abelian equivalence of finite words from the literature and also introduce novel characterizations. For the analysis of structural properties of the equivalence relation, the main tool is the characterization by the rewriting rule called the k-switching. Using this rule it is straightforward to show that the language consisting of the lexicographically least elements of the k-abelian equivalence classes is regular. Further word-combinatorial analysis of the lexicographically least elements leads us to describe the deterministic finite automata recognizing this language. Using tools from formal language theory combined with our analysis, we give an optimal expression for the asymptotic growth rate of the number of k-abelian equivalence classes of length n over an m-letter alphabet.
    Explicit formulae are computed for small values of k and m, and these sequences appear in Sloane’s Online Encyclopedia of Integer Sequences. Since the k-abelian equivalence relation is a congruence of the free monoid, we study equations over the k-abelian equivalence classes. The main result in this setting is that any system of equations of k-abelian equivalence classes is equivalent to one of its finite subsystems, i.e., the monoid defined by the k-abelian equivalence relation possesses the compactness property. Concerning infinite words, we mainly consider the (k-)abelian complexity function. We complete a classification of the asymptotic abelian complexities of pure morphic binary words. In other words, given a morphism which has an infinite binary fixed point, the limit superior asymptotic abelian complexity of the fixed point can be computed (in principle). We also give a new proof of the fact that the k-abelian complexity of a Sturmian word is n + 1 for lengths n < 2k. In fact, we consider several aspects of the k-abelian equivalence relation in Sturmian words using a dynamical interpretation of these words. We reprove the fact that any Sturmian word contains arbitrarily large k-abelian repetitions. The methods used allow us to analyze the situation in more detail, and this leads us to define the so-called k-abelian critical exponent, which measures the ratio of the exponent and the length of the root of a k-abelian repetition. This notion is connected to a deep number-theoretic object called the Lagrange spectrum.
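    The definition used in the thesis (each word of length at most k occurs equally many times as a factor in the two words) translates directly into a naive equivalence check:

    ```python
    from collections import Counter

    def factor_counts(w, k):
        """Multiset of all factors (substrings) of w of lengths 1..k."""
        return Counter(w[i:i + l]
                       for l in range(1, k + 1)
                       for i in range(len(w) - l + 1))

    def k_abelian_equivalent(u, v, k):
        """u and v are k-abelian equivalent iff every word of length at most k
        occurs equally many times as a factor of u and of v."""
        return factor_counts(u, k) == factor_counts(v, k)
    ```

    For k = 1 this reduces to abelian equivalence (equal letter counts); increasing k refines the relation toward equality of words, giving the hierarchy described above.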

    Proceedings of the 5th bwHPC Symposium

    In modern science, the demand for more powerful and integrated research infrastructures is growing constantly to address computational challenges in data analysis, modeling and simulation. The bwHPC initiative, founded by the Ministry of Science, Research and the Arts and the universities in Baden-Württemberg, is a state-wide federated approach aimed at assisting scientists with mastering these challenges. At the 5th bwHPC Symposium in September 2018, scientific users, technical operators and government representatives came together for two days at the University of Freiburg. The symposium provided an opportunity to present scientific results that were obtained with the help of bwHPC resources. Additionally, the symposium served as a platform for discussing and exchanging ideas concerning the use of these large scientific infrastructures as well as their further development.

    Machine Learning

    Machine learning is a scientific domain concerned with the design and development of theoretical and practical tools for building systems that exhibit some human-like intelligent behavior. More specifically, machine learning addresses the ability of systems to improve automatically through experience.