3,270 research outputs found

    On the Design of Cryptographic Primitives

    Full text link
    The main objective of this work is twofold. On the one hand, it gives a brief overview of the area of two-party cryptographic protocols. On the other hand, it proposes new schemes and guidelines for improving the practice of robust protocol design. To achieve this double goal, the two main cryptographic primitives are reviewed: some of the most representative algorithms based on the Theory of Finite Fields are described, and new general schemes and specific algorithms based on Graph Theory are proposed.
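
    As a hedged illustration of the kind of finite-field-based primitive such surveys cover, the sketch below shows textbook Diffie-Hellman key agreement over a prime field. The parameters and the keypair helper are illustrative assumptions, not taken from the paper.

        import secrets

        # Toy Diffie-Hellman key agreement over the prime field GF(p).
        # p = 2**127 - 1 is a Mersenne prime and g = 3 is chosen purely for
        # illustration; neither is a recommended or secure parameter choice.
        P = 2**127 - 1
        G = 3

        def keypair():
            """Return (private exponent x, public value g^x mod p)."""
            x = secrets.randbelow(P - 2) + 1
            return x, pow(G, x, P)

        # Each party publishes its public value; both derive the same shared secret.
        a_priv, a_pub = keypair()
        b_priv, b_pub = keypair()
        assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)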

    Shared Memory Parallel Subgraph Enumeration

    Full text link
    The subgraph enumeration problem asks us to find all subgraphs of a target graph that are isomorphic to a given pattern graph. Determining whether even one such isomorphic subgraph exists is NP-complete---and therefore finding all such subgraphs (if they exist) is a time-consuming task. Subgraph enumeration has applications in many fields, including biochemistry and social networks, and interestingly the fastest algorithms for solving the problem for biochemical inputs are sequential. Since they depend on depth-first tree traversal, an efficient parallelization is far from trivial. Nevertheless, since important applications produce data sets with increasing difficulty, parallelism seems beneficial. We thus present here a shared-memory parallelization of the state-of-the-art subgraph enumeration algorithms RI and RI-DS (a variant of RI for dense graphs) by Bonnici et al. [BMC Bioinformatics, 2013]. Our strategy uses work stealing, and our implementation demonstrates a significant speedup on real-world biochemical data---despite a highly irregular data access pattern. We also improve RI-DS by pruning the search space better; this further improves the empirical running times compared to the already highly tuned RI-DS. Comment: 18 pages, 12 figures, to appear at the 7th IEEE Workshop on Parallel / Distributed Computing and Optimization (PDCO 2017).
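
    As a rough sketch of the search tree such enumerators explore (not the authors' RI implementation, which uses a carefully chosen matching order and far more pruning), the backtracking routine below enumerates subgraph isomorphisms. In a shared-memory parallelization, the recursive calls marked in the comments would be handed to idle threads via work stealing.

        def enumerate_subgraphs(pattern, target):
            """Enumerate all occurrences of `pattern` in `target` (subgraph isomorphisms).

            Graphs are adjacency dicts: vertex -> set of neighbours. This is a plain
            depth-first sketch of the kind of search tree RI explores; a shared-memory
            parallelization would hand unexplored subtrees (partial mappings) to idle
            threads via work stealing instead of recursing on a single thread.
            """
            order = list(pattern)   # static matching order; RI chooses this order more carefully
            matches = []

            def extend(mapping):
                if len(mapping) == len(order):
                    matches.append(dict(mapping))
                    return
                u = order[len(mapping)]
                for v in target:
                    if v in mapping.values():
                        continue
                    # every already-matched neighbour of u must map onto a neighbour of v
                    if all(mapping[w] in target[v] for w in pattern[u] if w in mapping):
                        mapping[u] = v
                        extend(mapping)   # subtree a work-stealing scheduler could offload
                        del mapping[u]

            extend({})
            return matches

        # Toy usage: count occurrences of a triangle in a 4-cycle with one chord.
        triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
        host = {"a": {"b", "d"}, "b": {"a", "c", "d"},
                "c": {"b", "d"}, "d": {"a", "b", "c"}}
        print(len(enumerate_subgraphs(triangle, host)))  # 12: two triangles x 6 orderings each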

    Why Philosophers Should Care About Computational Complexity

    Get PDF
    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.

    Detecting genus in vertex links for the fast enumeration of 3-manifold triangulations

    Full text link
    Enumerating all 3-manifold triangulations of a given size is a difficult but increasingly important problem in computational topology. A key difficulty for enumeration algorithms is that most combinatorial triangulations must be discarded because they do not represent topological 3-manifolds. In this paper we show how to preempt bad triangulations by detecting genus in partially-constructed vertex links, allowing us to prune the enumeration tree substantially. The key idea is to manipulate the boundary edges surrounding partial vertex links using expected logarithmic time operations. Practical testing shows the resulting enumeration algorithm to be significantly faster, with up to 249x speed-ups even for small problems where comparisons are feasible. We also discuss parallelisation, and describe new data sets that have been obtained using high-performance computing facilities. Comment: 16 pages, 7 figures, 3 tables; v2: minor revisions; to appear in ISSAC 2011.
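
    A rough illustration of why genus detection prunes the search (not the paper's data structure, which instead updates the boundary edges of partial links in expected logarithmic time): every closed vertex link in a 3-manifold triangulation must be a 2-sphere, which can be tested via the Euler characteristic.

        from collections import Counter

        def euler_characteristic(triangles):
            """chi = V - E + F for a triangulated surface given as vertex triples."""
            vertices, edges = set(), set()
            for a, b, c in triangles:
                vertices.update((a, b, c))
                edges.update(frozenset(e) for e in ((a, b), (b, c), (a, c)))
            return len(vertices) - len(edges) + len(triangles)

        def is_closed(triangles):
            """A triangulated surface is closed when every edge lies in exactly two triangles."""
            counts = Counter()
            for a, b, c in triangles:
                for e in ((a, b), (b, c), (a, c)):
                    counts[frozenset(e)] += 1
            return all(n == 2 for n in counts.values())

        def passes_sphere_test(link_triangles):
            """Pruning sketch: once a vertex link closes up, it must be a 2-sphere
            (chi == 2); a closed link with chi < 2 carries genus, so the enclosing
            partial triangulation can never yield a 3-manifold and is discarded."""
            return (not is_closed(link_triangles)) or euler_characteristic(link_triangles) == 2

        # Usage: the boundary of an octahedron (a 2-sphere) passes the test.
        octahedron = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
                      (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]
        assert passes_sphere_test(octahedron)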

    Higher-order CIS codes

    Full text link
    We introduce complementary information set codes of higher order. A binary linear code of length tk and dimension k is called a complementary information set code of order t (t-CIS code for short) if it has t pairwise disjoint information sets. The duals of such codes can be used to reduce the cost of masking cryptographic algorithms against side-channel attacks. As in the case of codes for error correction, given the length and the dimension of a t-CIS code, we look for the highest possible minimum distance. In this paper, this new class of codes is investigated. The existence of good long CIS codes of order 3 is derived by a counting argument. General constructions based on cyclic and quasi-cyclic codes and on the building-up construction are given. A formula similar to a mass formula is given. A classification of 3-CIS codes of length ≤ 12 is given. Nonlinear codes better than linear codes are derived by taking binary images of Z4-codes. A general algorithm based on Edmonds' basis packing algorithm from matroid theory is developed with the following property: given a binary linear code of rate 1/t, it either provides t disjoint information sets or proves that the code is not t-CIS. Using this algorithm, all optimal or best-known [tk, k] codes where t = 3, 4, ..., 256 and 1 ≤ k ≤ ⌊256/t⌋ are shown to be t-CIS for all such k and t, except for t = 3 with k = 44 and t = 4 with k = 37. Comment: 13 pages; 1 figure.
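
    The core check described in the abstract can be sketched directly: a set of k column positions is an information set when the corresponding k x k submatrix of a generator matrix is invertible over GF(2), and a [tk, k] code is t-CIS when t pairwise disjoint such sets exist. The brute-force search below is only a toy for tiny codes; the paper uses Edmonds' matroid basis-packing algorithm, which settles the question in polynomial time.

        from itertools import combinations

        def gf2_rank(rows):
            """Rank over GF(2) of a matrix whose rows are Python ints (bitmasks)."""
            rows = list(rows)
            rank = 0
            for i in range(len(rows)):
                pivot = rows[i]
                if pivot == 0:
                    continue
                rank += 1
                low = pivot & -pivot          # lowest set bit acts as the pivot column
                for j in range(len(rows)):
                    if j != i and rows[j] & low:
                        rows[j] ^= pivot
            return rank

        def is_information_set(gen_rows, cols, k):
            """`cols` (k positions) is an information set iff the k x k column
            submatrix of the generator matrix has full rank over GF(2)."""
            sub = []
            for row in gen_rows:
                bits = 0
                for i, c in enumerate(cols):
                    bits |= ((row >> c) & 1) << i
                sub.append(bits)
            return gf2_rank(sub) == k

        def disjoint_information_sets(gen_rows, n, k, t):
            """Naive search for t pairwise disjoint information sets of an [n = t*k, k] code.
            Exponential toy search; the paper's algorithm (Edmonds' basis packing) either
            outputs the t sets or certifies they do not exist in polynomial time."""
            def search(remaining, chosen):
                if len(chosen) == t:
                    return chosen
                anchor = min(remaining)
                for extra in combinations(sorted(remaining - {anchor}), k - 1):
                    cols = (anchor,) + extra
                    if is_information_set(gen_rows, cols, k):
                        found = search(remaining - set(cols), chosen + [cols])
                        if found:
                            return found
                return None
            return search(set(range(n)), [])

        # Toy usage: G = [I_2 | I_2 | I_2] generates a [6, 2] code that is 3-CIS.
        G = [0b010101, 0b101010]    # bit i of a row = entry in column i
        print(disjoint_information_sets(G, n=6, k=2, t=3))  # [(0, 1), (2, 3), (4, 5)]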

    Detecting Small Query Graphs in A Large Graph via Neural Subgraph Search

    Full text link
    Recent advances have shown the success of using reinforcement learning and search to solve NP-hard graph-related tasks, such as Traveling Salesman Optimization, Graph Edit Distance computation, etc. However, it remains unclear how one can efficiently and accurately detect the occurrences of a small query graph in a large target graph, which is a core operation in graph database search, biomedical analysis, social group finding, etc. This task is called Subgraph Matching, which essentially performs a subgraph isomorphism check between a query graph and a large target graph. One promising approach to this classical problem is the "learning-to-search" paradigm, where a reinforcement learning (RL) agent is designed with a learned policy to guide a search algorithm to quickly find the solution without any solved instances for supervision. However, for the specific task of Subgraph Matching, though the query graph, given by the user as input, is usually small, the target graph is often orders of magnitude larger. This poses challenges to the neural network design and can lead to solution and reward sparsity. In this paper, we propose NSUBS with two innovations to tackle the challenges: (1) a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs at each search state; (2) a novel look-ahead loss function for training the policy network. Experiments on six large real-world target graphs show that NSUBS can significantly improve the subgraph matching performance.
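
    A minimal sketch of the learning-to-search idea, reusing the backtracking skeleton sketched earlier: the learned policy only changes which candidate extension the search tries first at each state. The policy_score stub below is an assumption standing in for NSUBS's encoder-decoder network, not the paper's model.

        import random

        def policy_score(state, u, v):
            """Stand-in for the learned policy: score how promising it is to map query
            node u onto target node v in the current search state. NSUBS computes this
            with an encoder-decoder network; here it is a random stub for illustration."""
            return random.random()

        def guided_search(query, target):
            """Learning-to-search sketch: depth-first search for one subgraph match,
            trying candidate extensions in the order preferred by the policy.
            `query` and `target` are adjacency dicts: node -> set of neighbours."""
            order = list(query)

            def extend(mapping):
                if len(mapping) == len(order):
                    return dict(mapping)
                u = order[len(mapping)]
                candidates = [v for v in target
                              if v not in mapping.values()
                              and all(mapping[w] in target[v] for w in query[u] if w in mapping)]
                # The policy decides which branch of the search tree to explore first;
                # a good policy reaches a full match with little or no backtracking.
                for v in sorted(candidates, key=lambda c: -policy_score(mapping, u, c)):
                    mapping[u] = v
                    found = extend(mapping)
                    if found:
                        return found
                    del mapping[u]
                return None

            return extend({})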

    A Survey on Graph Kernels

    Get PDF
    Graph kernels have become an established and widely-used technique for solving classification tasks on graphs. This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years. We describe and categorize graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation and their applicability to problems in practice. In an extensive experimental evaluation, we study the classification accuracy of a large suite of graph kernels on established benchmarks as well as new datasets. We compare the performance of popular kernels with several baseline methods and study the effect of applying a Gaussian RBF kernel to the metric induced by a graph kernel. In doing so, we find that simple baselines become competitive after this transformation on some datasets. Moreover, we study the extent to which existing graph kernels agree in their predictions (and prediction errors) and obtain a data-driven categorization of kernels as a result. Finally, based on our experimental results, we derive a practitioner's guide to kernel-based graph classification.
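
    The RBF transformation mentioned above is easy to state concretely: a kernel matrix K induces the squared distance d(i, j)^2 = K[i, i] + K[j, j] - 2 K[i, j], and the transformed kernel is exp(-gamma * d(i, j)^2). A minimal sketch, assuming a precomputed kernel matrix and scikit-learn-style downstream use:

        import numpy as np

        def rbf_from_kernel(K, gamma=1.0):
            """Apply a Gaussian RBF to the metric induced by a precomputed kernel matrix K:
            d(i, j)^2 = K[i, i] + K[j, j] - 2 * K[i, j], transformed to exp(-gamma * d^2)."""
            diag = np.diag(K)
            d2 = diag[:, None] + diag[None, :] - 2.0 * K
            return np.exp(-gamma * np.clip(d2, 0.0, None))

        # Usage sketch: with scikit-learn (assumed available), feed the transformed matrix
        # to an SVM with a precomputed kernel, e.g.
        #     sklearn.svm.SVC(kernel="precomputed").fit(rbf_from_kernel(K_train), y_train)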