
    Modular polynomials via isogeny volcanoes

    We present a new algorithm to compute the classical modular polynomial Phi_n in the rings Z[X,Y] and (Z/mZ)[X,Y], for a prime n and any positive integer m. Our approach uses the graph of n-isogenies to efficiently compute Phi_n mod p for many primes p of a suitable form, and then applies the Chinese Remainder Theorem (CRT). Under the Generalized Riemann Hypothesis (GRH), we achieve an expected running time of O(n^3 (log n)^3 log log n), and compute Phi_n mod m using O(n^2 (log n)^2 + n^2 log m) space. We have used the new algorithm to compute Phi_n with n over 5000, and Phi_n mod m with n over 20000. We also consider several modular functions g for which Phi_n^g is smaller than Phi_n, allowing us to handle n over 60000. Comment: corrected a typo in equation (14), 31 pages
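
    The CRT step above reconstructs each integer coefficient of Phi_n from its images modulo many small primes. As a rough illustration of that final reconstruction step (a minimal sketch in Python, not the paper's implementation; the helper names are ours), textbook CRT plus a symmetric lift recovers a signed coefficient once the product of the primes exceeds twice its absolute value:

        from math import prod

        def crt(residues, moduli):
            """Reconstruct x mod prod(moduli) from x mod each modulus,
            assuming the moduli are pairwise coprime."""
            M = prod(moduli)
            x = 0
            for r, p in zip(residues, moduli):
                Mi = M // p
                x += r * Mi * pow(Mi, -1, p)  # pow(Mi, -1, p): inverse of Mi mod p
            return x % M

        def symmetric_lift(x, M):
            """Map a residue in [0, M) to (-M/2, M/2], recovering a signed
            integer coefficient once M > 2*|coefficient|."""
            return x - M if x > M // 2 else x

        # Example: recover the coefficient -162000 of the classical Phi_2.
        moduli = [101, 103, 107]
        residues = [-162000 % p for p in moduli]
        print(symmetric_lift(crt(residues, moduli), prod(moduli)))  # -162000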

    Space-efficient Feature Maps for String Alignment Kernels

    String kernels are attractive data analysis tools for analyzing string data. Among them, alignment kernels are known for their high prediction accuracy in string classification when combined with SVMs in various applications. However, alignment kernels have a crucial drawback: they scale poorly due to their quadratic computational complexity in the number of input strings, which limits large-scale applications in practice. We address this problem by presenting the first approximation of string alignment kernels, which we call space-efficient feature maps for edit distance with moves (SFMEDM), by leveraging a metric embedding named edit sensitive parsing (ESP) and feature maps (FMs) of random Fourier features (RFFs) for large-scale string analyses. The original FMs for RFFs consume a huge amount of memory, proportional to the product of the dimension d of the input vectors and the dimension D of the output vectors, which prohibits their use in large-scale applications. We present novel space-efficient feature maps (SFMs) for RFFs that reduce the space from the O(dD) of the original FMs to O(d), with a theoretical guarantee in the form of concentration bounds. We experimentally test SFMEDM's ability to train SVMs for large-scale string classification on various massive string datasets, and we demonstrate the superior performance of SFMEDM with respect to prediction accuracy, scalability and computational efficiency. Comment: Full version of ICDM'19 paper
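
    For intuition about the O(dD) bottleneck (an illustrative sketch only, not the paper's SFM construction; sigma and the function names are our own), the standard RFF map for the Gaussian kernel stores a D x d Gaussian matrix, and one simple space/time trade-off is to regenerate each row from a seeded generator on the fly:

        import numpy as np

        def rff_map(x, W, b):
            """Standard random Fourier feature map for the Gaussian kernel:
            z(x) = sqrt(2/D) * cos(W @ x + b), with W stored as a D x d matrix."""
            D = W.shape[0]
            return np.sqrt(2.0 / D) * np.cos(W @ x + b)

        def rff_map_streamed(x, D, sigma, seed=0):
            """Same map, but each row of W (and entry of b) is regenerated on
            the fly from a seeded generator, so only O(d) extra memory is live
            at once. An illustrative trade-off, not the paper's SFM scheme."""
            d = x.shape[0]
            z = np.empty(D)
            for i in range(D):
                rng = np.random.default_rng((seed, i))  # row i is reproducible
                w = rng.normal(0.0, 1.0 / sigma, size=d)
                b = rng.uniform(0.0, 2.0 * np.pi)
                z[i] = np.cos(w @ x + b)
            return np.sqrt(2.0 / D) * z

        # z(x) . z(y) approximates exp(-||x - y||^2 / (2 sigma^2)).
        x, y = np.random.randn(16), np.random.randn(16)
        zx = rff_map_streamed(x, D=4096, sigma=2.0)
        zy = rff_map_streamed(y, D=4096, sigma=2.0)
        print(zx @ zy, np.exp(-np.sum((x - y) ** 2) / (2 * 2.0 ** 2)))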

    Kernelized Hashcode Representations for Relation Extraction

    Kernel methods have produced state-of-the-art results for a number of NLP tasks such as relation extraction, but suffer from poor scalability due to the high cost of computing kernel similarities between natural language structures. A recently proposed technique, kernelized locality-sensitive hashing (KLSH), can significantly reduce the computational cost, but is only applicable to classifiers operating on kNN graphs. Here we propose to use random subspaces of KLSH codes to efficiently construct an explicit representation of NLP structures suitable for general classification methods. Further, we propose an approach for optimizing the KLSH model for classification problems by maximizing an approximation of the mutual information between the KLSH codes (feature vectors) and the class labels. We evaluate the proposed approach on biomedical relation extraction datasets, and observe significant and robust improvements in accuracy over state-of-the-art classifiers, along with drastic (orders-of-magnitude) speedups compared to conventional kernel methods. Comment: To appear in the proceedings of AAAI-19
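
    To illustrate the random-subspace idea (a hedged sketch of the general technique, not the paper's exact construction; all names and parameters here are ours): pick several random subsets of hash-bit positions and one-hot encode the bit pattern each subset induces, yielding an explicit feature vector usable by any classifier:

        import numpy as np

        def random_subspace_features(codes, num_subspaces=8, bits_per_subspace=4, seed=0):
            """Map binary hash codes (num_samples x code_len) to an explicit
            feature vector: each random subset of bit positions defines a
            categorical value in [0, 2^bits_per_subspace), which is one-hot
            encoded and concatenated across subspaces."""
            rng = np.random.default_rng(seed)
            n, code_len = codes.shape
            dim = 2 ** bits_per_subspace
            features = np.zeros((n, num_subspaces * dim))
            for s in range(num_subspaces):
                idx = rng.choice(code_len, size=bits_per_subspace, replace=False)
                # Interpret the selected bits as an integer in [0, dim).
                values = codes[:, idx] @ (1 << np.arange(bits_per_subspace))
                features[np.arange(n), s * dim + values] = 1.0
            return features

        # Example: 100 samples with 32-bit hash codes -> explicit features for
        # any downstream classifier (e.g., an SVM).
        codes = np.random.default_rng(1).integers(0, 2, size=(100, 32))
        print(random_subspace_features(codes).shape)  # (100, 128)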

    Computing Hilbert class polynomials with the Chinese Remainder Theorem

    We present a space-efficient algorithm to compute the Hilbert class polynomial H_D(X) modulo a positive integer P, based on an explicit form of the Chinese Remainder Theorem. Under the Generalized Riemann Hypothesis, the algorithm uses O(|D|^(1/2+o(1)) log P) space and has an expected running time of O(|D|^(1+o(1))). We describe practical optimizations that allow us to handle larger discriminants than other methods, with |D| as large as 10^13 and h(D) up to 10^6. We apply these results to construct pairing-friendly elliptic curves of prime order, using the CM method. Comment: 37 pages, corrected a typo that misstated the heuristic complexity
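
    The "explicit form" of the CRT lets one obtain a value modulo P directly, without materializing the huge integer it lifts to. A minimal sketch of that idea (our own simplified version; a space-efficient implementation would accumulate M mod P and the quotients M/p_i incrementally rather than computing M in full):

        from fractions import Fraction
        from math import floor, prod

        def explicit_crt_mod(residues, moduli, P):
            """Compute (CRT lift of the residues) mod P without building the
            full integer: c = sum(t_i * M_i) - r*M, where M_i = M/p_i,
            t_i = c_i * (M_i^{-1} mod p_i) mod p_i and r = floor(sum(t_i/p_i)),
            so every term can be reduced mod P."""
            M = prod(moduli)  # for clarity; computable incrementally in practice
            t = [c * pow(M // p, -1, p) % p for c, p in zip(residues, moduli)]
            r = floor(sum(Fraction(ti, p) for ti, p in zip(t, moduli)))
            acc = sum(ti * ((M // p) % P) for ti, p in zip(t, moduli))
            return (acc - r * (M % P)) % P

        # Example: the constant coefficient 3375 of H_{-7}(X) = X + 3375.
        moduli = [11, 13, 17, 19]
        print(explicit_crt_mod([3375 % p for p in moduli], moduli, 10**6 + 3))  # 3375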

    Finite-Block-Length Analysis in Classical and Quantum Information Theory

    Coding technology is used in many information processing tasks. In particular, when noise disturbs communications during transmission, coding is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we must account for finite-size effects. The present paper reviews finite-size effects in classical and quantum information theory with respect to various topics, including applied aspects.
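
    A central quantitative tool in this finite-block-length literature is the Gaussian (second-order) approximation log M*(n, eps) ~ nC - sqrt(nV) Qinv(eps), where C is the channel capacity and V the channel dispersion. As a worked illustration (standard textbook material rather than anything specific to this review), here it is evaluated for a binary symmetric channel:

        from math import log2, sqrt
        from statistics import NormalDist

        def bsc_normal_approximation(p, n, eps):
            """Gaussian (second-order) approximation to the maximal coding rate
            of a binary symmetric channel BSC(p) at blocklength n and error
            probability eps: (1/n) log2 M*(n, eps) ~ C - sqrt(V/n) * Qinv(eps)."""
            h = -p * log2(p) - (1 - p) * log2(1 - p)    # binary entropy
            C = 1 - h                                   # capacity, bits per use
            V = p * (1 - p) * log2((1 - p) / p) ** 2    # channel dispersion
            Qinv = NormalDist().inv_cdf(1 - eps)        # inverse Gaussian tail
            return C - sqrt(V / n) * Qinv

        # Example: BSC(0.11) at blocklength 1000 and eps = 1e-3 achieves a rate
        # noticeably below capacity.
        print(bsc_normal_approximation(0.11, 1000, 1e-3))  # ~0.41 vs C ~ 0.50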

    Time-Space Tradeoffs for the Memory Game

    A single-player game of Memory is played with n distinct pairs of cards, with the cards in each pair bearing identical pictures. The cards are laid face-down. A move consists of revealing two cards, chosen adaptively. If these cards match, i.e., they bear the same picture, they are removed from play; otherwise, they are turned back face-down. The object of the game is to clear all cards while minimizing the number of moves. Past works have thoroughly studied the expected number of moves required, assuming optimal play by a player that has perfect memory. In this work, we study the Memory game in a space-bounded setting. We prove two time-space tradeoff lower bounds on algorithms (strategies for the player) that clear all cards in T moves while using at most S bits of memory. First, in a simple model where the pictures on the cards may only be compared for equality, we prove that ST = Omega(n^2 log n). This is tight: it is easy to achieve ST = O(n^2 log n) essentially everywhere on this tradeoff curve. Second, in a more general model that allows arbitrary computations, we prove that ST^2 = Omega(n^3). We prove this latter tradeoff by modeling strategies as branching programs and extending a classic counting argument of Borodin and Cook with a novel probabilistic argument. We conjecture that the stronger tradeoff ST = Omega(n^2), up to polylogarithmic factors, in fact holds even in this general model.
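
    The matching upper bound ST = O(n^2 log n) in the equality-comparison model can be met by a simple scanning strategy: remember up to k picture/position records (S = O(k log n) bits) and clear at least k pairs per scan, giving T = O(n^2 / k) moves. The simulation below is our own hedged sketch of such a strategy, not code from the paper:

        import random

        def play_memory(pictures, k):
            """Clear all cards with memory for at most k picture/position
            records. Each 'move' reveals two cards; a scan stores the first k
            unmatched pictures seen and marks a pair whenever a stored
            picture's partner is revealed, then one extra move clears each
            known pair. Uses T = O(n^2 / k) moves with S = O(k log n) bits."""
            alive = set(range(len(pictures)))
            moves = 0
            while alive:
                memory = {}   # picture -> position, at most k entries
                pending = []  # positions of matched pairs, cleared below
                scan = sorted(alive)
                # Reveal the live cards two at a time (one move per two reveals).
                for j in range(0, len(scan) - 1, 2):
                    moves += 1
                    for pos in (scan[j], scan[j + 1]):
                        pic = pictures[pos]
                        if pic in memory:
                            pending.append((memory.pop(pic), pos))
                        elif len(memory) < k:
                            memory[pic] = pos
                # One move per known pair clears it.
                for a, b in pending:
                    moves += 1
                    alive.discard(a)
                    alive.discard(b)
            return moves

        n, k = 200, 8
        cards = [i // 2 for i in range(2 * n)]
        random.shuffle(cards)
        print(play_memory(cards, k))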

    Accelerating the CM method

    Given a prime q and a negative discriminant D, the CM method constructs an elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X) modulo q. We consider an approach based on a decomposition of the ring class field defined by H_D, which we adapt to a CRT setting. This yields two algorithms, each of which obtains a root of H_D mod q without necessarily computing any of its coefficients. Heuristically, our approach uses asymptotically less time and space than the standard CM method for almost all D. Under the GRH, and reasonable assumptions about the size of log q relative to |D|, we achieve a space complexity of O((m+n) log q) bits, where mn = h(D), which may be as small as O(|D|^(1/4) log q). The practical efficiency of the algorithms is demonstrated using |D| > 10^16 and q ~ 2^256, and also |D| > 10^15 and q ~ 2^33220. These examples are both an order of magnitude larger than the best previous results obtained with the CM method. Comment: 36 pages, minor edits, to appear in the LMS Journal of Computation and Mathematics
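
    For context, the last step of the standard CM method turns a root j of H_D mod q (with j not 0 or 1728) into explicit curve coefficients; the decomposition technique above changes how the root is found, not this step. A minimal sketch (the helper name is ours):

        def curve_from_j(j, q):
            """Return (a, b) such that y^2 = x^3 + a*x + b over F_q has
            j-invariant j, assuming j != 0, 1728 (mod q). Standard recipe:
            k = j/(1728 - j), a = 3k, b = 2k; the CM method then picks this
            curve or its quadratic twist to get the desired group order."""
            k = j * pow((1728 - j) % q, -1, q) % q
            return (3 * k) % q, (2 * k) % q

        # Example with D = -7: H_{-7}(X) = X + 3375, so j = -3375 mod q.
        q = 127  # 127 splits in Q(sqrt(-7)), so the resulting curve is ordinary
        print(curve_from_j(-3375 % q, q))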