
    A Note on Key Rank

    In recent years, key rank has become an important aspect of side-channel analysis, enabling an evaluation lab to assess the security of a device after a side-channel attack; in particular, it allows the lab to do so when the enumeration effort required would exceed its computing power. Due to its importance, there has been a host of work investigating key rank over the last few years. In this work we build upon the existing literature to make progress on understanding various properties of key rank. We begin by showing when two different scoring methods will provide the same rank; this has been used implicitly by various algorithms in the past, but here it is shown for a large class of functions. We conclude by establishing the computational complexity of key rank, which implies that considerably better algorithms are unlikely to exist.
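    As a toy illustration of the quantity being ranked (not of the paper's results), the hypothetical sketch below brute-forces the rank of a known key from independent per-subkey scores: the rank is the number of full-key candidates whose combined score strictly exceeds that of the true key. Real rank-estimation algorithms avoid exactly this enumeration, which is why the complexity question addressed above matters; the scores and subkey sizes here are invented for illustration.

```python
# Hypothetical toy example: exhaustive key rank over a tiny key space.
# Real rank-estimation algorithms avoid this enumeration entirely.
from itertools import product

def key_rank(scores, true_key):
    """scores[i][k] = (log-)score of candidate k for subkey i; higher is better.
    Returns the number of full keys scoring strictly higher than the true key
    (0 = the true key is the top candidate)."""
    true_score = sum(scores[i][k] for i, k in enumerate(true_key))
    candidates = product(*[range(len(s)) for s in scores])
    return sum(1 for cand in candidates
               if sum(scores[i][k] for i, k in enumerate(cand)) > true_score)

# Two 4-valued subkeys with made-up distinguisher scores.
scores = [[0.1, 0.7, 0.2, 0.5],
          [0.9, 0.3, 0.6, 0.4]]
print(key_rank(scores, true_key=(1, 0)))  # 0: (1, 0) is the best full key
```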

    Key Polynomials

    The notion of key polynomials was first introduced in 1936 by S. MacLane in the case of discrete rank 1 valuations. Let K -> L be a field extension and \nu a valuation of K. The original motivation for introducing key polynomials was the problem of describing all the extensions \mu of \nu to L. Take a valuation \mu of L extending the valuation \nu. In the case when \nu is discrete of rank 1 and L is a simple algebraic extension of K, MacLane introduced the notions of key polynomials for \mu and augmented valuations, and proved that \mu is obtained as a limit of a family of augmented valuations on the polynomial ring K[x]. In a series of papers, M. Vaquié generalized MacLane's notion of key polynomials to the case of arbitrary valuations \nu (that is, valuations which are not necessarily discrete of rank 1). In the paper "Valuations in algebraic field extensions", published in the Journal of Algebra in 2007, F.J. Herrera Govantes, M.A. Olalla Acosta and M. Spivakovsky develop their own notion of key polynomials for extensions (K, \nu) -> (L, \mu) of valued fields, where \nu is of archimedean rank 1 (not necessarily discrete), and give an explicit description of the limit key polynomials. Our purpose in this paper is to clarify the relationship between the two notions of key polynomials developed by Vaquié and by F.J. Herrera Govantes, M.A. Olalla Acosta and M. Spivakovsky. Comment: arXiv admin note: text overlap with arXiv:math/0605193 by different author
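    For orientation, here is a minimal LaTeX rendering of MacLane's augmentation step in the discrete rank 1 case, stated in standard textbook notation rather than the notation of the papers discussed above.

```latex
% MacLane's augmentation step (standard formulation; notation illustrative).
% Given a valuation \mu on K[x], a key polynomial \phi for \mu, and a value
% \gamma with \gamma > \mu(\phi), expand f \in K[x] \phi-adically as
% f = \sum_i f_i \phi^i with \deg f_i < \deg \phi, and define
\[
  \mu'(f) \;=\; \min_i \bigl( \mu(f_i) + i\,\gamma \bigr),
  \qquad \mu' = [\,\mu;\ \mu'(\phi) = \gamma\,].
\]
% MacLane's result in the discrete rank 1 case (as recalled in the abstract):
% every extension of \nu to a simple algebraic extension of K is obtained as
% a limit of a family of such augmented valuations on K[x].
```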

    Key polynomials for simple extensions of valued fields

    Let $\iota: K \hookrightarrow L \cong K(x)$ be a simple transcendental extension of valued fields, where $K$ is equipped with a valuation $\nu$ of rank 1. That is, we assume given a rank 1 valuation $\nu$ of $K$ and its extension $\nu'$ to $L$. Let $(R_\nu, M_\nu, k_\nu)$ denote the valuation ring of $\nu$. The purpose of this paper is to present a refined version of MacLane's theory of key polynomials, similar to those considered by M. Vaquié, and reminiscent of related objects studied by Abhyankar and Moh (approximate roots) and T.C. Kuo. Namely, we associate to $\iota$ a countable well ordered set $\mathbf{Q} = \{Q_i\}_{i \in \Lambda} \subset K[x]$; the $Q_i$ are called key polynomials. Key polynomials $Q_i$ which have no immediate predecessor are called limit key polynomials. Let $\beta_i = \nu'(Q_i)$. We give an explicit description of the limit key polynomials (which may be viewed as a generalization of the Artin–Schreier polynomials). We also give an upper bound on the order type of the set of key polynomials. Namely, we show that if $\operatorname{char} k_\nu = 0$ then the set of key polynomials has order type at most $\omega$, while in the case $\operatorname{char} k_\nu = p > 0$ this order type is bounded above by $\omega \times \omega$, where $\omega$ stands for the first infinite ordinal. Comment: arXiv admin note: substantial text overlap with arXiv:math/060519
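    To unpack the order-type statement (an illustration in the abstract's own notation, not an additional result from the paper): order type $\omega$ means the key polynomials form a single ordinary sequence, while order type $\omega \times \omega$ allows at most a sequence of such sequences, each later block beginning with a limit key polynomial.

```latex
% Illustration of the two order-type bounds for the index set \Lambda.
% char k_\nu = 0: a single sequence (order type at most \omega)
\[
  Q_1,\; Q_2,\; Q_3,\; \dots
\]
% char k_\nu = p > 0: at most a sequence of sequences (order type \omega \times \omega);
% each Q_{j+1,1} is a limit key polynomial, i.e. it has no immediate predecessor,
% being preceded by the whole block \{Q_{j,i}\}_{i < \omega}
\[
  Q_{1,1},\, Q_{1,2},\, \dots \;;\quad Q_{2,1},\, Q_{2,2},\, \dots \;;\quad \dots
\]
% The classical Artin--Schreier polynomial X^p - X - a is the prototype that
% the limit key polynomials generalize in positive characteristic.
```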

    A Tight Lower Bound for Decrease-Key in the Pure Heap Model

    We improve the lower bound on the amortized cost of the decrease-key operation in the pure heap model and show that any pure-heap-model heap (that has an $O(\log n)$ amortized-time extract-min operation) must spend $\Omega(\log\log n)$ amortized time on the decrease-key operation. Our result shows that sort heaps, as well as pure-heap variants of numerous other heaps, have asymptotically optimal decrease-key operations in the pure heap model. In addition, our improved lower bound matches the lower bound of Fredman [J. ACM 46(4):473-501 (1999)] for pairing heaps [M.L. Fredman, R. Sedgewick, D.D. Sleator, and R.E. Tarjan. Algorithmica 1(1):111-129 (1986)] and surpasses it for pure-heap variants of numerous other heaps with augmented data, such as pointer rank-pairing heaps. Comment: arXiv admin note: substantial text overlap with arXiv:1302.664
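    For readers who want a concrete handle on the operation being bounded, the sketch below is a minimal array-based binary min-heap in which decrease-key is a sift-up after lowering a key. It only shows what decrease-key and extract-min do; the pure heap model studied above concerns pointer-based heap-ordered trees, not array heaps, and nothing here reflects the lower-bound argument.

```python
# Minimal array-based min-heap with decrease-key, for illustration only.
# (The pure heap model concerns pointer-based heap-ordered trees; this
#  sketch just makes the decrease-key / extract-min interface concrete.)
class MinHeap:
    def __init__(self):
        self.a = []      # heap array of (key, item) pairs
        self.pos = {}    # item -> current index in the array

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.pos[self.a[i][1]] = i
        self.pos[self.a[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.a[i][0] < self.a[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.a)
        while True:
            kids = [j for j in (2 * i + 1, 2 * i + 2) if j < n]
            if not kids:
                return
            c = min(kids, key=lambda j: self.a[j][0])
            if self.a[i][0] <= self.a[c][0]:
                return
            self._swap(i, c)
            i = c

    def insert(self, key, item):
        self.a.append((key, item))
        self.pos[item] = len(self.a) - 1
        self._sift_up(len(self.a) - 1)

    def decrease_key(self, item, new_key):
        i = self.pos[item]
        assert new_key <= self.a[i][0]
        self.a[i] = (new_key, item)
        self._sift_up(i)   # worst-case O(log n) in this structure

    def extract_min(self):
        self._swap(0, len(self.a) - 1)
        key, item = self.a.pop()
        del self.pos[item]
        if self.a:
            self._sift_down(0)
        return key, item

h = MinHeap()
for k, v in [(5, 'a'), (3, 'b'), (9, 'c')]:
    h.insert(k, v)
h.decrease_key('c', 1)
print(h.extract_min())   # (1, 'c')
```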

    On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal $\pm$ rank-$r$ symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. Using duality, we derive formulas that relate the proximal mapping in a rank-$r$ modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives from the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning and classification. Comment: arXiv admin note: text overlap with arXiv:1206.115
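    As background for the setting (and not a reproduction of the paper's rank-$r$ proximal calculus), the sketch below runs plain forward--backward splitting for $\ell_1$-regularized least squares with a diagonal metric, in which the scaled proximal mapping reduces to a componentwise soft threshold. The problem data, step sizes and names are invented for illustration; the paper's contribution is precisely the calculus that keeps this prox step cheap when the diagonal metric is further modified by a rank-$r$ term, which is not implemented here.

```python
# Minimal sketch of forward-backward splitting for
#   min_x  0.5*||Ax - b||^2 + lam*||x||_1
# with a *diagonal* metric D = diag(d), so the scaled prox is still a cheap
# componentwise soft threshold. Data and parameters are made up.
import numpy as np

def soft_threshold(z, t):
    """Componentwise prox of t*||.||_1 (t may be a vector)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_l1(A, b, lam, d, n_iter=500):
    """Proximal gradient with diagonal metric D = diag(d):
       x+ = prox_{lam||.||_1}^{D}( x - D^{-1} grad f(x) ),
    which is soft-thresholding with componentwise thresholds lam/d."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / d, lam / d)
        # convergence assumes d_i >= ||A||_2^2 (or a suitable line search)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
d = np.full(100, np.linalg.norm(A, 2) ** 2)   # simple diagonal metric
x_hat = forward_backward_l1(A, b, lam=0.1, d=d)
print(np.count_nonzero(np.abs(x_hat) > 1e-6))  # sparsity of the recovered x
```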