
    Complete Subdivision Algorithms, II: Isotopic Meshing of Singular Algebraic Curves

    Given a real-valued function f(X,Y), a box region B_0 in R^2, and a positive epsilon, we want to compute an epsilon-isotopic polygonal approximation to the restriction of the curve S = f^{-1}(0) = {p in R^2 : f(p) = 0} to B_0. We focus on subdivision algorithms because of their adaptive complexity and ease of implementation. Plantinga and Vegter gave a numerical subdivision algorithm that is exact when the curve S is bounded and non-singular. They used a computational model that relies only on function evaluation and interval arithmetic. We generalize their algorithm to any bounded (but possibly non-simply-connected) region that does not contain singularities of S. With this generalization as a subroutine, we provide a method to detect isolated algebraic singularities and their branching degree. This appears to be the first complete, purely numerical method to compute isotopic approximations of algebraic curves with isolated singularities.
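The computational model described above relies only on function evaluation and interval arithmetic. A minimal sketch of the basic exclusion test such subdivision algorithms use (illustrative helper names; the actual Plantinga-Vegter predicates also test the gradient):

```python
# Evaluate f over a box with interval arithmetic; if the resulting
# interval excludes zero, the box cannot meet the curve and is discarded.
# Intervals are (lo, hi) pairs.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

# Example curve f(X, Y) = X^2 + Y^2 - 1 (the unit circle).
def f_interval(x, y):
    return isub(iadd(imul(x, x), imul(y, y)), (1.0, 1.0))

def box_may_contain_curve(x, y):
    lo, hi = f_interval(x, y)
    return lo <= 0.0 <= hi

# The box [2,3] x [2,3] is far from the circle, so it is excluded;
# the box [0,1] x [0,1] cannot be excluded and would be subdivided.
print(box_may_contain_curve((2.0, 3.0), (2.0, 3.0)))  # False
print(box_may_contain_curve((0.0, 1.0), (0.0, 1.0)))  # True
```

The subdivision loop recurses on boxes that survive this test, which is what gives the method its adaptive complexity: effort concentrates near the curve.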

    DoubleMod and SingleMod: Simple Randomized Secret-Key Encryption with Bounded Homomorphicity

    An encryption relation f ⊆ Z × Z with decryption function f^{-1} is "group-homomorphic" if, for any suitable plaintexts x_1 and x_2, x_1 + x_2 = f^{-1}(f(x_1) + f(x_2)). It is "ring-homomorphic" if furthermore x_1 x_2 = f^{-1}(f(x_1) f(x_2)); it is "field-homomorphic" if furthermore 1/x_1 = f^{-1}(f(1/x_1)). Such relations would support oblivious processing of encrypted data. We propose a simple randomized encryption relation f over the integers, called DoubleMod, which is "bounded ring-homomorphic," or what some call "somewhat homomorphic." Here, "bounded" means that the number of additions and multiplications that can be performed without the encrypted values going out of range is limited (any pre-specified bound on the operation count can be accommodated). Let R be any large integer. For any plaintext x ∈ Z_R, DoubleMod encrypts x as f(x) = x + au + bv, where a and b are randomly chosen integers in some appropriate interval, while (u, v) is the secret key. Here u > R^2 is a large prime and the smallest prime factor of v exceeds u. With knowledge of the key, but not of a and b, the receiver decrypts the ciphertext by computing f^{-1}(y) = (y mod v) mod u. DoubleMod generalizes an independent idea of van Dijk et al. 2010. We present and refine a new CCA1 chosen-ciphertext attack that finds the secret key of both systems (ours and van Dijk et al.'s) in time linear in the bit length of the security parameter. Under a known-plaintext attack, breaking DoubleMod is at most as hard as solving the Approximate GCD (AGCD) problem. The complexity of AGCD is not known. We also introduce the SingleMod field-homomorphic cryptosystems. The simplest SingleMod system, based on the integers, can be broken trivially. We had hoped that if SingleMod were implemented inside non-Euclidean quadratic or higher-order fields with large discriminants, where GCD computations appear difficult, it might be feasible to achieve a desired level of security. We show, however, that a variation of our chosen-ciphertext attack works against SingleMod even in non-Euclidean fields.
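A toy sketch of the DoubleMod encryption and decryption described above, with deliberately tiny parameters chosen for illustration (a real instance would use a large prime u > R^2 and a much larger v; the bounds on a here are my own choice to keep a few operations in range):

```python
import random

R = 100                # plaintexts live in Z_R
u = 10007              # prime secret, u > R^2
v = 1000000007         # prime > u^2, so one multiplication stays in range

def encrypt(x):
    # f(x) = x + a*u + b*v; a is kept tiny in this toy so that a small,
    # pre-specified number of operations never pushes values out of range
    # (this is exactly the "bounded" homomorphicity described above)
    a = random.randint(1, 3)
    b = random.randint(1, 10**6)
    return x + a * u + b * v

def decrypt(y):
    # f^{-1}(y) = (y mod v) mod u
    return (y % v) % u

x1, x2 = 17, 25
c1, c2 = encrypt(x1), encrypt(x2)
print(decrypt(c1 + c2) == x1 + x2)   # homomorphic addition -> True
print(decrypt(c1 * c2) == x1 * x2)   # one homomorphic multiplication -> True
```

Adding ciphertexts adds the hidden a-multiples of u; multiplying them squares them. Once x + (accumulated multiple of u) exceeds v, decryption fails, which is why the operation count must be bounded in advance.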

    Gaussian Behavior of Quadratic Irrationals

    We study the probabilistic behaviour of the continued fraction expansion of a quadratic irrational number, when weighted by some "additive" cost. We prove asymptotic Gaussian limit laws, with an optimal speed of convergence. We deal with the underlying dynamical system associated with the Gauss map, and its weighted periodic trajectories. We work with analytic combinatorics methods, mainly bivariate Dirichlet generating functions, and use various tools from number theory (the Landau theorem), from probability (the Quasi-Powers theorem), and from dynamical systems: our main object of study is the (weighted) transfer operator, which we relate to the generating functions of interest. The present paper exhibits a strong parallelism with the methods previously introduced by Baladi and Vallée in the study of rational trajectories. However, the present study is more involved and uses a deeper functional analysis framework.
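The objects studied here are the periodic continued fraction expansions of quadratic irrationals (Lagrange's theorem). A small sketch, not part of the paper's transfer-operator machinery, computing partial quotients of sqrt(d) with the standard integer recurrence so the periodicity is visible:

```python
import math

def cf_sqrt(d, n):
    """First n partial quotients of the continued fraction of sqrt(d),
    via the classical integer recurrence m_{k+1} = q_k a_k - m_k,
    q_{k+1} = (d - m_{k+1}^2)/q_k, a_{k+1} = floor((a_0 + m_{k+1})/q_{k+1})."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:          # d a perfect square: sqrt(d) is rational
        return [a0]
    m, q, a = 0, 1, a0
    out = [a0]
    for _ in range(n - 1):
        m = q * a - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        out.append(a)
    return out

print(cf_sqrt(2, 6))   # [1, 2, 2, 2, 2, 2]        sqrt(2) = [1; (2)]
print(cf_sqrt(7, 9))   # [2, 1, 1, 1, 4, 1, 1, 1, 4]  sqrt(7) = [2; (1,1,1,4)]
```

An "additive" cost in the sense of the abstract is a sum of a digit cost c(a_k) over the partial quotients of such a periodic trajectory.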

    Quantifying Riverbed Sediment Using Recreational-Grade Side Scan Sonar

    The size and organization of bed material, bed texture, is a fundamental attribute of channels and is one component of the physical habitat of aquatic ecosystems. Multiple discipline-specific definitions of texture exist, and there is no universally accepted metric to quantify the spectrum of possible bed textures found in aquatic environments. Moreover, metrics to describe texture are strictly statistical. Recreational-grade side scan sonar systems now offer the possibility of imaging submerged riverbed sediment at resolutions potentially sufficient to identify subtle changes in bed texture with minimal cost, expertise in sonar, or logistical effort. However, inferring riverbed sediment from side scan sonar data is limited because recreational-grade systems were not designed for this purpose and methods to interpret the data have relied on manual and semi-automated routines. Visual interpretation of side scan sonar data is not practical for large volumes of data because it is labor intensive and lacks reproducibility. This thesis addresses the current limitations of visual interpretation with two objectives: 1) objectively quantify side scan sonar imagery texture, and 2) develop an automated texture segmentation algorithm for broad-scale substrate characterization. To address objective 1), I used a time series of imagery collected along a 1.6 km reach of the Colorado River in Marble Canyon, AZ. A statistically based texture analysis was performed on georeferenced side scan sonar imagery to identify objective metrics that could discriminate different sediment types. A Grey Level Co-occurrence Matrix based texture analysis was found to successfully discriminate the textures associated with different sediment types. Texture varies significantly at the scale of ≈ 9 m² on side scan sonar imagery gridded at a regular 25 cm resolution. A minimum of three and a maximum of five distinct textures could be observed directly from side scan sonar imagery.
To address objective 2), linear least squares and Gaussian mixture modeling approaches were developed and tested. Both sediment classification methods successfully classified heterogeneous riverbeds into homogeneous patches of sand, gravel, and boulders. Gaussian mixture models outperformed the least squares models because they classified gravel with the highest accuracies. Additionally, substrate maps derived from the Gaussian modeling approach better estimated reach-averaged proportions of different sediment types when compared to similar maps derived from multibeam sonar.
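A minimal sketch of the Grey Level Co-occurrence Matrix statistics used above to quantify texture; this toy uses a single horizontal offset and two hand-made patches (the grey-level counts and feature choices are illustrative, not the thesis's configuration):

```python
# GLCM: counts of co-occurring grey levels for a fixed pixel offset,
# normalized to probabilities; texture features are moments of it.

def glcm(image, levels):
    """Co-occurrence probabilities for horizontally adjacent pixels."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[c / total for c in r] for r in m]

def contrast(p):
    # high when co-occurring grey levels differ a lot (rough texture)
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    # high when few grey-level pairs dominate (uniform texture)
    return sum(c * c for row in p for c in row)

smooth = [[0, 0, 0, 0]] * 4                 # uniform patch (sand-like)
rough = [[0, 2, 0, 2], [2, 0, 2, 0]] * 2    # alternating patch (boulder-like)

print(contrast(glcm(smooth, 3)), contrast(glcm(rough, 3)))  # 0.0 4.0
print(energy(glcm(smooth, 3)), energy(glcm(rough, 3)))      # 1.0 0.5
```

Computing such features over a sliding window, then clustering or classifying the feature vectors, is the general shape of the automated segmentation the thesis develops.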

    Fuel Prediction and Reduction in Public Transportation by Sensors Monitoring and Bayesian Networks

    We exploit the use of a controller area network (CAN-bus) to monitor sensors on the buses of local public transportation in a big European city. The aim is to advise fleet managers and policymakers on how to reduce fuel consumption so that air pollution is controlled and public services are improved. We deploy both heuristic and exhaustive algorithms to generate Bayesian networks among the monitored variables. The aim is to describe the relevant relationships between the variables, to discover and confirm the possible cause–effect relationships, to predict fuel consumption dependent on the contextual conditions of traffic, and to enable an intervention analysis to be conducted on the variables so that our goals are achieved. We propose a validation technique for Bayesian networks based on Granger causality: it relies upon observations of the time series formed by successive values of the variables in time. We use the same method based on Granger causality to rank the Bayesian networks obtained as well. A comparison of the Bayesian networks discovered against the ground truth is carried out on a synthetic data set, specifically generated for this study: the results confirm the validity of the Bayesian networks, which agree on most of the existing relationships.
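A minimal sketch of the Granger-causality idea behind the validation technique: x "Granger-causes" y if lagged values of x reduce the prediction error of y beyond what y's own lags achieve. The single-lag models, toy data, and threshold here are illustrative, not the paper's setup:

```python
import random

random.seed(0)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
# y depends on the previous value of x plus small noise
y = [0.0] + [0.9 * x[t - 1] + random.gauss(0, 0.1) for t in range(1, n)]

def ssr_restricted(y):
    """Residual sum of squares of y[t] ~ a*y[t-1] (own lag only)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    a = num / den
    return sum((y[t] - a * y[t - 1]) ** 2 for t in range(1, len(y)))

def ssr_full(y, x):
    """Residual sum of squares of y[t] ~ a*y[t-1] + b*x[t-1],
    solved via the 2x2 normal equations."""
    T = range(1, len(y))
    yy = sum(y[t - 1] ** 2 for t in T)
    xx = sum(x[t - 1] ** 2 for t in T)
    xy = sum(y[t - 1] * x[t - 1] for t in T)
    cy = sum(y[t] * y[t - 1] for t in T)
    cx = sum(y[t] * x[t - 1] for t in T)
    det = yy * xx - xy * xy
    a = (cy * xx - cx * xy) / det
    b = (yy * cx - xy * cy) / det
    return sum((y[t] - a * y[t - 1] - b * x[t - 1]) ** 2 for t in T)

# Adding x's lag shrinks the error dramatically when x drives y,
# which is evidence for an edge x -> y in the network.
print(ssr_restricted(y) > 10 * ssr_full(y, x))  # True
```

In the paper's setting the same comparison, applied to each candidate edge of a discovered Bayesian network, provides a score for validating and ranking networks.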

    Quantum algorithms for the combinatorial invariants of numerical semigroups

    It was back in spring 2014 when the author of this doctoral dissertation was finishing his master's thesis, whose main objective was the understanding of Peter W. Shor's most praised result, a quantum algorithm capable of factoring integers in polynomial time. During the development of this master's thesis, my soon-to-be doctoral advisor and I studied the main aspects of quantum computing from a purely algebraic perspective. This research eventually evolved into a sufficiently thorough canvas capable of explaining the main aspects and features of the mentioned algorithm from within an undergraduate context. Just after its conclusion, we sat down and elaborated a research plan for a future Ph.D. thesis, which was expected to involve quantum computing but also a branch of algebra whose apparently innocent definitions hide some really hard problems from a computational perspective: the theory of numerical semigroups. As will be seen later, the definition of a numerical semigroup does not involve sophisticated knowledge from any somewhat obscure and distant branch of the tree of mathematics. Nonetheless, a number of combinatorial problems associated with these numerical semigroups are extremely hard to solve, even when the size of the input is relatively small. Some examples of these problems are the calculations of the Frobenius number, the Apéry set, and the Sylvester denumerant, all of them bearing the names of legendary mathematicians. This thesis is the result of our multiple attempts to tackle those combinatorial problems with the help of a hypothetical quantum computer. First, Chapter 2 is devoted to numerical semigroups and computational complexity theory, and is divided into three sections. In Section 2.1, we give the formal definition of a numerical semigroup, along with a description of the main problems associated with them.
In Section 2.2, we sketch the fundamental concepts of complexity theory, in order to understand the true significance of the inherent hardness concealed in the resolution of those problems. Finally, in Section 2.3 we prove the computational complexity of the problems we aim to solve. Chapter 3 is the result of our outline of the theory of quantum computing. We give the basic definitions and concepts needed to understand the particular place that quantum computers occupy in the world of Turing machines, and also the main elements that compose this particular model of computation: quantum bits and quantum entanglement. We also explain the two most common models of quantum computation, namely quantum circuits and adiabatic quantum computers. For all of them we give mathematical definitions, always keeping in mind the physical experiments from which they stemmed. Chapter 4 is also about quantum computing, but from an algorithmic perspective. We present the most important quantum algorithms to date in a standardized way, explaining their context, impact, and consequences, while giving mathematical proofs of their correctness and worked-out examples. We begin with the early algorithms of Deutsch, Deutsch-Jozsa, and Simon, and explain their importance in the dawn of quantum computation. Then, we describe the major landmarks: Shor's factoring, Grover's search, and quantum counting. Chapter 5 is the culmination of all previously explained concepts, as it includes the description of various quantum algorithms capable of solving the main problems in the branch of numerical semigroups. We present quantum circuit algorithms for the Sylvester denumerant and numerical semigroup membership, and adiabatic quantum algorithms for the Apéry set and the Frobenius problem.
We also describe a C++ library called numsem, specially developed within the context of this doctoral thesis, which helps us study the computational hardness of all previously explained problems from a classical perspective. This thesis is intended to be self-contained, at least in the main branches of mathematics on which it rests: numerical semigroups, computational complexity theory, and quantum computation. Nevertheless, for the majority of the concepts explained here we give references for the interested reader who wants to delve deeper into them.
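A classical sketch (not the thesis's quantum algorithms, nor its numsem library) of the three invariants named above, for a numerical semigroup given by its generators:

```python
def apery_set(gens, m):
    """Apery set of the semigroup w.r.t. m: for each residue class mod m,
    the smallest semigroup element in that class (a shortest-path
    relaxation over residues)."""
    INF = float("inf")
    ap = [INF] * m
    ap[0] = 0
    changed = True
    while changed:
        changed = False
        for r in range(m):
            if ap[r] == INF:
                continue
            for g in gens:
                s = ap[r] + g
                if s < ap[s % m]:
                    ap[s % m] = s
                    changed = True
    return ap

def frobenius(gens):
    """Largest integer not representable: max of the Apery set minus m."""
    m = min(gens)
    return max(apery_set(gens, m)) - m

def denumerant(gens, n):
    """Sylvester denumerant: number of ways to write n as a nonnegative
    integer combination of the generators (coin-counting DP)."""
    ways = [1] + [0] * n
    for g in gens:
        for v in range(g, n + 1):
            ways[v] += ways[v - g]
    return ways[n]

# For the semigroup <3, 5>, Sylvester's formula gives 3*5 - 3 - 5 = 7.
print(frobenius([3, 5]))        # 7
print(denumerant([3, 5], 15))   # 2  (15 = 3+3+3+3+3 = 5+5+5)
```

Membership follows from either invariant: n belongs to the semigroup exactly when denumerant(gens, n) > 0, or equivalently when n >= apery_set(gens, m)[n % m]. The exponential blow-up of these classical computations as the generators grow is the hardness the thesis attacks with quantum algorithms.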