
    Point-Separable Classes of Simple Computable Planar Curves

    In mathematics, curves are typically defined as the images of continuous real functions (parametrizations) defined on a closed interval. They can also be defined as connected, one-dimensional compact sets of points. For simple curves of finite length, parametrizations can further be required to be injective or even length-normalized. All four of these approaches to curves are classically equivalent. In this paper we investigate four versions of computable curves based on these four approaches. It turns out that they are all different, and hence we get four different classes of computable curves. More interestingly, these four classes are even point-separable, in the sense that the sets of points covered by computable curves of the different versions also differ. However, if we consider only computable curves of computable length, then all four versions of computable curves become equivalent. This shows that the definition of computable curves is robust, at least for curves of computable length. In addition, we show that the class of computable curves of computable length is point-separable from the other four classes of computable curves.
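
    A standard formalization of the first (parametrization-based) definition, stated here for reference (these are textbook definitions, not notation specific to the paper):

        % A simple planar curve as the image of an injective parametrization;
        % its length is the supremum over inscribed polygonal approximations.
        \[
          C = \gamma([0,1]), \qquad \gamma \colon [0,1] \to \mathbb{R}^2 \ \text{continuous and injective},
        \]
        \[
          \operatorname{length}(C) = \sup_{0 = t_0 < t_1 < \cdots < t_k = 1} \, \sum_{i=1}^{k} \bigl\| \gamma(t_i) - \gamma(t_{i-1}) \bigr\|.
        \]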

    Combinatorics of tight geodesics and stable lengths

    We give an algorithm to compute the stable lengths of pseudo-Anosovs on the curve graph, answering a question of Bowditch. We also give a procedure to compute all invariant tight geodesic axes of pseudo-Anosovs. Along the way we show that there are constants $1 < a_1 < a_2$ such that the minimal upper bound on `slices' of tight geodesics is bounded below and above by $a_1^{\xi(S)}$ and $a_2^{\xi(S)}$, where $\xi(S)$ is the complexity of the surface. As a consequence, we give the first computable bounds on the asymptotic dimension of curve graphs and mapping class groups. Our techniques involve a generalization of Masur--Minsky's tight geodesics and a new class of paths on which their tightening procedure works. Comment: 19 pages, 2 figures.
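
    For context, $\xi(S)$ is the standard complexity of a surface of genus $g$ with $n$ punctures used throughout the Masur--Minsky theory (a standard convention, not spelled out in the abstract itself):

        \[
          \xi(S_{g,n}) = 3g - 3 + n.
        \]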

    Lines Missing Every Random Point

    We prove that there is, in every direction in Euclidean space, a line that misses every computably random point. We also prove that there exist, in every direction in Euclidean space, arbitrarily long line segments missing every double exponential time random point. Comment: Added a section: "Betting in Doubly Exponential Time."

    Bézier curves that are close to elastica

    We study the problem of identifying those cubic Bézier curves that are close in the $L^2$ norm to planar elastic curves. The problem arises in design situations where the manufacturing process produces elastic curves; these are difficult to work with in a digital environment. We seek a sub-class of special Bézier curves as a proxy. We identify an easily computable quantity, which we call the $\lambda$-residual, that accurately predicts a small $L^2$ distance. We then identify geometric criteria on the control polygon that guarantee that a Bézier curve has $\lambda$-residual below 0.4, which effectively implies that the curve is within 1 percent of its arc length, in the $L^2$ norm, of an elastic curve. Finally, we give two projection algorithms that take an input Bézier curve and adjust its length and shape, whilst keeping the end-points and end-tangent angles fixed, until it is close to an elastic curve. Comment: 13 pages, 15 figures.
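
    The $\lambda$-residual itself is defined in the paper rather than in the abstract, so it is not reproduced here. As background, a minimal Python sketch of two standard ingredients the criteria are phrased in terms of, evaluating a cubic Bézier curve and approximating its arc length (the function names are ours, for illustration):

        import numpy as np

        def bezier_point(ctrl, t):
            """Evaluate a cubic Bezier curve at parameter t via the Bernstein basis."""
            p0, p1, p2, p3 = ctrl
            s = 1.0 - t
            return s**3 * p0 + 3.0 * s**2 * t * p1 + 3.0 * s * t**2 * p2 + t**3 * p3

        def arc_length(ctrl, n=1000):
            """Approximate arc length by summing chord lengths of a fine polyline."""
            ts = np.linspace(0.0, 1.0, n + 1)
            pts = np.array([bezier_point(ctrl, t) for t in ts])
            return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

        # Example: a gentle symmetric arc with fixed end-points and end-tangents.
        ctrl = [np.array(p, dtype=float) for p in [(0, 0), (1, 1), (2, 1), (3, 0)]]
        print(arc_length(ctrl))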

    Mutual Dimension

    We define the lower and upper mutual dimensions $mdim(x:y)$ and $Mdim(x:y)$ between any two points $x$ and $y$ in Euclidean space. Intuitively, these are the lower and upper densities of the algorithmic information shared by $x$ and $y$. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information. Our main theorem, the data processing inequality for mutual dimension, says that, if $f:\mathbb{R}^m \rightarrow \mathbb{R}^n$ is computable and Lipschitz, then the inequalities $mdim(f(x):y) \leq mdim(x:y)$ and $Mdim(f(x):y) \leq Mdim(x:y)$ hold for all $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^t$. We use this inequality and related inequalities that we prove in like fashion to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points. Comment: This article is 29 pages and has been submitted to ACM Transactions on Computation Theory. A preliminary version of part of this material was reported at the 2013 Symposium on Theoretical Aspects of Computer Science in Kiel, Germany.
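
    The main theorem mirrors the classical data processing inequality of Shannon information theory, quoted here only for comparison (a standard fact, not a result of the paper): for any function $f$ of a random variable $X$,

        \[
          I(f(X) ; Y) \le I(X ; Y),
        \]

    i.e. deterministic processing cannot increase the information that $X$ carries about $Y$. The paper's result is the algorithmic-dimension analogue, with $f$ computable and Lipschitz.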

    Applying MDL to Learning Best Model Granularity

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling of accidental noise, and too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is predicted for the learned parameter (length of sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving the best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of nodes in the hidden layer considered most likely by MDL, which again is found to coincide with the best value found experimentally. Comment: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, to appear.
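
    To make the two-part code concrete: MDL selects the model $M$ minimizing $L(M) + L(D \mid M)$, the bits needed to describe the model plus the bits needed to describe the data given the model. A minimal Python sketch of this selection on a toy problem, choosing a polynomial degree (an illustration of the principle, not the paper's experiments; the 16-bit coefficient precision is an arbitrary choice):

        import numpy as np

        def two_part_code_length(x, y, degree):
            """Crude two-part MDL score: L(M) + L(D|M) in bits."""
            coeffs = np.polyfit(x, y, degree)
            residuals = y - np.polyval(coeffs, x)
            # L(M): each coefficient stored at a fixed 16-bit precision.
            model_bits = 16 * (degree + 1)
            # L(D|M): ideal Gaussian code length for the residuals, up to an
            # additive discretization constant that is the same for every
            # degree and so does not affect the comparison.
            sigma2 = max(float(np.mean(residuals**2)), 1e-12)
            data_bits = 0.5 * len(y) * np.log2(2.0 * np.pi * np.e * sigma2)
            return model_bits + data_bits

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 100)
        y = 1.0 - 2.0 * x**2 + rng.normal(scale=0.1, size=x.size)
        best = min(range(1, 10), key=lambda d: two_part_code_length(x, y, d))
        print("MDL-selected degree:", best)  # the quadratic truth should win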

    Algorithmic aspects of branched coverings

    This is the announcement, and the long summary, of a series of articles on the algorithmic study of Thurston maps. We describe branched coverings of the sphere in terms of group-theoretical objects called bisets, and develop a theory of decompositions of bisets. We introduce a canonical "Levy" decomposition of an arbitrary Thurston map into homeomorphisms, metrically-expanding maps and maps doubly covered by torus endomorphisms. The homeomorphisms decompose themselves into finite-order and pseudo-Anosov maps, and the expanding maps decompose themselves into rational maps. As an outcome, we prove that it is decidable when two Thurston maps are equivalent. We also show that the decompositions above are computable, both in theory and in practice. Comment: 60-page announcement of 5-part text, to appear in Ann. Fac. Sci. Toulouse. Minor typos corrected, and major rewrite of section 7.8, which was studying a different map than claimed.