
    Hinged Dissections Exist

    We prove that any finite collection of polygons of equal area has a common hinged dissection. That is, for any such collection of polygons there exists a chain of polygons hinged at vertices that can be folded in the plane continuously without self-intersection to form any polygon in the collection. This result settles the open problem about the existence of hinged dissections between pairs of polygons that goes back implicitly to 1864 and has been studied extensively in the past ten years. Our result generalizes and indeed builds upon the result from 1814 that polygons have common dissections (without hinges). We also extend our common dissection result to edge-hinged dissections of solid 3D polyhedra that have a common (unhinged) dissection, as determined by Dehn's 1900 solution to Hilbert's Third Problem. Our proofs are constructive, giving explicit algorithms in all cases. For a constant number of planar polygons, both the number of pieces and running time required by our construction are pseudopolynomial. This bound is the best possible, even for unhinged dissections. Hinged dissections have possible applications to reconfigurable robotics, programmable matter, and nanomanufacturing.

    From Random Lines to Metric Spaces

    Consider an improper Poisson line process, marked by positive speeds so as to satisfy a scale-invariance property (more precisely, scale-equivariance). The line process can be characterized by its intensity measure, which belongs to a one-parameter family if scale and Euclidean invariance are required. This paper investigates a proposal by Aldous, namely that the line process could be used to produce a scale-invariant random spatial network (SIRSN) by connecting up points using paths which follow segments from the line process at the stipulated speeds. It is shown that this does indeed produce a scale-invariant network under suitable conditions on the parameter, and indeed that it produces a parameter-dependent random geodesic metric for d-dimensional space (d ≥ 2), where geodesics are given by minimum-time paths. Moreover, in the planar case it is shown that the resulting geodesic metric space has an almost-everywhere-unique-geodesic property, that geodesics are locally of finite mean length, and that if an independent Poisson point process is connected up by such geodesics then the resulting network places finite length in each compact region. It is an open question whether the result is a SIRSN (in Aldous' sense, so placing finite mean length in each compact region), but it may be called a pre-SIRSN.
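
    A minimal simulation sketch of the basic object, a Poisson line process marked by speeds, may help fix ideas. This is not the paper's construction: the speed cutoff v0, the mean line count, and the choice of exponent gamma are illustrative assumptions, and the improper process is truncated at v0 purely to make it simulable.

```python
import numpy as np

def sample_marked_lines(R=1.0, v0=0.1, gamma=3.0, mean_lines=50, seed=None):
    """Sample a speed-marked Poisson line process hitting a disc of radius R.

    A line is parameterized by (r, theta): r is its signed distance from
    the origin, theta the direction of its normal.  Speeds follow a power
    law with density proportional to v**(-gamma) above the cutoff v0, a
    truncated stand-in for the scale-invariant intensity measure.
    """
    rng = np.random.default_rng(seed)
    n = rng.poisson(mean_lines)              # Poisson number of lines
    r = rng.uniform(-R, R, size=n)           # signed distances from origin
    theta = rng.uniform(0.0, np.pi, size=n)  # normal directions
    # Inverse-CDF sampling: P(V > v) = (v / v0)**(1 - gamma) for v >= v0.
    v = v0 * (1.0 - rng.uniform(size=n)) ** (-1.0 / (gamma - 1.0))
    return r, theta, v

r, theta, v = sample_marked_lines(seed=42)
print(f"{len(v)} lines; fastest speed {v.max():.2f}")
```

    Minimum-time paths through such a configuration would then be computed over the induced network; that step is where the paper's actual analysis lies.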

    Performance Improvements of Common Sparse Numerical Linear Algebra Computations

    Manufacturers of computer hardware continue to sustain an unprecedented pace of progress in the computing speed of their products, partly through increased clock rates but also through ever more complicated chip designs. With new processor families appearing every few years, it is increasingly hard to achieve high performance in sparse matrix computations. This research proposes new methods for sparse matrix factorizations and applies, in an iterative code, generalizations of known concepts from related disciplines. The proposed solutions and extensions are implemented so as to deliver efficiency while retaining the ease of use of existing solutions. The implementations are thoroughly timed and analyzed using a commonly accepted set of test matrices. The tests were conducted on modern processors that have gained an appreciable level of popularity and are fairly representative of the wider range of processor types available on the market now or in the near future. The factorization technique formally introduced in the early chapters is later shown to be competitive with state-of-the-art software currently available. Although not superior in all cases (as no single approach could possibly be), the new factorization algorithm exhibits several promising features. In addition, a comprehensive optimization effort is applied to an iterative algorithm that stands out for its robustness; this also yields satisfactory performance improvements on the tested computing platforms. The same set of test matrices is used for both investigated techniques to enable an easy comparison, even though they are customarily treated separately in the literature. Possible extensions of the presented work are discussed, ranging from easily conceivable mergers with existing solutions to more evolved schemes dependent on hard-to-predict progress in theoretical and algorithmic research.
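
    Since the thesis's new factorization is not specified in this abstract, the following is only a hedged sketch of the kind of benchmark it describes: timing a standard sparse direct factorization against a standard robust iterative solver. The 2-D Poisson matrix stands in for an entry from a standard test collection (which would normally be loaded from a Matrix Market file); all sizes are arbitrary.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in test matrix: 2-D Poisson (5-point stencil), symmetric
# positive definite and sparse.
n = 200
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

t0 = time.perf_counter()
lu = spla.splu(A)                    # sparse LU factorization (direct)
x_direct = lu.solve(b)
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
x_iter, info = spla.cg(A, b)         # conjugate gradients (iterative)
t_iter = time.perf_counter() - t0

print(f"direct: {t_direct:.3f}s  iterative: {t_iter:.3f}s  "
      f"CG residual: {np.linalg.norm(A @ x_iter - b):.2e}")
```

    On matrices like this one the direct method pays a one-off factorization cost but then solves further right-hand sides almost for free, while the iterative method's cost scales with the iteration count; this trade-off is the kind of comparison the thesis carries out on real test sets.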

    A study of finite gap solutions to the nonlinear Schrödinger equation

    The vector nonlinear Schrödinger equation is an envelope equation which models the propagation of ultra-short light pulses and continuous-wave beams along optical fibres. Previous work has focused almost entirely on soliton solutions to the equation, using a Lax representation originally developed by Manakov. We prove recursion formulae for the family of higher-order nonlinear Schrödinger equations, along with its associated Lax hierarchy, before investigating finite gap solutions using an algebro-geometric approach which introduces Baker-Akhiezer functions defined on the Riemann surface of the relevant spectral curve. We extend this approach to account for solutions of arbitrary genus and compare it with an alternative method describing solutions of genus two. The scalar nonlinear Schrödinger and Heisenberg ferromagnet equations were shown to be equivalent following work by Lakshmanan; we generalise this idea by introducing the Heisenberg ferromagnet hierarchy and showing that it is entirely gauge equivalent to the scalar nonlinear Schrödinger hierarchy in the attractive case. We also investigate the polarisation state evolution of general solutions to the vector nonlinear Schrödinger equation and study possible degenerations to the Heisenberg ferromagnet equation.
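
    For reference, a standard form of the equation in question is the focusing (attractive) Manakov system shown below, together with the zero-curvature formulation on which the Lax-based machinery rests. Normalizations and signs vary between references, and the thesis's own conventions are not reproduced here.

```latex
% Focusing vector (Manakov) NLS for the two-component envelope (q_1, q_2):
\[
  i\,\partial_t q_j + \partial_x^2 q_j
      + 2\left(|q_1|^2 + |q_2|^2\right) q_j = 0, \qquad j = 1, 2.
\]
% Lax (zero-curvature) formulation: the equation arises as the
% compatibility condition of the auxiliary linear system
\[
  \psi_x = U(\lambda)\,\psi, \qquad \psi_t = V(\lambda)\,\psi
  \quad\Longrightarrow\quad
  \partial_t U - \partial_x V + [\,U, V\,] = 0,
\]
% where U and V are 3x3 matrices depending on the spectral parameter
% \lambda; the spectral curve underlying the finite gap construction is
% the characteristic equation of the associated monodromy matrix.
```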

    Complete Issue 26, 2002


    A Bird’s Eye View of Human Language Evolution

    Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that this serves as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, arising from the composition of smaller forms like words and phrases into larger ones.
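
    The formal-language claim at the end is concrete enough to illustrate with a toy example. The sketch below is illustrative only, not one of the analyses cited in the article: it contrasts a finite-state "song-like" pattern, which a regular-expression recognizer handles, with the textbook supra-regular pattern A^n B^n, which requires counting and is often used as a minimal proxy for phrase-like composition.

```python
import re

# A song-like pattern: any number of repeated A-B motifs.  (AB)+ is a
# regular language, so a finite-state recognizer (here a regex) suffices.
SONG = re.compile(r"^(AB)+$")

def center_embedded(s: str) -> bool:
    """Recognize A^n B^n (n >= 1): matching counts of As and Bs require a
    counter, which no finite-state device has, so the language is
    non-regular (context-free)."""
    n = len(s) // 2
    return n >= 1 and len(s) == 2 * n and s == "A" * n + "B" * n

print(SONG.match("ABABAB") is not None)   # True: finite-state suffices
print(center_embedded("AAABBB"))          # True: needs counting
print(center_embedded("AABBB"))           # False: unbalanced
```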

    The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability. These yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
    * Hilbert's epsilon-calculus
    * Gödel's omega-consistency
    * the Law of the Excluded Middle
    * Hilbert's omega-Rule
    * an Algorithmic omega-Rule
    * Gentzen's Rule of Infinite Induction
    * Rosser's Rule C
    * Markov's Principle
    * the Church-Turing Thesis
    * Aristotle's particularisation
    * Wittgenstein's perspective of constructive mathematics
    * an evidence-based perspective of quantification
    By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
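
    The two notions of decidability that drive the abstract's two interpretations can be paraphrased as follows; the wording is a reconstruction from the abstract's terminology, not a quotation from the paper.

```latex
% Reconstruction (not a quotation) of the contrast the abstract draws:
\begin{itemize}
  \item A formula $F(x)$ of PA is \emph{algorithmically verifiable} over
        $\mathbb{N}$ if, for every $n \in \mathbb{N}$, there is some
        algorithm (possibly depending on $n$) that correctly decides each
        of $F(1), F(2), \ldots, F(n)$.
  \item $F(x)$ is \emph{algorithmically computable} over $\mathbb{N}$ if a
        single algorithm correctly decides $F(n)$ for every
        $n \in \mathbb{N}$.
\end{itemize}
% Computability implies verifiability but not conversely; the weak and
% strong interpretations assign Tarskian truth values according to the
% first and the second notion respectively.
```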

    Towards a classification of continuity and on the emergence of generality

    This dissertation has for its primary task the investigation, articulation, and comparison of a variety of concepts of continuity, as developed throughout the history of philosophy and a part of mathematics. It also motivates and aims to better understand some of the conceptual and historical connections between characterizations of the continuous, on the one hand, and ideas and commitments about what makes for generality (and universality), on the other. Many thinkers of the past have acknowledged the need for advanced science and philosophy to pass through the “labyrinth of the continuum” and to develop a sufficiently rich and precise model or description of the continuous; but it has been far less widely appreciated how the resulting description informs our ideas and commitments regarding how (and whether) things become general (or how we think about universality). The introduction provides some motivation for the project and gives an overview of the chapters. The first two chapters are devoted to Aristotle, as Aristotle’s Physics is arguably the foundational book on continuity. These chapters show that Aristotle’s efforts to understand and formulate a rich and demanding concept of the continuous reached across many of his investigations; in particular, they aim to better situate certain structural similarities and conceptual overlaps between his Posterior Analytics and his Physics, further revealing connections between the structure of demonstration or proof (the subject of logic and the sciences) and the structure of bodies in motion (the subject of physics and the study of nature). They also contribute to the larger narrative about continuity, in which Aristotle emerges as one of the more articulate and influential early proponents of an account that aligns continuity with closeness or relations of nearness. Chapter 3 is devoted to Duns Scotus and Nicolas Oresme and, more generally, to the Medieval debate surrounding the “latitude of forms” or the “intension and remission of forms,” in which concerted efforts were made to re-focus attention onto the type of continuous motions mostly ignored by the tradition that followed in the wake of Aristotelian physics. In this context, the traditional appropriation of Aristotle’s thoughts on unity, contrariety, genera, forms, quantity and quality, and continuity is challenged in a number of important ways, reclaiming some of the largely overlooked insights of Aristotle into the intimate connections between continua and genera. By realizing certain of Scotus’s ideas concerning the intension and remission of qualities, Oresme initiates a radical transformation in the concept of continuity, and this chapter argues that Oresme’s efforts are best understood as an early attempt at freeing the concept of continuity from its ancient connection to closeness. Chapters 4 and 5 are devoted to unpacking and re-interpreting Spinoza’s powerful theory of what makes for the ‘oneness’ of a body in general and how ‘ones’ can compose to form ever more composite ‘ones’ (all the way up to Nature as a whole). Much of Spinoza reads like an elaboration on Oresme’s new model of continuity; however, the legacy of the Cartesian emphasis on local motion makes it difficult for Spinoza to give up on closeness altogether. Chapter 4 is dedicated to a closer look at some subtleties and arguments surrounding Descartes’ definition of local motion and ‘one body’, and Chapter 5 builds on this to develop Spinoza’s ideas about how the concept of ‘one body’ scales, in which context a number of far-reaching connections between continuity and generality are also unpacked. Chapter 6 leaves the realm of philosophy and is dedicated to the contributions to the continuity-generality connection from one field of contemporary mathematics: sheaf theory (and, more generally, category theory). The aim of this chapter is to present something like a “tour” of the main philosophical contributions made by the idea of a sheaf to the specification of the concept of continuity (with particular regard for its connections to universality). The concluding chapter steps back and discusses a number of distinct characterizations of continuity in more abstract and synthetic terms, while touching on some of the corresponding representations of generality to which each such model gives rise. It ends with a brief discussion of some of the arguments that have been deployed in the past to claim that continuity (or discreteness) is “better.”

    Models and Methods for Random Fields in Spatial Statistics with Computational Efficiency from Markov Properties

    The focus of this work is the development of new random field models and methods suitable for the analysis of large environmental data sets. A large part is devoted to a number of extensions of the newly proposed Stochastic Partial Differential Equation (SPDE) approach for representing Gaussian fields using Gaussian Markov Random Fields (GMRFs). The method is based on the fact that Gaussian Matérn fields can be viewed as solutions to a certain SPDE, and it is useful for large spatial problems where traditional methods are too computationally intensive. A variation of the method using wavelet basis functions is proposed, and in a simulation-based study the wavelet approximations are compared with two of the most popular methods for efficient approximation of Gaussian fields. A new class of spatial models, including the Gaussian Matérn fields and a wide family of fields with oscillating covariance functions, is also constructed using nested SPDEs. The SPDE method is extended to this model class and it is shown that all desirable properties are preserved, such as computational efficiency, applicability to data on general smooth manifolds, and simple non-stationary extensions. Finally, the SPDE method is extended to a larger class of non-Gaussian random fields with Matérn covariance functions, including certain Laplace Moving Average (LMA) models. In particular, it is shown how the SPDE formulation can be used to obtain an efficient simulation method and an accurate parameter estimation technique for an LMA model. A method for estimating spatially dependent temporal trends is also developed. The method is based on a space-varying regression model that accounts for spatial dependency in the data, and it is used to analyze temporal trends in vegetation data from the African Sahel in order to find regions that have experienced significant changes in vegetation cover over the studied time period. The problem of estimating such regions is investigated further in the final part of the thesis, where a method for estimating excursion sets, and the related problem of finding uncertainty regions for contour curves, for latent Gaussian fields is proposed. The method is based on using a parametric family for the excursion sets in combination with Integrated Nested Laplace Approximations (INLA) and an importance sampling-based algorithm for estimating joint probabilities.
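
    The SPDE-Matérn link that the abstract relies on can be stated compactly. The formulation below follows the standard Whittle and Lindgren-Rue-Lindström form; the thesis's exact normalization may differ.

```latex
% A Gaussian field x(s) on R^d solving the SPDE
\[
  (\kappa^2 - \Delta)^{\alpha/2}\, x(s) = \mathcal{W}(s),
  \qquad \kappa > 0, \quad \alpha = \nu + d/2,
\]
% with Gaussian white noise \mathcal{W} has the Matern covariance
\[
  r(h) = \frac{\sigma^2}{2^{\nu - 1}\,\Gamma(\nu)}
         \bigl(\kappa \lVert h \rVert\bigr)^{\nu}
         K_{\nu}\bigl(\kappa \lVert h \rVert\bigr),
  \qquad
  \sigma^2 = \frac{\Gamma(\nu)}{\Gamma(\nu + d/2)\,(4\pi)^{d/2}\,\kappa^{2\nu}},
\]
% where K_nu is the modified Bessel function of the second kind.
% Discretizing the SPDE with compactly supported basis functions yields a
% sparse precision matrix, i.e. a GMRF, which is the source of the
% computational efficiency the abstract emphasizes.
```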
