Polynomial-time sortable stacks of burnt pancakes
Pancake flipping, a famous open problem in computer science, can be
formalised as the problem of sorting a permutation of positive integers using
as few prefix reversals as possible. In that context, a prefix reversal of
length k reverses the order of the first k elements of the permutation. The
burnt variant of pancake flipping involves permutations of signed integers, and
reversals in that case not only reverse the order of elements but also invert
their signs. Although three decades have now passed since the first works on
these problems, neither their computational complexity nor the maximal number
of prefix reversals needed to sort a permutation is yet known. In this work, we
prove a new lower bound for sorting burnt pancakes, and show that an important
class of permutations, known as "simple permutations", can be optimally sorted
in polynomial time.
Comment: Accepted pending minor revision
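The prefix-reversal model in this abstract is easy to make concrete. Below is a minimal Python sketch (mine, not the paper's algorithm) of the classical greedy strategy, which sorts any unsigned stack in at most 2n - 3 flips, together with the signed flip used in the burnt variant:

```python
def prefix_reverse(perm, k):
    """Reverse the first k elements (a 'flip' of the top k pancakes)."""
    return perm[:k][::-1] + perm[k:]

def burnt_flip(perm, k):
    """Burnt variant: a flip also inverts the signs of the flipped pancakes."""
    return [-x for x in perm[:k][::-1]] + perm[k:]

def pancake_sort(perm):
    """Greedy pancake sort: flip the largest unsorted element to the front,
    then flip it into place.  Uses at most 2n - 3 flips -- an upper bound,
    not the (still unknown) optimum discussed in the abstract."""
    perm = list(perm)
    flips = []
    for size in range(len(perm), 1, -1):
        pos = perm.index(max(perm[:size]))
        if pos == size - 1:
            continue                              # already in place
        if pos != 0:
            flips.append(pos + 1)
            perm = prefix_reverse(perm, pos + 1)  # bring max to the front
        flips.append(size)
        perm = prefix_reverse(perm, size)         # flip max into place
    return perm, flips

sorted_perm, flips = pancake_sort([3, 1, 4, 2])
```

Replaying the returned flip lengths on the original stack reproduces the sorted permutation, which makes the routine easy to sanity-check.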
The Interpolating Random Spline Cryptosystem and the Chaotic-Map Public-Key Cryptosystem
The feasibility of implementing the interpolating cubic spline function as an encryption and decryption transformation is presented. The encryption method can be viewed as computing a transposed polynomial. The main characteristic of the spline cryptosystem is that the domain and range of encryption are defined over the real numbers, instead of the traditional integers. Moreover, the spline cryptosystem can be implemented in terms of inexpensive multiplications and additions.
Using spline functions, a series of discontiguous spline segments can execute the modular arithmetic of the RSA system. The similarity of the RSA and spline functions within the integer domain is demonstrated. Furthermore, we observe that such a reformulation of the RSA cryptosystem can be characterized as a polynomial with random offsets between ciphertext values and plaintext values. This contrast with the spline cryptosystem motivated the development of a random spline cryptosystem, an advanced form of the spline cryptosystem. Its utility is increased by the mathematical indeterminacy of computing keys from no more than 4 interpolants and by its numerical sensitivity to the random offset t.
This article also presents a chaotic public-key cryptosystem employing a one-dimensional difference equation as well as a quadratic difference equation. This system makes use of ElGamal's scheme to accomplish the encryption process. We note that breaking this system requires the same work factor as solving the discrete logarithm problem with moduli of the same size.
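For reference, the RSA operation that the spline segments are said to emulate is ordinary modular exponentiation. Here is a self-contained Python sketch using the standard textbook parameters (p = 61, q = 53, so n = 3233, e = 17, d = 2753; these are illustrative only and far too small to be secure):

```python
def modexp(base, exp, mod):
    """Square-and-multiply modular exponentiation."""
    result, base = 1, base % mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

# Textbook toy parameters: n = 61 * 53 = 3233, with e * d = 1 (mod phi(n))
n, e, d = 3233, 17, 2753
plaintext = 65
ciphertext = modexp(plaintext, e, n)   # encrypt: m^e mod n
recovered = modexp(ciphertext, d, n)   # decrypt: c^d mod n
```

The abstract's point is that this integer-domain arithmetic can also be expressed piecewise over the reals by spline segments; the sketch above shows only the conventional formulation being compared against.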
Braids: A Survey
This article is about Artin's braid group and its role in knot theory. We set
ourselves two goals: (i) to provide enough of the essential background so that
our review would be accessible to graduate students, and (ii) to focus on those
parts of the subject in which major progress was made, or interesting new
proofs of known results were discovered, during the past 20 years. A central
theme that we try to develop is to show ways in which structure first
discovered in the braid groups generalizes to structure in Garside groups,
Artin groups and surface mapping class groups. However, the literature is
extensive, and for reasons of space our coverage necessarily omits many very
interesting developments. Open problems are noted and so-labelled, as we
encounter them.
Comment: Final version, revised to take account of the comments of readers. A
review article, to appear in the Handbook of Knot Theory, edited by W.
Menasco and M. Thistlethwaite. 91 pages, 24 figures
Evolution of whole genomes through inversions: models and algorithms for duplicates, ancestors, and edit scenarios
Advances in sequencing technology are yielding DNA sequence data at an alarming rate – a rate reminiscent of Moore's law. Biologists' abilities to analyze this data, however, have not kept pace. On the other hand, the discrete and mechanical nature of the cell life-cycle has been tantalizing to computer scientists. Thus in the 1980s, pioneers of the field now called Computational Biology began to uncover a wealth of computer science problems, some confronting modern biologists and some hidden in the annals of the biological literature. In particular, many interesting twists were introduced to classical string matching, sorting, and graph problems. One such problem, first posed in 1941 but rediscovered in the early 1980s, is that of sorting by inversions (also called reversals): given two permutations, find the minimum number of inversions required to transform one into the other, where an inversion inverts the order of a subpermutation. Indeed, many genomes have evolved mostly or only through inversions. Thus it becomes possible to trace evolutionary histories by inferring sequences of such inversions that led to today's genomes from a distant common ancestor. But unlike the classic edit distance problem, where string editing was relatively simple, editing permutations in this way has proved to be more complex. In this dissertation, we extend the theory so as to make these edit distances more broadly applicable and faster to compute, and work towards more powerful tools that can accurately infer evolutionary histories. In particular, we present work that for the first time considers genomic distances between any pair of genomes, with no limitation on the number of occurrences of a gene. Next we show that there are conditions under which an ancestral genome (or one close to the true ancestor) can be reliably reconstructed.
Finally, we present a new methodology that computes a minimum-length sequence of inversions to transform one permutation into another in, on average, O(n log n) steps, whereas the best worst-case algorithm to compute such a sequence uses O(n√n log n) steps.
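To make the problem statement concrete, here is a brute-force Python sketch (mine, not the dissertation's method) that computes the exact reversal distance between two small permutations by breadth-first search over all possible inversions. It is exponential in n, which is precisely why the efficient algorithms described above matter:

```python
from collections import deque
from itertools import combinations

def reversal_distance(perm, target=None):
    """Minimum number of inversions (reversals of a contiguous block)
    transforming perm into target, by breadth-first search.
    Exponential in n: for illustration on tiny permutations only."""
    target = tuple(target if target is not None else sorted(perm))
    start = tuple(perm)
    seen = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == target:
            return seen[p]
        for i, j in combinations(range(len(p) + 1), 2):
            if j - i < 2:
                continue              # reversing < 2 elements is a no-op
            q = p[:i] + p[i:j][::-1] + p[j:]
            if q not in seen:
                seen[q] = seen[p] + 1
                queue.append(q)
    return None
```

For example, [3, 1, 2] needs two inversions (no single contiguous reversal sorts it), while [2, 1] needs one.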
The Coupled Electron-Ion Monte Carlo Method
In these Lecture Notes we review the principles of the Coupled Electron-Ion
Monte Carlo methods and discuss some recent results on metallic hydrogen.
Comment: 38 pages, 6 figures. Lecture notes for the International School of
Solid State Physics, 34th course: "Computer Simulation in Condensed Matter:
from Materials to Chemical Biology", 20 July-1 August 2005, Erice (Italy). To
appear in Lecture Notes in Physics (2006)
Neural replay in representation, learning and planning
Spontaneous neural activity is rarely the subject of investigation in cognitive neuroscience. This may be due to a dominant metaphor of cognition as an information-processing unit, under which internally generated thoughts are often treated as noise. Adopting a reinforcement learning (RL) framework, I consider cognition in terms of an agent trying to attain its internal goals. This framework motivated me to address in my thesis the role of spontaneous neural activity in human cognition.
First, I developed a general method, called temporal delayed linear modelling (TDLM), to enable me to analyse this spontaneous activity. TDLM can be thought of as a domain-general sequence detection method. It combines nonlinear classification and linear temporal modelling, enabling tests for statistical regularities in sequences of neural representations of a decoded state space. Although developed for use with human non-invasive neuroimaging data, the method can be extended to analyse rodent electrophysiological recordings.
Next, I applied TDLM to study spontaneous neural activity during rest in humans. As in rodents, I found that spontaneously generated neural events tended to occur in structured sequences. These sequences are accelerated in time compared to actual experience (30-50 ms state-to-state time lag). These sequences, termed replay, reverse their direction after reward receipt. Notably, this human replay is not a recapitulation of prior experience, but follows sequences implied by learnt abstract structural knowledge, suggesting a factorized representation of structure and sensory information.
Finally, I tested the role of neural replay in model-based learning and planning in humans. Following reward receipt, I found significant backward replay of non-local experience with a 160 ms lag. This replay prioritises and facilitates the learning of action values. In a separate sequential planning task, I show that these neural sequences go forward in direction, depicting the trajectory subjects are about to take. The research presented in this thesis reveals a rich role of spontaneous neural activity in supporting the internal computations that underpin planning and inference in human cognition.
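The core idea of TDLM can be caricatured in a few lines of Python. The sketch below is a simplified re-implementation from the description above, not the thesis code: a first-level regression estimates, for each time lag, how strongly each decoded state predicts every other state, and a second level scores that empirical pattern against a hypothesised transition matrix:

```python
import numpy as np

def sequenceness(states, transitions, max_lag=10):
    """Toy TDLM sketch.  states: (T, K) decoded state probabilities;
    transitions: (K, K) hypothesised 0/1 transition matrix.  Returns one
    score per lag: high when lagged predictions match the hypothesis."""
    scores = np.zeros(max_lag + 1)
    for lag in range(1, max_lag + 1):
        X, Y = states[:-lag], states[lag:]
        # first level: beta[i, j] = evidence that state i at time t
        # predicts state j at time t + lag
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        # second level: correlate the empirical pattern with the hypothesis
        scores[lag] = np.corrcoef(beta.ravel(), transitions.ravel())[0, 1]
    return scores

# synthetic "rest" data in which the sequence 0 -> 1 -> 2 -> 3 replays
# with a state-to-state lag of 2 samples
rng = np.random.default_rng(0)
T, K, true_lag = 600, 4, 2
states = 0.1 * rng.random((T, K))
for t in range(0, T - 3 * true_lag, 25):
    for k in range(K):
        states[t + k * true_lag, k] += 1.0
transitions = np.diag(np.ones(K - 1), 1)    # 0->1, 1->2, 2->3
scores = sequenceness(states, transitions, max_lag=6)
best_lag = int(np.argmax(scores))
```

On this synthetic input the score peaks at the injected lag, which is the signature the thesis exploits to detect replay in real neuroimaging data.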
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two.
I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm converges to the equilibrium probability distribution it is sampling.
I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm, which generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, it represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension.
In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While the eventual hope is to apply this algorithm to the exploration of as-yet-unidentified high-pressure, low-temperature phases of hydrogen, I employ it here to determine whether quantum hard spheres can form a low-temperature bcc solid when exchange is not taken into account.
In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity, because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems.
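The deterministic skeleton of the power method, including the dominant-pair deflation needed to reach a second eigenvalue, is easy to sketch in Python. This is the textbook iteration that the Monte Carlo Power Method samples stochastically when the matrix is too large to store, not the thesis's algorithm itself:

```python
import numpy as np

def power_method(A, iters=1000, seed=0):
    """Power iteration: returns (dominant eigenvalue, eigenvector).
    A Monte Carlo variant replaces the dense matrix-vector product
    with stochastic sampling when A cannot be stored."""
    rng = np.random.default_rng(seed)
    v = rng.random(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ A @ v, v            # Rayleigh quotient and eigenvector

def second_eigenvalue(A, iters=1000):
    """Deflate the dominant pair (valid for a symmetric A), then
    iterate again to expose the second eigenvalue."""
    lam1, v1 = power_method(A, iters)
    deflated = A - lam1 * np.outer(v1, v1)
    lam2, _ = power_method(deflated, iters, seed=1)
    return lam2

# transition matrix of a lazy two-state chain: eigenvalues 1 and 0.8;
# the second eigenvalue sets how fast the chain reaches equilibrium
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
lam2 = second_eigenvalue(P)
```

The second eigenvalue of this chain's transition matrix is 0.8, which is exactly the convergence-rate diagnostic the abstract describes for Monte Carlo algorithms.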
Finding Periodic Apartments: A Computational Study of Hyperbolic Buildings
This thesis presents a computational study of a fundamental open conjecture in geometric group theory using an intricate combination of Boolean satisfiability and orderly generation. In particular, we focus on Gromov's subgroup conjecture (GSC), which states that "each one-ended hyperbolic group contains a subgroup isomorphic to the fundamental group of a closed surface of genus at least 2". Several classes of groups have been shown to satisfy GSC, but the status of non-right-angled groups with regard to GSC is presently unknown, and they may provide counterexamples to the conjecture. With this in mind, Kangaslampi and Vdovina constructed 23 such groups utilizing the theory of hyperbolic buildings [International Journal of Algebra and Computation, vol. 20, no. 4, pp. 591–603, 2010], and ran an exhaustive computational analysis of surface subgroups of genus 2 arising from so-called periodic apartments [Experimental Mathematics, vol. 26, no. 1, pp. 54–61, 2017]. While they were able to rule out 5 of the 23 groups as potential counterexamples to GSC, they reported that their computational approach does not scale to genera higher than 2. We extend the work of Kangaslampi and Vdovina by developing two new approaches to analyzing the subgroups arising from periodic apartments in the 23 groups, utilizing different combinations of SAT solving and orderly generation. We develop novel SAT encodings and a specialized orderly algorithm for the approaches, and perform an exhaustive analysis (over the 23 groups) of the genus 3 subgroups arising from periodic apartments. With the aid of massively parallel computation we also exhaust the case of genus 4. As a result we rule out 4 additional groups as counterexamples to GSC, leaving 14 of the 23 groups for further inspection. In addition, our approach provides an independent verification of the genus 2 results reported by Kangaslampi and Vdovina.
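The encode-and-exhaust pattern behind the thesis can be illustrated in miniature. The sketch below is a toy stand-in, not the thesis's encoding: constraints are written in DIMACS-style CNF (the input format of real SAT solvers), and the search space is exhausted for all satisfying models. The actual work replaces this brute-force loop with a SAT solver plus a specialized orderly algorithm:

```python
from itertools import product

def sat_all_models(clauses, n_vars):
    """Brute-force CNF model enumeration.  Clauses use DIMACS-style
    signed integers: 3 means x3 is true, -3 means x3 is false.
    A real SAT solver does this search vastly more cleverly."""
    models = []
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            models.append(assign)
    return models

# (x1 OR x2) AND (NOT x1 OR x2): satisfied exactly when x2 is true
models = sat_all_models([[1, 2], [-1, 2]], 2)
```

Exhaustively enumerating models (rather than finding just one) mirrors how the thesis certifies that *no* periodic apartment of a given genus exists for a group, which is what rules it out as a counterexample.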
A new parallelisation technique for heterogeneous CPUs
Parallelisation has moved in recent years into mainstream compilers, and the demand
for parallelising tools that can do a better job of automatic parallelisation is higher than
ever. During the last decade considerable attention has been focused on developing
programming tools that support both explicit and implicit parallelism, to keep up with the
power of new multi-core technology. Yet success in developing automatic parallelising
compilers has been limited, mainly due to the complexity of the analysis required to
exploit available parallelism and to manage other parallelisation measures such as data
partitioning, alignment and synchronisation.
This dissertation investigates the development of a programming tool that automatically
parallelises operations on large data structures on a heterogeneous architecture, and asks
whether a high-level language compiler can use this tool to exploit implicit parallelism
and tap the performance potential of modern multicore technology. The work involved the
development of a fully automatic parallelisation tool, called VSM, that completely hides
the underlying details of general-purpose heterogeneous architectures. The VSM
implementation provides direct and simple access for users to parallelise array operations
on the Cell's accelerators without the need for any annotations or process directives. This
work also involved extending the Glasgow Vector Pascal compiler to work with the VSM
implementation as a single compiler system. The resulting compiler system, called
VP-Cell, takes a single source code and parallelises array expressions automatically.
Several experiments were conducted using Vector Pascal benchmarks to show the validity
of the VSM approach. The VP-Cell system achieved significant runtime gains on a single
accelerator compared to the master processor, and near-linear speedups when code runs
across the Cell's accelerators. Though VSM was mainly designed for developing
parallelising compilers, it also showed considerable performance when running C code on
the Cell's accelerators.
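The flavour of what VP-Cell does for the user can be suggested with a deliberately simplified Python sketch. The names and structure here are mine, and Python threads stand in for the Cell's SPE accelerators: an elementwise array expression is chunked and farmed out to workers with no annotations in the user's code, which is the essence of the implicit-parallelism contract described above:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(op, a, b, workers=4):
    """Toy sketch of implicit data parallelism: partition an elementwise
    array expression into chunks and dispatch each chunk to a worker,
    the way VSM farms array operations out to accelerators."""
    n = len(a)
    chunk = (n + workers - 1) // workers   # ceil(n / workers) per worker
    out = [None] * n
    def run(lo):
        hi = min(lo + chunk, n)
        for i in range(lo, hi):            # each worker handles one slice
            out[i] = op(a[i], b[i])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run, range(0, n, chunk)))
    return out

# a[i] + b[i] evaluated across 4 workers; the caller writes no
# parallel annotations or directives
result = parallel_map(lambda x, y: x + y, list(range(10)), list(range(10)))
```

In the real system the partitioning, data movement, and synchronisation are generated by the compiler from an ordinary array expression; the sketch only shows the shape of that transformation.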