15 research outputs found
Arithmetical Congruence Preservation: from Finite to Infinite
Various problems on integers lead to the class of congruence preserving functions on rings, i.e. functions f verifying that a − b divides f(a) − f(b) for all a, b. We characterized these classes of functions in terms of sums of rational polynomials (taking only integral values) and the function giving the least common multiple of 1, 2, …, k. The tool used to obtain these characterizations is "lifting": if π : X → Y is a surjective morphism, and f a function on Y, a lifting of f is a function F on X such that π ∘ F = f ∘ π. In this paper we relate the finite and infinite notions by proving that the finite case can be lifted to the infinite one. For p-adic and profinite integers we get similar characterizations via lifting. We also prove that lattices of recognizable subsets of Z are stable under inverse image by congruence preserving functions
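As an illustrative sketch (our own, not from the paper), the defining divisibility property can be checked exhaustively on a small window of integers:

```python
def is_congruence_preserving(f, lo=-20, hi=20):
    """Check that (a - b) divides (f(a) - f(b)) for all pairs in a finite window."""
    return all((f(a) - f(b)) % (a - b) == 0
               for a in range(lo, hi)
               for b in range(lo, hi) if a != b)

# Any polynomial with integer coefficients is congruence preserving...
print(is_congruence_preserving(lambda x: x**3 + 2*x + 7))  # True
# ...whereas integer halving is not: f(3) - f(1) = 1 is not divisible by 3 - 1 = 2.
print(is_congruence_preserving(lambda x: x // 2))          # False
```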
Integral Difference Ratio Functions on Integers
To Jozef, on his 80th birthday, with our gratitude for sharing with us his prophetic vision of Informatique. Abstract. Various problems lead to the same class of functions from integers to integers: functions having integral difference ratio, i.e. verifying f(a) − f(b) ≡ 0 (mod (a − b)) for all a > b. In this paper we characterize this class of functions from Z to Z via their à la Newton series expansions on a suitably chosen basis of polynomials (with rational coefficients). We also exhibit an example of such a function which is not polynomial but Bessel-like
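The flavour of the characterization can be illustrated numerically. The sketch below is our own and assumes (as we understand the paper's result) that sums of terms c_k · lcm(1, …, k) · C(x, k), with integer c_k and C(x, k) the generalized binomial coefficient, have integral difference ratio:

```python
from math import factorial, gcd
from functools import reduce

def lcm_upto(k):
    """Least common multiple of 1, 2, ..., k (1 for k = 0)."""
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, k + 1), 1)

def binom_z(x, k):
    """Generalized binomial C(x, k) for any integer x: falling factorial over k!."""
    num = 1
    for i in range(k):
        num *= x - i
    return num // factorial(k)

# Build f = sum_k c_k * lcm(1..k) * C(x, k) with arbitrary integer coefficients c_k.
coeffs = [3, -1, 4, 1, -5]
def f(x):
    return sum(c * lcm_upto(k) * binom_z(x, k) for k, c in enumerate(coeffs))

ok = all((f(a) - f(b)) % (a - b) == 0
         for a in range(-15, 15) for b in range(-15, 15) if a != b)
print(ok)  # True
```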
Commutative positive varieties of languages
We study the commutative positive varieties of languages closed under various
operations: shuffle, renaming and product over one-letter alphabets
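As a concrete illustration (ours, not from the paper), the shuffle of two words, i.e. the set of all their interleavings, can be computed recursively:

```python
def shuffle(u, v):
    """All interleavings (the shuffle product) of words u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # ['abc', 'acb', 'cab']
```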
An algebraic approach to energy problems II - the algebra of energy functions
Energy and resource management problems are important in areas such as embedded systems or autonomous systems. They are concerned with the question of whether a given system admits infinite schedules during which certain tasks can be repeatedly accomplished and the system never runs out of energy (or other resources). In order to develop a general theory of energy problems, we introduce energy automata: finite automata whose transitions are labeled with energy functions which specify how energy values change from one system state to another. We show that energy functions form a *-continuous Kleene ω-algebra, as an application of a general result that finitely additive, locally *-closed and T-continuous functions on complete lattices form *-continuous Kleene ω-algebras. This makes it possible to solve energy problems in energy automata in a generic, algebraic way. In order to put our work in context, we also review extensions of energy problems to higher dimensions and to games
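A much-simplified sketch (our own; the paper's energy functions are more general than plain integer updates) of composing energy functions along a path of transitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EFun:
    """A toy energy function: enabled for input energy >= bound, adds delta."""
    bound: int   # minimum input energy for the transition to fire
    delta: int   # net energy change

    def compose(self, other):
        # Run self first, then other: need x >= bound and x + delta >= other.bound.
        return EFun(max(self.bound, other.bound - self.delta),
                    self.delta + other.delta)

    def __call__(self, x):
        return x + self.delta if x >= self.bound else None

# A two-transition cycle: pay 3 units (needs >= 3), then harvest 5.
cycle = EFun(3, -3).compose(EFun(0, 5))
# In this toy model, infinite iteration of the cycle is possible iff the
# start energy is >= cycle.bound and cycle.delta >= 0.
print(cycle.bound, cycle.delta)  # 3 2
```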
On the fly type specialization without type analysis
Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. In order to maximize performance, dynamic language virtual
machine implementations must attempt to eliminate redundant dynamic type checks. This is typically done using type inference analysis. However, type inference analyses
are often costly and involve tradeoffs between compilation time and resulting precision. This has led to the creation of increasingly complex multi-tiered VM architectures.
We introduce lazy basic block versioning, a simple just-in-time compilation technique which effectively removes redundant type checks from critical code paths. This
novel approach lazily generates type-specialized versions of basic blocks on the fly while propagating context-dependent type information. This does not require the use of costly
program analyses, is not restricted by the precision limitations of traditional type analyses and avoids the implementation complexity of speculative optimization techniques.
Three extensions are made to the basic block versioning technique in order to give it interprocedural optimization capabilities. Typed object shapes give it the ability to
attach type information to object properties and global variables. Entry point specialization allows it to pass type information from callers to callees, and call continuation
specialization makes it possible to pass return value type information back to callers without dynamic overhead. We empirically demonstrate that these extensions enable
basic block versioning to exceed the capabilities of static whole-program type analyses
On the performance and programming of reversible molecular computers
If the 20th century was known for the computational revolution, what will the 21st be known for? Perhaps the recent strides in the nascent fields of molecular programming and biological computation will help bring about the "Coming Era of Nanotechnology" promised in Drexler's "Engines of Creation". Though there is still far to go, there is much reason for optimism. This thesis examines the underlying principles needed to realise the computational aspects of such "engines" in a performant way. Its main body focusses on the ways in which thermodynamics constrains the operation and design of such systems, and it ends with the proposal of a model of computation appropriate for exploiting these constraints.
These thermodynamic constraints are approached from three different directions. The first considers the maximum possible aggregate performance of a system of computers of given volume, V, with a given supply of free energy. From this perspective, reversible computing is imperative in order to circumvent the Landauer limit. A result of Frank is refined and strengthened, showing that the performance of reversible computers in the adiabatic regime is the best possible for any computer, quantum or classical. This therefore shows a universal scaling law governing the performance of compact computers of ~V^(5/6), compared to ~V^(2/3) for conventional computers. For the case of molecular computers, it is shown how to attain this bound. The second direction extends this performance analysis to the case where individual computational particles or sub-units can interact with one another. The third extends it to interactions with shared, non-computational parts of the system. It is found that accommodating these interactions in molecular computers imposes a performance penalty that undermines the earlier scaling result. Nonetheless, scaling superior to that of irreversible computers can be preserved, and appropriate mitigations and considerations are discussed. These analyses are framed in a context of molecular computation, but where possible more general computational systems are considered.
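The claimed scaling advantage, the ratio V^(5/6) / V^(2/3) = V^(1/6) between reversible and conventional aggregate performance, can be tabulated directly:

```python
# Aggregate performance scaling from the abstract: reversible ~ V**(5/6),
# irreversible ~ V**(2/3); their ratio V**(1/6) quantifies the advantage.
for V in (1e3, 1e6, 1e12):
    print(f"V = {V:.0e}  advantage ratio ~ {V**(5/6) / V**(2/3):.1f}")
```

The advantage therefore grows without bound, but slowly: a million-fold increase in volume buys only a ten-fold relative gain.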
The proposed model, the ℵ-calculus, is appropriate for programming reversible molecular computers taking into account these constraints. A variety of examples and mathematical analyses accompany it. Moreover, abstract sketches of potential molecular implementations are provided. Developing these into viable schemes suitable for experimental validation will be a focus of future work
Quantum Stochastic Processes and Quantum Many-Body Physics
This dissertation investigates the theory of quantum stochastic processes and its applications in quantum many-body physics.
The main goal is to analyse complexity-theoretic aspects of both static and dynamic properties of physical systems modelled by quantum stochastic processes.
The thesis consists of two parts: the first one addresses the computational complexity of certain quantum and classical divisibility questions, whereas the second one addresses the topic of Hamiltonian complexity theory.
In the divisibility part, we discuss the question of whether one can efficiently sub-divide a map describing the evolution of a system in a noisy environment, i.e. a CPTP- or stochastic map for quantum and classical processes, respectively, and we prove that taking the nth root of a CPTP or stochastic map is an NP-complete problem.
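For a concrete, easy instance (our own illustration; the hardness result concerns the general decision problem), a stochastic square root of a particular 2x2 stochastic matrix can be found by taking square roots of its eigenvalues:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # a stochastic map with positive spectrum

evals, V = np.linalg.eig(P)              # eigenvalues here are 1 and 0.7, both real
R = V @ np.diag(np.sqrt(evals)) @ np.linalg.inv(V)

print(np.allclose(R @ R, P))                           # True: R is a root of P
print(bool(np.all(R >= 0)), np.allclose(R.sum(axis=1), 1))  # True True: R is stochastic
```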
Furthermore, we show that answering the question of whether one can divide up a random variable X into a sum of n iid random variables X_i, i.e. X = X_1 + … + X_n, is poly-time computable; relaxing the iid condition renders the problem NP-hard.
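The iid case can be illustrated with a standard example (ours): a Binomial(n, p) variable decomposes as a sum of n iid Bernoulli(p) variables, so convolving the Bernoulli pmf n times must reproduce the binomial pmf:

```python
import numpy as np
from math import comb

n, p = 5, 0.3
bernoulli = np.array([1 - p, p])         # pmf of a single Bernoulli(p)

pmf = np.array([1.0])                    # pmf of the empty sum
for _ in range(n):
    pmf = np.convolve(pmf, bernoulli)    # convolution of pmfs = pmf of the sum

binom = np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])
print(np.allclose(pmf, binom))  # True
```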
In the local Hamiltonian part, we study computation embedded into the ground state of a many-body quantum system, going beyond "history state" constructions with a linear clock.
We first develop a series of mathematical techniques which allow us to study the energy spectrum of the resulting Hamiltonian, and extend classical string rewriting to the quantum setting.
This allows us to construct the most physically realistic QMAEXP-complete instances for the LOCAL HAMILTONIAN problem (i.e. the question of estimating the ground state energy of a quantum many-body system) known to date, in both one and three dimensions.
Furthermore, we study weighted versions of linear history state constructions, allowing us to obtain tight lower and upper bounds on the promise gap of the LOCAL HAMILTONIAN problem in various cases.
We finally study a classical embedding of a Busy Beaver Turing Machine into a low-dimensional lattice spin model, which allows us to dictate a transition from a purely classical phase to a Toric Code phase at arbitrarily large and potentially even uncomputable system sizes
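As a minimal concrete example of the kind of machine involved (the spin-model embedding itself is far more involved), the 2-state busy beaver can be simulated in a few lines:

```python
# The 2-state busy beaver Turing machine: it halts after 6 steps having
# written 4 ones on an initially blank tape.
RULES = {  # (state, symbol) -> (write, head move, next state)
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}

tape, head, state, steps = {}, 0, "A", 0
while state != "HALT":
    write, move, state = RULES[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1

print(steps, sum(tape.values()))  # 6 4
```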