    Arithmetical Congruence Preservation: from Finite to Infinite

    Various problems on integers lead to the class of congruence preserving functions on rings, i.e. functions verifying that a − b divides f(a) − f(b) for all a, b. We characterized these classes of functions in terms of sums of rational polynomials (taking only integral values) and the function giving the least common multiple of 1, 2, …, k. The tool used to obtain these characterizations is "lifting": if π : X → Y is a surjective morphism and f is a function on Y, a lifting of f is a function F on X such that π ∘ F = f ∘ π. In this paper we relate the finite and infinite notions by proving that the finite case can be lifted to the infinite one. For p-adic and profinite integers we get similar characterizations via lifting. We also prove that lattices of recognizable subsets of Z are stable under inverse image by congruence preserving functions.
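
    The defining property is easy to test numerically. Below is a minimal sketch (my own illustration, not taken from the paper; function names are hypothetical) that checks on a finite sample whether a function f : Z → Z satisfies that a − b divides f(a) − f(b), and tries it on an integer polynomial and on lcm(1, …, k)·C(x, k), the kind of integer-valued rational polynomial the abstract alludes to. It assumes Python 3.9+ for math.lcm.

```python
from math import factorial, lcm

def is_congruence_preserving(f, sample=range(-25, 26)):
    """Check on a finite sample that a - b divides f(a) - f(b) for all a != b."""
    return all((f(a) - f(b)) % (a - b) == 0
               for a in sample for b in sample if a != b)

def lcm_binom(k):
    """x -> lcm(1, ..., k) * binomial(x, k): a rational polynomial taking integer values."""
    def g(x):
        num = 1
        for i in range(k):
            num *= x - i                      # x (x-1) ... (x-k+1)
        return lcm(*range(1, k + 1)) * num // factorial(k)   # exact division
    return g

print(is_congruence_preserving(lambda x: x**3 - 2 * x + 5))  # integer polynomial: True
print(is_congruence_preserving(lcm_binom(4)))                # lcm(1..4) * C(x, 4): True
print(is_congruence_preserving(lambda x: x**2 // 2))         # not congruence preserving: False
```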

    Integral Difference Ratio Functions on Integers

    To Jozef, on his 80th birthday, with our gratitude for sharing with us his prophetic vision of Informatique. Various problems lead to the same class of functions from integers to integers: functions having integral difference ratio, i.e. verifying f(a) − f(b) ≡ 0 (mod a − b) for all a > b. In this paper we characterize this class of functions from Z to Z via their à la Newton series expansions on a suitably chosen basis of polynomials (with rational coefficients). We also exhibit an example of such a function which is not polynomial but Bessel-like.
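
    The "à la Newton" expansion mentioned here is the classical expansion of a function on the binomial basis C(x, 0), C(x, 1), …, with coefficients given by iterated forward differences at 0. The sketch below (my own illustration, not the paper's construction) computes those coefficients for a sample integer polynomial and observes that, for this example, they are divisible by lcm(1, …, k), the quantity the companion abstract above singles out.

```python
from math import comb, lcm

def newton_coeffs(f, n):
    """Iterated forward differences D^k f(0) for k = 0..n; these are the
    coefficients of f on the binomial basis C(x, 0), C(x, 1), ..."""
    vals = [f(i) for i in range(n + 1)]
    coeffs = []
    while vals:
        coeffs.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return coeffs

f = lambda x: x**3 - 2 * x + 5
coeffs = newton_coeffs(f, 3)  # [5, -1, 6, 6]

# The Newton expansion reproduces f (exactly, since f is a degree-3 polynomial):
assert all(sum(c * comb(x, k) for k, c in enumerate(coeffs)) == f(x) for x in range(10))

# For this f, each coefficient D^k f(0) with k >= 1 is divisible by lcm(1, ..., k):
print([c % lcm(*range(1, k + 1)) == 0 for k, c in enumerate(coeffs) if k >= 1])  # [True, True, True]
```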

    Commutative positive varieties of languages

    We study the commutative positive varieties of languages closed under various operations: shuffle, renaming, and product over one-letter alphabets.
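
    For reference, the shuffle of two words is the set of all their interleavings, and the shuffle of two languages is the union of the word-wise shuffles. A small illustrative sketch (my naming, unrelated to the paper's algebraic framework):

```python
def shuffle(u: str, v: str) -> set:
    """All interleavings of the words u and v (their shuffle product)."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

def shuffle_languages(L1, L2):
    """Shuffle of two (finite) languages: union of word-by-word shuffles."""
    return {w for u in L1 for v in L2 for w in shuffle(u, v)}

print(sorted(shuffle("ab", "c")))                     # ['abc', 'acb', 'cab']
print(sorted(shuffle_languages({"a"}, {"b", "bb"})))  # ['ab', 'abb', 'ba', 'bab', 'bba']
```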

    An algebraic approach to energy problems II - the algebra of energy functions

    Energy and resource management problems are important in areas such as embedded systems or autonomous systems. They are concerned with the question of whether a given system admits infinite schedules during which certain tasks can be repeatedly accomplished and the system never runs out of energy (or other resources). In order to develop a general theory of energy problems, we introduce energy automata: finite automata whose transitions are labeled with energy functions which specify how energy values change from one system state to another. We show that energy functions form a *-continuous Kleene ω-algebra, as an application of a general result that finitely additive, locally *-closed and T-continuous functions on complete lattices form *-continuous Kleene ω-algebras. This makes it possible to solve energy problems in energy automata in a generic, algebraic way. In order to put our work in context, we also review extensions of energy problems to higher dimensions and to games.
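
    To make the setting concrete, here is a toy sketch (my own, and a much narrower class than the paper's energy functions): transitions carry functions of the form "defined when energy >= lo, then add delta", sequential composition is computed symbolically, and for this monotone fragment a cycle can be repeated forever exactly when it is enabled at the starting energy and does not lose energy over one iteration. The paper itself handles the general case algebraically via the *-continuous Kleene ω-algebra structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnergyFn:
    """Toy energy function: defined for energy >= lo, maps x to x + delta."""
    lo: int
    delta: int

    def __call__(self, x):
        return x + self.delta if x >= self.lo else None   # None: transition blocked

    def then(self, other):
        """Sequential composition: apply self, then other."""
        return EnergyFn(max(self.lo, other.lo - self.delta), self.delta + other.delta)

def cycle_runs_forever(fns, x0):
    """For this restricted, monotone class: a cycle can be iterated forever from
    energy x0 iff it is enabled at x0 and its net effect does not lose energy."""
    cycle = fns[0]
    for f in fns[1:]:
        cycle = cycle.then(f)
    return x0 >= cycle.lo and cycle.delta >= 0

# Spend 3 units (needs at least 3), recharge 5, then spend 1: net +1 per loop.
loop = [EnergyFn(3, -3), EnergyFn(0, 5), EnergyFn(1, -1)]
print(cycle_runs_forever(loop, x0=4))  # True
print(cycle_runs_forever(loop, x0=2))  # False: the first transition is blocked
```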

    On the fly type specialization without type analysis

    Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. In order to maximize performance, dynamic language virtual machine implementations must attempt to eliminate redundant dynamic type checks. This is typically done using type inference analysis. However, type inference analyses are often costly and involve tradeoffs between compilation time and resulting precision. This has led to the creation of increasingly complex multi-tiered VM architectures. We introduce lazy basic block versioning, a simple just-in-time compilation technique which effectively removes redundant type checks from critical code paths. This novel approach lazily generates type-specialized versions of basic blocks on the fly while propagating context-dependent type information. It does not require the use of costly program analyses, is not restricted by the precision limitations of traditional type analyses, and avoids the implementation complexity of speculative optimization techniques. Three extensions are made to the basic block versioning technique in order to give it interprocedural optimization capabilities. Typed object shapes give it the ability to attach type information to object properties and global variables. Entry point specialization allows it to pass type information from callers to callees, and call continuation specialization makes it possible to pass return value type information back to callers without dynamic overhead. We empirically demonstrate that these extensions enable basic block versioning to exceed the capabilities of static whole-program type analyses.
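
    As a rough illustration of the core idea only (the thesis implements this inside a JavaScript JIT compiler, generating specialized machine code), the toy sketch below lazily creates and caches a type-specialized version of a "basic block" the first time it is reached with a given type context, so the dynamic type test is paid once per (block, types) pair rather than on every execution. All names are hypothetical.

```python
specialized_versions = {}  # (block id, operand types) -> specialized block

def specialize_add_block(tx, ty):
    """Emit a version of the block with the type dispatch already resolved."""
    if tx is int and ty is int:
        return lambda x, y: x + y           # no type checks left on this path
    return lambda x, y: str(x) + str(y)     # generic fallback path

def run_add_block(x, y):
    """Dispatcher: lazily version the block for the observed operand types."""
    key = ("add_block", type(x), type(y))
    if key not in specialized_versions:     # first visit with these types
        specialized_versions[key] = specialize_add_block(type(x), type(y))
    return specialized_versions[key](x, y)

print(run_add_block(2, 3))      # 5   (creates and caches the (int, int) version)
print(run_add_block(4, 5))      # 9   (reuses the cached version, no re-dispatch)
print(run_add_block("a", "b"))  # ab  (creates the string version)
```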

    On the performance and programming of reversible molecular computers

    If the 20th century was known for the computational revolution, what will the 21st be known for? Perhaps the recent strides in the nascent fields of molecular programming and biological computation will help bring about the ‘Coming Era of Nanotechnology’ promised in Drexler’s ‘Engines of Creation’. Though there is still far to go, there is much reason for optimism. This thesis examines the underlying principles needed to realise the computational aspects of such ‘engines’ in a performant way. Its main body focusses on the ways in which thermodynamics constrains the operation and design of such systems, and it ends with the proposal of a model of computation appropriate for exploiting these constraints. These thermodynamic constraints are approached from three different directions. The first considers the maximum possible aggregate performance of a system of computers of given volume, V, with a given supply of free energy. From this perspective, reversible computing is imperative in order to circumvent the Landauer limit. A result of Frank is refined and strengthened, showing that the performance of reversible computers in the adiabatic regime is the best possible for any computer, quantum or classical. This yields a universal scaling law for the performance of compact computers of ~V^(5/6), compared to ~V^(2/3) for conventional computers. For the case of molecular computers, it is shown how to attain this bound. The second direction extends this performance analysis to the case where individual computational particles or sub-units can interact with one another. The third extends it to interactions with shared, non-computational parts of the system. It is found that accommodating these interactions in molecular computers imposes a performance penalty that undermines the earlier scaling result. Nonetheless, scaling superior to that of irreversible computers can be preserved, and appropriate mitigations and considerations are discussed. These analyses are framed in a context of molecular computation, but where possible more general computational systems are considered. The proposed model, the Ś-calculus, is appropriate for programming reversible molecular computers while taking these constraints into account. A variety of examples and mathematical analyses accompany it. Moreover, abstract sketches of potential molecular implementations are provided. Developing these into viable schemes suitable for experimental validation will be a focus of future work.
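
    For a feel of the quoted scaling laws, here is a purely illustrative calculation (constants omitted, exponents taken from the abstract) comparing aggregate performance ~V^(5/6) for reversible computers against ~V^(2/3) for conventional ones; the relative advantage grows as V^(1/6).

```python
for volume in (1, 10**3, 10**6, 10**9):
    reversible = volume ** (5 / 6)     # ~V^(5/6) aggregate performance (reversible)
    conventional = volume ** (2 / 3)   # ~V^(2/3) aggregate performance (conventional)
    print(f"V = {volume:>10}: advantage = {reversible / conventional:6.1f}x")
```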