    Liquid Clocks - Refinement Types for Time-Dependent Stream Functions

    The concept of liquid clocks introduced in this paper is a significant step towards a more precise compile-time framework for the analysis of synchronous and polychronous languages. Compiling languages such as Lustre or SIGNAL indeed involves a number of static analyses of programs before they can be synthesized into executable code, e.g., synchronicity class characterization, clock assignment, static scheduling or causality analysis. These analyses are often equivalent to undecidable problems, necessitating abstracting such programs to provide sound yet incomplete analyses. Such abstractions unfortunately often lead to the rejection of programs that could very well be synthesized into deterministic code, provided abstraction refinement steps could be applied for more accurate analysis. To reduce the false negatives occurring during the compilation process, we leverage recent advances in type theory -- with the definition of decidable classes of value-dependent type systems -- and formal verification, linked to the development of efficient SAT/SMT solvers, to provide a type-theoretic approach that considers all the above analyses as type inference problems. To simplify the exposition of our new approach, we define a refinement type system for a minimalistic, synchronous, stream-processing language to concisely represent, analyze, and verify logical and quantitative properties of programs expressed as stream-processing data-flow networks. Our type system provides a new framework to represent logical time (clocks) and scheduling properties, and to describe their relations with stream values and, possibly, other quanta. We show how to analyze synchronous stream-processing programs (à la Lustre, SIGNAL) to enable the previously described analyses involved in compiling such programs. We also prove the soundness of our type system and elaborate on the adaptability of this core framework by outlining its extensibility to specific models of computation and other quanta.
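
    As a minimal illustration of the paper's central move -- recasting a clock analysis as a decidable constraint problem -- the Python sketch below phrases a clock-equality check as an SMT validity query using the z3 solver (pip install z3-solver). The encoding, the names (c1, c2, x) and the refinement hypothesis are illustrative assumptions for this listing, not the paper's actual type system.

        # Sketch: is the derived clock c2 = (c1 when x > 0) provably equal to c1,
        # given the refinement-type hypothesis {v:int | v > 0} on stream x?
        from z3 import Bool, Int, And, Implies, Not, Solver, unsat

        c1 = Bool('c1')        # base clock of stream x at an arbitrary instant
        x = Int('x')           # value carried by x at that instant
        c2 = And(c1, x > 0)    # clock of "x when x > 0" (assumed encoding)

        hyp = x > 0            # hypothesis contributed by x's refinement type
        goal = Implies(hyp, c1 == c2)

        s = Solver()
        s.add(Not(goal))       # goal is valid iff its negation is unsatisfiable
        print("clocks provably equal" if s.check() == unsat else "clocks may differ")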

    TESL: A Model with Metric Time for Modeling and Simulation

    Real-time and distributed systems are increasingly finding their way into critical embedded systems. On the one hand, computations need to be achieved within specific time constraints. On the other hand, computations may be spread among various units which do not necessarily share a global clock. Our study focuses on a specification language, named TESL, used for coordinating concurrent models with timed constraints. We explore various questions related to time when modeling systems, and aim to show that TESL offers a reasonable balance of expressiveness and decidability for tackling issues in complex systems. This paper introduces (1) an overview of the TESL language and its main properties (polychrony, stutter-invariance, coinduction for simulation), and (2) extensions to the language and their applications.
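
    To make the coordination style concrete, the sketch below checks one representative TESL-style constraint, clock implication ("whenever the master clock ticks, the slave clock ticks"), over a finite run. The run and clock names are assumptions made for demonstration; this is not the TESL tool's semantics or API.

        # Sketch: check a "master implies slave" clock constraint over a run,
        # where a run is a list of sets of clock names ticking at each instant.
        def satisfies_implies(run, master, slave):
            return all(slave in instant for instant in run if master in instant)

        run = [{'min'}, {'min', 'sec'}, set(), {'min'}]
        print(satisfies_implies(run, 'min', 'sec'))  # False: instants 0 and 3 violate it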

    Improved False Causal Loop Detection in Polychronous Specification of Embedded Software

    As opposed to single-clocked synchronous programming paradigms, the polychronous formalism allows the specification of concurrent data-flow computation on signals such that various data flows can evolve asynchronously with respect to each other. Explicit constraints and constraints implied by the syntactic structures impart certain intrinsic properties to models specified polychronously. One of the major steps in designing a synthesis engine for polychronous specifications is the characterization of specified models into categories such as inherently sequential or inherently multi-threaded. In this paper, we are concerned with sequentially implementable polychronous specifications, where computation is divided into a totally ordered sequence of logical instants. Data-flow computation within an instant happens based on the implied data-flow order. This order, or data dependency, often varies from one instant to another. Thus, determining whether there is an instant at which the data-flow order forms a causal cycle is an important problem. In current polychronous compilers, such as the SIGNAL compiler and EmCodeSyn, this is resolved conservatively, without further analysis, by rejecting any program that has a buffer-free structural cycle. However, a clocked dependency graph can be used to construct logical constraints representing the instants with a possible causal loop. The satisfiability of such constraints would imply that such a loop is realizable and hence that the specification has a possible deadlock. The reachability of such an instant from a given set of initial conditions would verify whether the program should be rejected. In the past, the work on such constraints and their satisfiability was not implemented, even though for pure Boolean signals and clocks this could have been done using a satisfiability solver. With the advent of satisfiability modulo theories (SMT) solvers, this can now be extended to a more general class of specifications. Moreover, model checking on an abstraction of the specification can provide more information about the reachability of instants at which a cyclic data dependency is realized. This paper presents an improved polychronous synthesis tool accepting a much larger class of specifications than was possible before. In our experimental results, we demonstrate the capabilities of our causality analysis methods and show that our synthesis tool performs better than previous strategies, including our own past work.
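
    The core idea lends itself to a short sketch: each edge of the clocked dependency graph carries a Boolean clock guard, and a structural cycle is a real causal loop only at instants where all of its guards hold simultaneously, which is a satisfiability question. The encoding below is an illustrative assumption (it is not the EmCodeSyn implementation), using z3 from Python.

        # Sketch: a two-edge structural cycle whose edges are guarded by
        # complementary clocks can never be active at the same instant,
        # so it is a false causal loop and the program need not be rejected.
        from z3 import Bools, And, Not, Solver, sat

        c, nc = Bools('c nc')          # clock guards on the two cycle edges
        s = Solver()
        s.add(And(c, nc))              # both guards active at one instant?
        s.add(nc == Not(c))            # clock relation: guards are complementary
        if s.check() == sat:
            print("cycle realizable at some instant: model-check reachability or reject")
        else:
            print("false causal loop: the structural cycle is never active")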

    Information Representation on a Universal Neural Chip

    Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain

    We propose a previously unrecognized kind of informational entity in the brain that is capable of acting as the basis for unlimited hereditary variation in neuronal networks. This unit is a path of activity through a network of neurons, analogous to a path taken through a hidden Markov model. To prove in principle the capabilities of this new kind of informational substrate, we show how a population of paths can be used as the hereditary material for a neuronally implemented genetic algorithm (the Swiss-army knife of black-box optimization techniques), which we have proposed elsewhere could operate at somatic timescales in the brain. We compare this to the same genetic algorithm using a standard ‘genetic’ informational substrate, i.e., non-overlapping discrete genotypes, on a range of optimization problems. A path evolution algorithm (PEA) is defined as any algorithm that implements natural selection of paths in a network substrate. A PEA is a previously unrecognized type of natural selection that is well suited for implementation by biological neuronal networks with structural plasticity. The important similarities and differences between a standard genetic algorithm and a PEA are considered. Whilst most experiments are conducted on an abstract network model, at the conclusion of the paper a slightly more realistic neuronal implementation of a PEA is outlined, based on Izhikevich spiking neurons. Finally, experimental predictions are made for the identification of such informational paths in the brain.
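
    A PEA on an abstract network can be sketched in a few lines: a population of paths is scored, the best are kept, and offspring are produced by rewiring individual steps. The graph, fitness function and mutation rate below are illustrative assumptions, not the paper's neuronal implementation.

        # Sketch: truncation selection over paths in a fully connected
        # 10-node graph, with per-step rewiring as the mutation operator.
        import random

        N = 10                                   # nodes 0..9; any step may go anywhere

        def random_path(length=6):
            return [random.randrange(N) for _ in range(length)]

        def fitness(path):                       # toy objective: maximize node sum
            return sum(path)

        def mutate(path, rate=0.2):              # rewire each step with probability rate
            return [random.randrange(N) if random.random() < rate else v for v in path]

        pop = [random_path() for _ in range(20)]
        for gen in range(50):
            pop.sort(key=fitness, reverse=True)  # keep the 10 fittest paths
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
        print(fitness(pop[0]))                   # approaches the optimum, 6 * 9 = 54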

    Experimental evidence for the preservation of U-Pb isotope ratios in mantle-recycled crustal zircon grains

    Zircon of crustal origin found in mantle-derived rocks is of great interest because of the information it may provide about crust recycling and mantle dynamics. Considering this requires understanding how mantle temperatures, notably higher than zircon crystallization temperatures, affected the recycled zircon grains, particularly their isotopic clocks. Since Pb2+ diffuses faster than U4+ and Th4+, it is generally believed that recycled zircon grains lose all radiogenic Pb after a few million years, thus limiting the time range over which they can be detected. Nonetheless, this might not be the case for zircon included in mantle minerals with low Pb2+ diffusivity and partitioning, such as olivine and orthopyroxene, because these may act as zircon sealants. Annealing experiments with natural zircon embedded in cristobalite (an effective zircon sealant) show that zircon grains do not lose Pb to their surroundings, although they may lose some Pb to molten inclusions. Diffusion tends to homogenize the Pb concentration in each grain, changing the U-Pb and Th-Pb isotope ratios proportionally to the initial 206Pb, 207Pb and 208Pb concentration gradients (no gradient, no change), but in most cases the original age is still recognizable. It seems, therefore, that recycled crustal zircon grains can be detected, and even accurately dated, no matter how long they have dwelled in the mantle. This work was financed by the Spanish grants CGL2013-40785-P and CGL2017-84469-P.
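
    The "no gradient, no change" observation can be illustrated with a one-dimensional diffusion sketch: inside a sealed grain the Pb profile flattens, but with no-flux boundaries the grain's total Pb, and hence its bulk age, is conserved. The profile and step sizes below are illustrative values, not experimental data from the paper.

        # Sketch: explicit finite-difference diffusion with sealed (no-flux)
        # boundaries; the profile homogenizes while total Pb is conserved.
        import numpy as np

        pb = np.linspace(0.5, 1.5, 50)       # initial 206Pb profile with a gradient
        for _ in range(20000):               # stable since D*dt/dx^2 = 0.2 < 0.5
            padded = np.pad(pb, 1, mode='edge')              # no-flux boundaries
            pb = pb + 0.2 * (padded[:-2] - 2 * pb + padded[2:])
        print(pb.max() - pb.min())           # ~0: intragrain gradient erased
        print(pb.sum())                      # unchanged: bulk age preserved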

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.

    A review of learning in biologically plausible spiking neural networks

    Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), which draws more faithfully on biological properties to provide higher processing abilities. This paper reviews recent developments in the learning of spiking neurons. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented, followed by a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
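
    Two of the elements the review covers, the neuron model and synaptic plasticity, fit in a short sketch: a leaky integrate-and-fire (LIF) neuron driven by one synapse whose weight is updated by pair-based spike-timing-dependent plasticity (STDP). All constants below are illustrative assumptions, not values from the paper.

        # Sketch: LIF neuron + pair-based STDP on a single input synapse.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau_m, v_th = 1.0, 20.0, 1.0            # ms, membrane tau, threshold
        w = 0.5                                     # synaptic weight
        v, trace_pre, last_post = 0.0, 0.0, -1e9

        for t in range(200):
            pre_spike = rng.random() < 0.1          # Poisson-like presynaptic input
            trace_pre = trace_pre * np.exp(-dt / 20.0) + (1.0 if pre_spike else 0.0)
            v += dt / tau_m * (-v) + (w if pre_spike else 0.0)   # leaky integration
            if v >= v_th:                           # postsynaptic spike: reset + LTP
                v = 0.0
                w += 0.01 * trace_pre               # pre shortly before post -> potentiate
                last_post = t
            if pre_spike:                           # pre shortly after post -> depress
                w -= 0.012 * np.exp(-(t - last_post) / 20.0)
        print(round(w, 3))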