
    Non-Volatile Memory Array Based Quantization- and Noise-Resilient LSTM Neural Networks

    In cloud and edge computing models, it is important that compute devices at the edge be as power-efficient as possible. Long short-term memory (LSTM) neural networks have been widely used for natural language processing, time-series prediction, and many other sequential-data tasks, so there is an increasing need for low-power accelerators for LSTM inference at the edge. To reduce the power dissipated by data transfers within inference devices, there has been significant interest in accelerating vector-matrix multiplication (VMM) operations using non-volatile memory (NVM) weight arrays. In NVM array-based hardware, reduced bit-widths also significantly increase power efficiency. In this paper, we focus on applying a quantization-aware training algorithm to LSTM models, and on the resilience these models gain against both quantization error and analog device noise. We show that only 4-bit NVM weights and 4-bit ADCs/DACs are needed to match the performance of a floating-point baseline LSTM network, and that reasonable levels of ADC quantization noise and weight noise are naturally tolerated within our NVM-based quantized LSTM network. Benchmark analysis of our proposed LSTM inference accelerator shows at least 2.4x better computing efficiency and 40x higher area efficiency than traditional digital approaches (GPU, FPGA, and ASIC). Some other novel NVM-based approaches promise higher computing efficiency (up to 4.7x) but require larger arrays with potentially higher error rates.
    Comment: Published in: 2019 IEEE International Conference on Rebooting Computing (ICRC)
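    As a rough illustration of the pipeline the abstract describes, the following minimal numpy sketch quantizes weights and inputs to 4 bits, injects additive noise at the weight array and the ADC, and checks how well the noisy quantized VMM tracks the floating-point result. The bit-widths, noise levels, and helper names are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Uniform symmetric quantization to a given bit-width."""
    levels = 2 ** (bits - 1) - 1          # 7 positive levels for 4 bits
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

def nvm_vmm(x, w, bits=4, weight_noise=0.02, adc_noise=0.02, rng=None):
    """VMM with a 4-bit DAC input, noisy 4-bit NVM weights, and a noisy
    4-bit ADC readout (all levels here are assumptions, for illustration)."""
    rng = rng or np.random.default_rng(0)
    wq = quantize_uniform(w, bits)                       # programmed weights
    wq = wq + weight_noise * np.std(w) * rng.standard_normal(w.shape)
    xq = quantize_uniform(x, bits)                       # DAC-quantized input
    y = xq @ wq                                          # analog accumulation
    y = y + adc_noise * np.std(y) * rng.standard_normal(y.shape)
    return quantize_uniform(y, bits)                     # ADC readout

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = rng.standard_normal((64, 32))
print(np.corrcoef(x @ w, nvm_vmm(x, w, rng=rng))[0, 1])  # near 1.0 if resilient
```

    In quantization-aware training, the same fake-quantization steps would be applied in the forward pass while gradients bypass the rounding (the straight-through estimator), so the trained weights learn to tolerate both quantization error and device noise.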

    Towards a Theory for Bio-Cyber Physical Systems Modelling

    Currently, Cyber-Physical Systems (CPS) represent a great challenge for automatic control and smart-systems engineering on both theoretical and practical levels. Designing CPS requires approaches involving multidisciplinary competences. Although they are designed to be autonomous, CPS carry a degree of uncertainty, which requires interaction with humans for engineering, monitoring, control, operational maintenance, and so on. This human-CPS interaction leads naturally to the human-in-the-loop (HITL) concept. Nevertheless, the HITL concept, which stems from a reductionist point of view, exhibits limitations due to the different natures of the systems involved. In contrast to this classical approach, we propose in this paper a model of Bio-CPS (i.e., systems based on an integration of computational elements within biological systems) grounded in theoretical biology, physics, and computer science, and based on the key concept of human-systems integration.

    Social and ethical checkpoints for bottom-up synthetic biology, or protocells

    An alternative to creating novel organisms through the traditional “top-down” approach to synthetic biology involves creating them from the “bottom up” by assembling them from non-living components; the products of this approach are called “protocells.” In this paper we describe how bottom-up and top-down synthetic biology differ, review the current state of protocell research and development, and examine the unique ethical, social, and regulatory issues raised by bottom-up synthetic biology. Protocells have not yet been developed, but many expect this to happen within the next five to ten years. Accordingly, we identify six key checkpoints in protocell development at which particular attention should be given to specific ethical, social, and regulatory issues, and we make ten recommendations for responsible protocell science tied to the achievement of these checkpoints.

    Synthetic organisms and living machines: Positioning the products of synthetic biology at the borderline between living and non-living matter

    The difference between a non-living machine, such as a vacuum cleaner, and a living organism, such as a lion, seems obvious. The two types of entities differ in their material consistence, their origin, their development, and their purpose. This apparently clear-cut borderline has previously been challenged by fictitious ideas of “artificial organisms” and “living machines,” as well as by progress in technology and breeding. The emergence of novel technologies such as artificial life, nanobiotechnology, and synthetic biology is blurring the boundary between our understanding of living and non-living matter. This essay discusses where, at the borderline between living and non-living matter, we can position the future products of synthetic biology belonging to the two hybrid categories “synthetic organisms” and “living machines,” and how the approaching realization of such hybrid entities affects our understanding of organisms and machines. For this purpose we focus on three types of synthetic biology products and the aims assigned to their realization: (1) synthetic minimal cells pursued by protocell synthetic biology, (2) chassis organisms pursued by synthetic genomics, and (3) genetically engineered machines produced by bioengineering. We argue that, in the case of synthetic biology, the purpose is more decisive than origin and development for categorizing a product as an organism or a machine. This has ethical implications, because defining an entity as a machine seems to allow bypassing the discussion about the assignment and evaluation of instrumental and intrinsic values that can be raised in the case of organisms.

    Mutation Size Optimizes Speciation in an Evolutionary Model

    The role of mutation rate in optimizing key features of evolutionary dynamics has recently been investigated in various computational models. Here, we address the related question of how maximum mutation size affects the formation of species in a simple computational evolutionary model. We find that the number of species is maximized for intermediate values of a mutation size parameter μ; this result is observed for organisms evolving on a randomly changing landscape as well as in a version of the model with negative feedback between local population size and the fitness provided by the landscape. The same result holds for various distributions of mutation values within the limits set by μ. When organisms with different values of μ compete against each other, those with intermediate μ values survive; the surviving values of μ from these competition simulations, however, do not necessarily coincide with the values that maximize the number of species. These results suggest that complex factors interact to determine the optimal mutation parameters for any population, and they may also suggest approaches for building a computational bridge between the (micro) dynamics of mutations at the level of individual organisms and the (macro) evolutionary dynamics at the species level.
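    To make the setup concrete, here is a minimal, heavily simplified sketch of this style of experiment: organisms carry a one-dimensional trait, offspring mutate by a step drawn uniformly from [-μ, μ], and “species” are counted as clusters of trait values separated by gaps. The landscape, the clustering threshold, and all parameter values are illustrative assumptions, not the model described in the paper.

```python
import numpy as np

def count_species(traits, gap=0.1):
    """Count clusters of trait values separated by more than `gap`."""
    t = np.sort(traits)
    return 1 + int(np.sum(np.diff(t) > gap))

def simulate(mu, n=200, generations=500, seed=0):
    rng = np.random.default_rng(seed)
    traits = rng.uniform(0.0, 1.0, n)
    for _ in range(generations):
        # Randomly changing landscape: fixed ridges plus per-generation noise.
        fitness = np.sin(13.0 * traits) + rng.normal(0.0, 0.05, n)
        fitness = fitness - fitness.min() + 1e-9      # positive selection weights
        parents = rng.choice(n, size=n, p=fitness / fitness.sum())
        traits = traits[parents] + rng.uniform(-mu, mu, n)  # bounded mutation
        traits = np.clip(traits, 0.0, 1.0)
    return count_species(traits)

for mu in (0.001, 0.01, 0.1, 0.5):
    print(f"mu={mu}: {simulate(mu)} species")
```

    With settings like these, very small μ barely explores the landscape while very large μ smears clusters apart, so intermediate values tend to yield the most distinct clusters; the qualitative shape of the dependence, not the specific numbers, is the point.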

    Defining and simulating open-ended novelty: requirements, guidelines, and challenges

    Get PDF
    The open-endedness of a system is often defined as a continual production of novelty. Here we pin down this concept more fully by defining several types of novelty that a system may exhibit, classified as variation, innovation, and emergence. We then provide a meta-model for including levels of structure in a system’s model. From there, we define an architecture suitable for building simulations of open-ended novelty-generating systems and discuss how previously proposed systems fit into this framework. We discuss the design principles applicable to those systems and close with some challenges for the community.