
    Performance analysis and optimization of asynchronous circuits

    Get PDF
    Asynchronous/self-timed circuits are beginning to attract renewed attention as a promising means of dealing with the complexity of modern VLSI designs. However, there are very few analysis techniques or tools available for estimating the performance of asynchronous circuits. In this paper we adapt the theory of Generalized Timed Petri-nets (GTPN) for analyzing and comparing a wide variety of asynchronous circuits, ranging from purely control-oriented circuits such as cross-bar arbiters to large asynchronous systems with data-dependent control such as asynchronous processors. Experiments with the GTPN analyzer are found to track the observed performance of actual asynchronous circuits, thereby offering empirical evidence towards the soundness of the modeling approach. Our main contribution is in demonstrating how a quantitative design methodology for asynchronous circuits can be developed based on Timed Petri-nets.
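
    To make the modeling style concrete, the following is a minimal sketch of a timed Petri-net performance estimate in Python. The two-transition handshake ring, the place names, and the delays are illustrative assumptions; this is not the GTPN analyzer from the paper or any circuit it studies.

    # Minimal timed Petri-net simulator sketch (illustrative only). Transitions
    # fire when all input places hold a token, after a fixed delay; we estimate
    # the steady-state time per firing of a small two-transition ring.

    import heapq

    # transition -> (input places, output places, firing delay)
    TRANSITIONS = {
        "t1": (["a", "b"], ["c"], 2.0),   # assumed stage-1 handshake, 2 time units
        "t2": (["c"], ["a", "b"], 3.0),   # assumed stage-2 handshake, 3 time units
    }

    marking = {"a": 1, "b": 1, "c": 0}    # initial tokens
    events = []                           # pending firings: (finish_time, transition)
    now, fired = 0.0, 0

    def try_enable(t):
        ins, _, delay = TRANSITIONS[t]
        if all(marking[p] > 0 for p in ins):
            for p in ins:
                marking[p] -= 1           # consume input tokens
            heapq.heappush(events, (now + delay, t))

    for t in TRANSITIONS:
        try_enable(t)

    while events and fired < 1000:
        now, t = heapq.heappop(events)
        fired += 1
        for p in TRANSITIONS[t][1]:
            marking[p] += 1               # produce output tokens
        for u in TRANSITIONS:
            try_enable(u)

    print("average time per firing:", now / fired)   # ~2.5 for this toy ring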

    Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy

    Full text link
    Neural circuits can be reconstructed from brain images acquired by serial section electron microscopy. Image analysis has been performed by manual labor for half a century, and efforts at automation date back almost as far. Convolutional nets were first applied to neuronal boundary detection a dozen years ago, and have now achieved impressive accuracy on clean images. Robust handling of image defects is a major outstanding challenge. Convolutional nets are also being employed for other tasks in neural circuit reconstruction: finding synapses and identifying synaptic partners, extending or pruning neuronal reconstructions, and aligning serial section images to create a 3D image stack. Computational systems are being engineered to handle petavoxel images of cubic millimeter brain volumes.
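
    As an illustration of the kind of model involved (not one of the architectures evaluated in this line of work, which are far deeper and typically 3D), the sketch below defines a tiny convolutional net in Python/PyTorch that maps a grayscale EM patch to a per-pixel boundary probability map; the layer widths and patch size are arbitrary assumptions.

    # Toy convolutional net for per-pixel neuronal boundary detection.
    # Input: one-channel EM patch; output: boundary probability per pixel.

    import torch
    import torch.nn as nn

    class BoundaryNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),            # 1x1 conv -> boundary logit
                nn.Sigmoid(),                   # probability in [0, 1]
            )

        def forward(self, x):
            return self.layers(x)

    net = BoundaryNet()
    patch = torch.randn(1, 1, 128, 128)         # fake EM image patch
    prob_map = net(patch)                       # shape (1, 1, 128, 128)
    print(prob_map.shape)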

    Synaptic Hyaluronan Synthesis and CD44-Mediated Signaling Coordinate Neural Circuit Development

    Get PDF
    The hyaluronan-based extracellular matrix is expressed throughout nervous system development and is well known for the formation of perineuronal nets around inhibitory interneurons. Since perineuronal nets form postnatally, the role of hyaluronan in the initial formation of neural circuits remains unclear. Neural circuits emerge from the coordinated electrochemical signaling of excitatory and inhibitory synapses. Hyaluronan localizes to the synaptic cleft of developing excitatory synapses in both human cortical spheroids and the neonatal mouse brain and is diminished in the adult mouse brain. Given this developmentally specific synaptic localization, we sought to determine the mechanisms that regulate hyaluronan synthesis and signaling during synapse formation. We demonstrate that hyaluronan synthase-2, HAS2, is sufficient to increase hyaluronan levels in developing neural circuits of human cortical spheroids. This increased hyaluronan production reduces excitatory synaptogenesis, promotes inhibitory synaptogenesis, and suppresses action potential formation. The hyaluronan receptor, CD44, promotes hyaluronan retention and suppresses excitatory synaptogenesis through regulation of RhoGTPase signaling. Our results reveal mechanisms of hyaluronan synthesis, retention, and signaling in developing neural circuits, shedding light on how disease-associated hyaluronan alterations can contribute to synaptic defects.

    A structured approach for the engineering of biochemical network models, illustrated for signalling pathways

    Get PDF
    http://dx.doi.org/10.1093/bib/bbn026
    Quantitative models of biochemical networks (signal transduction cascades, metabolic pathways, gene regulatory circuits) are a central component of modern systems biology. Building and managing these complex models is a major challenge that can benefit from the application of formal methods adopted from theoretical computing science. Here we provide a general introduction to the field of formal modelling, which emphasizes the intuitive biochemical basis of the modelling process, but is also accessible to an audience with a background in computing science and/or model engineering. We show how signal transduction cascades can be modelled in a modular fashion, using both a qualitative approach (Qualitative Petri nets) and quantitative approaches (Continuous Petri Nets and Ordinary Differential Equations). We review the major elementary building blocks of a cellular signalling model, discuss which critical design decisions have to be made during model building, and present ...
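
    As a minimal illustration of the quantitative (ODE) view mentioned above, the sketch below integrates a two-step phosphorylation cascade under mass-action kinetics with SciPy. The species names and rate constants are assumptions chosen for the example, not taken from the paper; a Continuous Petri net of the same cascade would have one place per species and one transition per reaction.

    # Tiny signalling cascade as ODEs (mass-action kinetics); illustrative only.

    from scipy.integrate import solve_ivp

    k_phos, k_dephos = 1.0, 0.5     # assumed rate constants

    def cascade(t, y):
        R, Rp, K, Kp = y            # receptor, active receptor, kinase, active kinase
        dR  = -k_phos * R + k_dephos * Rp
        dRp =  k_phos * R - k_dephos * Rp
        dK  = -k_phos * Rp * K + k_dephos * Kp
        dKp =  k_phos * Rp * K - k_dephos * Kp
        return [dR, dRp, dK, dKp]

    sol = solve_ivp(cascade, (0, 20), [1.0, 0.0, 1.0, 0.0])
    print("active kinase at t=20:", sol.y[3, -1])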

    Sublogarithmic uniform Boolean proof nets

    Full text link
    Using a proofs-as-programs correspondence, Terui was able to compare two models of parallel computation: Boolean circuits and proof nets for multiplicative linear logic. Mogbil et al. gave a logspace translation allowing us to compare their computational power as uniform complexity classes. This paper presents a novel translation in AC0 and focuses on a simpler restricted notion of uniform Boolean proof nets. We can then encode constant-depth circuits and compare complexity classes below logspace, which were out of reach with the previous translations. Comment: In Proceedings DICE 2011, arXiv:1201.034

    Optimizing Scrubbing by Netlist Analysis for FPGA Configuration Bit Classification and Floorplanning

    Full text link
    Existing scrubbing techniques for SEU mitigation on FPGAs do not guarantee error-free operation after SEU recovery if the affected configuration bits belong to feedback loops of the implemented circuits. In this paper, we a) provide a netlist-based circuit analysis technique that distinguishes so-called critical configuration bits from essential bits, identifying which configuration bits also require state-restoring actions after a recovered SEU and which do not. Furthermore, b) we develop an alternative classification approach using fault injection in order to compare both classification techniques. Moreover, c) we propose a floorplanning approach for reducing the effective number of scrubbed frames, and d) experimental results give evidence that our optimization methodology not only detects errors earlier but also considerably reduces the Mean-Time-To-Repair (MTTR) of a circuit. In particular, we show that with our approach the MTTR of datapath-intensive circuits can be reduced by up to 48.5% compared to standard approaches.
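
    As a conceptual illustration of the feedback-loop criterion only (a sketch, not the paper's netlist analysis or a real FPGA configuration flow), the Python snippet below marks nodes of a toy netlist graph that lie on a cycle as "critical" and the remaining ones as merely "essential", using networkx; the node names and edges are made up for the example.

    # Classify toy netlist nodes: those inside a feedback loop would also need
    # state restoration after scrubbing; the rest only need the bitstream fixed.

    import networkx as nx

    # Directed netlist graph: edges point from a driver to the logic it feeds.
    netlist = nx.DiGraph([
        ("lut1", "ff1"), ("ff1", "lut2"), ("lut2", "ff1"),   # feedback loop
        ("lut3", "ff2"),                                     # feed-forward path
    ])

    # Nodes in a non-trivial strongly connected component lie on a cycle
    # (self-loops ignored for brevity).
    in_loop = set()
    for scc in nx.strongly_connected_components(netlist):
        if len(scc) > 1:
            in_loop |= scc

    critical  = sorted(in_loop)
    essential = sorted(set(netlist) - in_loop)
    print("critical (state-restoring):", critical)
    print("essential (scrub only):", essential)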

    The Parallelism Tradeoff: Limitations of Log-Precision Transformers

    Full text link
    Despite their omnipresence in modern NLP, characterizing the computational power of transformer neural nets remains an interesting open question. We prove that transformers whose arithmetic precision is logarithmic in the number of input tokens (and whose feedforward nets are computable using space linear in their input) can be simulated by constant-depth logspace-uniform threshold circuits. This provides insight on the power of transformers using known results in complexity theory. For example, if L ≠ P (i.e., not all poly-time problems can be solved using logarithmic space), then transformers cannot even accurately solve linear equalities or check membership in an arbitrary context-free grammar with empty productions. Our result intuitively emerges from the transformer architecture's high parallelizability. We thus speculatively introduce the idea of a fundamental parallelism tradeoff: any model architecture as parallelizable as the transformer will obey limitations similar to it. Since parallelism is key to training models at massive scale, this suggests a potential inherent weakness of the scaling paradigm. Comment: Accepted at TACL. Formerly entitled "Log-Precision Transformers are Constant-Depth Threshold Circuits". Updated with minor corrections in Section 2 (Implications) on March 6, 2023. Update with minor edits to the proof of Lemma 3 on April 26, 202
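
    For readers unfamiliar with the circuit class in question, the sketch below shows what a threshold gate and a constant-depth composition of such gates look like; it is purely illustrative and is not the simulation construction from the paper, and the input bits are arbitrary.

    # A threshold gate outputs 1 iff a weighted sum of its Boolean inputs
    # reaches its threshold; constant-depth circuits built from such gates
    # (with unbounded fan-in) are the class the result targets.

    def threshold_gate(weights, theta, bits):
        return int(sum(w * b for w, b in zip(weights, bits)) >= theta)

    def majority(bits):
        return threshold_gate([1] * len(bits), (len(bits) + 1) // 2, bits)

    # Depth-2 example on 9 inputs: majority of three 3-input majorities.
    x = [1, 0, 1, 0, 0, 1, 1, 1, 0]
    layer1 = [majority(x[0:3]), majority(x[3:6]), majority(x[6:9])]
    print(majority(layer1))   # 1: two of the three first-level majorities vote 1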