
    Metastability-Containing Circuits

    In digital circuits, metastability can cause deteriorated signals that are neither logical 0 nor logical 1, breaking the abstraction of Boolean logic. Unfortunately, any way of reading a signal from an unsynchronized clock domain or performing an analog-to-digital conversion incurs the risk of a metastable upset; no digital circuit can deterministically avoid, resolve, or detect metastability (Marino, 1981). Synchronizers, the only traditional countermeasure, exponentially decrease the odds of maintained metastability over time: they trade synchronization delay for an increased probability of resolving metastability to logical 0 or 1, but they do not guarantee success. We propose a fundamentally different approach: it is possible to contain metastability by fine-grained logical masking so that it cannot infect the entire circuit. This technique guarantees a limited degree of metastability in the output, and hence limited uncertainty about it. At the heart of our approach lies a time- and value-discrete model for metastability in synchronous clocked digital circuits. Metastability is propagated in a worst-case fashion, which allows us to derive deterministic guarantees without synchronizers; synchronizers, by contrast, offer only probabilistic ones. The proposed model permits positive results and passes the test of reproducing Marino's impossibility results. We fully classify which functions can be computed by circuits with standard registers. Regarding masking registers, we show that they become computationally strictly more powerful with each clock cycle, resulting in a non-trivial hierarchy of computable functions.
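    The worst-case propagation model lends itself to a compact illustration: treat every signal as 0, 1, or a metastable value M, and let each gate output M only when its stable inputs do not already determine the result. That is exactly the masking effect the abstract describes. The following Python snippet is a minimal sketch of such a three-valued (Kleene-style) evaluation, not code from the paper.

```python
# Minimal sketch of worst-case metastability propagation using a three-valued
# (Kleene-style) logic; 'M' marks a possibly metastable signal.
M = "M"

def and3(a, b):
    # A stable 0 masks metastability: AND(0, M) = 0.
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return M

def or3(a, b):
    # A stable 1 masks metastability: OR(1, M) = 1.
    if a == 1 or b == 1:
        return 1
    if a == 0 and b == 0:
        return 0
    return M

def not3(a):
    return M if a == M else 1 - a

def mux3(sel, d0, d1):
    # With a stable select, a metastable unselected input is masked.
    return or3(and3(not3(sel), d0), and3(sel, d1))

print(and3(0, M))     # 0 -> metastability contained by masking
print(mux3(0, 1, M))  # 1 -> metastable unselected input does not propagate
print(and3(1, M))     # M -> worst case: metastability propagates
```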

    Distributed Computing in the Asynchronous LOCAL model

    The LOCAL model is among the main models for studying locality in the framework of distributed network computing. This model is, however, subject to pertinent criticisms, including the facts that all nodes wake up simultaneously, perform in lock step, and are failure-free. We show that relaxing these hypotheses to some extent does not hurt local computing. In particular, we show that, for any construction task T associated to a locally checkable labeling (LCL), if T is solvable in t rounds in the LOCAL model, then T remains solvable in O(t) rounds in the asynchronous LOCAL model. This improves the result by Castañeda et al. [SSS 2016], which was restricted to 3-coloring rings. More generally, the main contribution of this paper is to show that, perhaps surprisingly, asynchrony and failures in the computations do not restrict the power of the LOCAL model, as long as the communications remain synchronous and failure-free.
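    To make the synchronous baseline concrete: in the LOCAL model, an algorithm with round complexity t lets every node act on exactly the information contained in its radius-t neighbourhood, gathered over t lock-step message exchanges. The sketch below simulates those t rounds on a small graph; the graph and function names are illustrative and not taken from the paper.

```python
# Sketch of t synchronous LOCAL rounds on an undirected graph (adjacency
# list): every node repeatedly merges its neighbours' views, so after t
# rounds node v knows exactly the node IDs within distance t of v.
def local_rounds(adj, t):
    views = {v: {v} for v in adj}                      # round 0: own ID only
    for _ in range(t):
        received = {v: [views[u] for u in adj[v]] for v in adj}
        views = {v: views[v].union(*received[v]) for v in adj}
    return views

# Example: a 6-node cycle; after 2 rounds node 0 sees its distance-2 ball.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(local_rounds(cycle, 2)[0])   # {0, 1, 2, 4, 5}
```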

    Satellite remote sensing facility for oceanographic applications

    The project organization, design process, and construction of a Remote Sensing Facility at Scripps Institution of Oceanography in La Jolla, California are described. The facility is capable of receiving, processing, and displaying oceanographic data received from satellites. Data are primarily imaging data representing multispectral ocean emissions and reflectances, and are accumulated during 8 to 10 minute satellite passes over the California coast. The most important feature of the facility is the reception and processing of satellite data in real time, allowing investigators to direct ships to areas of interest for on-site verification and experiments.

    Optimal synchronization of ABD networks

    We present in this paper a simple and efficient synchronizer algorithm for Asynchronous Bounded Delay (ABD) networks. In these networks each processor has a local clock, and the message delay is bounded by a known constant. The algorithm improves on an earlier synchronizer for this network model, presented by Chou et al. Moreover, using a mathematical model for this type of synchronizer, we show that the round time of the new synchronizer is optimal.
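    One way to picture a synchronizer for such networks: because message delay is bounded by a known constant and every processor has a local clock, a processor can start round r as soon as its clock guarantees that all round r-1 messages must already have arrived. The sketch below illustrates that timing rule under the assumed bounds DELAY (message delay) and EPS (clock skew); it is not the algorithm from the paper, whose round time is shown to be optimal.

```python
# Minimal sketch of a clock-driven synchronizer for a bounded-delay network.
# Assumption (not from the paper): local clocks agree within EPS, and every
# message is delivered within DELAY. A processor that starts round r at local
# time r * (DELAY + EPS) is then sure that every round r-1 message sent by a
# neighbour has already been delivered.
DELAY = 1.0   # known upper bound on message delay
EPS = 0.1     # assumed bound on clock skew between processors

def round_start_time(r):
    """Local clock reading at which a processor may start round r."""
    return r * (DELAY + EPS)

def rounds_startable(local_time):
    """Number of rounds a processor may have started by the given local time."""
    return int(local_time // (DELAY + EPS)) + 1

print(round_start_time(3))     # 3.3
print(rounds_startable(2.2))   # 3 (rounds 0, 1 and 2)
```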

    Distributed synchronizers in network simulator (Ns) software

    Distributed algorithms are designed for systems consisting of many interconnected processors that communicate with one another by exchanging messages through communication links. Distributed algorithms are used in a wide range of applications, from a VLSI chip to a LAN to the Internet. The advantages of distributed systems include information exchange, resource sharing, replication, parallelization, and modularization.

    NS (Network Simulator) is an object-oriented, discrete-event-driven network simulator developed at USC/ISI and written in C++ and OTcl. NS is primarily useful for simulating local and wide area networks. It produces one or more text-based output files that contain detailed simulation data. The data can be used for simulation analysis or as input to a graphical simulation display tool called Network Animator (NAM).

    There are two approaches to designing distributed algorithms. In synchronous algorithms, the operation of each process proceeds in lock step, whereas in asynchronous algorithms, the processes take steps in an arbitrary order and at arbitrary relative speeds. Synchronous algorithms are easier to write and prove; however, asynchronous algorithms are easier to implement. Thus, an approach to designing distributed algorithms for asynchronous systems is to start with synchronous algorithms and transform them into corresponding asynchronous versions by passing them through a special algorithm called a synchronizer. This allows one to use asynchronous systems to run the original synchronous algorithms. The synchronizer itself is an asynchronous algorithm.

    In this research, we experiment with different types of synchronizers. We implement them by considering two applications: leader election and breadth-first search algorithms. The algorithms are implemented on arbitrary networks. We compare the algorithms in terms of communication complexity. We also discuss the suitability of NS as a platform for implementing synchronous and asynchronous algorithms.
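    As an illustration of the synchronizer rule described above, the classic alpha-style scheme has every node acknowledge each round-r message it receives, declare itself safe once all of its own round-r messages are acknowledged, and advance to round r+1 only when all neighbours are safe. The sketch below captures just that per-node bookkeeping; it is illustrative and not the NS/OTcl implementation used in this research.

```python
# Sketch of the alpha-synchronizer rule for running a synchronous algorithm on
# an asynchronous network (illustration only). A node is "safe" for round r
# once all its round-r messages are acknowledged; it advances to round r+1
# only after every neighbour has reported itself safe for round r.
class NodeSyncState:
    def __init__(self, neighbours):
        self.neighbours = set(neighbours)
        self.acked = set()            # neighbours that acked our round message
        self.safe_neighbours = set()  # neighbours that declared themselves safe

    def on_ack(self, sender):
        self.acked.add(sender)

    def on_safe(self, sender):
        self.safe_neighbours.add(sender)

    def is_safe(self):
        # all of our own round-r messages have been acknowledged
        return self.acked == self.neighbours

    def may_advance(self):
        # every neighbour is safe, so all round-r messages have been delivered
        return self.safe_neighbours == self.neighbours

    def next_round(self):
        self.acked.clear()
        self.safe_neighbours.clear()

# Example: a node with neighbours 1 and 2.
n = NodeSyncState([1, 2])
n.on_ack(1); n.on_ack(2)
print(n.is_safe())        # True -> broadcast "safe" to neighbours
n.on_safe(1); n.on_safe(2)
print(n.may_advance())    # True -> enter the next round
```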

    Frame synchronization performance and analysis

    The analysis used to generate the theoretical models showing the performance of the frame synchronizer is described for various frame lengths and marker lengths at various signal-to-noise ratios and bit error tolerances.
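    A textbook-style model of marker detection gives a feel for the quantities involved: for an m-bit marker, channel bit-error probability p, and a tolerance of up to k bit mismatches, the true marker is detected with probability the sum over i ≤ k of C(m,i) p^i (1-p)^(m-i), while a window of random data falsely matches with probability the sum over i ≤ k of C(m,i) / 2^m. The snippet below evaluates these two expressions; it is an assumed simple model, not necessarily the exact analysis in the report.

```python
# Simple (assumed) model of frame-marker detection with an error tolerance.
from math import comb

def p_detect(m, p, k):
    """Probability the true m-bit marker passes with <= k bit errors."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k + 1))

def p_false_match(m, k):
    """Probability m random data bits falsely match within tolerance k."""
    return sum(comb(m, i) for i in range(k + 1)) / 2**m

# Example: a 24-bit marker, bit error rate 1e-2, tolerance of 2 mismatches.
print(p_detect(24, 1e-2, 2))   # ~0.998
print(p_false_match(24, 2))    # ~1.8e-5
```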

    Asynchronous techniques for system-on-chip design

    SoC design will require asynchronous techniques, as the large parameter variations across the chip will make it impossible to control delays in clock networks and other global signals efficiently. Initially, SoCs will be globally asynchronous and locally synchronous (GALS). But the complexity of the numerous asynchronous/synchronous interfaces required in a GALS design will eventually lead to entirely asynchronous solutions. This paper introduces the main design principles, methods, and building blocks for asynchronous VLSI systems, with an emphasis on communication and synchronization. Asynchronous circuits with the only delay assumption of isochronic forks are called quasi-delay-insensitive (QDI). QDI is used in the paper as the basis for asynchronous logic. The paper discusses asynchronous handshake protocols for communication and the notions of validity/neutrality tests and completion trees. Basic building blocks for sequencing, storage, function evaluation, and buses are described, and two alternative methods for the implementation of an arbitrary computation are explained. Issues of arbitration and synchronization play an important role in complex distributed systems and especially in GALS. The two main asynchronous/synchronous interfaces needed in GALS, one based on a synchronizer and the other on a stoppable clock, are described and analyzed.
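    The completion tree mentioned above is conventionally built from Muller C-elements: a C-element copies its inputs when they agree and otherwise holds its previous output, so a tree of them raises a done signal only after every bit of a codeword has become valid (and lowers it only after every bit has returned to neutral). The following behavioural sketch in Python is an illustration of that principle, not circuitry from the paper.

```python
# Behavioural sketch of a Muller C-element and a completion tree
# (illustration only; real QDI circuits implement this with gates).
class CElement:
    def __init__(self):
        self.state = 0              # output holds its last value

    def step(self, a, b):
        if a == b:                  # output follows the inputs only when they agree
            self.state = a
        return self.state

def completion_tree(done_bits):
    """Reduce per-bit 'done' indications pairwise with C-elements."""
    cells = [CElement() for _ in range(len(done_bits) - 1)]
    level, i = list(done_bits), 0
    while len(level) > 1:
        nxt = []
        for j in range(0, len(level) - 1, 2):
            nxt.append(cells[i].step(level[j], level[j + 1]))
            i += 1
        if len(level) % 2:          # carry an unpaired input to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

# All four bits must be valid before completion is signalled.
print(completion_tree([1, 1, 1, 0]))   # 0 -> not yet complete
print(completion_tree([1, 1, 1, 1]))   # 1 -> all rails valid
```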

    Dynamic Load Balancing Based on Applications Global States Monitoring

    The paper presents how to use a novel distributed program design framework with evolved global control mechanisms to assure processor load balancing during execution of application programs. The new framework supports a programmer with an API and GUI for automated graphical design of program execution control based on global application states monitoring. The framework provides high-level distributed control primitives at process level and a special control infrastructure for global asynchronous execution control at thread level. Both kinds of control assume observation of current multicore processor performance and communication throughput in the executive distributed system. Methods for designing processor load balancing control, based on a system of program and system property metrics and on computational data migration between application executive processes, are presented and assessed by experiments with the execution of graph representations of distributed programs.
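    The kind of migration decision such monitoring enables can be sketched very simply: given per-processor load metrics collected by the control infrastructure, move work from the most loaded to the least loaded processor whenever the imbalance exceeds a threshold. The function and threshold below are illustrative assumptions, not the framework's actual control strategy.

```python
# Illustrative load-balancing rule driven by monitored processor metrics
# (names and threshold are assumptions, not the framework's actual strategy).
def pick_migration(loads, threshold=0.2):
    """Return (source, target) processors if the load imbalance justifies
    migrating work, else None. `loads` maps processor id -> load in [0, 1]."""
    src = max(loads, key=loads.get)   # most loaded processor
    dst = min(loads, key=loads.get)   # least loaded processor
    if loads[src] - loads[dst] > threshold:
        return src, dst
    return None

# Example with monitored CPU loads of four worker processes.
print(pick_migration({"p0": 0.92, "p1": 0.35, "p2": 0.6, "p3": 0.41}))  # ('p0', 'p1')
print(pick_migration({"p0": 0.55, "p1": 0.5, "p2": 0.6, "p3": 0.52}))   # None
```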