3,970 research outputs found

    Desynchronization: Synthesis of asynchronous circuits from synchronous specifications

    Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. De-synchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for de-synchronization and formally prove their correctness, using techniques originally developed for the distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micro-pipelines. We then propose a new controller that exhibits provably maximal concurrency, and analyze the performance of de-synchronized circuits with respect to the original optimized synchronous implementation. We finally demonstrate the feasibility and effectiveness of our approach by applying it to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
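    The latch controllers studied in the paper are hardware circuits, but the four-phase (return-to-zero) handshake they build on can be sketched in software. Below is a minimal, illustrative Python simulation of that protocol between a sender and a receiver stage; the thread structure, wire names (req, ack), and data channel are assumptions made for illustration, not the controllers proposed in the paper.

    # Illustrative sketch only: a software model of a four-phase handshake,
    # not the paper's hardware latch controllers.
    import threading
    import time

    req = threading.Event()   # request wire: sender -> receiver
    ack = threading.Event()   # acknowledge wire: receiver -> sender

    def sender(channel, items):
        for item in items:
            channel.append(item)        # place data on the bundled-data channel
            req.set()                   # phase 1: raise request
            ack.wait()                  # phase 2: wait for acknowledge to rise
            req.clear()                 # phase 3: lower request (return to zero)
            while ack.is_set():         # phase 4: wait for acknowledge to fall
                time.sleep(1e-4)

    def receiver(channel, sink, n):
        for _ in range(n):
            req.wait()                  # wait for a request
            sink.append(channel.pop())  # latch the data
            ack.set()                   # acknowledge it
            while req.is_set():         # wait for request to return to zero
                time.sleep(1e-4)
            ack.clear()                 # complete the four-phase cycle

    channel, received = [], []
    t_s = threading.Thread(target=sender, args=(channel, [1, 2, 3]))
    t_r = threading.Thread(target=receiver, args=(channel, received, 3))
    t_s.start(); t_r.start(); t_s.join(); t_r.join()
    print(received)   # [1, 2, 3], each item transferred by one full handshake

    Each data item costs a full cycle of four signal transitions; more concurrent latch controllers, such as the maximally concurrent one proposed in the paper, aim to overlap parts of this cycle with useful work.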

    On Time Synchronization Issues in Time-Sensitive Networks with Regulators and Nonideal Clocks

    Flow reshaping is used in time-sensitive networks (as in the context of IEEE TSN and IETF DetNet) in order to reduce burstiness inside the network and to support the computation of guaranteed latency bounds. This is performed using per-flow regulators (such as the Token Bucket Filter) or interleaved regulators (as with IEEE TSN Asynchronous Traffic Shaping). Both types of regulators are beneficial, as they cancel the increase of burstiness due to multiplexing inside the network. It has been demonstrated, using network calculus, that they do not increase the worst-case latency. However, the properties of regulators were established assuming that time is perfect in all network nodes. In reality, nodes use local, imperfect clocks. Time-sensitive networks exist in two flavours: (1) in non-synchronized networks, local clocks run independently at every node and their deviations are not controlled, and (2) in synchronized networks, the deviations of local clocks are kept within very small bounds using, for example, a synchronization protocol (such as PTP) or a satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators in both cases. In non-synchronized networks, we show that ignoring the timing inaccuracies can lead to network instability due to unbounded delay in per-flow or interleaved regulators. We propose and analyze two methods (rate and burst cascade, and asynchronous dual arrival-curve method) for avoiding this problem. In synchronized networks, we show that there is no instability with per-flow regulators but, surprisingly, interleaved regulators can lead to instability. To establish these results, we develop a new framework that captures industrial requirements on clocks in both non-synchronized and synchronized networks, and a toolbox that extends network calculus to account for clock imperfections.
    Comment: ACM SIGMETRICS 2020, Boston, Massachusetts, USA, June 8-12, 2020
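    As a point of reference for the regulators discussed above, the sketch below shows how a per-flow token-bucket regulator computes packet release times, and how a drifting local clock distorts them. This is an illustrative model only; the function and parameter names (rate, burst, drift_ppm) are assumptions and do not reproduce the paper's rate-and-burst cascade or asynchronous dual arrival-curve methods.

    # Illustrative sketch: per-flow token-bucket regulator plus a toy drifting
    # clock; not the analysis framework developed in the paper.
    def regulator_release_times(arrivals, lengths, rate, burst):
        """Earliest conforming release time of each packet under a (rate, burst) token bucket."""
        tokens = burst                  # bucket starts full (bytes)
        last = 0.0                      # time of the previous update/release (s)
        releases = []
        for t, length in zip(arrivals, lengths):
            now = max(t, last)          # FIFO: cannot release before the previous packet
            tokens = min(burst, tokens + rate * (now - last))   # refill since last update
            last = now
            if tokens < length:         # not enough tokens: wait for them to accumulate
                last += (length - tokens) / rate
                tokens = length
            tokens -= length
            releases.append(last)
        return releases

    def local_clock_view(true_times, drift_ppm=100.0, offset=0.0):
        """What a nonideal local clock (constant drift plus offset) reads at each true time."""
        return [offset + t * (1.0 + drift_ppm * 1e-6) for t in true_times]

    arrivals = [0.0, 0.0, 0.0, 0.0]                 # a 4-packet burst (true time, s)
    lengths  = [1500, 1500, 1500, 1500]             # bytes
    releases = regulator_release_times(arrivals, lengths, rate=1_000_000, burst=3000)
    print(releases)                  # [0.0, 0.0, 0.0015, 0.003]: burst spread out at 1 MB/s
    print(local_clock_view(releases))

    When the regulator's decisions are driven by the local clock rather than true time, the drift term above is exactly the kind of inaccuracy that, as the paper shows, can accumulate and destabilize per-flow or interleaved regulators if it is ignored.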

    Application of High-precision Timing Systems to Distributed Survey Systems

    In any hydrographic survey system that consists of more than one computer, one of the most difficult integration problems is to ensure that all components maintain a coherent sense of time. Since virtually all modern survey systems are of this type, timekeeping and synchronized timestamping of data as it is created are of significant concern. This paper describes a method for resolving this problem based on the IEEE 1588 Precision Time Protocol (PTP) implemented in hardware devices, layered with custom software called the Software Grandmaster (SWGM) algorithm. This combination of hardware and software maintains a coherent sense of time between multiple Ethernet-connected computers, on the order of 100 ns (rms) in the best case, relative to the timebase established by the local GPS-receiver clock. We illustrate the performance of this technique in a practical survey system using a Reson 7P sonar processor connected to a Reson 7125 Multibeam Echosounder (MBES), integrated with an Applanix POS/MV 320 V4 and a conventional data capture computer. Using the timing capabilities of the PTP hardware implementations, we show that the timepieces achieve mean (hardware-based) synchronization and timestamping within 100-150 ns (rms), and that data created at the Reson 7P without hardware timestamps has a latency variability of 28 µs (rms) due to software constraints within the capture system. This compares to 288 ms (rms) using Reson's standard hybrid hardware/software solution, and 13.6 ms (rms) using a conventional single-oscillator timestamping model.
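    The Software Grandmaster layering described above is specific to the paper's hardware setup, but the underlying IEEE 1588/PTP exchange it relies on computes the slave's clock offset and the mean path delay from four timestamps. The sketch below reproduces that standard calculation; the numeric values are made up for illustration and are not measurements from the paper.

    # Illustrative sketch: the standard PTP offset/delay computation from one
    # Sync / Delay_Req exchange; example values are invented, not measured.
    def ptp_offset_and_delay(t1, t2, t3, t4):
        """
        t1: Sync sent (master clock)      t2: Sync received (slave clock)
        t3: Delay_Req sent (slave clock)  t4: Delay_Req received (master clock)
        Assumes a symmetric path; path asymmetry shows up directly as offset error.
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0           # slave clock minus master clock
        mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, mean_path_delay

    # Example: slave clock runs 250 ns ahead of the master, one-way delay is 1 us.
    t1 = 100.0
    t2 = t1 + 1e-6 + 250e-9       # propagation delay + slave offset
    t3 = t2 + 5e-6                # slave replies 5 us later (slave clock)
    t4 = t3 - 250e-9 + 1e-6       # back to master time + propagation delay
    offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
    print(f"offset = {offset * 1e9:.0f} ns, mean path delay = {delay * 1e9:.0f} ns")
    # offset = 250 ns, mean path delay = 1000 ns

    Hardware timestamping, as used in the paper, captures t1 through t4 at the network interface rather than in application software, which is why the hardware-assisted path reaches the ~100 ns level while the software-constrained paths show the microsecond-to-millisecond variability quoted above.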

    Time Synchronization in Wireless Sensor Networks
