    Replacing the Ethernet access mechanism with the real-time access mechanism of Twentenet

    The way in which a Local Area Network access mechanism (Medium Access Control protocol) designed for a specific type of physical service can be used on top of another type of physical service is discussed using a particular example. In the example, an Ethernet physical layer is used to provide service to the Twentenet real-time access mechanism. Relevant Ethernet and Twentenet concepts are explained, the approach taken is introduced, and the problems encountered, along with the actual synthesis of the two networks, are described.
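    To make the layering idea concrete, here is a minimal sketch (not code from the paper) of a MAC written against an abstract physical-service interface, so the same real-time MAC logic could be placed on top of an Ethernet-style physical layer. The class and method names (PhysicalLayer, EthernetPhy, RealTimeMac) are illustrative assumptions only.

```python
# Hypothetical sketch: decoupling a MAC protocol from the physical layer it runs on.
# Names are illustrative, not taken from the Twentenet/Ethernet paper.
from abc import ABC, abstractmethod


class PhysicalLayer(ABC):
    """Minimal physical-service interface a MAC layer can be written against."""

    @abstractmethod
    def transmit(self, frame: bytes) -> None: ...

    @abstractmethod
    def carrier_sense(self) -> bool: ...


class EthernetPhy(PhysicalLayer):
    """Stand-in for an Ethernet-style physical service."""

    def __init__(self):
        self._busy = False

    def transmit(self, frame: bytes) -> None:
        print(f"PHY: sending {len(frame)} bytes")

    def carrier_sense(self) -> bool:
        return self._busy


class RealTimeMac:
    """A MAC that only assumes the PhysicalLayer interface, so it can be
    placed on top of a different physical service than it was designed for."""

    def __init__(self, phy: PhysicalLayer):
        self.phy = phy

    def send(self, frame: bytes) -> None:
        while self.phy.carrier_sense():
            pass  # wait for an idle medium before transmitting
        self.phy.transmit(frame)


mac = RealTimeMac(EthernetPhy())
mac.send(b"hello")
```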

    Asynchronous techniques for system-on-chip design

    SoC design will require asynchronous techniques, as the large parameter variations across the chip will make it impossible to control delays in clock networks and other global signals efficiently. Initially, SoCs will be globally asynchronous and locally synchronous (GALS). But the complexity of the numerous asynchronous/synchronous interfaces required in a GALS design will eventually lead to entirely asynchronous solutions. This paper introduces the main design principles, methods, and building blocks for asynchronous VLSI systems, with an emphasis on communication and synchronization. Asynchronous circuits whose only delay assumption is that of isochronic forks are called quasi-delay-insensitive (QDI); QDI is used in the paper as the basis for asynchronous logic. The paper discusses asynchronous handshake protocols for communication and the notions of validity/neutrality tests and completion trees. Basic building blocks for sequencing, storage, function evaluation, and buses are described, and two alternative methods for implementing an arbitrary computation are explained. Issues of arbitration and synchronization play an important role in complex distributed systems and especially in GALS. The two main asynchronous/synchronous interfaces needed in GALS, one based on a synchronizer and the other on a stoppable clock, are described and analyzed.
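    As an illustration of the handshake protocols the abstract refers to, the following is a minimal sketch of a four-phase (return-to-zero) request/acknowledge channel, simulated with Python threads. The modelling with threading.Event objects is my own simplification and is not taken from the paper.

```python
# Hedged sketch of a four-phase request/acknowledge handshake between a
# sender and a receiver, simulated in software.
import threading

req = threading.Event()   # request wire
ack = threading.Event()   # acknowledge wire
data = [None]             # bundled data value


def sender(value):
    data[0] = value       # place data on the channel
    req.set()             # phase 1: raise request
    ack.wait()            # phase 2: wait for acknowledge
    req.clear()           # phase 3: lower request (return to zero)
    while ack.is_set():   # phase 4: wait for acknowledge to drop
        pass


def receiver(out):
    req.wait()            # see the raised request
    out.append(data[0])   # latch the data
    ack.set()             # raise acknowledge
    while req.is_set():   # wait for the request to drop
        pass
    ack.clear()           # complete the return-to-zero handshake


received = []
t = threading.Thread(target=receiver, args=(received,))
t.start()
sender(42)
t.join()
print(received)           # [42]
```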

    LPDQ: a self-scheduled TDMA MAC protocol for one-hop dynamic low-power wireless networks

    Current Medium Access Control (MAC) protocols for data collection scenarios with a large number of nodes that generate bursty traffic are based on Low-Power Listening (LPL) for network synchronization and Frame Slotted ALOHA (FSA) as the channel access mechanism. However, FSA has an efficiency bounded at 36.8% due to contention effects, which reduces packet throughput and increases energy consumption. In this paper, we target such scenarios by presenting Low-Power Distributed Queuing (LPDQ), a highly efficient and low-power MAC protocol. LPDQ is able to self-schedule data transmissions, acting as an FSA MAC under light traffic and seamlessly converging to a Time Division Multiple Access (TDMA) MAC under congestion. The paper presents the design principles and the implementation details of LPDQ using low-power commercial radio transceivers. Experiments demonstrate an efficiency close to 99% that is independent of the number of nodes and is fair in terms of resource allocation.
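    The 36.8% figure is the familiar 1/e bound of (frame) slotted ALOHA: with n contenders each picking one of n slots uniformly at random, the expected fraction of slots carrying exactly one transmission is (1 - 1/n)^(n-1), which tends to 1/e as n grows. The short simulation below illustrates this bound; it is an independent sketch, not code from the paper.

```python
# Illustration of the ~36.8% efficiency bound of Frame Slotted ALOHA.
import math
import random


def fsa_efficiency(n_nodes: int, n_slots: int, trials: int = 2000) -> float:
    """Monte Carlo estimate of the fraction of slots with exactly one sender."""
    success = 0
    for _ in range(trials):
        counts = [0] * n_slots
        for _ in range(n_nodes):
            counts[random.randrange(n_slots)] += 1
        success += sum(1 for c in counts if c == 1)
    return success / (trials * n_slots)


for n in (10, 50, 200):
    print(f"n={n:4d}  simulated={fsa_efficiency(n, n):.3f}  "
          f"analytic={(1 - 1 / n) ** (n - 1):.3f}")

print(f"limit 1/e = {1 / math.e:.3f}")
```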

    Architecture Design Space Exploration for Streaming Applications Through Timing Analysis

    In this paper we compare the maximum achievable throughput of different memory organisations of the processing elements that constitute a multiprocessor system on chip. This is done by modelling the mapping of a task with input and output channels onto a processing element as a homogeneous synchronous dataflow graph, and using maximum cycle mean analysis to derive the throughput. In a HiperLAN2 case study we show how these techniques can be used to derive the required clock frequency and communication latencies in order to meet the application's throughput requirement on a multiprocessor system on chip that has one of the investigated memory organisations.
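    As a sketch of what maximum cycle mean (MCM) analysis computes, the toy example below enumerates the cycles of a tiny made-up HSDF graph and bounds throughput by 1/MCM, where a cycle's mean is the sum of actor execution times on it divided by its initial tokens. Actor names, execution times, and token counts are invented for illustration and are not from the case study.

```python
# Hedged sketch of maximum cycle mean analysis on a tiny HSDF graph.
from itertools import permutations

# actor execution times (in abstract time units); values are made up
exec_time = {"src": 2, "proc": 5, "snk": 1}
# edges: (producer, consumer, initial tokens); the self-edge on "proc"
# models that it fires one iteration at a time
edges = [("src", "proc", 0), ("proc", "snk", 0),
         ("snk", "src", 1), ("proc", "proc", 1)]


def cycle_means():
    """Brute-force enumeration of simple cycles (fine for a tiny graph)."""
    nodes = list(exec_time)
    edge_tokens = {(u, v): t for u, v, t in edges}
    for r in range(1, len(nodes) + 1):
        for order in permutations(nodes, r):
            hops = list(zip(order, order[1:] + order[:1]))
            if all(h in edge_tokens for h in hops):
                tokens = sum(edge_tokens[h] for h in hops)
                if tokens:  # a cycle without tokens would deadlock the graph
                    yield sum(exec_time[n] for n in order) / tokens


mcm = max(cycle_means())
print(f"maximum cycle mean = {mcm} time units per iteration")
print(f"maximum throughput = {1 / mcm:.3f} iterations per time unit")
```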

    The "MIND" Scalable PIM Architecture

    MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.
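    A loose sketch of the split-transaction, message-driven style referred to above: a request is issued as a message and the issuing side keeps working, while a handler at the memory side processes it and posts the reply asynchronously. All names and structures here are illustrative assumptions, not the MIND implementation.

```python
# Hedged sketch of split-transaction, message-driven processing.
import queue
import threading

request_q = queue.Queue()   # parcels travelling to the memory-side node
reply_q = queue.Queue()     # completions travelling back
memory = {0x10: 7}          # toy local memory of a PIM node


def memory_node():
    """Memory-side handler: consume request parcels, post replies."""
    while True:
        tag, addr = request_q.get()
        if tag is None:          # shutdown sentinel
            break
        reply_q.put((tag, memory.get(addr)))


worker = threading.Thread(target=memory_node)
worker.start()

request_q.put(("load-1", 0x10))   # issue the transaction as a message...
print("issuer keeps computing while the load is in flight")
tag, value = reply_q.get()        # ...and collects the reply later
print(tag, "->", value)

request_q.put((None, None))       # stop the handler
worker.join()
```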