Replacing the Ethernet access mechanism with the real-time access mechanism of Twentenet
The way in which a Local Area Network access mechanism (Medium Access Control protocol) designed for one type of physical service can be used on top of another type of physical service is discussed using a particular example. In the example, an Ethernet physical layer is used to provide service to the Twentenet real-time access mechanism. Relevant Ethernet and Twentenet concepts are explained, the approach taken is introduced, and the problems encountered, along with the actual synthesis of both networks, are described.
Asynchronous techniques for system-on-chip design
SoC design will require asynchronous techniques, as the large parameter variations across the chip will make it impossible to control delays in clock networks and other global signals efficiently. Initially, SoCs will be globally asynchronous and locally synchronous (GALS). But the complexity of the numerous asynchronous/synchronous interfaces required in a GALS will eventually lead to entirely asynchronous solutions. This paper introduces the main design principles, methods, and building blocks for asynchronous VLSI systems, with an emphasis on communication and synchronization. Asynchronous circuits whose only delay assumption is that of isochronic forks are called quasi-delay-insensitive (QDI); QDI is used in the paper as the basis for asynchronous logic. The paper discusses asynchronous handshake protocols for communication, the notion of validity/neutrality tests, and completion trees. Basic building blocks for sequencing, storage, function evaluation, and buses are described, and two alternative methods for the implementation of an arbitrary computation are explained. Issues of arbitration and synchronization play an important role in complex distributed systems, and especially in GALS. The two main asynchronous/synchronous interfaces needed in GALS, one based on a synchronizer, the other on a stoppable clock, are described and analyzed.
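The completion-detection idea mentioned in the abstract can be illustrated with a toy behavioral model (the classes and structure below are a hypothetical sketch, not taken from the paper): a Muller C-element copies its inputs to its output when they agree and holds its previous output otherwise, and a binary tree of C-elements signals when all per-bit "done" signals have become valid (all 1) or neutral (all 0).

```python
class CElement:
    """Muller C-element: output follows the inputs when they agree,
    otherwise the previous output is held (state-holding behavior)."""

    def __init__(self):
        self.out = 0

    def update(self, a, b):
        if a == b:          # both inputs agree -> output follows them
            self.out = a
        return self.out     # inputs disagree -> previous output is held


class CompletionTree:
    """Binary tree of C-elements combining n per-bit completion signals
    into a single 'all valid' / 'all neutral' indication."""

    def __init__(self, n):
        # A binary reduction of n signals needs n - 1 two-input cells.
        self.cells = [CElement() for _ in range(n - 1)]

    def update(self, signals):
        level = list(signals)
        idx = 0
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                if i + 1 < len(level):
                    nxt.append(self.cells[idx].update(level[i], level[i + 1]))
                    idx += 1
                else:
                    nxt.append(level[i])  # odd signal passes through
            level = nxt
        return level[0]


tree = CompletionTree(4)
tree.update([1, 1, 1, 1])   # -> 1: all bits valid
tree.update([1, 0, 1, 1])   # -> 1 still: disagreeing pair holds its state
tree.update([0, 0, 0, 0])   # -> 0: all bits neutral again
```

The state-holding behavior is what makes the tree hazard-free: the output only changes once every input has completed its transition, which is exactly the validity/neutrality test a QDI circuit needs between handshake phases.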
Buying commercial law: Choice of law, choice of forum, and network externalities
Copyright © 2009 Bryan Druzin. This paper applies network effect theory to transnational commercial law, arguing that commercial parties selecting law through choice of law and choice of forum clauses can be likened to consumers selecting a product, and are thus equally susceptible to the effects of network externalities. The number of "consumers" who subscribe to the same legal norms is analogous to the number of consumers who use a product. As the number of "consumers" increases, so too does the inherent value of selecting that jurisdiction, inducing even more parties to "purchase" that body of law. This is a network effect. I argue that transnational commercial law is ideally calibrated to generate a network effect. This stems from the inherent nature of commerce. The discussion distinguishes between two kinds of externalities, direct and indirect network externalities, concluding that network systems possessing both kinds (as is the case with law-selection decisions in commercial contracts) are the best candidates to produce a robust network effect. I then examine how the twin ingredients of fluid interaction and frequent choice present in commerce precipitate a network effect; expansive interaction places a higher premium on the need for synchronization, and frequent opportunities to select law in the contracts of fresh commercial relationships allow for an incremental drift towards a specific jurisdiction. The paper ultimately concludes that, as a result, network externalities indeed play an influential role in the ascension of particular jurisdictions over others in law-selection decisions, an important conclusion as it points to an unrecognized influence underpinning the current development of transnational commercial law.
LPDQ: a self-scheduled TDMA MAC protocol for one-hop dynamic lowpower wireless networks
Current Medium Access Control (MAC) protocols for data collection scenarios with a large number of nodes that generate bursty traffic are based on Low-Power Listening (LPL) for network synchronization and Frame Slotted ALOHA (FSA) as the channel access mechanism. However, the efficiency of FSA is bounded at 36.8% due to contention effects, which reduces packet throughput and increases energy consumption. In this paper, we target such scenarios by presenting Low-Power Distributed Queuing (LPDQ), a highly efficient and low-power MAC protocol. LPDQ is able to self-schedule data transmissions, acting as an FSA MAC under light traffic and seamlessly converging to a Time Division Multiple Access (TDMA) MAC under congestion. The paper presents the design principles and implementation details of LPDQ using low-power commercial radio transceivers. Experiments demonstrate an efficiency close to 99% that is independent of the number of nodes and fair in terms of resource allocation.
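The 36.8% figure is the classic 1/e bound of slotted ALOHA, and it can be checked with a short Monte-Carlo sketch (the function and its parameters are illustrative, not from the paper): each node picks a slot uniformly at random, and a slot succeeds only when exactly one node picked it.

```python
import math
import random


def fsa_efficiency(num_nodes, num_slots, rounds=2000, seed=1):
    """Monte-Carlo estimate of Frame Slotted ALOHA efficiency:
    the fraction of slots carrying exactly one transmission."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(rounds):
        slots = [0] * num_slots
        for _ in range(num_nodes):
            # Each node transmits in one uniformly chosen slot of the frame.
            slots[rng.randrange(num_slots)] += 1
        # A slot succeeds only if exactly one node chose it (no collision).
        successes += sum(1 for s in slots if s == 1)
    return successes / (rounds * num_slots)


# When the offered load matches the frame size (num_nodes == num_slots),
# efficiency approaches 1/e ~= 0.368, the bound cited in the abstract.
print(fsa_efficiency(100, 100))
```

This is what motivates LPDQ's design: a contention-free TDMA schedule has no collided or empty slots, so its efficiency is limited only by protocol overhead rather than by the 1/e contention ceiling.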
Architecture Design Space Exploration for Streaming Applications Through Timing Analysis
In this paper we compare the maximum achievable throughput of different memory organisations of the processing elements that constitute a multiprocessor system on chip. This is done by modelling the mapping of a task with input and output channels onto a processing element as a homogeneous synchronous dataflow graph, and by using maximum cycle mean analysis to derive the throughput. In a HiperLAN2 case study we show how these techniques can be used to derive the clock frequency and communication latencies required to meet the application's throughput requirement on a multiprocessor system on chip that has one of the investigated memory organisations.
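Maximum cycle mean analysis can be illustrated with a small brute-force sketch (the graph, node names, and weights are hypothetical, not from the paper): for a homogeneous synchronous dataflow graph, the throughput bound is the reciprocal of the largest ratio of total execution time to the number of tokens (here, edges) around any cycle.

```python
def max_cycle_mean(edges, nodes):
    """Brute-force maximum cycle mean of a weighted directed graph.

    edges: dict mapping node -> list of (successor, weight) pairs,
    where each weight models the execution/communication time on that edge.
    Enumerates simple cycles by DFS; fine for small analysis graphs.
    """
    best = float('-inf')

    def dfs(start, node, weight, path):
        nonlocal best
        for succ, w in edges.get(node, []):
            if succ == start:
                # Cycle closed: its mean is total weight over edge count.
                best = max(best, (weight + w) / len(path))
            elif succ not in path:
                dfs(start, succ, weight + w, path + [succ])

    for n in nodes:
        dfs(n, n, 0, [n])
    return best


# Hypothetical two-actor graph: 'proc' and 'mem' exchange tokens with
# per-edge delays 3 and 5; the only cycle has mean (3 + 5) / 2 = 4,
# so the throughput bound is 1/4 iterations per time unit.
mcm = max_cycle_mean({'proc': [('mem', 3)], 'mem': [('proc', 5)]},
                     ['proc', 'mem'])
```

In a design-space exploration, one would recompute this bound for each candidate memory organisation (each giving different edge weights) and pick the clock frequency so that 1/MCM meets the application's throughput requirement.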
The "MIND" Scalable PIM Architecture
MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.