
    Unifying mesh- and tree-based programmable interconnect

    We examine the traditional, symmetric, Manhattan mesh design for field-programmable gate-array (FPGA) routing along with tree-of-meshes (ToM) and mesh-of-trees (MoT) based designs. All three networks can provide general routing for limited-bisection designs (Rent's rule with p < 1) and allow locality exploitation. They differ in their detailed topology and use of hierarchy. We show that all three have the same asymptotic wiring requirements. We bound this tightly by providing constructive mappings between routes in one network and routes in another. For example, we show that a (c,p) MoT design can be mapped to a (2c,p) linear-population ToM, and we introduce a corner-turn scheme that enables the reverse mapping from any (c,p) linear-population ToM to a (2c,p) MoT augmented with a particular set of corner-turn switches. One consequence of this latter mapping is a multilayer layout strategy for N-node, linear-population ToM designs that requires only Θ(N) two-dimensional area for any p when given sufficient wiring layers. We further show upper and lower bounds for global mesh routes based on recursive bisection width, and show that these are within a constant factor of each other and within a constant factor of MoT and ToM layout area. In the process we identify the parameters and characteristics that make the networks different, making it clear that there is a unified design continuum in which these networks are simply particular regions.
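    As a rough, illustrative companion to the (c, p) parameterization above, the Python sketch below evaluates Rent's-rule terminal counts and the classic Thompson-style two-dimensional area asymptotics implied by recursive bisection. The function names and constants are hypothetical; note that these are the standard two-layer bounds, whereas the abstract's Θ(N)-for-any-p result assumes sufficient wiring layers and is not modeled here.

```python
# Sketch: Rent's-rule I/O and Thompson-style area bounds from recursive bisection.
# Assumptions (not from the abstract): c = base I/O per node, p = Rent exponent.
import math

def rent_io(n_nodes: float, c: float, p: float) -> float:
    """External terminals predicted by Rent's rule: T = c * N^p."""
    return c * n_nodes ** p

def bisection_width(n_nodes: float, c: float, p: float) -> float:
    """Cut size when bisecting: roughly the Rent's-rule I/O of one half."""
    return rent_io(n_nodes / 2, c, p)

def area_bound(n_nodes: int, p: float) -> float:
    """Classic 2D layout asymptotics from recursive bisection:
    Theta(N) for p < 1/2, Theta(N log^2 N) at p = 1/2, Theta(N^(2p)) for p > 1/2."""
    if p < 0.5:
        return float(n_nodes)
    if p == 0.5:
        return n_nodes * math.log2(n_nodes) ** 2
    return n_nodes ** (2 * p)

for p in (0.4, 0.5, 0.67):
    print(p, rent_io(4096, c=4, p=p), bisection_width(4096, 4, p), area_bound(4096, p))
```

    The p = 1/2 threshold is where wiring, rather than logic, starts to dominate 2D layout area, which is exactly the regime where the multilayer ToM strategy in the abstract pays off.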

    Intelligent optimization of Circuit placement on FPGA

    Field-programmable gate arrays (FPGAs) have revolutionized the way digital systems are designed and built over the past decade. With architectures capable of holding tens of millions of logic gates on the horizon, and with the planned integration of configurable logic into system-on-chip platforms, the versatility of programmable devices is expected to increase dramatically. Placement is one of the vital steps in mapping a design onto an FPGA in order to take best advantage of the resources and flexibility it provides. Here, we test placement-optimization techniques on MCNC benchmark circuits. PSO (Particle Swarm Optimization) has been implemented on the circuit netlist with the bounding box as the cost function. Alternate cost functions were also employed to verify the efficiency of the optimization. Furthermore, lazy descent was introduced into the algorithm to impede premature convergence. Different values of the acceleration and weighting factors were used in the implementation, and the corresponding convergence results were analyzed.
    Keywords: FPGA placement; particle swarm optimization; MCNC benchmark circuits; bounding-box-driven placement
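    The abstract does not give the implementation, but a minimal PSO placement loop with a bounding-box (half-perimeter wirelength) cost can be sketched as follows. The tiny netlist, swarm size, and coefficient values are hypothetical, and cells are placed at continuous coordinates for simplicity rather than on discrete FPGA sites.

```python
# Minimal PSO sketch for placement with a bounding-box (HPWL) cost function.
# Hypothetical toy netlist; not the paper's implementation.
import random

nets = [(0, 1, 2), (1, 3), (2, 3, 4)]   # nets as tuples of cell indices
n_cells, grid = 5, 8.0

def hpwl(pos):
    """Half-perimeter wirelength: sum of each net's bounding-box perimeter/2."""
    total = 0.0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration factors
swarm = [[(random.uniform(0, grid), random.uniform(0, grid)) for _ in range(n_cells)]
         for _ in range(20)]
vel = [[(0.0, 0.0)] * n_cells for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=hpwl)[:]

for _ in range(200):
    for i, p in enumerate(swarm):
        for c in range(n_cells):
            r1, r2 = random.random(), random.random()
            vx = w*vel[i][c][0] + c1*r1*(pbest[i][c][0]-p[c][0]) + c2*r2*(gbest[c][0]-p[c][0])
            vy = w*vel[i][c][1] + c1*r1*(pbest[i][c][1]-p[c][1]) + c2*r2*(gbest[c][1]-p[c][1])
            vel[i][c] = (vx, vy)
            p[c] = (min(max(p[c][0] + vx, 0.0), grid), min(max(p[c][1] + vy, 0.0), grid))
        if hpwl(p) < hpwl(pbest[i]):
            pbest[i] = p[:]
        if hpwl(p) < hpwl(gbest):
            gbest = p[:]

print("best HPWL:", round(hpwl(gbest), 2))
```

    Varying w, c1, and c2 here corresponds to the abstract's experiments with different weighting and acceleration factors; lowering w late in the run is one simple way to damp premature convergence.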

    Dynamic Scheduling, Allocation, and Compaction Scheme for Real-Time Tasks on FPGAs

    Run-time reconfiguration (RTR) is a method of computing on reconfigurable logic, typically FPGAs, in which hardware configurations change from phase to phase of a computation at run-time. Recent research has expanded from a focus on a single application at a time to a view of the reconfigurable logic as a resource shared among multiple applications or users. In real-time system design, task deadlines play an important role. Real-time multi-tasking systems not only need to support sharing of the resources in space, but also need to guarantee execution of the tasks. At the operating-system level, the sharing of logic gates, wires, and I/O pins among multiple tasks needs to be managed. From a high-level standpoint, access to the resources needs to be scheduled according to task deadlines. This thesis describes a task allocator for scheduling, placing, and compacting tasks on a shared FPGA under real-time constraints. Our consideration of task deadlines is novel in the setting of handling multiple simultaneous tasks in RTR. Software simulations have been conducted to evaluate the performance of the proposed scheme; the results indicate a significant improvement, decreasing the number of rejected tasks.
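    A minimal sketch of the general idea follows: deadline-ordered admission plus first-fit placement on a 1D column model of the FPGA. This is an assumption-laden simplification for illustration only, not the thesis's allocator (which also compacts running tasks and reclaims area over time).

```python
# Sketch: deadline-aware task admission on a 1D column model of an FPGA.
# Hypothetical simplification (EDF-ordered admission + first-fit placement).
from dataclasses import dataclass, field

COLUMNS = 32

@dataclass(order=True)
class Task:
    deadline: int
    exec_time: int = field(compare=False)
    width: int = field(compare=False)          # FPGA columns occupied
    name: str = field(compare=False, default="")

def first_fit(free, width):
    """Return the start column of a contiguous free run of `width`, or None."""
    run = 0
    for col, is_free in enumerate(free):
        run = run + 1 if is_free else 0
        if run == width:
            return col - width + 1
    return None

def schedule(tasks, now=0):
    free = [True] * COLUMNS
    accepted, rejected = [], []
    for t in sorted(tasks):                    # earliest deadline first
        # a full allocator would also reclaim columns of finished tasks here
        start = first_fit(free, t.width)
        if start is not None and now + t.exec_time <= t.deadline:
            for c in range(start, start + t.width):
                free[c] = False
            accepted.append(t.name)
        else:
            rejected.append(t.name)            # cannot meet deadline or fit
    return accepted, rejected

print(schedule([Task(10, 4, 12, "a"), Task(8, 3, 16, "b"), Task(6, 9, 8, "c")]))
```

    Compaction, mentioned in the abstract, would relocate running tasks to merge fragmented free columns, so that first_fit rejects fewer wide tasks.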

    ToPoliNano: Nanoarchitectures Design Made Real

    Many facts about emerging nanotechnologies are yet to be assessed. There are still major concerns, for instance, about the maximum achievable device density, or about which architecture is best suited to a specific application. Growing complexity requires taking into account many aspects of technology, application, and architecture at the same time. Researchers face problems that are not new per se, but are now subject to very different constraints that need to be captured by design tools. Among the emerging nanotechnologies, two-dimensional nanowire-based arrays represent promising nanostructures, especially for massively parallel computing architectures. Few attempts have been made at giving designers the possibility to explore architectural solutions, deriving information from extensive and reliable nanoarray characterization. Moreover, in the nanotechnology arena there is still no clear winner, so it is important to be able to target different technologies, so as not to miss the next big thing. We present a tool, ToPoliNano, that enables such a multi-technological characterization in terms of logic behavior, power and timing performance, and area and layout constraints, on the basis of specific technological and topological descriptions. This tool can aid the design process, besides providing a comprehensive simulation framework for DC and timing simulations and detailed power analysis. Design and simulation results are shown for nanoarray-based circuits. ToPoliNano is the first real design tool that tackles the top-down design of a circuit based on emerging technologies.

    A Modular Approach to Adaptive Reactive Streaming Systems

    The latest generations of FPGA devices offer large resource counts that provide the headroom to implement large-scale and complex systems. However, there are increasing challenges for the designer, not just because of pure size and complexity, but also in harnessing effectively the flexibility and programmability of the FPGA. A central issue is the need to integrate modules from diverse sources to promote modular design and reuse. Further, the capability to perform dynamic partial reconfiguration (DPR) of FPGA devices means that implemented systems can be made reconfigurable, allowing components to be changed during operation. However, use of DPR typically requires low-level planning of the system implementation, adding to the design challenge. This dissertation presents ReShape: a high-level approach for designing systems by interconnecting modules, which gives a ‘plug and play’ look and feel to the designer, is supported by tools that carry out implementation and verification functions, and is carried through to support system reconfiguration during operation. The emphasis is on the inter-module connections and abstracting the communication patterns that are typical between modules – for example, the streaming of data that is common in many FPGA-based systems, or the reading and writing of data to and from memory modules. ShapeUp is also presented as the static precursor to ReShape. In both, the details of wiring and signaling are hidden from view, via metadata associated with individual modules. ReShape allows system reconfiguration at the module level, by supporting type checking of replacement modules and by managing the overall system implementation, via metadata associated with its FPGA floorplan. The methodology and tools have been implemented in a prototype for a broad domain-specific setting – networking systems – and have been validated on real telecommunications design projects.
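    To make the metadata-driven type-checking idea concrete, here is a hypothetical sketch in Python: a replacement module is accepted only if its port metadata matches that of the module it replaces. The field names, the Port record, and the matching rule are assumptions for illustration, not ReShape's actual metadata format or API.

```python
# Sketch: type checking a replacement module via per-module port metadata.
# All names and fields here are hypothetical illustrations of the concept.
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    direction: str   # "in" or "out"
    protocol: str    # e.g. a streaming or memory-access convention
    width: int       # data width in bits

def compatible(old: tuple, new: tuple) -> bool:
    """Accept a replacement only if it exposes the same port signatures."""
    key = lambda p: (p.name, p.direction, p.protocol, p.width)
    return sorted(map(key, old)) == sorted(map(key, new))

filt_v1 = (Port("in0", "in", "stream", 64), Port("out0", "out", "stream", 64))
filt_v2 = (Port("in0", "in", "stream", 64), Port("out0", "out", "stream", 32))
print(compatible(filt_v1, filt_v1), compatible(filt_v1, filt_v2))  # True False
```

    Hiding wiring and signaling behind such metadata is what lets a tool flow reject an incompatible module swap before any low-level DPR planning is attempted.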

    Wafer-scale integration of semiconductor memory.

    This work is directed towards a study of full-slice, or "wafer-scale integrated", semiconductor memory. Previous approaches to full-slice technology are studied and critically compared. It is shown that a fault-tolerant, fixed-interconnection approach offers many advantages; such a technique forms the basis of the experimental work. The disadvantages of the conventional technology are reviewed to illustrate the potential improvements in cost, packing density, and reliability obtainable with wafer-scale integration (W.S.I.). Iterative chip arrays are modelled by a pseudorandom fault distribution; algorithms to control the linking of adjacent good chips into linear chains are proposed and investigated by computer simulation. It is demonstrated that long chains may be produced at practicable yield levels. The on-chip control circuitry and the external control electronics required to implement one particular algorithm are described in relation to a TTL simulation of an array of 4 x 4 integrated-circuit chips. A layout of the on-chip control logic is shown to require (in four-phase dynamic MOS circuitry) an area equivalent to ~250 shift-register stages, a reasonable overhead on large memories. Structures are proposed to extend the fixed-interconnection, fault-tolerant concept to parallel/serial-organised memory, covering RAM, ROM, and associative-memory applications requiring up to ~2M bits of storage. Potential problem areas in implementing W.S.I. are discussed, and it is concluded that current technology is capable of manufacturing such devices. A detailed cost comparison of the conventional and W.S.I. approaches to large serial memories illustrates the potential savings available with wafer-scale integration. The problem of gaining industrial acceptance for W.S.I. is discussed in relation to known and anticipated views of new technology. The thesis concludes with suggestions for further work in the general field of wafer-scale integration.
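    The chaining idea can be illustrated with a small simulation: chips fail pseudorandomly at a given yield, and a greedy serpentine walk links adjacent good chips into linear chains. This is an illustrative simplification under assumed parameters, not the thesis's algorithm, which uses on-chip control logic to steer the chain around faulty chips rather than simply breaking at them.

```python
# Sketch: pseudorandom chip faults on a wafer grid, chained by a serpentine
# walk over adjacent chips. Illustrative only; grid size and yield assumed.
import random

ROWS, COLS, YIELD = 8, 8, 0.7
random.seed(1)
good = [[random.random() < YIELD for _ in range(COLS)] for _ in range(ROWS)]

def longest_serpentine_chain(good):
    """Visit the array row by row, alternating direction so consecutive
    visits are physically adjacent; a bad chip breaks the current chain."""
    chains, current = [], []
    for r in range(ROWS):
        cols = range(COLS) if r % 2 == 0 else range(COLS - 1, -1, -1)
        for c in cols:
            if good[r][c]:
                current.append((r, c))
            elif current:
                chains.append(current)
                current = []
    if current:
        chains.append(current)
    return max(chains, key=len)

chain = longest_serpentine_chain(good)
print("longest chain:", len(chain), "of", ROWS * COLS, "chips")
```

    Running such a simulation over many pseudorandom fault maps is essentially how the thesis establishes that long chains are achievable at practicable yield levels.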

    Scaling silicon-based quantum computing using CMOS technology: State-of-the-art, Challenges and Perspectives

    Complementary metal-oxide-semiconductor (CMOS) technology has radically reshaped the world by taking humanity into the digital age. Cramming more transistors into the same physical space has enabled an exponential increase in computational performance, a strategy that has recently been hampered by the increasing complexity and cost of miniaturization. To continue achieving significant gains in computing performance, new computing paradigms, such as quantum computing, must be developed. However, finding the optimal physical system to process quantum information, and scaling it up to the large number of qubits necessary to build a general-purpose quantum computer, remains a significant challenge. Recent breakthroughs in nanodevice engineering have shown that qubits can now be manufactured in a similar fashion to silicon field-effect transistors, opening an opportunity to leverage the know-how of the CMOS industry to address the scaling challenge. In this article, we focus on the analysis of the scaling prospects of quantum computing systems based on CMOS technology.

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called the Mobile Digital Companion, which provides support for handling multimedia applications in an energy-efficient way. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy-reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous, autonomous, programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.