38 research outputs found

    Interconnect modeling and optimization in deep sub-micron technologies

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references.

    Interconnect will be a major bottleneck for deep sub-micron technologies in the years to come. This dissertation addresses the communication aspect from a power-consumption and transmission-speed perspective. A model for the energy consumption associated with data transmission through deep sub-micron technology buses is derived. The capacitive and inductive coupling between the bus lines, as well as the distributed nature of the wires, are taken into account. The model is used to estimate the power consumption of the bus as a function of the Transition Activity Matrix, a quantity generalizing the transition activity factors of the individual lines.

    An information-theoretic framework has been developed to study the relation between speed (number of operations per time unit) and energy consumption per operation in the case of synchronous digital systems. The theory provides us with the fundamental minimum energy per input information bit that is required to process or communicate information at a certain rate. The minimum energy is a function of the information rate, and it is, in theory, asymptotically achievable using coding. This energy-information theory, combined with the bus energy model, results in the derivation of the fundamental performance limits of coding for low power in deep sub-micron buses.

    Although linear, block-linear and differential coding schemes are favorable candidates for error correction, it is shown that they only increase power consumption in buses. Their resulting power consumption is related to structural properties of their generator matrices. In some cases the power is calculated exactly, and in other cases bounds are derived. Both provide intuition about how to re-structure a given linear (block-linear, etc.) code so that the energy is minimized within the set of all equivalent codes. A large class of nonlinear coding schemes is examined that leads to significant power reduction. This class contains all encoding schemes that have the form of connected Finite State Machines. The deep sub-micron bus energy model is used to evaluate their power-reduction properties. Mathematical analysis of this class of coding schemes has led to the derivation of two coding optimization algorithms. Both algorithms derive efficient coding schemes taking into account statistical properties of the data and the particular structure of the bus. This coding design approach is generally applicable to any discrete channel with transition costs.

    For power reduction, a charge-recycling technique appropriate for deep sub-micron buses is developed. A detailed mathematical analysis provides the theoretical limits of power reduction. It is shown that for large buses power can be reduced by a factor of two. An efficient modular circuit implementation is presented that demonstrates the practicality of the technique and its significant net power reduction.

    Coding for speed on the bus is introduced. This novel idea is based on the fact that coupling between the lines in a deep sub-micron bus implies that different transitions require different amounts of time to complete. By allowing only "fast" transitions to take place, we can increase the clock frequency of the bus. The combinatorial capacity of such a constrained bus ...

    by Paul Peter P. Sotiriadis. Ph.D.
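
    As a rough illustration of what such a bus energy model computes, the sketch below evaluates the energy drawn from the supply over a single bus transition using the common matrix form E = Vdd^2 * x_f^T C (x_f - x_i) with a nearest-neighbour Maxwell capacitance matrix. This is a minimal sketch, not the dissertation's model: the distributed wires, inductive coupling and the Transition Activity Matrix are omitted, and all parameter values are illustrative assumptions.

    ```python
    # Minimal sketch of a coupled-bus energy model in the spirit of the abstract.
    # Uses E = Vdd^2 * xf^T C (xf - xi), one standard expression for the energy
    # drawn from the supply; the dissertation's distributed/inductive model and
    # Transition Activity Matrix are richer. All parameter values are assumed.

    import numpy as np

    def capacitance_matrix(n_lines, c_ground, c_couple):
        """Maxwell capacitance matrix for a bus with nearest-neighbour coupling."""
        C = np.zeros((n_lines, n_lines))
        for i in range(n_lines):
            n_neighbours = 2 if 0 < i < n_lines - 1 else 1
            C[i, i] = c_ground + n_neighbours * c_couple
            if i + 1 < n_lines:
                C[i, i + 1] = C[i + 1, i] = -c_couple
        return C

    def transition_energy(C, x_init, x_final, vdd=1.0):
        """Energy drawn from the supply when the bus switches x_init -> x_final."""
        xi = np.asarray(x_init, dtype=float)
        xf = np.asarray(x_final, dtype=float)
        return vdd**2 * xf @ C @ (xf - xi)

    # Illustrative coupling-to-ground ratio; coupling dominates in DSM buses.
    C = capacitance_matrix(n_lines=4, c_ground=1.0, c_couple=3.0)
    # Opposite transitions on adjacent lines are the expensive ones:
    print(transition_energy(C, [0, 1, 0, 1], [1, 0, 1, 0]))   # -> 20.0
    # Common-mode switching only charges the ground capacitance:
    print(transition_energy(C, [0, 0, 0, 0], [1, 1, 1, 1]))   # -> 4.0
    ```

    The gap between the two printed values is exactly what low-power and "fast-transition" bus codes exploit: steering the data stream away from costly (or slow) opposite-phase transitions on neighbouring lines.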

    Domain specific high performance reconfigurable architecture for a communication platform

    On Energy Efficient Computing Platforms

    In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects.

    As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency.

    Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power-gating techniques at different granularities, which reduce both dynamic and leakage energy consumption.

    Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
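
    As a rough sketch of the agent idea, the toy controller below makes the kind of per-region DVFS decision such an on-chip monitoring agent might make, stepping between a few voltage/frequency operating points according to measured utilisation. The operating points, thresholds and the simple dynamic-power relation P ≈ α·C·V²·f are illustrative assumptions, not the thesis's actual controller.

    ```python
    # Toy per-region DVFS agent: monitor utilisation, step one operating level
    # up or down with hysteresis bands. Dynamic power scales as alpha*C*V^2*f,
    # and leakage falls with V, so idle regions save energy at low V/f.
    # Levels, thresholds and the power model are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class OperatingPoint:
        v: float   # supply voltage (V)
        f: float   # clock frequency (GHz)

    # Level 0 = highest performance; the bottom level could be power-gated.
    LEVELS = [OperatingPoint(1.0, 1.0), OperatingPoint(0.8, 0.6), OperatingPoint(0.6, 0.3)]

    def dynamic_power(util, c_eff, op):
        """P = alpha * C_eff * V^2 * f, with utilisation standing in for alpha."""
        return util * c_eff * op.v**2 * op.f

    def agent_step(util, level):
        """One monitoring cycle: move one DVFS level up or down."""
        if util > 0.8 and level > 0:
            level -= 1            # heavily loaded: raise V/f
        elif util < 0.3 and level < len(LEVELS) - 1:
            level += 1            # mostly idle: lower V/f
        return level

    level = 0
    for util in [0.9, 0.5, 0.2, 0.1, 0.25, 0.85]:
        level = agent_step(util, level)
        op = LEVELS[level]
        print(f"util={util:.2f} -> V={op.v} V, f={op.f} GHz, "
              f"P_dyn={dynamic_power(util, c_eff=1.0, op=op):.3f}")
    ```

    The hysteresis gap between the two thresholds is the usual guard against oscillating between levels when utilisation hovers near a boundary; picking the gap and the level granularity is exactly the kind of trade-off the thesis analyses.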

    Lawrence Berkeley Laboratory Institutional Plan FY 1995--2000

    Topical Workshop on Electronics for Particle Physics

    ALICE: Physics Performance Report, Volume I

    ALICE is a general-purpose heavy-ion experiment designed to study the physics of strongly interacting matter and the quark-gluon plasma in nucleus-nucleus collisions at the LHC. It currently includes more than 900 physicists and senior engineers, from both nuclear and high-energy physics, from about 80 institutions in 28 countries. The experiment was approved in February 1997. The detailed design of the different detector systems has been laid down in a number of Technical Design Reports issued between mid-1998 and the end of 2001, and construction has started for most detectors.

    Since the last comprehensive information on detector and physics performance was published in the ALICE Technical Proposal in 1996, the detector as well as the simulation, reconstruction and analysis software have undergone significant development. The Physics Performance Report (PPR) will give an updated and comprehensive summary of the current status and performance of the various ALICE subsystems, including updates to the Technical Design Reports, where appropriate, as well as a description of systems which have not been published in a Technical Design Report.

    The PPR will be published in two volumes. The current Volume I contains:
    1. a short theoretical overview and an extensive reference list concerning the physics topics of interest to ALICE,
    2. the relevant experimental conditions at the LHC,
    3. a short summary and update of the subsystem designs, and
    4. a description of the offline framework and Monte Carlo generators.

    Volume II, which will be published separately, will contain detailed simulations of combined detector performance, event reconstruction, and analysis of a representative sample of relevant physics observables, from global event characteristics to hard processes.

    Development and Characterization of a DEPFET Pixel Prototype System for the ILC Vertex Detector

    For the future TeV-scale linear collider ILC (International Linear Collider), a vertex detector of unprecedented performance is needed to fully exploit its physics potential. By incorporating a field-effect transistor into a fully depleted sensor substrate, the DEPFET (Depleted Field Effect Transistor) sensor combines radiation detection and in-pixel amplification. For operation at a linear collider, the excellent noise performance of DEPFET pixels allows building very thin detectors with high spatial resolution and low power consumption. In this thesis, a prototype system consisting of a 64 × 128 pixel sensor, dedicated steering and readout ASICs, and a data acquisition board has been developed and successfully operated in the laboratory and under realistic conditions in beam-test environments at DESY and CERN. A DEPFET matrix has been successfully read out using the on-chip zero-suppression of the readout chip CURO 2. The results of the system characterization and the beam-test results are presented.
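
    For intuition, the snippet below is a software analogue of zero-suppressed readout: only pixels whose pedestal-corrected signal exceeds a noise threshold are shipped out as (row, column, amplitude) hits, which is what makes sparse readout of a 64 × 128 matrix cheap. CURO 2 performs this on-chip in hardware; the pedestal and threshold handling here is a simplified assumption for illustration only.

    ```python
    # Software analogue of zero-suppressed pixel readout. Only pixels whose
    # pedestal-corrected signal exceeds n_sigma times the noise are emitted as
    # (row, col, amplitude) hits. CURO 2 does this on-chip and in hardware;
    # the pedestal model and threshold choice here are simplified assumptions.

    import numpy as np

    def zero_suppress(frame, pedestals, noise_sigma, n_sigma=5.0):
        """Return the sparse hit list [(row, col, signal)] for one 64x128 frame."""
        signal = frame - pedestals                 # pedestal subtraction
        mask = signal > n_sigma * noise_sigma      # threshold cut
        rows, cols = np.nonzero(mask)
        return [(int(r), int(c), float(signal[r, c])) for r, c in zip(rows, cols)]

    rng = np.random.default_rng(0)
    pedestals = rng.normal(100.0, 2.0, size=(64, 128))          # per-pixel baselines
    frame = pedestals + rng.normal(0.0, 1.0, size=(64, 128))    # noise-only frame
    frame[10, 42] += 25.0                                       # one injected "hit"
    print(zero_suppress(frame, pedestals, noise_sigma=1.0))     # -> [(10, 42, ~25)]
    ```

    The payoff is the same as in the hardware case: instead of streaming all 8192 pixel amplitudes per frame, only the handful of above-threshold hits with their addresses leave the chip.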

    The ALICE experiment at the CERN LHC

    ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors.

    The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and by the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei.

    Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.

    Lawrence Berkeley Laboratory Institutional Plan FY 1987-1992

    Studies on Mobile Terminal Energy Consumption for LTE and Future 5G
