563 research outputs found

    Implementation of arithmetic primitives using truly deep submicron technology (TDST)

    The invention of the transistor in 1947 at Bell Laboratories revolutionised the electronics industry and created a powerful platform for the emergence of new industries. The quest to increase the number of devices per chip over the last four decades has driven a rapid transition from Small-Scale-Integration (SSI) and Large-Scale-Integration (LSI) through to Very-Large-Scale-Integration (VLSI) technologies, incorporating approximately 10 to 100 million devices per chip. The next phase in this evolution is Ultra-Large-Scale-Integration (ULSI), aiming to realise new application domains currently not accessible to CMOS technology. Although technology is continuously evolving to produce smaller systems with minimised power dissipation, the IC industry is facing major challenges due to constraints on power density (W/cm²) and high dynamic (operating) and static (standby) power dissipation. Mobile multimedia communication and optical-based technologies have rapidly become significant areas of research and development, posing challenges on a variety of technological fronts. The emergence of 4G (4th Generation) wireless communication networks is further driving this development, requiring increasing levels of media-rich content. The processing requirements for capture, conversion, compression, decompression, enhancement and display of higher-quality multimedia place heavy demands on current ULSI systems. This is also apparent for mobile applications and intelligent optical networks, where silicon chip area and power dissipation become primary considerations. In addition to the requirements for very low power, compact size and real-time processing, the rapidly evolving nature of telecommunication networks means that flexible, soft-programmable systems capable of adapting to support a number of different standards and/or roles become highly desirable. To fully realise the capabilities promised by 4G and supporting intelligent networks, new enabling technologies are needed to facilitate the next generation of personal communications devices. Most current solutions to these challenges are based on various implementations of conventional architectures. For decades, silicon has been the main platform of computing; however, it is slow, bulky, runs too hot, and is too expensive. Thus, new architectural approaches for multimedia and future telecommunications systems are needed in order to extend the life cycle of silicon technology. The emergence of Truly Deep Submicron Technology (TDST) and related 3-D interconnection technologies provides potential alternatives to conventional architectures in the form of 3-D system solutions, through integration of TDST, Vertical Software Mapping and Intelligent Interconnect Technology (IIT). The concept of Soft-Chip Technology (SCT) entails integration of Soft-Processing Circuits with Soft-Configurable Circuits. This concept can effectively manipulate hardware primitives through vertical integration of control and data. Thus the notion of the 3-D Soft-Chip emerges as a new design paradigm for content-rich multimedia, telecommunication and intelligent networking applications. 3-D architectures (design methodologies suitable for 3-D soft-chip technology) are driven by three factors. The first is the development of new device technology (TDST) that can support new architectures with complexities of 100M to 1000M devices.
The second is the development of advanced wafer bonding techniques, such as indium bump bonding and the more futuristic optical interconnects, for 3-D soft-chip mapping. The third relates to improving the performance of silicon CMOS systems as devices continue to scale down in dimensions. One of the fundamental building blocks of any computer system is the arithmetic component. Optimum performance of the system is determined by the efficiency of each individual component, as well as by the network as a whole. The development of configurable arithmetic primitives is the fundamental focus in 3-D architecture design, where functionality can be implemented through soft-configurable hardware elements. The ability to improve the performance capability of a system is therefore of crucial importance for a successful design. Important factors that predict the efficiency of such arithmetic components are:
• the propagation delay of the circuit, caused by the gate, diffusion and wire capacitances within the circuit, minimised through transistor sizing; and
• power dissipation, which is generally based on node transition activity [2].
Although optimum performance of 3-D soft-chip systems is primarily established by the choice of basic primitives such as adders and multipliers, the interconnecting network also has a significant degree of influence on the efficiency of the system. 3-D superposition of devices can decrease interconnect delays by up to 60% compared to a similar planar architecture. This research is based on the development and implementation of configurable arithmetic primitives suitable for the 3-D architecture, and has these foci:
• to develop a variety of arithmetic components, such as adders and multipliers, with particular emphasis on minimum area and compatibility with the 3-D soft-chip design paradigm; and
• to explore the implementation of configurable distributed primitives for arithmetic processing, which entails optimising basic primitives and using them as part of array processing.
In this research, the detailed designs of configurable arithmetic primitives are implemented in a TDST 0.13 µm (130 nm) technology, utilising CAD software such as Mentor Graphics and Cadence in custom design mode, carrying through the design, simulation and verification steps.
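    As a concrete illustration of the kind of primitive discussed above, the Python sketch below (my own illustration, not taken from the thesis; all parameter values are placeholders rather than 130 nm process data) composes one-bit full adders into a ripple-carry adder and evaluates the first-order CMOS dynamic power relation P_dyn ≈ α·C·V_dd²·f that underlies the "node transition activity" factor.

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder built from XOR/AND/OR gate primitives."""
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry-out
    return s, cout

def ripple_carry_add(x: int, y: int, width: int = 8) -> int:
    """Chain `width` full adders; the carry ripples from LSB to MSB."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def dynamic_power(alpha: float, c_load: float, vdd: float, freq: float) -> float:
    """First-order CMOS dynamic power: activity factor * C * Vdd^2 * f."""
    return alpha * c_load * vdd ** 2 * freq

assert ripple_carry_add(100, 55) == 155
# Placeholder numbers, purely for illustration: ~5.4 µW
print(dynamic_power(alpha=0.15, c_load=50e-15, vdd=1.2, freq=500e6))
```

    In an actual custom-design flow, the capacitance and activity values would come from extracted layout parasitics and simulation, not fixed constants.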

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed on mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, within the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also tolerate small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, relaxing the precision, accuracy or reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing "dynamic" systems, in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than "static" solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of benefits and overheads deriving from EQ scalability, using industrial-level models, and to integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
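    To make the "dynamic knob" idea concrete, here is a minimal Python sketch (my own illustration, not a technique from the thesis): a runtime precision knob that truncates operand LSBs before a multiply, trading result error for reduced switching activity, and hence energy, in the datapath.

```python
def truncate(x: int, drop_bits: int) -> int:
    """Zero out the `drop_bits` least significant bits of a fixed-point value."""
    return (x >> drop_bits) << drop_bits

def approx_multiply(a: int, b: int, drop_bits: int) -> int:
    """Multiply with dynamically reduced precision; drop_bits is the EQ knob."""
    return truncate(a, drop_bits) * truncate(b, drop_bits)

exact = 12345 * 6789
for knob in (0, 2, 4, 8):                 # runtime-tunable quality levels
    approx = approx_multiply(12345, 6789, knob)
    err = abs(exact - approx) / exact
    print(f"drop_bits={knob}: relative error {err:.2e}")
```

    The key property, as argued above, is that the knob can change per task at runtime rather than being frozen at design time.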
More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware datapaths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems that are compatible with existing best practices and mature enough for integration in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
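    The following sketch illustrates the general idea behind an approximate differential bus encoding, reconstructed from the abstract alone; the actual Approximate Differential Encoding and Serial-T0 schemes in the thesis may differ in detail. Small sample-to-sample changes are suppressed below a runtime-tunable threshold, so the transmitted words (and hence bus activity) change less often, at the cost of a bounded reconstruction error.

```python
def encode_stream(samples: list[int], threshold: int) -> list[int]:
    """First sample verbatim, then deltas; deltas below `threshold` are
    suppressed (approximated away), reducing bus transitions."""
    encoded, last_sent = [samples[0]], samples[0]
    for s in samples[1:]:
        delta = s - last_sent
        if abs(delta) <= threshold:       # runtime-tunable error knob
            encoded.append(0)             # no update -> quieter bus
        else:
            encoded.append(delta)
            last_sent = s
    return encoded

def decode_stream(encoded: list[int]) -> list[int]:
    out, current = [], None
    for word in encoded:
        current = word if current is None else current + word
        out.append(current)
    return out

samples = [100, 101, 101, 104, 120, 119]
recon = decode_stream(encode_stream(samples, threshold=2))
print(recon)  # each value within `threshold` of the original sample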

    Development of Trigger and Control Systems for CMS

    During 2007, the Large Hadron Collider (LHC) and its four main detectors will begin operation with a view to answering the most pressing questions in particle physics. However, before one can analyse the data produced to find the rare phenomena being sought, both the detector and readout electronics must be thoroughly tested to ensure that the system operates in a consistent way. The Compact Muon Solenoid (CMS) is one of the two general-purpose detectors at the LHC. The tracking component of the design produces more data than any previous detector used in particle physics, with approximately ten million detector channels. The data from the detector is processed by the tracker Front End Driver (FED). The large data volume necessitated the development of a buffering and throttling system to prevent buffer overflow both on and off the detector. A critical component of this system is the APV emulator (APVe), which vetoes trigger decisions based on buffer status in the tracker. The commissioning of these components, along with a large part of the Timing, Trigger and Control (TTC) system, is discussed, including the various modifications that were made to improve the robustness of the full system. Another key piece of the CMS electronics is the calorimeter trigger system, responsible for identifying 'interesting' physics events in a background of well-understood phenomena using calorimetric information. Calorimeter information is processed to identify various trigger objects by the Global Calorimeter Trigger (GCT). The first component of this system is the Source card, which has been developed to transfer data from the Regional Calorimeter Trigger (RCT) to the Leaf card, the processing engine of the GCT. The use of modern programmable logic with high-speed optical links is discussed, emphasising its use for data concentration and the benefit it confers on the processing algorithms. Looking forward to the Super-LHC, a possible addition to the CMS Level-1 trigger system is discussed, incorporating information from a new pixel detector with an alternative stacked geometry that allows the possibility of on-detector data rate reduction by means of a transverse momentum cut. A toy Monte Carlo was developed to study detector performance. Issues with high-speed reconstruction and the complications of on-detector data rate reduction are also discussed.
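    The buffering-and-throttling principle described above can be sketched in a few lines of Python. This is a deliberate simplification of my own, not the actual APVe logic: triggers are vetoed whenever the emulated front-end buffer occupancy approaches its capacity, so the buffer can never overflow regardless of the incoming trigger rate.

```python
import random

class BufferEmulator:
    """Tracks occupancy of a front-end buffer; one event per accepted trigger."""
    def __init__(self, depth: int, drain_per_tick: float):
        self.depth = depth                  # buffer capacity in events
        self.occupancy = 0.0
        self.drain = drain_per_tick         # readout rate, events per tick

    def tick(self, trigger: bool, veto_margin: int = 1) -> bool:
        """Return True if the trigger is accepted, False if vetoed."""
        accept = trigger and self.occupancy <= self.depth - veto_margin
        if accept:
            self.occupancy += 1
        self.occupancy = max(0.0, self.occupancy - self.drain)
        return accept

random.seed(1)
buf = BufferEmulator(depth=10, drain_per_tick=0.3)
accepted = sum(buf.tick(random.random() < 0.5) for _ in range(1000))
print(f"accepted {accepted} of ~500 triggers")  # vetoes keep the buffer safe
```

    The real system must apply the same veto consistently on and off the detector, which is what makes a dedicated emulator necessary.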

    Exploration and Design of High Performance Variation Tolerant On-Chip Interconnects

    Transferred from Doria

    Adiabatic Approach for Low-Power Passive Near Field Communication Systems

    This thesis tackles the need for ultra-low-power electronics in power-limited passive Near Field Communication (NFC) systems. One of the techniques that has proven the potential of delivering low-power operation is the adiabatic logic technique. However, the low-power benefits of adiabatic circuits come with challenges, owing to the absence of consensus on the most energy-efficient adiabatic logic family constituting appropriate trade-offs between computation time, area and complexity for a given circuit and power-clocking scheme. Therefore, five energy-efficient adiabatic logic families working in single-phase, 2-phase and 4-phase power-clocking schemes were chosen. Since flip-flops are the basic building blocks of any sequential circuit, and existing flip-flops are MUX-based designs (requiring more transistors), novel single-phase, 2-phase and 4-phase reset-based flip-flops were proposed. The performance of the multi-phase adiabatic families was evaluated and compared using design examples such as a 2-bit ring counter, a 3-bit up-down counter and a 16-bit Cyclic Redundancy Check (CRC) circuit (benchmark circuit) based on the ISO 14443-3A standard. Several trade-offs, design rules, and an appropriate range for supply voltage scaling of multi-phase adiabatic logic are proposed. Furthermore, under the NFC standard (ISO 14443-3A), data is frequently encoded using the Manchester coding technique before transmission to the reader. Therefore, if Manchester encoding can be implemented using the adiabatic logic technique, energy benefits are expected. However, adiabatic implementation of Manchester encoding presents a challenge. Therefore, a novel method for implementing Manchester encoding using adiabatic logic is proposed, overcoming the challenges arising from the AC power-clock. Other challenges, stemming from the dynamic nature of adiabatic gates and the complexity of the 4-phase power-clocking scheme, lie in synchronising the power-clock phases and in the time spent on design, validation and debugging. This requires a specific modelling approach to describe adiabatic logic behaviour at a higher level of abstraction. However, describing adiabatic logic behaviour using Hardware Description Languages (HDLs) is a challenging problem due to the requirement of modelling the AC power-clock and the dual-rail inputs and outputs. Therefore, a VHDL-based modelling approach for the 4-phase adiabatic logic technique is developed for functional simulation, precise timing analysis, and as an improvement over the previously described approaches.
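    For reference, here is a minimal Python sketch of Manchester encoding itself, the scheme the abstract implements adiabatically. The convention chosen here ('1' as a high-then-low half-bit pair, '0' as low-then-high) is one of the two standard polarities; the exact polarity and subcarrier details of ISO/IEC 14443 Type A are omitted in this illustration.

```python
def manchester_encode(bits: str) -> list[int]:
    """Expand each data bit into two half-bit levels (a mid-bit transition)."""
    out = []
    for b in bits:
        out.extend([1, 0] if b == "1" else [0, 1])
    return out

def manchester_decode(levels: list[int]) -> str:
    """Collapse half-bit pairs back into data bits."""
    return "".join("1" if levels[i] > levels[i + 1] else "0"
                   for i in range(0, len(levels), 2))

frame = "10110"
assert manchester_decode(manchester_encode(frame)) == frame
```

    The guaranteed mid-bit transition is what makes the code self-clocking, and also what makes a dual-rail, AC-power-clocked adiabatic implementation non-trivial.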

    Primitives and design of the intelligent pixel multimedia communicator

    Communication systems are an ever more essential component of our modern global society. Mobile communication systems are still in a state of rapid advancement and growth. Technology is constantly evolving at a rapid pace in ever more diverse areas, and the emerging mobile multimedia-based communication systems offer new challenges for both current and future technologies. To realise the full potential of mobile multimedia communication systems there is a need to explore new options to solve some of the fundamental problems facing the technology. In particular, the complexity of such a system, within an infrastructure that is inherently limited by its power sources and has very restricted transmission bandwidth, demands new methodologies and approaches.

    A 65nm CMOS lossless bio-signal compression circuit with 250 femtoJoule performance per bit.

    A 65nm CMOS integrated circuit implementation of a bio-physiological signal compression device is presented, reporting exceptionally low power and extremely low silicon area cost relative to the state of the art. A novel 'xor-log2-sub-band' data compression scheme is evaluated, achieving modest compression but at very low resource cost. With the intent of designing the 'simplest useful compression algorithm', the outcome is demonstrated to be very favourable wherever power must be saved by trading off compression effort against data storage capacity or data transmission power, even where more complex algorithms can deliver higher compression ratios. A VLSI design and fabricated integrated circuit implementation are presented, and estimated performance gains and efficiency measures for various bio-medical use cases are given. Power costs as low as 1.2 pJ per sample-bit are suggested for a 10 kSa/s data rate whilst utilizing a power-gating scenario, dropping to 250 fJ/bit at continuous conversion data rates of 5 MSa/s. This is achieved with a diminutive circuit area of 155 µm². Both power and area appear to be state-of-the-art in terms of compression versus resource cost, and this yields benefit for system optimization.
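    As a purely speculative sketch, one plausible reading of the 'xor-log2-sub-band' name (reconstructed from the name alone; the fabricated circuit's actual scheme may well differ) is: XOR each sample with its predecessor to decorrelate, then transmit a small log2 "band" header giving the position of the highest set bit, followed by only that many residual bits.

```python
def compress(samples: list[int]):
    """XOR-delta each sample, then record its log2 band and residual."""
    prev, out = 0, []
    for s in samples:
        delta = s ^ prev                     # XOR decorrelation
        band = delta.bit_length()            # log2 sub-band index (0 if equal)
        out.append((band, delta))            # band header + residual bits
        prev = s
    return out

def cost_bits(encoded, header_bits: int = 4) -> int:
    """Total bits: fixed-size band header plus band-many residual bits."""
    return sum(header_bits + band for band, _ in encoded)

samples = [512, 514, 515, 515, 520, 521]     # slowly varying bio-signal
enc = compress(samples)
print(cost_bits(enc), "bits vs", len(samples) * 10, "uncompressed")
```

    On slowly varying bio-signals, most XOR deltas fall into low bands, which is consistent with the modest-compression, very-low-complexity trade-off the abstract describes.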