
    Exploiting Adaptive Techniques to Improve Processor Energy Efficiency

    Rapid device miniaturization continues to pose challenges in building energy-efficient microprocessors. As transistor sizes continue to shrink, more uncertainties emerge in their operation. At the same time, integrating ever more transistors on a single chip accentuates the need to lower the supply voltage. This dissertation investigates one of the primary device uncertainties - timing error - as a microprocessor performance bottleneck in the NTC (Near-Threshold Computing) era. It then proposes various innovative techniques that exploit the resulting opportunities to maintain processor energy efficiency in the face of emerging challenges. Evaluated with a cross-layer methodology, the proposed approaches achieve substantial improvements in processor energy efficiency compared to other state-of-the-art techniques.

    Macro-Driven Circuit Design Methodology for High-Performance Datapaths

    Datapath design is one of the most critical elements in the design of a high-performance microprocessor. However, datapath design is typically done manually, often in a custom style. This adversely impacts the overall productivity of the design team as well as the quality of the design. In spite of this, very little automation has been available to the designers of high-performance datapaths. In this paper we present a new "macro-driven" approach to the design of datapath circuits. Our approach, referred to as SMART (Smart Macro Design Advisor), is based on the automatic generation of regular datapath components such as muxes, comparators, and adders, which we refer to as datapath macros. The generated solution is based on designer-provided constraints such as delay, load, and slope, and is optimized for a designer-provided cost metric such as power or area. Results on datapath circuits of a high-performance microprocessor show that this approach is very effective for both designer productivity and design quality.
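    SMART itself is not publicly available; as a rough sketch of the constraint-driven selection the abstract describes, the snippet below picks, from a set of candidate macro implementations, the cheapest one that meets a designer-provided delay constraint under a chosen cost metric (power or area). The candidate library and all numbers are hypothetical illustrations, not the paper's tool.

```python
# Minimal sketch of constraint-driven datapath-macro selection, loosely
# modeled on the SMART flow described above. All names, numbers, and the
# candidate library are hypothetical.
from dataclasses import dataclass

@dataclass
class MacroVariant:
    name: str        # e.g. "adder_64b_carry_select"
    delay_ns: float  # worst-case delay under the assumed load/slope
    power_mw: float
    area_um2: float

def pick_variant(variants, max_delay_ns, cost="power"):
    """Return the cheapest variant that meets the delay constraint."""
    feasible = [v for v in variants if v.delay_ns <= max_delay_ns]
    if not feasible:
        raise ValueError("no variant meets the delay constraint")
    key = (lambda v: v.power_mw) if cost == "power" else (lambda v: v.area_um2)
    return min(feasible, key=key)

# Hypothetical 64-bit adder macros characterized at a given load/slope.
adders = [
    MacroVariant("ripple_carry", delay_ns=1.80, power_mw=0.9, area_um2=400),
    MacroVariant("carry_select", delay_ns=0.95, power_mw=1.6, area_um2=650),
    MacroVariant("kogge_stone",  delay_ns=0.60, power_mw=2.8, area_um2=900),
]

print(pick_variant(adders, max_delay_ns=1.0).name)  # -> carry_select
```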

    Reclaiming Fault Resilience and Energy Efficiency With Enhanced Performance in Low Power Architectures

    Rapid development in the AI domain has revolutionized the computing industry through the introduction of state-of-the-art AI architectures. This growth has been accompanied by a massive increase in power consumption. Near-Threshold Computing (NTC) has emerged as a viable solution, offering significant savings in power consumption and paving the way for an energy-efficient design paradigm. However, these benefits are accompanied by a deterioration in performance due to severe process variation and slower transistor switching at near-threshold operation. These problems severely restrict the use of near-threshold operation in commercial applications. In this work, a novel AI architecture, the Tensor Processing Unit, operating at NTC is thoroughly investigated to tackle the issues hindering system performance. Research problems are demonstrated in a scientific manner, and unique opportunities are explored to propose novel design methodologies.

    Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems

    With the increasing demand for digital services, performance and power efficiency have become vital requirements for digital circuits and systems. However, the enabling CMOS technology scaling has been facing significant challenges from device uncertainties, such as process, voltage, and temperature variations. To ensure system reliability, worst-case corner assumptions are usually made at each design level. However, the over-pessimistic worst-case margin leads to unnecessary power waste and performance loss as high as 2.2x. Since optimizations are traditionally confined to each specific level, those safe margins can hardly be properly exploited. To tackle this challenge, this Ph.D. thesis proposes a cross-layer optimization for digital signal processing circuits and systems, to achieve a global balance of power consumption and output quality. To conclude, the traditional over-pessimistic worst-case approach leads to huge power waste. In contrast, the adaptive voltage scaling approach saves power (25% for the CORDIC application) by providing a just-needed supply voltage, and the power saving is maximized (46% for CORDIC) when a more aggressive voltage over-scaling scheme is applied. The sparsely occurring circuit errors produced by aggressive voltage over-scaling are mitigated by higher-level error-resilient designs. For functions like FFT and CORDIC, smart error mitigation schemes were proposed to enhance reliability against soft errors and timing errors, respectively. Applications like massive MIMO systems are robust against lower-level errors thanks to their intrinsically redundant antennas; this property makes it possible to embrace digital hardware that trades quality for power savings.
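    The thesis's circuits are not reproduced here; purely as a hedged illustration of the adaptive voltage scaling idea it describes, the sketch below lowers the supply voltage until a timing-error monitor fires and then backs off, so the circuit runs at a just-needed voltage rather than a worst-case corner. The monitor, voltage steps, and error model are all invented for the sketch.

```python
# Illustrative adaptive voltage scaling (AVS) loop, in the spirit of the
# "just-needed supply voltage" approach above. The error monitor and
# regulator below are hypothetical stand-ins, not the thesis's design.
import random

V_NOM, V_MIN, STEP = 1.00, 0.60, 0.02  # volts; illustrative values

def timing_errors_detected(vdd):
    """Stand-in for a hardware timing-error monitor (e.g. shadow latches).
    Errors become likely once vdd drops below a hidden critical voltage."""
    v_crit = 0.72 + random.gauss(0, 0.01)  # shifts with PVT conditions
    return vdd < v_crit

def avs_step(vdd):
    """One control step: scale down while clean, back off on errors."""
    if timing_errors_detected(vdd):
        return min(V_NOM, vdd + 2 * STEP)  # recover with a safety bump
    return max(V_MIN, vdd - STEP)          # keep shaving margin

vdd = V_NOM
for _ in range(50):
    vdd = avs_step(vdd)
print(f"settled near {vdd:.2f} V instead of the {V_NOM:.2f} V worst-case corner")
```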

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has been proposed in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.

    Cross-Layer Approaches for an Aging-Aware Design of Nanoscale Microprocessors

    Thanks to the aggressive scaling of transistor dimensions, computers have revolutionized our lives. However, the increasing unreliability of devices fabricated in nanoscale technologies has emerged as a major threat to the future success of computers. In particular, accelerated transistor aging is of great importance, as it reduces the lifetime of digital systems. This thesis addresses this challenge by proposing new methods to model, analyze, and mitigate aging at the microarchitecture level and above.

    Design and Optimization for Resilient Energy Efficient Computing

    Today, modern electronic systems are an integral part of our everyday lives. This has been enabled, among other things, by the exponential growth in the integration density of integrated circuits together with the improvement in energy efficiency that has taken place over the last 50 years, also known as Moore's Law. In this context, the demand for energy-efficient digital circuits has risen enormously, especially in application fields such as the Internet of Things (IoT). Since the power consumption of circuits is strongly tied to the supply voltage, efficient techniques have been developed that scale the supply voltage into the near-threshold region, collectively referred to as Near-Threshold Computing (NTC). These techniques can increase the energy efficiency of circuits by a full order of magnitude. Alongside the improved energy balance, however, numerous circuit-design challenges arise. For example, reducing the supply voltage into the near-threshold region makes circuits ten times more sensitive to process variation, voltage fluctuations, and temperature changes. The effects of these variations reduce the reliability of NTC circuits and are the biggest obstacle to their widespread adoption. Traditional approaches and methods from the nominal voltage domain for compensating variability cannot be applied efficiently, since the strong performance variations and sensitivities in the near-threshold region exceed their capabilities. For this reason, new design paradigms and design-automation concepts are required for the application of NTC. The goal of this work is to address the aforementioned problems by providing holistic methods for the design of NTC circuits and their design automation, applied in particular at the circuit and logic levels. In-depth analyses of the reliability of NTC systems are included, and optimization methods are proposed that improve reliability, performance, and energy efficiency. The contributions of this work are as follows:
    Variation-aware circuit synthesis and timing closure: Meeting timing and reliability requirements at NTC is a demanding task. The effects of variability manifest themselves in strong performance fluctuations, which lead to expensive timing margins, or in hold-time violations caused by functional faults. Conventional approaches are limited to increasing timing margins alone, which is very inefficient for NTC because of the large extent of the variations and the increased leakage currents. This work presents a concept for circuit synthesis and timing closure under variations that reduces sensitivity to variations and improves energy efficiency, performance, and reliability, while at the same time lowering the overhead of timing closure [1, 2]. Simulation results show that our proposed approach reduces delay by 87% and improves performance and energy efficiency by 25% and 7.4%, respectively, at the cost of a 4.8% increase in area.
    Cross-layer reliability, energy-efficiency, and performance optimization of datapaths: Cross-layer analysis of processor datapaths, spanning all the way from the compiler to circuit design, can reveal potential optimization opportunities. A datapath is a combination of several functional units that can process diverse instructions. Our analysis shows that instruction execution times vary strongly at low supply voltages, so that instructions can be classified into fast and slow ones; furthermore, instructions can be categorized as frequently and rarely used. This work presents a multi-cycle instruction method that can increase the energy efficiency and resilience of functional units [3]. In addition, we present a partitioning algorithm that enables fine-grained power gating of rarely used units [4] by splitting individual functional units into several smaller ones. The proposed methods significantly improve circuit timing while considerably limiting leakage currents, using a combination of circuit-redesign and code-replacement techniques. Simulation results show that the developed methods improve the performance and energy efficiency of arithmetic logic units (ALUs) by 19% and 43%, respectively. Furthermore, the performance gain of the optimized circuits can be converted into a reliability improvement [5, 6].
    Post-fabrication and runtime tuning: Process and runtime variations strongly influence the Minimum Energy Point (MEP) of NTC circuits, which corresponds to the most energy-efficient supply voltage. It is particularly desirable to calibrate an NTC circuit after fabrication so that it operates at the MEP, in order to achieve the best energy efficiency. In this work, post-fabrication and runtime tuning are proposed, which calibrate the circuit to the MEP based on post-fabrication speed and power measurements. The presented techniques determine the MEP on a per-chip basis to capture the influence of process variations, and dynamically adapt the supply voltage and frequency to address time-dependent variations such as workload and temperature. To this end, a regression model is integrated into the chip's firmware that extracts the MEP at runtime based on workload and temperature measurements; the regression model is unique to each chip and relies solely on post-fabrication measurements (a minimal illustrative sketch follows this abstract). Simulation results show that the developed approach achieves very high prediction accuracy and energy efficiency, comparable to hardware-implemented methods, but without the hardware overhead [7, 8].
    Selective flip-flop optimization: Ultra-low-voltage circuits must operate in nominal supply-voltage mode to meet the timing requirements of running applications. In this case, the circuit is subject to strong aging processes that degrade the transistors by increasing their threshold voltages. Our in-depth analyses have shown that certain flip-flop architectures are affected by these aging effects when constant values ('0' or '1') are stored for a long time. Compared to other components, flip-flops are more sensitive to aging and may fail, among other things, to latch a new value within the specified timing window. Moreover, even a small voltage drop can lead to such timing violations if the affected aged flip-flops lie on the critical path. This work presents a selective flip-flop optimization method that makes circuits robust against static aging and voltage drop. First, optimized robust flip-flops are generated and integrated into the standard-cell libraries. Flip-flops in the circuit that belong to the critical path and experience aging and voltage drop are then replaced by the optimized robust versions, improving the timing behavior and reliability of the circuit [9, 10]. Simulation results show that the expected lifetime of a processor can be improved by 37% while leakage currents increase by only 0.1%.
    While NTC has the potential to deliver great energy efficiency, its deployment in new application fields such as the IoT is not yet feasible because of the aforementioned problems of high sensitivity to variations and the resulting lack of reliability. In this dissertation and in yet unpublished works [11–17], we present solutions to these problems that enable the integration of NTC into today's systems.
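    The per-chip MEP regression mentioned above is firmware- and chip-specific; the following is only a minimal sketch, under assumed data, of how such a model could be fitted and queried. The measurement values, the feature set, and the linear model form are all invented for illustration.

```python
# Minimal sketch of the per-chip MEP regression idea described above:
# fit post-fabrication measurements (workload activity, temperature) to
# the measured minimum-energy supply voltage, then predict at runtime.
# All numbers are invented; the real model and features are chip-specific.
import numpy as np

# Hypothetical post-fabrication calibration points for ONE chip:
# columns = [workload activity factor, temperature in degC]
X = np.array([[0.2, 25], [0.8, 25], [0.2, 85], [0.8, 85], [0.5, 55]])
v_mep = np.array([0.46, 0.52, 0.43, 0.49, 0.47])  # measured MEP voltage (V)

# Least-squares linear model: v = a*activity + b*temp + c
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, v_mep, rcond=None)

def predict_mep(activity, temp_c):
    """Runtime query: pick the supply voltage closest to this chip's MEP."""
    return float(coef @ [activity, temp_c, 1.0])

print(f"predicted MEP at activity=0.6, 70 degC: {predict_mep(0.6, 70):.3f} V")
```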

    Risks in Project Finance Initiatives: Current Trends and Future Directions

    This thesis is an analysis of Public-Private Partnership (PPP). PPP refers to the provision of public assets and services through the participation of the government, the private sector, and the consumers. The purpose is to analyze the main risks involved in a PPP initiative and to understand how they affect its capital structure. To this aim, different datasets have been analyzed in order to draw consistent and coherent lessons. The thesis then proposes PPP models for innovative projects in the Smart City context, based on the assumption that innovative financial schemes can fit innovative projects. In particular, several PPPs for Smart City projects have been analyzed, and a Project Finance (PF) contract for the replacement of traffic light lamps has been proposed, so that the applicability of the financial tool in new fields of application could be tested. The success of a PF initiative is strictly related to a careful analysis of all the risks associated with the project. As a matter of fact, many risks can occur during the life of the project, and they can significantly affect its outcomes. For this reason, risks have been categorized into sources and associated with their indicators; for each indicator, several parameters have been identified. The main sources of risk are Country, Financial, Market, and Construction. First, a dataset of worldwide toll roads has been analyzed. The private concessionaire that constructs the infrastructure collects the revenues generated by users, and the infrastructure itself represents a solid collateral. The analysis highlights that the inflation rate, the investment size, the construction duration, the financial strength of the Special Purpose Vehicle (SPV), and the number of partners have a significant influence on the share of equity in the total investment. This study might help provide better opportunities for sponsors to improve equity profitability and for lending agencies to better handle the risks associated with the debt supply. The analysis then focused on the British market, one of the most important ones, where Project Finance is well developed and the legislative context is well defined. Based on the idea that the Unitary Charge (UC) periodically paid by the public authority (and in turn the capital structure) is associated with the project risk profile, the study investigates the risks that might have significant impacts on the UC of a PF hospital project. The study demonstrates that it is possible to achieve a higher level of Value for Money (VfM) in PF hospital projects within a good economic and political environment. In Italy the PF market has grown rapidly, in light of the need for the public sector to find a feasible way to construct or renovate infrastructure in a context of scarce public finance. Based on past projects developed in Italy, an empirical analysis has been carried out in order to identify the main aspects that can affect the success of a PF initiative. As a matter of fact, evidence has shown that not all projects appear fit for PF, and a project often fails to progress beyond the early contract procurement phases. Therefore, there is a need to understand what the key factors affecting the construction of a PF initiative are.
The study shows that large-sized projects developed in wealthy conditions, in terms of political and economic stability and levels of GDP, have good chances of being constructed, especially when the parties are given time to negotiate the contract provisions. The analysis provides a hint for policy makers that PF is a valuable system to be used in stable and developed environments for large projects with little time pressure. The PF mechanism was launched in Italy in 1999, and after ten years the Italian market is the second largest in Europe, especially in the healthcare sector. However, financially freestanding, privately funded PF hospitals are rare, and the capital structure of most projects requires a considerable share of public funding. The proper amount of money invested by the public authority should cover the non-self-financing (and therefore riskier) portion of the investment costs, but it often happens that the level of public contribution exceeds this limit. The results of the analysis highlight that the financial strength of the SPV, the number of services granted to the private partners, the level of borrowing of the public authority, and the duration of the concession period appear to be significant factors in the public fraction of financing required to deliver the project. These results give rise to some important considerations about the relevance of risks in the development of PF initiatives. PF fits better in stable political and economic environments, but at the same time it is widely adopted in emerging countries with a large demand for new infrastructure and a high level of risk (in terms of transparency, corruption, and currency exchange). From a financial perspective, a robust SPV is likely to deal better with the project, with a positive impact on the capital structure. The market risk is associated with the number of customers that use the facility and the number of services that the private partners manage; if the demand for services is not enough to generate sustainable profits, the public party could reimburse an additional fee to cover this risk. The project risk is mainly related to the complexity of the project in terms of the number of partners in the SPV and the investment size. The last part of the thesis concerns the future and potential scenarios associated with PF and, more generally, with the PPP scheme. In fact, the aim is to propose PPP models in the smart city (SC) arena, a promising field of innovation and investment. SC appears to be a new paradigm for carrying out innovation that marks a shift from the traditional way of completing technology-push processes to a new approach based on users' needs. In this political and economic scenario, PPP seems to be a solution for the development of smart projects, and the design of PPP models should become an integral part of SC agendas. As demonstrated in the development of traditional infrastructure, the involvement of private parties allows the project to be managed more efficiently. The analysis shows that PF is more applicable in the case of projects with tangible assets, and the main strength of this scheme is the clear separation between the cash flows of the SPV and the cash flows of the investors. On the contrary, PF is more expensive in terms of contractual and transactional costs. These aspects of PF have motivated the development of a proposed model for the application of this financial scheme in the Municipality of Torino.
In particular, the project is based on the replacement of the traditional lamps of the traffic lights with new ones exploiting innovative LED technology, which is expected to guarantee savings in terms of energy consumption and maintenance costs. The project has proved to be bankable and profitable if the Public Authority pays a fee that covers both the availability of the lamps and the maintenance costs for ten years. On the contrary, the project is only bankable, and profits are not guaranteed, if the fee paid by the public is associated with availability alone. The findings have validated the applicability of PF even in the case of projects without asset systems as collateral and with a small-to-medium investment size. Some first general guidelines for policy makers are provided in order to foster the development of SC initiatives even in a period of public financial shortage. Project Finance, and more generally the PPP, can be the engine of an efficient exploitation of the potential offered by the SC.
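    The thesis's actual Torino figures are not reproduced above; purely as an illustration of the bankability argument, the sketch below runs a discounted-cash-flow check with invented numbers, showing how a fee covering both availability and maintenance can make the project NPV comfortably positive while an availability-only fee falls short.

```python
# Illustrative discounted-cash-flow check of the bankability argument above.
# Every figure (investment, fees, costs, discount rate) is hypothetical.

def npv(rate, cashflows):
    """Net present value of yearly cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

INVESTMENT = 1_000_000      # upfront LED retrofit cost (EUR, hypothetical)
MAINTENANCE = 30_000        # yearly maintenance cost borne by the SPV
AVAILABILITY_FEE = 150_000  # yearly fee for lamp availability
MAINT_FEE = 50_000          # extra yearly fee covering maintenance
YEARS, RATE = 10, 0.06

def project_npv(fee):
    flows = [-INVESTMENT] + [fee - MAINTENANCE] * YEARS
    return npv(RATE, flows)

full_fee = project_npv(AVAILABILITY_FEE + MAINT_FEE)  # availability + maintenance
avail_only = project_npv(AVAILABILITY_FEE)            # availability only
print(f"full fee NPV:      {full_fee:,.0f} EUR")    # comfortably positive
print(f"availability-only: {avail_only:,.0f} EUR")  # falls short at this rate
```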

    The Conference Proceedings of the 2003 Air Transport Research Society (ATRS) World Conference, Volume 1

    UNOAI Report 03-5

    The Vanishing Public Domain: Antibiotic Resistance, Pharmaceutical Innovation and Intellectual Property Law

    Penicillin and other antibiotics were the original wonder drugs and laid the foundation of the modern pharmaceutical industry. Human health improved significantly with the introduction of antibiotics. By 1967, the U.S. Surgeon General declared victory over infectious diseases in the United States. But pride goes before a fall. The evolutionary pressure of antibiotic use selects for resistant strains. Effective drugs should be used; but when they are used, no matter how carefully, evolutionary pressure for resistance is created. The problem is not limited to antibiotics: variants of the human immunodeficiency virus (HIV) develop resistance to anti-retroviral drugs, antifungal agents face similar challenges, and even cancer cells may develop resistance to pharmaceuticals. Tens of thousands of Americans die every year from drug-resistant infections. Some pharmaceutical knowledge is therefore exhaustible, and after patent expiration the public domain may receive a drug that is no longer useful. For these drugs, the public domain vanishes.