13 research outputs found

    Fine Grain Incremental Rescheduling via Architectural Retiming

    With the decreasing feature sizes in VLSI fabrication and the dominance of interconnect delay over gate delay, control logic and wiring no longer have a negligible impact on delay and area. The need thus arises for techniques and tools to redesign incrementally and eliminate performance bottlenecks. Such a redesign effort corresponds to incrementally modifying an existing schedule obtained via high-level synthesis. In this paper we demonstrate that applying architectural retiming, a technique for pipelining latency-constrained circuits, results in incrementally modifying an existing schedule. Architectural retiming reschedules fine-grain operations (ones that have a delay equal to or less than one clock cycle) to occur in earlier time steps, while modifying the design to preserve its correctness.
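
    The rescheduling view can be made concrete with a small sketch. The following illustrates the idea rather than the paper's algorithm: a schedule maps operations to control steps, and a fine-grain operation may move one step earlier only if every data predecessor finishes before the new step; all names here are invented.

```python
# Minimal illustration of fine-grain incremental rescheduling (not the
# paper's algorithm): move an operation one control step earlier when
# every predecessor already finishes before the new step.

def reschedule_earlier(schedule, deps, op):
    """schedule: dict op -> time step; deps: dict op -> list of predecessor ops."""
    new_step = schedule[op] - 1
    if new_step < 0:
        return False  # already in the first time step
    # All predecessors must produce their results strictly before new_step.
    if any(schedule[p] >= new_step for p in deps.get(op, [])):
        return False  # moving would violate a data dependence
    schedule[op] = new_step
    return True

# Example: c depends on a and b; c can move from step 2 to step 1
# because both a and b are scheduled in step 0.
schedule = {"a": 0, "b": 0, "c": 2}
deps = {"c": ["a", "b"]}
assert reschedule_earlier(schedule, deps, "c")
print(schedule)  # {'a': 0, 'b': 0, 'c': 1}
```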

    Enhancing Power Efficient Design Techniques in Deep Submicron Era

    Excessive power dissipation has been one of the major bottlenecks for design and manufacture over the past couple of decades. Power-efficient design has become more and more challenging as technology scales down to the deep submicron era, which features the dominance of leakage, manufacturing variation, on-chip temperature variation, and higher reliability requirements, among others. Most of the computer-aided design (CAD) tools and algorithms currently used in industry were developed in the pre-deep-submicron era and did not consider these new features explicitly and adequately. Recent research advances in deep submicron design, such as the mechanisms of leakage, the sources and characterization of manufacturing variation, and the causes and models of on-chip temperature variation, provide us with the opportunity to incorporate these important issues into power-efficient design. We explore this opportunity in this dissertation by demonstrating that significant power reduction can be achieved with only minor modification to existing CAD tools and algorithms. First, we consider peak current, which has become critical for circuit reliability in deep submicron design. Traditional low-power design techniques focus on the reduction of average power. We propose to reduce peak current while keeping the overhead on average power as small as possible. Second, the dual-Vt technique and gate sizing have been used simultaneously for leakage savings. However, this approach becomes less effective in deep submicron design. We propose to use newly developed process-induced mechanical stress to enhance its performance. Finally, in deep submicron design, the impact of on-chip temperature variation on leakage and performance becomes more and more significant. We propose a temperature-aware dual-Vt approach to alleviate hot spots and achieve further leakage reduction. We also consider this leakage-temperature dependency in the dynamic voltage scaling approach and discover that a commonly accepted result is incorrect for current technology. We conduct extensive experiments with popular design benchmarks, using the latest industry CAD tools and design libraries. The results show that our proposed enhancements are promising for power saving and are practical solutions to the low-power design challenges of the deep submicron era.
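
    The leakage-temperature dependency that motivates the dynamic-voltage-scaling finding can be illustrated with a toy energy model; the model form and all constants below are assumptions for illustration, not the dissertation's calibrated model.

```python
import math

# Toy energy model (illustrative constants, not the dissertation's):
# dynamic energy scales with V^2, while leakage power grows roughly
# exponentially with temperature T and integrates over run time.

C_EFF = 1e-9      # effective switched capacitance (F), assumed
I_LEAK0 = 0.05    # leakage current at T0 and nominal V (A), assumed
K_T = 0.03        # exponential temperature coefficient (1/K), assumed
T0 = 25.0         # reference temperature (C)

def total_energy(v, temp_c, cycles, f_hz):
    e_dyn = C_EFF * v * v * cycles                 # E = C * V^2 per cycle
    t_exec = cycles / f_hz                         # lower f -> longer run time
    p_leak = v * I_LEAK0 * math.exp(K_T * (temp_c - T0))
    return e_dyn + p_leak * t_exec                 # leakage accrues over time

# At high temperature, the slower, lower-voltage setting can cost MORE energy:
for temp in (25.0, 100.0):
    e_fast = total_energy(1.0, temp, cycles=1e6, f_hz=1e9)
    e_slow = total_energy(0.7, temp, cycles=1e6, f_hz=3e8)
    print(f"T={temp:5.1f}C  fast={e_fast:.3e} J  slow={e_slow:.3e} J")
```

    With these toy numbers the lower-voltage setting wins at 25 C but loses at 100 C, the kind of reversal the dissertation's leakage-temperature analysis targets.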

    Correct-by-construction microarchitectural pipelining

    This paper presents a method for correct-by-construction microarchitectural pipelining that handles cyclic systems with dependencies between iterations. Our method combines previously known bypass and retiming transformations with a few transformations valid only for elastic systems with early evaluation (namely, empty FIFO insertion, FIFO capacity sizing, insertion of anti-tokens, and introduction of early-evaluation multiplexors). By converting the design to a synchronous elastic form and then applying this extended set of transformations, one can pipeline a functional specification with an automatically generated distributed controller that implements stalling logic, resolving data hazards off the critical path of the design. We have developed an interactive toolkit for exploring elastic microarchitectural transformations. The method is illustrated by pipelining a few simple examples of instruction set architecture (ISA) specifications.
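
    The elastic (latency-insensitive) machinery these transformations act on can be sketched at the cycle level; the following two-slot elastic buffer with a valid/stop handshake is an illustrative toy, not part of the authors' toolkit.

```python
# Cycle-level sketch of a two-slot elastic buffer (EB) with a valid/stop
# handshake -- the basic latency-insensitive building block that elastic
# transformations (empty FIFO insertion, capacity sizing) manipulate.
# Names and structure are illustrative.

class ElasticBuffer:
    def __init__(self, capacity=2):
        self.slots = []          # data tokens currently held
        self.capacity = capacity

    def stop_upstream(self):
        # Backpressure: tell the producer to stall when the buffer is full.
        return len(self.slots) == self.capacity

    def cycle(self, in_valid, in_data, out_stop):
        """One clock edge: maybe emit one token, maybe accept one token."""
        out_valid = len(self.slots) > 0
        out_data = self.slots[0] if out_valid else None
        if out_valid and not out_stop:
            self.slots.pop(0)    # consumer accepted the token
        if in_valid and not self.stop_upstream():
            self.slots.append(in_data)
        return out_valid, out_data

# A stalled consumer does not lose tokens; the EB absorbs them.
eb = ElasticBuffer()
print(eb.cycle(True, "t0", out_stop=True))   # (False, None): nothing to emit yet
print(eb.cycle(True, "t1", out_stop=True))   # (True, 't0') but consumer stalls
print(eb.stop_upstream())                    # True: buffer full, stall producer
print(eb.cycle(False, None, out_stop=False)) # (True, 't0'): token finally flows
```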

    Approximate logic circuits: Theory and applications

    CMOS technology scaling, the process of shrinking transistor dimensions based on Moore's law, has been the thrust behind increasingly powerful integrated circuits for over half a century. As dimensions are scaled to a few tens of nanometers, process and environmental variations can significantly alter transistor characteristics, thus degrading reliability and reducing the performance gains of CMOS designs with technology scaling. Although design solutions proposed in recent years to improve the reliability of CMOS designs are power-efficient, the performance penalty associated with these solutions further reduces the performance gains from technology scaling, and hence these solutions are not well suited for high-performance designs. This thesis proposes approximate logic circuits as a new logic synthesis paradigm for reliable, high-performance computing systems. Given a specification, an approximate logic circuit is functionally equivalent to the given specification for a "significant" portion of the input space, but has smaller delay and power as compared to a circuit implementation of the original specification. The contributions of this thesis include (i) a general theory of approximation and efficient algorithms for automated synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions based on approximate circuits to improve the reliability of designs with negligible performance penalty, and (iii) efficient decomposition algorithms based on approximate circuits to improve the performance of designs during logic synthesis. This thesis concludes with other potential applications of approximate circuits and identifies open problems in logic decomposition and approximate circuit synthesis.
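
    The "significant portion of the input space" notion is easy to check exhaustively on small examples; the following toy approximation (a 3-bit adder with a deliberately broken carry chain) is invented for illustration and is not one of the thesis's synthesized circuits.

```python
from itertools import product

# Toy approximate circuit: exact 3-bit adder vs. an approximation that
# drops the carry from bit 1 into bit 2, shortening the carry chain
# (and hence the critical path) at the cost of occasional wrong sums.

def exact_add(a, b):
    return (a + b) & 0b111            # 3-bit sum, wrap-around

def approx_add(a, b):
    lo = (a & 0b11) + (b & 0b11)      # bits 0-1 with their internal carry
    hi = ((a >> 2) + (b >> 2)) & 1    # bit 2 computed without incoming carry
    return (hi << 2) | (lo & 0b11)    # carry into bit 2 is dropped

match = sum(exact_add(a, b) == approx_add(a, b)
            for a, b in product(range(8), repeat=2))
print(f"exact match on {match}/64 input combinations")  # the "significant" portion
```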

    Guarded Evaluation: An Algorithm for Dynamic Power Reduction in FPGAs

    Guarded evaluation is a power reduction technique that involves identifying sub-circuits (within a larger circuit) whose inputs can be held constant (guarded) at specific times during circuit operation, thereby reducing switching activity and lowering dynamic power. The concept is rooted in the property that, under certain conditions, some signals within digital designs are not "observable" at design outputs, making the circuitry that generates such signals a candidate for guarding. Guarded evaluation has been demonstrated successfully for custom ASICs; in this work, we apply the technique to FPGAs. In ASICs, guarded evaluation entails adding additional hardware to the design, increasing silicon area and cost. Here, we apply the technique in a way that imposes minimal area overhead by leveraging existing unused circuitry within the FPGA: LUT functionality is modified to incorporate the guards and reduce toggle rates. The primary challenge in guarded evaluation is determining the specific conditions under which a sub-circuit's inputs can be held constant without impacting the larger circuit's functional correctness. We propose a simple solution to this problem based on discovering gating inputs using "non-inverting paths" and trimming inputs using "partial non-inverting paths" in the circuit's AND-inverter graph representation. Experimental results show that guarded evaluation can reduce switching activity by as much as 32% for FPGAs with 6-LUT architectures and 25% for 4-LUT architectures, on average, and can reduce power consumption in the FPGA interconnect by 29% for 6-LUTs and 27% for 4-LUTs. Clustered architectures with four and with ten LUTs per cluster produced the best power-reduction results. We implement guarded evaluation at various stages of the FPGA CAD flow and analyze the reductions, applying the algorithm as a post-technology-mapping, post-packing, and post-placement optimization. Guarded evaluation as a post-technology-mapping algorithm inserted the most guards and hence achieved the highest activity and interconnect reduction. However, guarding signals comes at the cost of increased fanout and stress on routing resources. Packing and placement provide the algorithm with additional information about the circuit, which is leveraged to insert high-quality guards with minimal impact on routing. Experimental results show that the post-packing and post-placement methods achieve reductions comparable to post-mapping with considerably less impact on the critical-path delay and routability of the circuit.
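
    The non-inverting-path test can be sketched directly on an AND-inverter graph: if a signal reaches a node only through non-complemented AND edges, driving that signal to 0 forces the node to 0 and renders the node's other fanins unobservable there. The data structures and example below are illustrative, not the paper's implementation.

```python
# Sketch of the "non-inverting path" idea on an AND-inverter graph (AIG):
# if signal s reaches node n along AND edges that are never complemented,
# then s == 0 forces n == 0, so n's other fanin cone is unobservable at n
# and is a candidate for guarding (holding its inputs constant).

# Each AIG node: name -> list of (fanin_name, inverted) edges. Inputs have [].
aig = {
    "x": [], "y": [], "z": [],
    "n1": [("x", False), ("y", False)],   # n1 = x AND y
    "n2": [("n1", False), ("z", True)],   # n2 = n1 AND (NOT z)
}

def has_noninverting_path(aig, src, dst):
    """True if src reaches dst using only non-complemented AND edges."""
    if src == dst:
        return True
    return any(not inv and has_noninverting_path(aig, src, fi)
               for fi, inv in aig[dst])

# x has a non-inverting path to n2, so x == 0 forces n2 == 0: while x is 0,
# the logic feeding y and z is unobservable through n2 and can be guarded.
print(has_noninverting_path(aig, "x", "n2"))  # True
print(has_noninverting_path(aig, "z", "n2"))  # False (its edge is inverted)
```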

    Example Based Caricature Synthesis

    The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions that facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions, which characterize the facial features in proportional form. Finally, we introduce a similarity metric as the likeness metric, based on the Modified Hausdorff Distance (MHD), which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
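
    For reference, here is a minimal implementation of the standard Modified Hausdorff Distance over two landmark point sets (the textbook Dubuisson-Jain definition; the paper's exact variant may differ, and the example coordinates are invented).

```python
import numpy as np

# Standard Modified Hausdorff Distance between two point sets: the mean
# nearest-neighbor distance in each direction, symmetrized by max.

def mhd(a, b):
    """a: (n, 2) and b: (m, 2) arrays of landmark coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    d_ab = d.min(axis=1).mean()   # mean over a of distance to nearest point in b
    d_ba = d.min(axis=0).mean()   # mean over b of distance to nearest point in a
    return max(d_ab, d_ba)

face = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
caricature = np.array([[0.1, 0.0], [1.2, 0.1], [0.5, 1.3]])
print(mhd(face, caricature))      # smaller value = greater likeness
```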

    Broadening the Scope of Multi-Objective Optimizations in Physical Synthesis of Integrated Circuits.

    In modern VLSI design, physical synthesis tools are primarily responsible for satisfying chip-performance constraints by invoking a broad range of circuit optimizations, such as buffer insertion, logic restructuring, gate sizing, and relocation. This process is known as timing closure. Our research seeks more powerful and efficient optimizations to improve the state of the art in modern chip design. In particular, we integrate timing-driven relocation, retiming, logic cloning, buffer insertion, and gate sizing in novel ways to create powerful circuit transformations that help satisfy setup-time constraints. State-of-the-art physical synthesis optimizations are typically applied at two scales: (i) global algorithms that affect the entire netlist and (ii) local transformations that focus on a handful of gates or interconnections. The scale of modern chip designs dictates that only near-linear-time optimization algorithms can be applied at the global scope, typically limited to wirelength-driven placement and legalization. Localized transformations can rely on more time-consuming optimizations with accurate delay models. Few techniques bridge the gap between fully-global and localized optimizations. This dissertation broadens the scope of physical synthesis optimization to include accurate transformations operating between the global and local scales. In particular, we integrate groups of related transformations to break circular dependencies and increase the number of circuit elements that can be jointly optimized to escape local minima. The integrated transformations in this dissertation are developed by identifying and removing obstacles to successful optimization. Integration is achieved by mapping multiple operations to rigorous mathematical optimization problems that can be solved simultaneously. We achieve computational scalability in our techniques by leveraging analytical delay models and focusing optimization efforts on carefully selected regions of the chip. In this regard, we make extensive use of a linear interconnect-delay model that accounts for the impact of subsequent repeater insertion. Our integrated transformations are evaluated on high-performance circuits with over 100,000 gates. The integrated optimization techniques described in this dissertation ensure a graceful timing-closure process and impact nearly every aspect of a typical physical synthesis flow. They have been validated in EDA tools used at IBM for physical synthesis of high-performance CPU and ASIC designs, where they significantly improved chip performance. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78744/1/iamyou_1.pd
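
    The role of a linear interconnect-delay model can be illustrated with the classical contrast between unrepeated and optimally repeated wires: the Elmore delay of a bare RC line grows quadratically with length, while optimal repeater insertion makes delay linear in length. All constants below are invented for illustration, not the dissertation's calibrated model.

```python
import math

# Why a linear wire-delay model is usable in physical synthesis: an
# unbuffered RC line has Elmore delay ~ L^2, but with optimally spaced
# repeaters the delay becomes linear in L. Illustrative constants only.

R_W, C_W = 0.1, 0.2e-15   # wire resistance (ohm/um) and capacitance (F/um), assumed
R_B, C_B = 1000.0, 1e-15  # repeater output resistance (ohm) and input cap (F), assumed

def unbuffered_delay(length_um):
    # Distributed RC line, Elmore approximation: 0.38 * r * c * L^2
    return 0.38 * R_W * C_W * length_um ** 2

def buffered_delay(length_um):
    # Classical optimal-repeater result: delay proportional to
    # L * sqrt(R_B * C_B * r * c); the prefactor 2 here is illustrative.
    return 2.0 * math.sqrt(R_B * C_B * R_W * C_W) * length_um

for length in (100, 1000, 10000):   # lengths in um
    print(length, unbuffered_delay(length), buffered_delay(length))
```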

    Design and Optimization for Resilient Energy Efficient Computing

    Modern electronic systems are now an integral part of our everyday lives. This has been enabled, among other factors, by the exponential growth in the integration density of integrated circuits together with the improvement in energy efficiency over the last 50 years, known as Moore's law. In this context, the demand for energy-efficient digital circuits has risen enormously, especially in application fields such as the Internet of Things (IoT). Since the power consumption of a circuit is strongly tied to its supply voltage, efficient techniques have been developed that scale the supply voltage into the near-threshold region, collectively referred to as near-threshold computing (NTC). These techniques can improve the energy efficiency of circuits by a full order of magnitude. Alongside the improved energy balance, however, numerous circuit-design challenges arise. For example, reducing the supply voltage into the near-threshold region increases the sensitivity of circuits to process variation, voltage fluctuations, and temperature changes roughly tenfold. The effects of these variations reduce the reliability of NTC circuits and are the greatest obstacle to their widespread adoption. Traditional approaches and methods from the nominal-voltage domain for compensating variability cannot be applied efficiently, since the strong performance variations and sensitivities in the near-threshold region exceed their capabilities. For this reason, new design paradigms and design-automation concepts are required for NTC. The goal of this work is to address these problems by providing holistic methods for the design of NTC circuits and their design automation, applied in particular at the circuit and logic levels. In-depth analyses of the reliability of NTC systems are included, and optimization methods are proposed that improve reliability, performance, and energy efficiency. The contributions of this work are as follows:
    Variation-aware circuit synthesis and timing closure: Meeting timing and reliability requirements under NTC is a demanding task. The effects of variability appear as strong performance fluctuations, which lead to expensive timing guard-bands, or as hold-time violations caused by functional disturbances. Conventional approaches are limited to increasing timing guard-bands, which is very inefficient for NTC because of the large extent of variations and the increased leakage currents. This work presents a concept for variation-aware synthesis and timing closure that reduces sensitivity to variations, improves energy efficiency, performance, and reliability, and at the same time lowers the overhead of timing closure [1, 2]. Simulation results show that the proposed approach reduces delay by 87% and improves performance and energy efficiency by 25% and 7.4%, respectively, at the cost of a 4.8% increase in area.
    Cross-layer reliability, energy-efficiency, and performance optimization of data paths: Cross-layer analysis of processor data paths, spanning all the way from the compiler down to circuit design, can reveal potential optimizations. A data path is a combination of several functional units that can process diverse instructions. Our analysis shows that instruction execution times vary strongly at low supply voltages, so that instructions can be classified as fast or slow. Furthermore, instructions can be categorized as frequently or rarely used. This work presents a multi-cycle instruction method that can increase the energy efficiency and resilience of functional units [3]. In addition, we present a partitioning algorithm that enables fine-grained power gating of rarely used units [4] by partitioning individual functional units into several smaller ones. The proposed methods significantly improve circuit timing while considerably limiting leakage currents, using a combination of circuit-redesign and code-replacement techniques. Simulation results show that the developed methods improve the performance and energy efficiency of arithmetic logic units (ALUs) by 19% and 43%, respectively. Furthermore, the performance gain of the optimized circuits can be traded for improved reliability [5, 6].
    Post-fabrication and run-time tuning: Process and run-time variations have a strong influence on the minimum energy point (MEP) of NTC circuits, i.e., the most energy-efficient supply voltage. It is of particular interest to calibrate an NTC circuit after fabrication so that it operates at the MEP and achieves the best possible energy efficiency. This work proposes post-fabrication and run-time tuning techniques that calibrate the circuit to the MEP based on post-fabrication speed and power measurements. The presented techniques determine the MEP on a per-chip basis, to capture the influence of process variations, and dynamically adapt the supply voltage and frequency to address time-dependent variations such as workload and temperature. To this end, a regression model that extracts the MEP at run time from workload and temperature measurements is integrated into the chip's firmware. The regression model is unique to each chip and is based solely on post-fabrication measurements. Simulation results show that the developed approach achieves very high prediction accuracy and energy efficiency, similar to hardware-implemented methods but without the hardware overhead [7, 8].
    Selective flip-flop optimization: Ultra-low-voltage circuits must also operate in the nominal supply-voltage mode to meet the timing requirements of running applications. In that mode, the circuit is subject to strong aging processes that degrade the transistors by increasing their threshold voltages. Our in-depth analyses have shown that certain flip-flop architectures are affected by this aging when they store constant values ('0' or '1') for long periods. Compared to other components, flip-flops are more sensitive to aging and, among other failure modes, fail to capture a new value within the required timing window. Moreover, even a minor voltage drop can lead to such timing violations if the aged flip-flops in question lie on the critical path. This work presents a selective flip-flop optimization method that hardens circuits against static aging and voltage drop. First, optimized robust flip-flops are generated and integrated into the standard-cell libraries; flip-flops that lie on the critical path and experience aging and voltage drop are then replaced by the optimized robust versions to improve the timing behavior and reliability of the circuit [9, 10]. Simulation results show that the expected lifetime of a processor can be improved by 37% while increasing leakage currents by only 0.1%.
    While NTC has the potential to deliver great energy efficiency, its deployment in new application fields such as the IoT has so far been impeded by the problems described above: high sensitivity to variations and, consequently, insufficient reliability. In this dissertation and in not-yet-published work [11–17], we present solutions to these problems that enable the integration of NTC into today's systems.
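
    The firmware-resident MEP regression described above can be pictured as a small per-chip least-squares fit from run-time observables to the measured minimum-energy supply voltage; the features, data, and values below are invented for illustration, not the dissertation's model.

```python
import numpy as np

# Sketch of the per-chip MEP regression idea: fit a least-squares model
# mapping run-time observables (workload activity, temperature) to the
# minimum-energy-point supply voltage measured after fabrication.

# Post-fabrication characterization: (activity factor, temperature C) -> MEP voltage.
obs = np.array([
    [0.2, 25.0], [0.8, 25.0], [0.2, 80.0], [0.8, 80.0], [0.5, 50.0],
])
v_mep = np.array([0.46, 0.52, 0.50, 0.57, 0.51])   # measured per operating point

X = np.column_stack([np.ones(len(obs)), obs])       # model: v = b0 + b1*act + b2*T
coef, *_ = np.linalg.lstsq(X, v_mep, rcond=None)

def predict_mep(activity, temp_c):
    return coef @ np.array([1.0, activity, temp_c])

# At run time the firmware re-evaluates the model as conditions change:
print(f"predicted MEP supply: {predict_mep(0.6, 65.0):.3f} V")
```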

    Design and Optimization Methods for Pin-Limited and Cyberphysical Digital Microfluidic Biochips

    Microfluidic biochips have now come of age, with applications to biomolecular recognition for high-throughput DNA sequencing, immunoassays, and point-of-care clinical diagnostics. In particular, digital microfluidic biochips, which use electrowetting-on-dielectric to manipulate discrete droplets (or "packets of biochemical payload") of picoliter volumes under clock control, are especially promising. Potential applications of biochips include real-time analysis of biochemical reagents, clinical diagnostics, flash chemistry, and on-chip DNA sequencing. The ease of reconfigurability and software-based control in digital microfluidics has motivated research on various aspects of automated chip design and optimization.
    This thesis research is focused on facilitating advances in on-chip bioassays, enhancing the automated use of digital microfluidic biochips, and developing an "intelligent" microfluidic system capable of on-line re-synthesis while a bioassay is being executed. The thesis introduces the concept of a "cyberphysical microfluidic biochip" based on the digital microfluidics hardware platform and on-chip sensing techniques. In such a biochip, the control software, on-chip sensing, and the microfluidic operations are tightly coupled. The status of the droplets is dynamically monitored by on-chip sensors; if an error is detected, the control software performs a dynamic re-synthesis procedure and error recovery.
    In order to minimize the size and cost of the system, a hardware-assisted error-recovery method, which relies on an error dictionary for rapid error recovery, is also presented. The error-recovery procedure is controlled by a finite-state machine (FSM) implemented on a field-programmable gate array (FPGA) instead of software running on a separate computer. Each state of the FSM represents a possible error that may occur on the biochip; for each of these errors, the corresponding sequence of error-recovery signals is stored in the memory of the FPGA before the bioassay is conducted. When an error occurs, the FSM transitions from one state to another and the corresponding control signals are updated. Therefore, a portable cyberphysical system can be implemented using an inexpensive FPGA.
    In addition to errors in fluid-handling operations, bioassay outcomes can also be erroneous due to the uncertainty in the completion time of fluidic operations. Because of the inherent randomness of biochemical reactions, the time required to complete each step of the bioassay is a random variable. To address this issue, a new "operation-interdependence-aware" synthesis algorithm is proposed in this thesis. The start and stop times of each operation are dynamically determined based on feedback from the on-chip sensors. Unlike previous synthesis algorithms that execute bioassays with pre-determined start and end times for each operation, the proposed method facilitates "self-adaptive" bioassays on cyberphysical microfluidic biochips.
    Another design problem addressed in this thesis is the development of a layout-design algorithm that can minimize the interference between devices on a biochip. A probabilistic model for the polymerase chain reaction (PCR) has been developed; based on the model, the control software can make on-line decisions regarding the number of thermal cycles that must be performed during PCR. PCR can therefore be controlled more precisely using cyberphysical integration.
    To reduce the fabrication cost of biochips, yet maintain application flexibility, the concept of a "general-purpose pin-limited biochip" is proposed. Using a graph model for pin assignment, we develop the theoretical basis and a heuristic algorithm to generate optimized pin-assignment configurations. The associated scheduling algorithm for on-chip biochemistry synthesis has also been developed. Based on this theoretical framework, a complete design flow for pin-limited cyberphysical microfluidic biochips is presented.
    In summary, this thesis research has led to an algorithmic infrastructure and optimization tools for cyberphysical system design and technology demonstrations. The results of this thesis research are expected to enable the hardware/software co-design of a new class of digital microfluidic biochips with tight coupling between microfluidics, sensors, and control software.
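
    The FSM-plus-error-dictionary scheme described above can be sketched behaviorally; the error names and recovery sequences below are invented for illustration, and the dictionary stands in for the FPGA block memory.

```python
# Behavioral sketch of hardware-assisted error recovery: a finite-state
# machine whose states correspond to possible droplet errors, with the
# recovery actuation sequence for each error stored in a dictionary
# (standing in for FPGA memory). Error names and sequences are invented.

ERROR_DICTIONARY = {
    "SPLIT_UNBALANCED": ["merge_droplets", "re_split", "verify_volume"],
    "DROPLET_STUCK":    ["pulse_electrode", "re_route", "verify_position"],
    "DILUTION_ERROR":   ["discard_droplet", "redo_dilution", "verify_volume"],
}

class RecoveryFSM:
    def __init__(self):
        self.state = "NORMAL"
        self.pending = []                 # recovery signals still to issue

    def on_sensor(self, error):
        """Sensor feedback triggers a transition into an error state."""
        if error in ERROR_DICTIONARY:
            self.state = error
            self.pending = list(ERROR_DICTIONARY[error])

    def next_control_signal(self):
        """Issue one recovery signal per tick; return to NORMAL when done."""
        if self.pending:
            return self.pending.pop(0)
        self.state = "NORMAL"
        return "resume_assay"

fsm = RecoveryFSM()
fsm.on_sensor("DROPLET_STUCK")
while fsm.state != "NORMAL":
    print(fsm.next_control_signal())  # pulse_electrode, re_route, verify_position, resume_assay
```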