27 research outputs found

    Load-independent characterization of trade-off fronts for operational amplifiers

    In emerging design methodologies for analog integrated circuits, the use of performance trade-off fronts, also known as Pareto fronts, is a keystone to overcome the limitations of traditional top-down methodologies. However, most techniques reported so far to generate the front neglect the effect of the surrounding circuitry (such as the output load impedance) on the Pareto front, thereby making it valid only for the context in which the front was generated. This strongly limits its use in hierarchical analog synthesis, because key performances depend heavily on the surrounding circuitry and, more importantly, because this circuitry remains unknown until the synthesis process is carried out. We address this problem by proposing a new technique to generate trade-off fronts that are independent of the load that the circuit has to drive. The idea is exploited for a commonly used circuit, the operational amplifier, and experimental results show that this is a promising approach to solving the issue.
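
    The core operation behind any trade-off front generation is extracting the non-dominated (Pareto-optimal) subset of sampled performance points. The following is a minimal illustrative sketch of that filtering step; the performance metrics and sample values are hypothetical and not taken from the paper.

```python
# Minimal sketch of Pareto-front (non-dominated set) extraction for op-amp
# performance samples. The metrics and sample values are illustrative only.

def pareto_front(points):
    """Return the non-dominated points, assuming every metric is to be maximized."""
    front = []
    for p in points:
        # p is dominated if some other point q is >= p in every metric and differs from p
        dominated = any(
            all(qi >= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (dc_gain_dB, unity_gain_MHz, -power_mW) samples; power is negated
# so that "larger is better" holds for every coordinate.
samples = [(72.0, 15.0, -1.2), (68.0, 22.0, -1.0), (75.0, 9.0, -1.5), (70.0, 14.0, -1.3)]
print(pareto_front(samples))
```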

    Multi-objective optimization of RF circuit blocks via surrogate models and NBI and SPEA2 methods

    Multi-objective optimization techniques can be categorized globally into deterministic and evolutionary methods. Examples of such methods are the Normal Boundary Intersection (NBI) method and the Strength Pareto Evolutionary Algorithm (SPEA2), respectively. With both methods one explores trade-offs between conflicting performances. Surrogate models can replace expensive circuit simulations, thus enabling faster computation of circuit performances. As surrogate models of behavioral parameters and performance outcomes, we consider look-up tables with interpolation and neural network models.
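
    As an illustration of the look-up-table surrogate idea mentioned above, the sketch below interpolates tabulated performance values instead of invoking a circuit simulator. The design variables, table contents and query points are hypothetical; scipy's RegularGridInterpolator is used only as a generic interpolation routine, not as the authors' actual modeling setup.

```python
# Sketch of a look-up-table surrogate with interpolation standing in for a
# circuit simulator. Grid axes, the "simulated" responses and the query points
# are hypothetical; scipy is used only for the interpolation step.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical design-variable grid (e.g. bias current in mA, transistor width in um)
ibias = np.linspace(0.1, 1.0, 10)
width = np.linspace(1.0, 20.0, 20)

# Pretend these table entries came from expensive circuit simulations
II, WW = np.meshgrid(ibias, width, indexing="ij")
gain_table = 20.0 * np.log10(1.0 + 50.0 * II * WW)   # placeholder response
power_table = 1.8 * II                                # placeholder response

gain_lut = RegularGridInterpolator((ibias, width), gain_table)
power_lut = RegularGridInterpolator((ibias, width), power_table)

# Fast surrogate evaluation of candidate design points inside an optimizer loop
candidates = np.array([[0.25, 5.0], [0.8, 12.5]])
print(gain_lut(candidates), power_lut(candidates))
```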

    A Computational Foundation for Multi-Contact Whole-Body Control of Humanoid Robots with Motion Planning in the Feedback Loop

    Degree type: Course doctorate (課程博士). Examination committee: (Chair) Professor 中村 仁彦, The University of Tokyo; Professor 下山 勲, The University of Tokyo; Professor 稲葉 雅幸, The University of Tokyo; Professor 國吉 康夫, The University of Tokyo; Associate Professor 高野 渉, The University of Tokyo; Senior Researcher Jean-Paul Laumond, LAAS-CNRS. The University of Tokyo (東京大学).

    Hierarchical Memory Size Estimation for Loop Transformation and Data Memory Platform Optimization

    In today's embedded systems, the memory hierarchy is rapidly becoming a major bottleneck in terms of power, performance and area, due to the very large amount of (memory-related) data that needs to be transferred and stored (temporarily). This is especially the case for portable multimedia application systems. These applications are characterized by deep loop nests and multi-dimensional arrays at the high level. Due to the dramatically increasing size and complexity of system-on-a-chip (SoC) designs and stringent time-to-market requirements, the methodology and tools for chip design must be raised to the system level. Early analysis tools are particularly critical in enabling SoC designers to take full advantage of the many architectural options available. For memory optimization, early high-level techniques aim either to design an optimal memory platform for a given application, to optimize the application code in order to take advantage of the memory platform features, or both. Loop transformation is one such important high-level optimization technique. It modifies the execution order of loops and statements without changing the application functionality. Existing loop transformation algorithms rely either on reduction of data access lifetime or on improvement in data locality and regularity to steer the selection of loop transformations. These are, however, very abstract cost functions which do not represent the exact memory size requirement of the arrays or how the data will be mapped onto the memory platform later on. Existing algorithms all result in one final loop transformation solution. As different loop transformations may result in optimal utilization for different memory platform instances, ad-hoc decisions at this stage, taken without estimating their impact on the actual hierarchy utilization, can lead to a sub-optimal final solution. An estimation of the impact on later design stages is hence required. On the other hand, there usually exists a huge number of loop transformation possibilities, so the estimation has to be performed repeatedly, and the computation time of the estimation technique becomes critical to make it useful during the loop transformation search space exploration. This dissertation proposes a memory footprint estimation methodology. An intra-array memory footprint estimation is performed first, followed by an inter-array estimation. In order to obtain an estimate fast enough to be used repeatedly during the early high-level search space exploration, several techniques have been introduced. A fast intra-array memory footprint estimation is performed in the iteration domain based on the maximal lifetime of data accesses, which is defined by the maximal dependency vector. Two approaches, an ILP formulation and a vertices approach, have been introduced to achieve a fast maximal dependency vector calculation. The fast inter-array estimation has been achieved with several Hanoi-tower-based approaches. A hierarchical memory size estimation methodology is also proposed in this dissertation. It estimates the influence of any given sequence of loop transformation instances on the mapping of application data onto a hierarchical memory platform. As the exact memory platform instantiation is often not yet defined at this high-level design stage, a platform-independent estimation is introduced, with a Pareto curve as output for each loop transformation instance.
    This Pareto curve can steer the designer or an automatic steering tool to select all the interesting loop transformation instances that might later lead to low-power data mapping for any of the many possible memory hierarchy instances. This is useful both when the memory platform is not yet defined and for a given memory hierarchy instance. It also allows finding the most appropriate low-power memory hierarchy instance by performing an early power estimation of different memory hierarchy instances. Initially the source code is used as input for the estimation, resulting in an initial approach. However, performing the estimation repeatedly from the source code is too slow for the large loop transformation search space exploration. An incremental approach, based on local updating of the previous result, is therefore introduced to handle sequences of different loop transformations. Several advanced techniques are used in these two approaches in order to obtain a fast estimation, such as bounding-box geometrical-model-based data reuse analysis, platform-independent memory hierarchy layer assignment estimation, and fast intra- and inter-array memory footprint estimation. The feasibility and usefulness of the methodologies are substantiated using several representative real-life application demonstrators. The results show, for instance, that the fast memory footprint estimation can be two orders of magnitude faster than compared techniques while still achieving fairly accurate estimation results. For the hierarchical memory size estimation methodology, the initial approach is two orders of magnitude faster than the compared technique, and the incremental approach is another two orders of magnitude faster than the initial approach, taking just a few milliseconds. The fast computation time of the incremental approach makes it feasible to use it repeatedly during the loop transformation exploration over a very large number of possibilities. Furthermore, prototype CAD tools have been developed that include most parts of the methodologies.
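
    A minimal sketch of the intra-array footprint idea described above: if one array element is produced per iteration and is last read a "maximal dependency vector" later, the number of simultaneously live elements is roughly the linearized distance spanned by that vector in a rectangular iteration domain. The loop bounds and dependency vectors are illustrative assumptions, not the dissertation's benchmarks.

```python
# Hedged sketch of intra-array memory footprint estimation from the maximal
# dependency vector, assuming a rectangular loop nest executed in lexicographic
# order and one array element produced per iteration.

def linearized_distance(dep_vector, loop_bounds):
    """Number of iteration points separating a write from its last read."""
    distance = 0
    for level, d in enumerate(dep_vector):
        inner_iterations = 1
        for bound in loop_bounds[level + 1:]:
            inner_iterations *= bound
        distance += d * inner_iterations
    return distance

def intra_array_footprint(dep_vectors, loop_bounds):
    """Footprint estimate driven by the maximal dependency vector."""
    max_dist = max(linearized_distance(d, loop_bounds) for d in dep_vectors)
    return max_dist + 1   # +1 for the element produced in the current iteration

# Hypothetical 3-deep loop nest with bounds 64 x 32 x 16 and two dependencies
print(intra_array_footprint([(0, 1, 0), (0, 0, 3)], (64, 32, 16)))
```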

    Lossy Polynomial Datapath Synthesis

    The design of the compute elements of hardware, its datapath, plays a crucial role in determining the speed, area and power consumption of a device. The building blocks of a datapath are polynomial in nature. Research into the implementation of adders and multipliers has a long history, and developments in this area will continue. Despite such efficient building-block implementations, correctly determining the necessary precision of each building block within a design is a challenge. Typically, standard or uniform precisions are chosen, such as the IEEE floating-point precisions. The hardware quality of the datapath is inextricably linked to the precisions of which it is composed. There is, however, another essential element that determines hardware quality, namely the accuracy of the components. If one were to implement each of the official IEEE rounding modes, significant differences in hardware quality would be found. But in the same fashion that standard precisions may be chosen unnecessarily, components are typically constructed to return one of these correctly rounded results even where such accuracy is far from necessary. Unfortunately, if a lesser accuracy is permissible, the existing techniques that reduce hardware implementation cost by exploiting such freedom invariably produce an error whose properties are extremely difficult to determine. This thesis addresses the problem of how to construct hardware that efficiently implements fixed- and floating-point polynomials while exploiting a global error freedom; this is a form of lossy synthesis. The fixed-point contributions include resource minimisation when implementing mutually exclusive polynomials, the construction of minimal lossy components with a guaranteed worst-case error, and a technique for the efficient composition of such components. Contributions are also made to how a floating-point polynomial can be implemented with a guaranteed relative error.
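
    To illustrate the notion of a lossy component with a guaranteed worst-case error, the sketch below evaluates a fixed-point polynomial in Horner form with multipliers that discard K low-order bits, and computes an accumulated error bound alongside. The polynomial, word lengths and the value of K are illustrative assumptions, not the constructions developed in the thesis.

```python
# Sketch of lossy fixed-point Horner evaluation with a worst-case error bound.
# Values are plain integers standing in for fixed-point words.

K = 4                       # LSBs discarded by each lossy multiplier (assumed)
E = (1 << K) - 1            # worst-case loss introduced by one lossy multiply

def lossy_mul(a, b):
    """Exact product with its K least-significant bits zeroed (floor truncation)."""
    return ((a * b) >> K) << K

def horner_lossy(coeffs, x):
    """Evaluate c[n]*x^n + ... + c[0] with lossy multipliers."""
    acc = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        acc = lossy_mul(acc, x) + c
    return acc

def worst_case_error_bound(degree, x_max):
    """Accumulated error of Horner with `degree` lossy multiplies:
    |err| <= E * (x_max**(degree-1) + ... + x_max + 1)."""
    return E * sum(x_max ** i for i in range(degree))

coeffs = [7, -3, 5]          # 5*x^2 - 3*x + 7, illustrative coefficients
x = 19
exact = 5 * x * x - 3 * x + 7
print(exact, horner_lossy(coeffs, x), worst_case_error_bound(2, abs(x)))
```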

    Decision procedures for linear arithmetic

    In this thesis, we present new decision procedures for linear arithmetic in the context of SMT solvers and theorem provers: 1) CutSat++, a calculus for linear integer arithmetic that combines techniques from SAT solving and quantifier elimination in order to be sound, terminating, and complete. 2) The largest cube test and the unit cube test, two sound (although incomplete) tests that find integer and mixed solutions in polynomial time. The tests are especially efficient on absolutely unbounded constraint systems, which are difficult to handle for many other decision procedures. 3) Techniques for the investigation of equalities implied by a constraint system, together with several applications of these techniques. 4) The Double-Bounded reduction and the Mixed-Echelon-Hermite transformation, two transformations that reduce any constraint system in polynomial time to an equisatisfiable constraint system that is bounded. The transformations are beneficial because they turn branch-and-bound into a complete and efficient decision procedure for unbounded constraint systems. We have implemented the above decision procedures (except for CutSat++) as part of our linear arithmetic theory solver SPASS-IQ and as part of our CDCL(LA) solver SPASS-SATT. We also present various benchmark evaluations that confirm the practical efficiency of our new decision procedures.
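
    The unit cube test mentioned above admits a compact illustration: tighten each inequality of A x <= b by half the 1-norm of its row; if the tightened system has any rational solution, rounding that solution component-wise yields an integer solution of the original system. The sketch below follows that published formulation on a small illustrative constraint system, using scipy's linprog merely as a generic LP feasibility checker rather than the SPASS-IQ implementation.

```python
# Hedged sketch of the unit cube test for A x <= b over the integers.
import numpy as np
from scipy.optimize import linprog

def unit_cube_test(A, b):
    """Return an integer solution of A x <= b if the unit cube test succeeds,
    otherwise None (the test is sound but incomplete)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    b_tight = b - 0.5 * np.abs(A).sum(axis=1)        # b_i - ||A_i||_1 / 2
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b_tight,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    if not res.success:
        return None
    x_int = np.rint(res.x).astype(int)               # round the cube's center
    assert np.all(A @ x_int <= b + 1e-9)             # sanity check on the claim
    return x_int

# Illustrative absolutely unbounded system: x - y <= 1.5 and y - x <= 1.5
print(unit_cube_test([[1, -1], [-1, 1]], [1.5, 1.5]))
```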

    Design of analog and mixed-signal circuits with physical design and variability considerations

    Advances in microelectronic technology have been based on an increasing capacity to integrate transistors, moving the industry into the nanoelectronics realm in recent years. Moore's Law [1] has predicted (and to some extent governed) the growth of the capacity to integrate transistors in a single IC. Nevertheless, while this capacity has grown steadily, the increasing number and complexity of the design tasks involved in creating an integrated circuit have led to a phenomenon known as the "design gap": the difference between what can theoretically be integrated and what can practically be designed. Since the early 2000s, the International Technology Roadmap for Semiconductors (ITRS) reports, published by the Semiconductor Industry Association (SIA), have warned of the need to limit the growth of design cost by increasing designer productivity in order to sustain the semiconductor industry's growth. Design automation arises as a key element to close this "design gap". In this sense, electronic design automation (EDA) tools for digital circuits have reached a level of maturity that the EDA tools for analog circuit design still lag far behind. While digital circuits rely, in general, on two stable operation states (which brings inherent robustness against numerous imperfections and interferences, leading to few design constraints, such as area, speed or power consumption), analog signal processing demands compliance with many constraints (e.g., matching, noise, robustness). The triumph of digital CMOS circuits, thanks to this robustness, has ultimately shaped how circuits can be processed by algorithms, abstraction levels and description languages, as well as how design information traverses the hierarchical levels of a digital system. The field of analog design automation faces many more difficulties due to the many sources of perturbation, such as the well-known process variability, and the difficulty of treating them as systematically as digital tools can. In this Thesis, different design flows are proposed, focusing on new design methodologies for analog circuits and thus trying to close the "gap" between digital and analog EDA tools. In this chapter, the most important sources of perturbation and their impact on the analog design process are discussed in Section 1.2. The traditional analog design flow is discussed in Section 1.3. Emerging design methodologies that try to reduce the "design gap" are presented in Section 1.4, where the key concept of the Pareto-Optimal Front (POF) is explained. This concept, borrowed from the field of economics, models the analog circuit performances as a set of solutions that show the optimal trade-offs among conflicting circuit performances (e.g., DC gain and unity-gain frequency). Finally, the goals of this thesis are presented in Section 1.5.
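
    As a small illustration of how process variability enters the analog design process, the sketch below runs a Monte Carlo analysis of two of the performances cited above (DC gain and unity-gain frequency) for a hypothetical single-stage amplifier model; the device parameters, sigma values and first-order formulas are textbook-style assumptions, not results from the thesis.

```python
# Monte Carlo sketch of performance spread under process variability.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical nominal values with Gaussian process variation
gm = rng.normal(1.0e-3, 0.05e-3, N)      # transconductance [S], 5% sigma
r_out = rng.normal(200e3, 20e3, N)       # output resistance [ohm], 10% sigma
c_load = 2e-12                           # fixed load capacitance [F]

dc_gain_db = 20 * np.log10(gm * r_out)   # first-order single-stage DC gain
ugf_hz = gm / (2 * np.pi * c_load)       # first-order unity-gain frequency

print(f"DC gain: {dc_gain_db.mean():.1f} dB +/- {dc_gain_db.std():.2f} dB")
print(f"UGF    : {ugf_hz.mean()/1e6:.1f} MHz +/- {ugf_hz.std()/1e6:.2f} MHz")
```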