    Architectural level delay and leakage power modelling of manufacturing process variation

    PhD Thesis. The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain further importance as device scaling continues to satisfy marketplace demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present most investigations into statistical models have been limited to small circuits such as a single logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error prone. This thesis proposes a methodology to bring the effects of process variation from transistor level up to architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only basic gate cell models but also more complex functional blocks such as registers, FIFOs, counters and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases and transition types, are also taken into account for each cell in the library, which makes it adaptable to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5% error and with computation at least 50,000 times faster than a 5000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outwith specification, design modifications can readily be incorporated to rectify the situation. Second, knowing the acceptable limits of process variation that maintain design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
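    As a rough illustration of the first-order canonical delay modelling the abstract refers to, the sketch below shows how cell delays expressed in canonical form can be summed along a path to obtain a delay PDF. It is an illustrative reconstruction, not the thesis's cell library: the cell names, sensitivity values and Gaussian assumption are invented for the example.

```python
# Minimal sketch of a first-order canonical delay model,
#     D = d0 + sum_i a_i * X_i + a_r * R,
# with shared variation sources X_i ~ N(0, 1) and an independent random term R.

import numpy as np

class CanonicalDelay:
    def __init__(self, nominal, sensitivities, random_sens):
        self.d0 = float(nominal)            # nominal delay (e.g. ps)
        self.a = np.asarray(sensitivities)  # sensitivities to shared process sources
        self.ar = float(random_sens)        # independent (purely random) sensitivity

    def __add__(self, other):
        # Delays in series: nominals and shared sensitivities add exactly,
        # independent terms add in quadrature.
        return CanonicalDelay(self.d0 + other.d0,
                              self.a + other.a,
                              np.hypot(self.ar, other.ar))

    @property
    def sigma(self):
        # Standard deviation of the resulting (Gaussian) delay PDF.
        return float(np.sqrt(np.sum(self.a ** 2) + self.ar ** 2))

# Two hypothetical cells from a statistical cell library (values are made up).
inv = CanonicalDelay(12.0, [0.8, 0.3], 0.4)   # inverter
reg = CanonicalDelay(35.0, [1.5, 0.9], 1.1)   # register

path = inv + reg
print(f"path delay ~ N({path.d0:.1f}, {path.sigma:.2f}^2)")
```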

    Approximate Computing Strategies for Low-Overhead Fault Tolerance in Safety-Critical Applications

    This work studies the reliability of embedded systems with approximate computing in software and hardware designs. It presents approximate computing methods and proposes approximate fault tolerance techniques, applied to programmable hardware and embedded software, that provide reliability at low computational cost. The objective of this thesis is the development of fault tolerance techniques based on approximate computing, and to show that approximate computing can be applied to most safety-critical systems. It starts with an experimental analysis of the reliability of embedded systems used in safety-critical projects. Results show that the reliability of single-core systems, and the types of errors they are sensitive to, differ from those of multicore processing systems. The use of an operating system and two different parallel programming APIs are also evaluated. Fault injection experiments show that embedded Linux has a critical impact on the system’s reliability and on the types of errors to which it is most sensitive. Traditional fault tolerance techniques and parallel variants of them are evaluated for their fault-masking capability on multicore systems. The work shows that parallel fault tolerance can indeed improve not only execution time but also fault masking. Lastly, an approximate parallel fault tolerance technique is proposed, in which the system abandons faulty execution tasks. This first approximate computing approach to fault tolerance in parallel processing systems improved the reliability and the fault-masking capability of the techniques, significantly reducing errors that would cause system crashes. Inspired by the conflict between the improvements provided by approximate computing and the requirements of safety-critical systems, this work presents an analysis of the applicability of approximate computing techniques to critical systems. The proposed techniques are tested under simulation, emulation, and laser fault injection experiments. Results show that approximate computing algorithms have a particular behavior, different from that of traditional algorithms. The approximation techniques presented and proposed in this work are also used to develop fault tolerance techniques. Results show that these new approximate fault tolerance techniques are less costly than traditional ones and able to achieve almost the same level of error masking.
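    The sketch below illustrates the general idea of abandoning faulty redundant executions rather than recomputing them. It is a minimal illustration, not the thesis's technique: the task, the tolerance value and the median-based acceptance check are assumptions made for the example.

```python
# Minimal sketch of approximate fault tolerance for redundant parallel tasks:
# copies of a task run in parallel, and results that deviate from the median
# beyond a tolerance are treated as faulty and abandoned, not recomputed.

import statistics
from concurrent.futures import ThreadPoolExecutor

TOLERANCE = 1e-3  # acceptable relative deviation for "approximately equal" results

def approx_fault_tolerant(task, n_copies=3):
    with ThreadPoolExecutor(max_workers=n_copies) as pool:
        futures = [pool.submit(task) for _ in range(n_copies)]
        results = [f.result() for f in futures]
    reference = statistics.median(results)
    # Abandon results that disagree with the median beyond the tolerance.
    accepted = [r for r in results
                if abs(r - reference) <= TOLERANCE * max(abs(reference), 1.0)]
    if not accepted:
        # Fail loudly instead of propagating a silently corrupted value.
        raise RuntimeError("all redundant copies disagreed")
    return sum(accepted) / len(accepted)

# Example with a stand-in task; in a real deployment each redundant copy
# would run on separate cores or hardware.
if __name__ == "__main__":
    print(approx_fault_tolerant(lambda: sum(1.0 / k for k in range(1, 10_000))))
```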

    The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

    Regulating Ex Post: How Law Can Address the Inevitability of Financial Failure

    Unlike many other areas of regulation, financial regulation operates in the context of a complex interdependent system. The interconnections among firms, markets, and legal rules have implications for financial regulatory policy, especially the choice between ex ante regulation aimed at preventing financial failure and ex post regulation aimed at responding to that failure. Regulatory theory has paid relatively little attention to this distinction. Were regulation to consist solely of duty-imposing norms, such neglect might be defensible. In the context of a system, however, regulation can also take the form of interventions aimed at mitigating the potentially systemic consequences of a financial failure. We show that this dual role of financial regulation implies that ex ante regulation and ex post regulation should be balanced in setting financial regulatory policy, and we offer guidelines for achieving that balance.

    Circuit Synthesis of Electrochemical Supercapacitor Models

    This paper is concerned with the synthesis of RC electrical circuits from physics-based supercapacitor models describing conservation and diffusion relationships. The proposed synthesis procedure uses model discretisation, linearisation, balanced model order reduction and passive network synthesis to form the circuits. Circuits with different topologies are synthesised from several physical models. This work gives greater insight into the physical interpretation of electrical circuits and will enable the development of more generalised circuits, since the synthesised impedance functions are generated by considering the physics rather than by experimental fitting, which may ignore certain dynamics.
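    The sketch below illustrates the balanced model order reduction step on a toy example: a spatially discretised diffusion chain, which is an RC-ladder analogue of the charge-diffusion dynamics in a supercapacitor electrode. It is an illustrative reconstruction under stated assumptions, not the paper's procedure; the chain model, its size and the retained order are invented for the example.

```python
# Minimal sketch of balanced truncation of a linear state-space model (A, B, C).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def gramian_factor(X):
    # Symmetric factor L with X ~= L @ L.T; eigenvalue clipping guards against
    # tiny negative eigenvalues introduced by numerical error.
    w, V = np.linalg.eigh((X + X.T) / 2.0)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

def balanced_truncation(A, B, C, order):
    # Controllability and observability Gramians: A P + P A^T + B B^T = 0, etc.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lp, Lq = gramian_factor(P), gramian_factor(Q)
    U, s, Vt = svd(Lq.T @ Lp)              # s = Hankel singular values
    S = np.diag(s[:order] ** -0.5)
    T = Lp @ Vt.T[:, :order] @ S           # reduction basis
    Ti = S @ U[:, :order].T @ Lq.T         # left inverse of T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Example: a 1-D diffusion equation discretised into n cells (an RC-ladder
# analogue), driven and observed at one end.
n = 50
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = np.zeros((1, n)); C[0, 0] = 1.0
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, order=4)
print("retained Hankel singular values:", np.round(hsv[:4], 4))
```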

    Summary of photovoltaic system performance models

    A detailed overview of photovoltaic (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models is selected which spans a representative range of capabilities, from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.
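    As an illustration of the "generalized first-order calculations" end of that range, the sketch below estimates module DC power from irradiance and ambient temperature. It is not one of the ten surveyed models; the NOCT cell-temperature approximation, linear temperature coefficient and all parameter values are assumptions made for the example.

```python
# Minimal first-order PV performance calculation.

def pv_dc_power(g_poa, t_amb, p_stc=250.0, gamma=-0.004, noct=45.0):
    """DC power of one module.

    g_poa : plane-of-array irradiance in W/m^2
    t_amb : ambient temperature in degC
    p_stc : rated power at Standard Test Conditions (1000 W/m^2, 25 degC)
    gamma : power temperature coefficient per degC
    noct  : nominal operating cell temperature in degC
    """
    t_cell = t_amb + (noct - 20.0) / 800.0 * g_poa        # NOCT approximation
    return p_stc * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

print(f"{pv_dc_power(800.0, 30.0):.1f} W")  # ~176 W for one 250 W module under these conditions
```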

    Product assurance technology for custom LSI/VLSI electronics

    The technology for obtaining custom integrated circuits from CMOS-bulk silicon foundries using a universal set of layout rules is presented. The technical efforts were guided by the requirement to develop a 3 micron CMOS test chip for the Combined Release and Radiation Effects Satellite (CRRES). This chip contains both analog and digital circuits. The development employed all the elements required to obtain custom circuits from silicon foundries, including circuit design, foundry interfacing, circuit test, and circuit qualification.

    Inflation Targeting Macroeconomic Distortions and the Policy Reaction Function

    The paper examines the evolution of monetary policy design in Australia over the past quarter of a century, culminating in the adoption of an inflation targeting approach through the institutional mechanism of CBI (Central Bank Independence). Cross-country empirics have repeatedly confirmed the stylized fact that high CBI delivers low inflation. This study covers new ground by using time-series techniques to test the nexus between CBI and inflation with Australian quarterly data for the sample period 1973Q3-1998Q4. The theoretical analysis, based on a quadratic social loss function subject to a Lucas supply curve, demonstrates that an exclusive focus on the institutional mechanism of CBI to reduce inflation bias may be flawed because it ignores the spillover effects of macroeconomic distortions on inflation. Time-series composite indices were constructed to proxy CBI and macroeconomic distortions in the labour market, the tax system and the arena of international competition. The general-to-specific methodology was applied to sequentially derive a parsimonious VECM (Vector Error Correction Model) linking CBI and macroeconomic distortions to inflation during the study period. Granger causality tests indicated that both CBI and macroeconomic distortions Granger-caused inflation. The VECM empirics revealed that CBI and neocorporatism contributed significantly to the reduction of inflation during the study period. The fact that neocorporatism curbed inflation raises the possibility that the industrial relations reform agenda aimed at eroding neocorporatism is politically motivated and lacks an economic rationale. However, when the link between inflation and neocorporatism was reanalyzed using the VAR methodology, taking feedback effects into account, a different picture emerged: the impulse response functions revealed that an increase in neocorporatism exacerbated inflation in the short run. The VAR empirics therefore provided a rationale for the labour market reforms aimed at rectifying the labour market distortion attributed to neocorporatism. Both the VECM and VAR empirics make a strong case for tax reform to reduce welfare payments without compromising on safety net and equity issues. They also make a case for reducing the volatility of the real exchange rate to sharpen Australia's competitive edge. The significance of macroeconomic distortions in causing output to deviate from potential underscores that the policy reaction function is influenced by distortions. Non-nested tests revealed that a Taylor rule taking account of deviations of output from potential due to macro distortions was superior to an inflation-rate-only rule. The study therefore recommends that the policymaker (the Reserve Bank of Australia) pursue a Taylor rule rather than an inflation-rate-only rule in smoothing the overnight cash rate to achieve the pre-announced inflation target.
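    The sketch below contrasts the two reaction functions compared in the study: a Taylor rule and an inflation-rate-only rule. The functional form and the 0.5 coefficients are the textbook Taylor specification, not estimates from the thesis, and the numerical inputs are invented for the example.

```python
# Minimal comparison of a Taylor rule with an inflation-rate-only rule.

def taylor_rule(inflation, output_gap, target=2.5, neutral_real_rate=3.0):
    # i_t = r* + pi_t + 0.5*(pi_t - pi*) + 0.5*(output gap)
    return neutral_real_rate + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

def inflation_only_rule(inflation, target=2.5, neutral_real_rate=3.0):
    # The same rule with the output-gap term dropped.
    return neutral_real_rate + inflation + 0.5 * (inflation - target)

# With inflation at 4% and output 1.5% below potential (a macro distortion),
# the Taylor rule prescribes a lower cash rate than the inflation-only rule.
print(taylor_rule(4.0, -1.5), inflation_only_rule(4.0))   # 7.0 vs 7.75
```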