
    Ring oscillator clocks and margins

    How much margin must be added to the delay lines of a bundled-data circuit? This paper attempts to give a methodical answer to this question, taking into account all sources of variability and the existing EDA machinery for timing analysis and sign-off. The paper is based on a study of the margins of a ring oscillator that substitutes for a PLL as the clock generator. A timing model is proposed which shows that a 12% margin for delay lines can be sufficient to cover variability in a 65nm technology. In a typical scenario, performance and energy improvements between 15% and 35% can be obtained by using a ring oscillator instead of a PLL. The paper concludes that a synchronous circuit with a ring oscillator clock shows performance and energy benefits similar to those of bundled-data asynchronous circuits.
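    For illustration only, here is a minimal Python sketch, not taken from the paper, of the arithmetic behind the claim: a ring oscillator on the same die tracks global PVT variation, so its delay line needs only a small local margin, whereas a fixed PLL clock must be signed off with the full worst-case guardband. All numeric values are invented assumptions.

```python
# A minimal sketch (not from the paper) of how a delay-line margin
# translates into the clock period of a ring-oscillator-driven
# bundled-data stage. All numbers below are illustrative assumptions.

def margined_period(logic_delay_ns: float, margin: float) -> float:
    """Clock period = matched delay line padded by a safety margin."""
    return logic_delay_ns * (1.0 + margin)

# Worst-case sign-off with a PLL must also absorb global PVT spread,
# which a ring oscillator tracks for free since it sits on the same die.
logic_delay = 1.00       # ns, nominal critical-path delay (assumed)
pvt_guardband = 0.30     # extra 30% global PVT margin for a fixed PLL clock (assumed)

t_ring = margined_period(logic_delay, margin=0.12)          # 1.12 ns
t_pll = margined_period(logic_delay, margin=pvt_guardband)  # 1.30 ns

speedup = t_pll / t_ring - 1.0
print(f"ring-oscillator period: {t_ring:.2f} ns")
print(f"PLL worst-case period:  {t_pll:.2f} ns")
print(f"performance gain:       {speedup:.1%}")  # ~16%, inside the paper's 15-35% range
```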

    A Control-based Methodology for Power-performance Optimization in NoCs Exploiting DVFS

    Networks-on-Chip (NoCs) are considered a viable solution to fully exploit the computational power of multi- and many-cores, but their non-negligible power consumption requires ad hoc power-performance design methodologies. In this perspective, several proposals have exploited the possibility of dynamically tuning voltage and frequency for the interconnect, taking their cue from traditional CPU-based power management solutions. However, the impact of the actuators, i.e., the limited range of frequencies of a PLL (Phase-Locked Loop) or the time required to raise voltage and frequency in a Dynamic Voltage and Frequency Scaling (DVFS) module, is often not carefully accounted for, thus overestimating the benefits. This paper presents a control-based methodology for NoC power-performance optimization exploiting Dynamic Frequency Scaling (DFS). Both timing and power overheads of the actuators are considered, thanks to an ad hoc simulation framework. Moreover, the proposed methodology allows for user and/or OS interaction to change between different high-level power-performance modes, i.e., to trigger performance-oriented or power-saving system behaviors. Experimental validation considered a 16-core architecture, comparing our proposal with different settings of threshold-based policies. We achieved a timing speedup of up to 3x and a reduction of up to 33.17% in the power × time product against the best threshold-based policy. Moreover, our best control-based scheme provides average power-performance product improvements of 16.50% and 34.79% over the best and second-best threshold-based policy settings.
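    To make the actuator-overhead argument concrete, the following is a minimal Python sketch, not the authors' controller, of a threshold-based DFS policy of the kind used as a baseline here, charged explicitly for each frequency transition. The frequency levels, thresholds and switch latency are invented assumptions.

```python
# A minimal sketch (not the paper's methodology) of why actuator
# overheads matter when evaluating a threshold-based DFS policy for a
# NoC router. Names and numbers are illustrative assumptions.

FREQ_LEVELS_GHZ = [0.5, 1.0, 1.5, 2.0]   # discrete frequency settings (assumed)
SWITCH_LATENCY_CYCLES = 500              # cycles stalled per frequency change (assumed)

def threshold_policy(buffer_occupancy: float, level: int) -> int:
    """Raise frequency when the input buffer fills up, lower it when idle."""
    if buffer_occupancy > 0.75 and level < len(FREQ_LEVELS_GHZ) - 1:
        return level + 1
    if buffer_occupancy < 0.25 and level > 0:
        return level - 1
    return level

def simulate(occupancy_trace, level=0):
    """Count how many cycles are lost to frequency transitions alone."""
    stall_cycles = 0
    for occ in occupancy_trace:
        new_level = threshold_policy(occ, level)
        if new_level != level:
            stall_cycles += SWITCH_LATENCY_CYCLES  # overhead a naive model ignores
            level = new_level
    return stall_cycles

# An oscillating load makes the policy thrash and pay the overhead often.
trace = [0.9, 0.1] * 50
print(f"cycles lost to actuator overhead: {simulate(trace)}")
```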

    Towards the Development of High-Fidelity Models for Large Scale Solar Energy Generating Systems

    Small and large scale solar photovoltaic energy generating systems have come to take a leading place in power systems around the world that aim to move away from fossil fuels. The technical and other challenges associated with such systems have become focus areas of discussion and investigation in recent years. Among the range of technical challenges, power quality issues associated with the power electronic converters, especially harmonics, are an important aspect, and their stipulated limits must be maintained. While harmonics caused by small-scale inverters, for example those used in rooftop systems, are managed through harmonic current emission compliance requirements, the harmonics caused by large scale inverters used in solar farms need to be managed at the network level, which is essentially the responsibility of the network owners and operators. To succeed in this management process, the relevant generator connection requirements and system standards, the data provided by inverter manufacturers, and the pre-connection and post-connection studies and procedures all require attention. With regard to the limits on harmonic voltage levels at medium, high and extra high voltage (MV, HV and EHV) levels, well-established international standards exist, whereas the pre-connection study procedures that have existed for many years are now being challenged by the growth in the number and capacity of inverter-based resources (IBRs). For pre-connection harmonic compliance studies of power electronic based grid-integrated resources or devices, the most well-known approach is the use of equivalent frequency-domain models of the systems on either side of the point of connection or grid interface. The grid is often represented by an equivalent harmonic impedance together with a corresponding background harmonic voltage. The power electronic based resources or devices are represented by Thevenin or Norton models at the harmonic frequencies of interest, which are provided by their vendors, although the approaches or conditions under which these models are determined are not comprehensively known. It is, however, understood that the parameters of such equivalent circuits are mostly determined from site tests and represent worst-case harmonic performance, which does not necessarily correspond to rated power output. There is also an anecdotal understanding that such models are determined through mathematical or simulation modelling. The most significant concern with such frequency-domain models is their suitability for representing the actual harmonic behaviour at a given point in time, thus posing the question of their fidelity, which forms the backbone of the work presented in this thesis.
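    As an illustration of the frequency-domain study type the thesis questions, here is a minimal Python sketch, not taken from the thesis, that solves for the harmonic voltage at the point of connection from a Norton model of the inverter and a Thevenin equivalent of the grid. All component values are invented for illustration.

```python
# A minimal sketch of a frequency-domain harmonic compliance study:
# the inverter as a Norton equivalent (harmonic current source I_h in
# parallel with Z_inv) against a grid Thevenin equivalent (background
# harmonic voltage V_bg behind Z_grid). All values are illustrative.

def pcc_harmonic_voltage(i_h: complex, z_inv: complex,
                         v_bg: complex, z_grid: complex) -> complex:
    """Nodal solution at the point of connection for one harmonic order."""
    return (i_h + v_bg / z_grid) / (1.0 / z_inv + 1.0 / z_grid)

# 5th-harmonic example values (assumed, not from the thesis)
i_h = 12.0 + 0j        # A, inverter harmonic emission at h = 5
z_inv = 80.0 + 150.0j  # ohm, inverter equivalent impedance at h = 5
v_bg = 400.0 + 0j      # V, background 5th-harmonic voltage
z_grid = 1.0 + 7.5j    # ohm, grid harmonic impedance at h = 5

v_pcc = pcc_harmonic_voltage(i_h, z_inv, v_bg, z_grid)
print(f"|V_pcc(h=5)| = {abs(v_pcc):.1f} V")
```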

    Seamless Transition of a Microgrid Between Grid-Connected and Islanded Mode

    This thesis focuses on improving the behavior of inverters during transitions from islanded mode to grid-connected (GC) mode and vice versa. A systematic approach is presented for adding smart features to inverters to enhance their capability to cope with sudden changes in the power system. The importance of microgrids lies in their ability to provide a stable and reliable source of power for critical loads in the presence of faults. For this purpose, a design is proposed consisting of a distributed energy resource (DER), a battery energy storage system (BESS) and a load connected through a bypass switch to the main utility distribution substation. The BESS is connected to the AC distribution feeder through a smart inverter that is controlled in both modes of operation. The system was tested using MATLAB/Simulink models, and the results demonstrate a seamless transition between the two modes of operation. The cost of building the software system was negligible owing to the availability of a MATLAB license; the hardware needed to build the system would be moderate in cost, though its value would be significant.
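    The control hand-off at the heart of such a transition can be sketched as a small state machine. The following Python sketch is not the thesis design (which is in MATLAB/Simulink); the voltage thresholds and mode names are invented assumptions meant only to show the grid-following/grid-forming switch at the bypass switch.

```python
# A minimal state-machine sketch (not the thesis design) of the control
# hand-off a smart inverter performs at the bypass switch: grid-following
# (current control) when connected, grid-forming (voltage/frequency
# control) when islanded. Thresholds are illustrative assumptions.

from enum import Enum

class Mode(Enum):
    GRID_CONNECTED = "grid-following (current control)"
    ISLANDED = "grid-forming (V/f control)"

def next_mode(mode: Mode, grid_voltage_pu: float, switch_closed: bool) -> Mode:
    """Island when the grid collapses; reconnect only after the bypass
    switch has re-closed following resynchronization."""
    if mode is Mode.GRID_CONNECTED and grid_voltage_pu < 0.88:
        return Mode.ISLANDED          # grid fault: form the local voltage
    if mode is Mode.ISLANDED and switch_closed and grid_voltage_pu > 0.95:
        return Mode.GRID_CONNECTED    # grid healthy and switch closed again
    return mode

mode = Mode.GRID_CONNECTED
for v, sw in [(1.0, True), (0.3, False), (0.3, False), (1.0, False), (1.0, True)]:
    mode = next_mode(mode, v, sw)
    print(f"V={v:.2f} pu, switch={'closed' if sw else 'open'} -> {mode.value}")
```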

    Comparative Analysis and Enhancement of Simplified Drilling Process Simulation Models and Exploring Machine Learning for Real-Time Optimization

    Real-time optimization of drilling processes is vital for the efficient and safe operation of the oil and gas industry. For this, fast and robust models are required to enable automated safety strategies. Many existing models are computationally intensive, while certain automation tasks require execution speeds orders of magnitude faster than real time. This thesis aims to understand what makes models computationally intensive, to compare solutions and propose alternatives, and to examine accuracy where simplifications are made. The research framework takes as its starting point two models developed in MATLAB by Alf Kristian Gjerstad and Kjell Kåre Fjelde. The primary tasks include analyzing the differences between the models, mainly in the calculation of frictional pressure loss, evaluating the reasons for these differences, modifying the models to suit the needs of this thesis, and adding calculation options to the main model by Alf Kristian Gjerstad. This thesis presents a thorough investigation of the discrepancies between the two models, along with implementations of and modifications to the main model. A machine-learning-based approach is proposed as an alternative to the more computationally intensive versions using the Herschel-Bulkley and Bingham Plastic models, to maintain real-time applicability while aiming to preserve accuracy. The results demonstrate the potential of the proposed alternative.
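    As a hint at why the Herschel-Bulkley rheology is the computationally heavy case, the following is a minimal Python sketch, not the thesis code (which is in MATLAB): the model is implicit in the shear rate, so a pressure-loss calculation typically hides an iterative inner loop, exactly the kind of repeated work a machine-learning surrogate could replace. All parameter values are invented assumptions.

```python
# A minimal sketch (not the thesis code) of why the Herschel-Bulkley
# model is costlier than simpler rheologies: inverting tau(gamma_dot)
# for the wall shear rate has no closed form and is solved iteratively.
# Parameter values are illustrative assumptions.

def hb_shear_stress(gamma_dot, tau_y=5.0, K=0.8, n=0.7):
    """Herschel-Bulkley rheology: tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot ** n

def wall_shear_rate(target_tau_w, tol=1e-9):
    """Invert the rheology for the wall shear rate by bisection --
    the kind of inner loop an ML surrogate could replace."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if hb_shear_stress(mid) < target_tau_w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Frictional pressure gradient from the wall shear stress: dP/dL = 4*tau_w/D
tau_w = 25.0   # Pa, assumed wall shear stress
D = 0.1        # m, assumed pipe inner diameter
print(f"wall shear rate: {wall_shear_rate(tau_w):.1f} 1/s")
print(f"pressure gradient: {4 * tau_w / D:.0f} Pa/m")
```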

    Timing speculation and adaptive reliable overclocking techniques for aggressive computer systems

    Computers have changed our lives beyond our own imagination in the past several decades. Continued and progressive advancements in VLSI technology and numerous micro-architectural innovations have played a key role in the design of spectacular low-cost, high-performance computing systems that have become omnipresent in today's technology-driven world. Performance and dependability have become key concerns as these ubiquitous computing machines continue to drive our everyday life. Every application has unique demands, as applications run in diverse operating environments. Dependable, aggressive and adaptive systems improve efficiency in terms of speed, reliability and energy consumption. Traditional computing systems run at a fixed clock frequency, which is determined by taking into account worst-case timing paths, operating conditions, and process variations. Timing-speculation-based reliable overclocking advocates going beyond worst-case limits to achieve the best performance while not avoiding, but detecting and correcting, a modest number of timing errors. The success of this design methodology relies on the fact that timing-critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case design methodology. Better-than-worst-case design is advocated by several recent research pursuits, which exploit dependability techniques to enhance computer system performance. In this dissertation, we address different aspects of timing-speculation-based adaptive reliable overclocking schemes and evaluate their role in the design of low-cost, high-performance, energy-efficient and dependable systems. We identify various control knobs in the design that can be favorably adjusted to meet different design targets. As part of this research, we extend the SPRIT3E (Superscalar PeRformance Improvement Through Tolerating Timing Errors) framework and characterize the extent of application-dependent performance acceleration achievable in superscalar processors by scrutinizing the various parameters that affect operation beyond worst-case limits. We study the limitations imposed by short-path constraints on our technique, and present ways to exploit them to maximize performance gains. We analyze the sensitivity of our technique's adaptiveness by exploring the hardware requirements for dynamic overclocking schemes. Experimental analysis based on SPEC2000 benchmarks running on a SimpleScalar Alpha processor simulator, augmented with error rate data obtained from hardware simulations of a superscalar processor, is presented. Even though reliable overclocking guarantees functional correctness, it leads to higher power consumption. As a consequence, reliable overclocking that does not consider on-chip temperatures will reduce the lifetime reliability of the chip. In this thesis, we analyze how reliable overclocking impacts the on-chip temperature of a microprocessor and evaluate the effects of overheating, due to such reliable dynamic frequency tuning mechanisms, on the lifetime reliability of these systems. We then evaluate the effect of thermal throttling, a technique that clamps the on-chip temperature below a predefined value, on system performance and reliability. Our study shows that a reliably overclocked system with dynamic thermal management achieves a 25% performance improvement while lasting 14 years when operated below 353 K.
Over the past five decades, technology scaling, as predicted by Moore's law, has been the bedrock of semiconductor technology evolution. The continued downscaling of CMOS technology to deep sub-micron gate lengths has been the primary reason for its dominance in today's omnipresent silicon microchips. Even as the transition to the next technology node is indispensable, the initial cost and time required to do so present a non-level playing field for competitors in the semiconductor business. As part of this thesis, we evaluate the capability of speculative reliable overclocking mechanisms to maximize performance at a given technology level. We evaluate its competitiveness compared with technology scaling in terms of performance, power consumption, energy and energy-delay product. We present a comprehensive comparison for integer and floating-point SPEC2000 benchmarks running on a simulated Alpha processor at three different technology nodes in normal and enhanced modes. Our results suggest that adopting reliable overclocking strategies can help skip a technology node altogether, or remain competitive in the market while porting to the next technology node. Reliability has become a serious concern as systems embrace nanometer technologies. In this dissertation, we propose a novel fault-tolerant aggressive system that combines soft-error protection and timing-error tolerance. We replicate both the pipeline registers and the pipeline-stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed Conjoined Pipeline system supports overclocking, provides concurrent error detection and recovery capability for soft errors, intermittent faults and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back-annotated post-layout gate-level timing simulations in a 45nm technology, of a conjoined two-stage arithmetic pipeline and a conjoined five-stage DLX pipeline processor with forwarding logic, show that our approach, even under a severe fault-injection campaign, achieves near-100% fault coverage and an average performance improvement of about 20% when dynamically overclocked.
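    The adaptive control loop described in the first part of the abstract can be sketched compactly. The following Python sketch is not the SPRIT3E framework itself; the error-rate target, thermal limit and step size are invented assumptions that merely illustrate the interplay of timing speculation and thermal throttling.

```python
# A minimal sketch (not the SPRIT3E framework) of the feedback loop
# behind timing-speculation-based reliable overclocking: push frequency
# up while the observed timing-error rate and temperature stay below
# their targets, back off otherwise. All constants are assumptions.

def tune_frequency(freq_ghz: float, error_rate: float, temp_k: float,
                   error_target: float = 0.01, temp_limit_k: float = 353.0,
                   step_ghz: float = 0.05) -> float:
    """One control step of adaptive overclocking with thermal throttling."""
    if temp_k >= temp_limit_k:
        return freq_ghz - step_ghz    # thermal throttle dominates
    if error_rate > error_target:
        return freq_ghz - step_ghz    # too many error recoveries: back off
    return freq_ghz + step_ghz        # headroom left: speculate harder

freq = 2.0
# (error_rate, temperature) samples per control interval -- illustrative
for err, temp in [(0.001, 330.0), (0.002, 340.0), (0.02, 345.0), (0.001, 355.0)]:
    freq = tune_frequency(freq, err, temp)
    print(f"err={err:.3f}, T={temp:.0f} K -> f={freq:.2f} GHz")
```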

    Communication system for a tooth-mounted RF sensor used for continuous monitoring of nutrient intake

    In this thesis, the communication system of a wearable device that monitors the user's diet is studied. Based on a novel RF metamaterial-based mouth sensor, several decisions have to be made concerning the system's technologies, such as the power source options for the device, the wireless technology used for communications and the method of obtaining data from the sensor. These issues, along with the relevant safety rules and regulations, are reviewed as the first stage of development of the Food-Intake Monitoring project.

    Modeling DVFS and Power-Gating Actuators for Cycle-Accurate NoC-Based Simulators

    Networks-on-chip (NoCs) are a widely recognized, viable interconnection paradigm to support the multi-core revolution. One of the major design issues of multicore architectures is still power, which can no longer be attributed mainly to the cores, since the NoC contribution to the overall energy budget is significant. To address both static and dynamic power while balancing NoC performance, different actuators have been exploited in the literature, mainly dynamic voltage and frequency scaling (DVFS) and power gating. Typically, simulation-based tools are employed to explore the huge design space, adopting simplified models of the components. As a consequence, most state-of-the-art work on NoC power-performance optimization does not accurately consider the timing and power overheads of the actuators, or (even worse) does not consider them at all, at the risk of overestimating the benefits of the proposed methodologies. This article presents a simulation framework for power-performance analysis of multicore architectures with a specific focus on the NoC. It integrates accurate power gating and DVFS models that also encompass their timing and power overheads. The added value of our proposal is manifold: (i) DVFS and power-gating actuators are modeled starting from SPICE-level simulations; (ii) these models are integrated into the simulation environment; (iii) policy analysis support is plugged into the framework to enable assessment of different policies; (iv) flexible GALS (globally asynchronous, locally synchronous) support is provided, covering both handshake and FIFO re-synchronization schemes. To demonstrate both the flexibility and extensibility of our proposal, two simple policies exploiting the modeled actuators are discussed in the article.
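    To show what "charging an actuator for its overheads" means in a cycle-accurate simulator, here is a minimal Python sketch, not the article's SPICE-derived models: each DVFS transition costs dead cycles and extra energy, which the surrounding simulation must account for. The class name, latency and energy figures are invented assumptions.

```python
# A minimal sketch (not the article's models) of a DVFS actuator whose
# transitions cost both time and energy, as a cycle-accurate simulator
# should account for. All figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DVFSActuator:
    freq_ghz: float
    volt: float
    lock_time_cycles: int = 1000   # PLL relock / voltage ramp penalty (assumed)
    switch_energy_nj: float = 5.0  # energy spent per transition (assumed)
    dead_cycles: int = 0
    energy_nj: float = 0.0

    def set_point(self, freq_ghz: float, volt: float) -> None:
        """Apply a new V/f pair, charging timing and energy overheads."""
        if (freq_ghz, volt) != (self.freq_ghz, self.volt):
            self.dead_cycles += self.lock_time_cycles
            self.energy_nj += self.switch_energy_nj
            self.freq_ghz, self.volt = freq_ghz, volt

router = DVFSActuator(freq_ghz=1.0, volt=0.9)
router.set_point(2.0, 1.1)   # scale up under load
router.set_point(0.5, 0.7)   # scale down when idle
print(f"dead cycles: {router.dead_cycles}, overhead energy: {router.energy_nj} nJ")
```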

    MATLAB

    A well-known statement says that the PID controller is the "bread and butter" of the control engineer. This is indeed true from a scientific standpoint. However, nowadays, in the era of computer science, when paper and pencil have been replaced by the keyboard and display of computers, one may equally say that MATLAB is the "bread" in the above statement. MATLAB has become a de facto tool for the modern system engineer. This book is written both for engineering students and for practicing engineers. The wide range of applications in which MATLAB is the working framework shows that it is a powerful, comprehensive and easy-to-use environment for performing technical computations. The book includes various excellent applications in which MATLAB is employed: from pure algebraic computations to data acquisition in real-life experiments, from control strategies to image processing algorithms, from graphical user interface design for educational purposes to Simulink embedded systems.