Analysis and design of power delivery networks exploiting simulation tools and numerical optimization techniques
Higher computing-system performance is demanded year after year, driving the digital industry to compete fiercely to offer the fastest computer systems at the lowest cost. As performance grows, power delivery network (PDN) and power integrity (PI) design becomes increasingly relevant, because of the faster speeds and greater parallelism required to sustain that growth. Maximum data throughput at minimum power consumption is a common goal for most commercial computing systems. As a consequence of these performance and power delivery trade-offs, analyzing and designing PDNs in digital systems is becoming more complex, which lengthens design cycles when traditional tools are used. More efficient design methods are therefore needed to keep bringing products to market quickly, pushing PDN designers to look for methodologies that simplify analysis and reduce design cycle times. The main objective of this Master’s thesis is to propose alternative methods that exploit reliable simulation approaches and efficient numerical optimization techniques to analyze and design PDNs that ensure power integrity. The thesis explores the use of circuit models and electromagnetic (EM) field solvers in combination with numerical optimization methods, including parameter extraction (PE) formulations. It also establishes a sound basis for using space mapping (SM) methodologies in future developments, exploiting the accuracy of the most powerful models, such as 3D full-wave EM simulators, while preserving the simplicity and low computational cost of analytical, circuit, and empirical models.
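The thesis's own formulations are not reproduced here, but the general idea of pairing a cheap circuit model with a numerical search can be sketched. The example below is purely illustrative: it uses a hypothetical single-branch decap model with invented R, L, C values and an invented target impedance (none of these numbers come from the thesis) to find the smallest count of identical decoupling capacitors that keeps the peak PDN impedance under a target across a frequency band.

```python
import math

def branch_impedance(f, r, l, c):
    """Series R-L-C impedance of one decoupling-capacitor branch (ohms)."""
    w = 2 * math.pi * f
    return complex(r, w * l - 1 / (w * c))

def pdn_impedance(f, n_decaps, r=0.01, l=1e-9, c=100e-9):
    """|Z| of n identical decap branches in parallel at frequency f (Hz)."""
    return abs(branch_impedance(f, r, l, c)) / n_decaps

def min_decaps_for_target(z_target, freqs, max_n=200):
    """Smallest decap count whose peak |Z| over the band stays under z_target."""
    for n in range(1, max_n + 1):
        if max(pdn_impedance(f, n) for f in freqs) <= z_target:
            return n
    return None  # target unreachable with identical decaps alone

freqs = [10 ** (6 + 0.02 * k) for k in range(101)]  # 1 MHz .. 100 MHz
print(min_decaps_for_target(0.05, freqs))  # → 32 for a 50 mΩ target
```

A real flow would replace the brute-force count sweep with the PE or SM formulations the thesis describes, and the closed-form branch model with EM-solver data; the structure of the loop, a cheap model evaluated inside an optimizer, is the point of the sketch.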
Variability-aware low-power techniques for nanoscale mixed-signal circuits.
New circuit design techniques that accommodate the lower supply voltages necessary for portable systems need to be integrated into semiconductor intellectual property (IP) cores. Systems that once worked at 3.3 V or 2.5 V now need to work at 1.8 V or lower without performance degradation. In addition, the fluctuation of device characteristics caused by process variation in nanometer technologies manifests as design-yield loss. The numerous parasitic effects induced by layout, especially in high-performance and high-speed circuits, pose a further problem for IC design: the lack of exact layout information during circuit sizing leads to long design iterations involving time-consuming runs of complex tools. There is thus a strong need for low-power, high-performance, parasitic-aware, and process-variation-tolerant circuit design. This dissertation proposes methodologies and techniques to achieve variability-, power-, performance-, and parasitic-aware circuit designs. Three approaches are proposed: a single-iteration automatic approach, a hybrid Monte Carlo and design-of-experiments (DOE) approach, and a corner-based approach. Widely used mixed-signal circuits such as the analog-to-digital converter (ADC), voltage-controlled oscillator (VCO), voltage level converter, and active pixel sensor (APS) have been designed in nanoscale complementary metal-oxide-semiconductor (CMOS) technology and subjected to the proposed methodologies. Their effectiveness has been demonstrated through exhaustive simulations. Beyond these methodologies, the application of dual-oxide and dual-threshold techniques at the circuit level to minimize power and leakage is also explored.
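None of the dissertation's circuits or models appear here, but the Monte Carlo half of the hybrid Monte Carlo/DOE approach has a simple generic shape worth showing: sample process parameters from their distributions, evaluate a performance model, and count the fraction of samples meeting spec. The delay model, distributions, and spec below are all invented for the illustration.

```python
import random

def gate_delay(vth, cload, k=2e-4, vdd=1.8):
    """Toy CMOS gate-delay model (seconds); illustrative, not from the work."""
    return cload * vdd / (k * (vdd - vth) ** 2)

def monte_carlo_yield(n_trials, delay_spec, seed=1):
    """Fraction of process-variation samples whose delay meets the spec."""
    rng = random.Random(seed)  # seeded for reproducible runs
    passed = 0
    for _ in range(n_trials):
        vth = rng.gauss(0.45, 0.03)       # threshold-voltage variation (V)
        cload = rng.gauss(10e-15, 1e-15)  # load-capacitance variation (F)
        if gate_delay(vth, cload) <= delay_spec:
            passed += 1
    return passed / n_trials

print(monte_carlo_yield(10_000, delay_spec=52e-12))
```

The DOE and corner-based approaches the dissertation proposes trade this statistical sampling for structured experiment plans and worst-case corners, which need far fewer simulations per design iteration.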
RTL2RTL Formal Equivalence: Boosting the Design Confidence
Increasing design complexity driven by feature and performance requirements, together with time-to-market (TTM) constraints, forces faster design and validation closure. This in turn demands novel ways of identifying and debugging behavioral inconsistencies early in the design cycle. Adding incremental features and timing fixes may alter legacy design behavior and inadvertently introduce bugs. The most common method of verifying the correctness of a changed design is to run a dynamic regression test suite before and after the intended changes and compare the results, a method which is not exhaustive. Modern Formal Verification (FV) techniques involving new methods of proving Sequential Hardware Equivalence enable a new set of solutions to this problem, with a complete coverage guarantee. Formal Equivalence can be applied to prove functional integrity after design changes resulting from a wide variety of reasons, ranging from simple pipeline optimizations to complex logic redistributions. We present here our experience of successfully applying RTL to RTL (RTL2RTL) Formal Verification across a wide spectrum of problems on a Graphics design. RTL2RTL FV enabled checking the design sanity in a very short time, enabling faster and safer design churn. The techniques presented in this paper are applicable to any complex hardware design.

Comment: In Proceedings FSFMA 2014, arXiv:1407.195
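The paper's industrial tooling is not shown, but the core idea behind sequential equivalence checking, explore the product of two state machines and verify that no reachable state pair can disagree on an output, fits in a short sketch. The two toy "designs" below are invented for the example: both track input parity, but design B stores the complement of the state, mimicking an RTL recoding that a regression diff could not exhaustively validate.

```python
from collections import deque

# Two toy sequential designs tracking the parity of a 1-bit input stream.
def step_a(state, bit):
    nxt = state ^ bit
    return nxt, nxt            # (next state, output = parity)

def step_b(state, bit):        # stores the complement of the parity
    nxt = state ^ bit
    return nxt, nxt ^ 1        # output decodes the complemented state

def sequentially_equivalent(step1, step2, init1, init2, inputs=(0, 1)):
    """BFS over the product machine: the designs are equivalent iff no
    reachable state pair produces differing outputs for any input."""
    seen = {(init1, init2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        for bit in inputs:
            n1, o1 = step1(s1, bit)
            n2, o2 = step2(s2, bit)
            if o1 != o2:
                return False   # counterexample state pair reached
            if (n1, n2) not in seen:
                seen.add((n1, n2))
                queue.append((n1, n2))
    return True

print(sequentially_equivalent(step_a, step_b, 0, 1))  # → True
print(sequentially_equivalent(step_a, step_b, 0, 0))  # → False (wrong reset)
```

Production equivalence checkers operate symbolically (BDDs, SAT) rather than enumerating states, but the complete-coverage guarantee the abstract mentions comes from this same reachability argument over the product machine.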
The Risk of Cancer from CT Scans and Other Sources of Low-Dose Radiation: A Critical Appraisal of Methodologic Quality
Introduction: Concern exists that radiation exposure from computerized tomography (CT) will cause thousands of malignancies. Other experts share the same perspective regarding the risk from additional sources of low-dose ionizing radiation, such as the releases from the Three Mile Island (1979; Pennsylvania USA) and Fukushima (2011; Okuma, Fukushima Prefecture, Japan) nuclear power plant disasters. If this premise is false, the fear of cancer leading patients and physicians to avoid CT scans and disaster responders to initiate forced evacuations is unfounded.

Study Objective: This investigation provides a quantitative evaluation of the methodologic quality of studies to determine the evidentiary strength supporting or refuting a causal relationship between low-dose radiation and cancer. It assesses the number of higher-quality studies that support or question the role of low-dose radiation in oncogenesis.

Methods: This investigation is a systematic, methodologic review of articles published from 1975–2017 examining cancer risk from external low-dose x-ray and gamma radiation, defined as less than 200 millisievert (mSv). Following the PRISMA guidelines, the authors performed a search of the PubMed, Cochrane, Scopus, and Web of Science databases. Methodologies of selected articles were scored using the Newcastle-Ottawa Scale (NOS) and a tool identifying 11 lower-quality indicators. Manuscript methodologies were ranked as higher quality if they scored no lower than seven out of nine on the NOS and contained no more than two lower-quality indicators. Investigators then characterized articles as supporting or not supporting a causal relationship between low-dose radiation and cancer.

Results: Investigators identified 4,382 articles for initial review. A total of 62 articles met all inclusion/exclusion criteria and were evaluated in this study. Quantitative evaluation of the manuscripts’ methodologic strengths found 25 studies met higher-quality criteria while 37 met lower-quality criteria. Of the 25 studies with higher-quality methods, 21 did not support cancer induction by low-dose radiation (P = .0003).

Conclusions: A clear preponderance of articles with higher-quality methods found no increased risk of cancer from low-dose radiation. The evidence suggests that exposure to multiple CT scans and other sources of low-dose radiation with a cumulative dose up to 100 mSv (approximately 10 scans), and possibly as high as 200 mSv (approximately 20 scans), does not increase cancer risk.
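The abstract does not state which statistical test produced P = .0003, but the flavor of the result, 21 of 25 higher-quality studies pointing the same way, can be illustrated with a sign-test-style binomial tail: the chance of a split at least that lopsided if study direction were a coin flip. This is an illustration only; it yields a similar, not identical, value to the paper's.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): probability that k or more of n
    studies point the same way if direction were equally likely either way."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 21 of the 25 higher-quality studies found no increased risk.
print(binom_tail(21, 25))  # ≈ 0.00046, far below conventional thresholds
```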
Design Space Exploration in Cyber-Physical Systems
Cyber-physical systems (CPS) integrate a variety of engineering areas such as control, mechanical, and computer engineering in a holistic design effort. While interdependencies between the different disciplines are key attributes of CPS design science, little is known about the impact of design decisions in the cyber part on the overall system qualities. To investigate these interdependencies, this paper proposes a simulation-based Design Space Exploration (DSE) framework that considers detailed cyber-system parameters such as cache size, bus width, and voltage levels in addition to the physical and control parameters of the CPS. We propose an exploration algorithm that traverses the parameter configurations of the cyber-physical sub-systems in order to approximate the Pareto-optimal design points with regard to the trade-offs among design objectives such as energy consumption and control stability. We apply the proposed framework to a networked control system for an inverted-pendulum application. The presented holistic evaluation of the identified Pareto points reveals non-trivial trade-offs imposed by the control, physical, and detailed cyber parameters. For instance, the identified energy- and control-optimal design points comprise configurations with a wide range of CPU speeds, sample times, and cache configurations following non-trivial zig-zag patterns. The proposed framework can identify and manage those trade-offs and, as a result, is an imperative first step toward automating the search for superior CPS configurations.
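The paper's simulator and exploration algorithm are not reproduced here; the sketch below shows only the Pareto-filtering step at the end of such a DSE run, applied to invented (energy, settling-time) pairs standing in for evaluated CPS configurations.

```python
def dominates(a, b):
    """a dominates b when it is no worse in both objectives and differs
    (both objectives are minimized: energy and settling time)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep only configurations not dominated by any other evaluated point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical DSE results: (energy in mJ, settling time in ms) per evaluated
# configuration of CPU speed, sample time, and cache size.
evaluated = [(5.0, 40.0), (8.0, 25.0), (12.0, 18.0),
             (20.0, 17.0), (4.0, 80.0), (15.0, 22.0)]
print(pareto_front(evaluated))  # (15.0, 22.0) drops: dominated by (12.0, 18.0)
```

The quadratic all-pairs scan is fine for the handful of points a simulation-based DSE evaluates; the interesting cost in the paper's setting is the simulation of each configuration, not the filtering.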