Physics-based equivalent circuit model extraction for system level PDN and a novel PDN impedance measurement method
“The power distribution network (PDN) plays an important role in the power supply system, especially as the working frequency of the integrated circuit (IC) increases. A physics-based circuit modeling methodology is proposed in the first section. The circuit model is extracted by following the current path in the system PDN, and the related parameters are calculated based on the cavity model and plane-pair PEEC methods. By extracting the equivalent circuit model, the PDN system is transformed into an RLC element-based circuit. The role of each part of the system can then be easily explained, and the system behavior can be changed by modifying the dominant part accordingly. This methodology contributes to system-level PDN troubleshooting and layout design optimization.
Compared with analytical methodologies, measurement results are more solid and convincing. What makes the PDN special is that its impedance can be as low as several milliohms and varies with frequency, so accurate impedance measurement is challenging. Based on these requirements, a novel PDN low-impedance measurement methodology is proposed, and a probe based on the I-V method is designed to support it. This provides a new and practical approach to PDN impedance measurement, with the advantages of easy landing, a simple setup, operation at lower frequencies, and less dependence on instrument quality. The probe works over a wide frequency range with a sufficient dynamic range”--Abstract, page iii
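As a minimal illustration of the kind of RLC circuit such an extraction yields, the sketch below computes the impedance seen at the IC for two decoupling-capacitor branches in parallel, each a series R-L-C with hypothetical ESR/ESL/C values not taken from the thesis. The milliohm-level minimum and the anti-resonance between the two branches are exactly the features the equivalent-circuit view makes easy to explain:

```python
import numpy as np

def branch_z(f, R, L, C):
    """Impedance of a series R-L-C branch: a decoupling capacitor
    modeled with its ESR (R) and ESL (L)."""
    w = 2 * np.pi * f
    return R + 1j * w * L + 1 / (1j * w * C)

# Hypothetical component values, for illustration only.
f = np.logspace(3, 8, 501)                          # 1 kHz .. 100 MHz
z_bulk = branch_z(f, R=5e-3,  L=1e-9,   C=100e-6)   # bulk capacitor
z_mlcc = branch_z(f, R=10e-3, L=0.5e-9, C=1e-6)     # ceramic capacitor
z_pdn = 1 / (1 / z_bulk + 1 / z_mlcc)               # parallel combination at the IC

print(f"min |Z_pdn| = {np.abs(z_pdn).min() * 1e3:.1f} mOhm")
```

The minimum impedance lands in the single-digit-milliohm range near the bulk capacitor's series resonance, which is why the abstract stresses that measuring such low impedances accurately is challenging.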
Active timing margin management to improve microprocessor power efficiency
Improving power/performance efficiency is critical for today’s microprocessors. From edge devices to datacenters, lower power or higher performance always produces better systems, measured by lower cost of ownership or longer battery life. This thesis studies improving microprocessor power/performance efficiency by optimizing the pipeline timing margin. In particular, it focuses on improving the efficacy of Active Timing Margin, a young technology that dynamically adjusts the margin.
Active timing margin trims down the pipeline timing margin with a control loop that adjusts voltage and frequency based on real-time chip environment monitoring. The key insight of this thesis is that, in order to maximize active timing margin’s efficiency enhancement benefits, synergistic management from processor architecture design and system software scheduling is needed. To that end, this thesis covers the major consumers of pipeline timing margin, including temperature, voltage, and process variation. For temperature variation, the thesis proposes a table-lookup based active timing margin mechanism and an associated temperature management scheme to minimize power consumption. For voltage variation, the thesis characterizes the limiting factors of adaptive clocking’s power savings and proposes application scheduling to maximize total system power reduction. For process variation, the thesis proposes core-level adaptive clocking reconfiguration to automatically expose inter-core variation and discusses workload scheduling and throttling management to control critical application performance.
The author believes the optimizations presented in this thesis can benefit a variety of processor architectures, as the conclusions are based on solid measurements on state-of-the-art processors, and the research subject, active timing margin, already had wide applicability in the latest microprocessors at the time this thesis was written.
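A table-lookup active timing margin mechanism of the kind the abstract describes can be sketched in a few lines. All breakpoints, margins, and the nominal voltage below are illustrative assumptions, not measurements from the thesis:

```python
import bisect

# Hypothetical lookup table: sensor temperature (deg C) -> voltage
# guardband (mV) needed to cover worst-case timing at that temperature.
TEMPS   = [25, 45, 65, 85]   # table breakpoints, ascending
MARGINS = [12, 18, 27, 40]   # guardband per temperature bin (illustrative)

V_NOMINAL = 900  # mV, hypothetical nominal supply

def supply_setpoint(temp_c):
    """Pick the smallest tabulated guardband that still covers temp_c.

    A static-margin design always pays the worst-case 40 mV; the
    lookup reclaims the difference whenever the chip runs cooler."""
    i = bisect.bisect_left(TEMPS, temp_c)
    i = min(i, len(TEMPS) - 1)   # clamp above the last breakpoint
    return V_NOMINAL + MARGINS[i]

print(supply_setpoint(30))   # cooler chip -> smaller guardband
print(supply_setpoint(85))   # hot chip    -> worst-case guardband
```

The power saving comes from the gap between the worst-case entry and the entry actually selected at the current temperature, which is the quantity the associated temperature management scheme tries to maximize.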
Randomized Computations for Efficient and Robust Finite Element Domain Decomposition Methods in Electromagnetics
Numerical modeling of electromagnetic (EM) phenomena has proved to be an effective and efficient tool in the design and optimization of modern electronic devices, integrated circuits (IC), and RF systems. However, the generality, efficiency, and reliability/resilience of computational EM solvers are often criticised because the underlying characteristics of the simulated problems usually differ, which makes the development of a general, "black-box" EM solver a difficult task.
In this work, we aim to propose a reliable/resilient, scalable, and efficient finite-element-based domain decomposition method (FE-DDM) as a general CEM solver to tackle such CEM problems to some extent. We recognize the rank-deficiency property of the Dirichlet-to-Neumann (DtN) operators involved in the previously proposed FETI-2 DDM formulation and apply this principle to improve the computational efficiency and robustness of FETI-2 DDM. Specifically, the rank-deficient DtN operator is computed by a randomized computation method originally proposed to approximate the matrix singular value decomposition (SVD). Numerical results show that up to 35% run-time and 75% memory savings in the DtN operator computation can be achieved on a realistic example. This rank-deficiency principle is then incorporated into a new global DDM preconditioner (W-FETI), inspired by the Woodbury matrix identity. Numerical study of the eigenspectrum shows the validity of the proposed W-FETI global preconditioner. Several industrial-scale examples show a significant iterative convergence advantage of W-FETI, which requires only 35%-80% of the matrix-vector products (MxVs) needed by state-of-the-art DDM solvers.
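The randomized SVD computation referenced above (in the style of Halko, Martinsson and Tropp) can be sketched compactly. The matrix here is a synthetic rank-deficient operator standing in for a DtN block; the sizes and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, oversample=10):
    """Rank-k SVD approximation via a randomized range finder."""
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for range(A)
    # Project A onto the small subspace and do a cheap exact SVD there.
    B = Q.T @ A                          # (k + oversample) x n, small
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Build a matrix of exact rank 5, mimicking a rank-deficient DtN operator.
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

Because the numerical rank is far below the matrix dimensions, the sketch touches the full operator only through a handful of matrix-vector products, which is the source of the run-time and memory savings the abstract reports.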
NASA Tech Briefs, March 1992
Topics include: New Product Ideas; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences
Quantum Proofs
Quantum information and computation provide a fascinating twist on the notion of proofs in computational complexity theory. For instance, one may consider a quantum computational analogue of the complexity class NP, known as QMA, in which a quantum state plays the role of a proof (also called a certificate or witness), and is checked by a polynomial-time quantum computation. For some problems, the fact that a quantum proof state could be a superposition over exponentially many classical states appears to offer computational advantages over classical proof strings. In the interactive proof system setting, one may consider a verifier and one or more provers that exchange and process quantum information rather than classical information during an interaction for a given input string, giving rise to quantum complexity classes such as QIP, QSZK, and QMIP* that represent natural quantum analogues of IP, SZK, and MIP. While quantum interactive proof systems inherit some properties from their classical counterparts, they also possess distinct and uniquely quantum features that lead to an interesting landscape of complexity classes based on variants of this model.
In this survey we provide an overview of many of the known results concerning quantum proofs, computational models based on this concept, and properties of the complexity classes they define. In particular, we discuss non-interactive proofs and the complexity class QMA, single-prover quantum interactive proof systems and the complexity class QIP, statistical zero-knowledge quantum interactive proof systems and the complexity class QSZK, and multiprover interactive proof systems and the complexity classes QMIP, QMIP*, and MIP*.
Comment: Survey published by Now Publishers
Solving hard industrial combinatorial problems with SAT
The topic of this thesis is the development of SAT-based techniques and tools for solving industrial combinatorial problems. First, it describes the architecture of state-of-the-art SAT and SMT Solvers based on the classical DPLL procedure. These systems can be used as black boxes for solving combinatorial problems. However, sometimes we can increase their efficiency with slight modifications of the basic algorithm. Therefore, the study and development of techniques for adjusting SAT Solvers to specific combinatorial problems is the first goal of this thesis.
However, SAT Solvers can only deal with propositional logic. For solving general combinatorial problems, two different approaches are possible:
- Reducing the complex constraints into propositional clauses.
- Enriching the SAT Solver language.
The first approach corresponds to encoding the constraints into SAT. The second corresponds to using propagators, the basis of SMT Solvers. Regarding the first approach, in this document we improve the encodings of two of the most important combinatorial constraints: cardinality constraints and pseudo-Boolean constraints. After that, we present a new mixed approach, called lazy decomposition, which combines the advantages of encodings and propagators.
The other part of the thesis applies these theoretical improvements to industrial combinatorial problems. We give a method for efficiently scheduling some professional sports leagues with SAT. The results are promising and show that a SAT approach is valid for these problems.
However, the chaotic behavior of CDCL-based SAT Solvers due to the VSIDS heuristic makes it difficult to obtain similar solutions for two similar problems. This can be inconvenient in real-world settings, since a user expects similar solutions after making slight modifications to the problem specification. In order to overcome this limitation, we have studied and solved the close solution problem, i.e., the problem of quickly finding a close solution when a similar problem is considered.