
    Convex optimization methods for model reduction

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 153-161).

    Model reduction and convex optimization are prevalent in science and engineering applications. In this thesis, convex optimization solution techniques for three different model reduction problems are studied.

    Parameterized reduced order modeling is important for rapid design and optimization of systems containing parameter-dependent reducible sub-circuits, such as interconnects and RF inductors. The first part of the thesis presents a quasi-convex optimization approach to the parameterized model order reduction problem for linear time-invariant systems. Formulating the model reduction problem as a quasi-convex program allows the flexibility to enforce constraints such as stability and passivity, in both the non-parameterized and parameterized cases. Numerical results, including the parameterized reduced modeling of a large RF inductor, demonstrate the practical value of the proposed algorithm.

    A majority of nonlinear model reduction techniques can be regarded as a two-step procedure: first the state dimension is reduced through a projection, and then the vector field of the reduced state is approximated for improved computational efficiency. Neither of these steps has been thoroughly studied. The second part of this thesis addresses a particular problem in the second step, namely, finding an upper bound on the system input/output error due to the nonlinear vector field approximation. The error upper bounding problem is formulated as an L2 gain upper bounding problem for a feedback interconnection, to which the small gain theorem can be applied. A numerical procedure based on integral quadratic constraint analysis and a theoretical statement based on L2 gain analysis together provide the solution to the error bounding problem. The numerical procedure is applied to analyze the vector field approximation quality of a transmission line with diodes.

    The application of Volterra series to the reduced modeling of nonlinear systems is hampered by the rapidly increasing computational cost with respect to the degrees of the polynomials used. On the other hand, while it is less general than the Volterra series model, the Wiener-Hammerstein model has been shown to be useful for accurate and compact modeling of certain nonlinear sub-circuits, such as power amplifiers. The third part of the thesis presents a convex optimization solution technique for the reduction/identification of Wiener-Hammerstein systems. The identification problem is formulated as a non-convex quadratic program, which is solved by a semidefinite programming relaxation technique. It is demonstrated in the thesis that the formulation is robust with respect to noisy measurements, and that the relaxation technique is oftentimes sufficient to provide good solutions. Simple examples demonstrate the use of the proposed identification algorithm.

    by Kin Cheong Sou. Ph.D.
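    The small gain argument behind the error bound can be illustrated with a few lines of arithmetic. This is a generic sketch of the standard small gain bound, not code from the thesis; the two gain values are hypothetical.

```python
# Sketch of the small gain theorem bound: for a feedback interconnection
# of a nominal system G with an approximation-error block Delta, stability
# and an L2 gain bound follow whenever gamma_G * gamma_Delta < 1.
gamma_G = 2.0       # assumed L2 gain of the nominal (reduced) system
gamma_Delta = 0.25  # assumed L2 gain of the vector-field error block

loop_gain = gamma_G * gamma_Delta
assert loop_gain < 1, "small gain condition violated"

# Worst-case closed-loop L2 gain implied by the small gain theorem
error_bound = gamma_G / (1 - loop_gain)
print(error_bound)  # 4.0
```

    The point is that a bound on the input/output error of the approximated system is obtained from gains of the two blocks alone, without simulating the interconnection.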

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for efficiently storing and solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage.

    Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
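    A toy illustration of why low-rank tensor formats sidestep the curse of dimensionality (a generic NumPy sketch, not code from the paper; the factors are made up): a rank-1 decomposition stores a d-way array with n entries per mode in d*n numbers instead of n**d.

```python
import numpy as np

n = 20
# Hypothetical rank-1 (CP) factors of a 3-way data tensor
a = np.linspace(1, 2, n)
b = np.linspace(0, 1, n)
c = np.linspace(-1, 1, n)

T_full = np.einsum('i,j,k->ijk', a, b, c)   # n**3 = 8000 stored entries
storage_cp = a.size + b.size + c.size       # 3*n = 60 numbers in CP form

# Any entry is available from the factors without ever forming T_full:
i, j, k = 3, 7, 11
assert np.isclose(T_full[i, j, k], a[i] * b[j] * c[k])
print(T_full.size, storage_cp)  # 8000 60
```

    Real EDA tensors are not exactly rank-1, but the same storage argument applies to low-rank decompositions with a handful of terms.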

    Model Reduction Using Semidefinite Programming

    In this thesis, model reduction methods for linear time-invariant systems are investigated. The reduced models are computed using semidefinite programming. Two ways of imposing the stability constraint are considered; both approaches add a positivity constraint to the program. The input to the algorithms is a set of frequency response samples of the original model, which keeps the computational complexity relatively low for large-scale models. Extra properties can also be enforced on a reduced model, as long as they can be expressed as convex conditions. The semidefinite programs are solved using interior-point methods, which are well developed, making the implementation simpler. A number of extensions to the proposed methods are studied, for example passive model reduction and frequency-weighted model reduction. An interesting extension is the reduction of parameterized linear time-invariant models, i.e. models whose state-space matrices depend on parameters. It is assumed that the parameters depend on neither the state variables nor time. This extension is valuable in modeling, when a set of parameters has to be chosen to fit the required specifications. A good illustration of such a problem is the modeling of a spiral radio frequency inductor, whose physical model depends nonlinearly on two parameters: wire width and wire separation. To choose both parameters optimally, a low-order model is usually created. The inductor modeling is considered as a case study in this thesis.
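    The only data the algorithm needs from the original model is a set of frequency response samples, which are cheap to evaluate. A minimal NumPy sketch of that sampling step, on a small random state-space model (the model and sizes are illustrative; the SDP fitting stage itself is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # order of the (hypothetical) full model

# A random stable state-space model: A is symmetric negative definite,
# so all eigenvalues are real and <= -1.
M = rng.standard_normal((n, n))
A = -(M @ M.T + np.eye(n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Frequency response samples H(jw) = C (jw I - A)^{-1} B -- the input
# data for a sample-based reduction algorithm.
freqs = np.logspace(-2, 2, 50)
samples = np.array([
    (C @ np.linalg.solve(1j * w * np.eye(n) - A, B)).item()
    for w in freqs
])
print(samples.shape)  # (50,)
```

    Each sample costs one linear solve against the full model, which is what makes the overall approach scale to large systems.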

    Model Order Reduction Based on Semidefinite Programming

    The main topic of this PhD thesis is complexity reduction of linear time-invariant models. The complexity of such systems is measured by the number of differential equations forming the dynamical system; this number is called the order of the system. Order reduction is typically used as a tool to model complex systems whose simulation takes considerable time and/or has overwhelming memory requirements. Any model reflects an approximation of a real-world system. Therefore, it is reasonable to sacrifice some model accuracy in order to obtain a simpler representation. Once a low-order model is obtained, the simulation becomes computationally cheaper, which saves time and resources. A low-order model still has to be "similar" to the full-order one in some sense. There are many ways of measuring "similarity", and such a measure is typically chosen depending on the application.

    Three different settings of model order reduction are investigated in the thesis. The first is H-infinity model order reduction, i.e., the distance between two models is measured by the H-infinity norm. Although the problem has been tackled by many researchers, an optimal solution is yet to be found. There are, however, a large number of methods which solve suboptimal problems and deliver accurate approximations. Recently, the research community has devoted more attention to large-scale systems and computationally scalable extensions of existing model reduction techniques. The algorithm developed in the thesis is based on matching frequency response samples. For a large class of systems the frequency response samples can be computed very efficiently, so the developed algorithm is relatively cheap computationally. The proposed algorithm can be seen as a computationally scalable extension of the well-known Hankel model reduction, which is known to deliver very accurate solutions. One reason for this assessment is that the relaxation employed in the proposed algorithm is tightly related to the one used in Hankel model reduction. Numerical simulations also show that the accuracy of the method is comparable to that of Hankel model reduction.

    The second part of the thesis is devoted to parameterized model order reduction. A parameterized model is essentially a family of models which depend on certain design parameters. The model reduction goal in this setting is to approximate the whole family of models for all values of the parameters. The main motivation is the design of a model with an appropriate set of parameters. In order to make a good choice of parameters, the models need to be simulated for a large set of parameter values; after inspecting the simulation results, a model with suitable frequency or step responses can be picked. Parameterized model reduction significantly simplifies this procedure. The proposed algorithm for parameterized model reduction is a straightforward extension of the one described above, and it is applicable to the modeling of linear parameter-varying systems as well.

    Finally, the third topic is modeling interconnections of systems. In this thesis an interconnection is a collection of systems (or subsystems) connected in a typical block diagram. To avoid confusion, throughout the thesis the entire model is called a supersystem, as opposed to the subsystems of which it consists. One specific case of structured model reduction is controller reduction, in which there are two subsystems: the plant and the controller. Two directions of model reduction of interconnected systems are considered: model reduction in the nu-gap metric and structured model reduction. To some extent, using the nu-gap metric makes it possible to model subsystems without considering the supersystem at all. This property can be exploited for extremely large supersystems for which some forms of analysis (evaluating stability, computing the step response, etc.) are intractable. A more systematic way of modeling, however, is structured model reduction, where the objective is to approximate certain subsystems in such a way that crucial characteristics of the given supersystem, such as stability, the structure of the interconnections, and the frequency response, are preserved. In structured model reduction all subsystems are taken into account, not only the approximated ones. To address structured model reduction, the supersystem is represented in coprime factor form, where its structure also appears in the coprime factors. Using this representation, the problem is reduced to H-infinity model reduction, which is addressed by the presented framework.

    All the presented methods are validated on academic or known benchmark problems. Since all the methods are based on semidefinite programming, adding a new constraint is a matter of formulating it as a semidefinite one. A number of extensions are presented, which illustrate the power of the approach. Properties of the methods are discussed throughout the thesis, while some remaining problems conclude the manuscript.
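    The Hankel model reduction that the thesis takes as its accuracy benchmark is built on Hankel singular values, which indicate how many states can be discarded. A NumPy-only sketch of computing them for a small random model (this solves the Gramian Lyapunov equations by the vec/Kronecker identity, which is only sensible for small orders; the model itself is made up):

```python
import numpy as np

def lyap(A, W):
    # Solve A P + P A^T + W = 0 via the column-major vec identity
    # (I (x) A + A (x) I) vec(P) = -vec(W). O(n^6): fine for tiny n only.
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    vecP = np.linalg.solve(K, -W.flatten(order='F'))
    return vecP.reshape((n, n), order='F')

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = -(M @ M.T + np.eye(n))   # symmetric negative definite, hence stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

P = lyap(A, B @ B.T)         # controllability Gramian
Q = lyap(A.T, C.T @ C)       # observability Gramian

# Hankel singular values: square roots of the eigenvalues of P Q.
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q).real)))[::-1]
print(hsv)  # decaying spectrum: small values mark removable states
```

    States associated with small Hankel singular values contribute little to the input/output behavior, which is why their truncation gives accurate low-order models.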

    RandomBoost: Simplified Multi-class Boosting through Randomization

    We propose a novel boosting approach to multi-class classification problems in which, in essence, multiple classes are distinguished by a set of random projection matrices. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second randomly projects the outputs of the learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning, and visual recognition data sets demonstrates that the proposed methods compare favorably to existing multi-class boosting algorithms in terms of both convergence rate and classification accuracy.

    Comment: 15 pages
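    The central device, per-class random projections scored against one shared parameter vector, can be sketched in a few lines. This is a loose toy illustration of the idea, not the paper's algorithm; all sizes and the scoring rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, dim, proj_dim = 3, 10, 4

# One fixed random projection matrix per class (illustrative sizes).
projections = [rng.standard_normal((proj_dim, dim)) for _ in range(n_classes)]

# A single shared vector-valued parameter, independent of n_classes.
w = rng.standard_normal(proj_dim)

def scores(x):
    # Score each class by the shared parameter's response to that
    # class's random projection of x; predict the argmax.
    return np.array([w @ (P @ x) for P in projections])

x = rng.standard_normal(dim)
pred = int(np.argmax(scores(x)))
print(pred)
```

    Adding a class only adds one more projection matrix; the learned parameter `w` keeps the same size, which is the point of the construction.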