
    Recent Advances in Computational Methods for the Power Flow Equations

    The power flow equations are at the core of most computations for designing and operating electric power systems. They form a system of multivariate nonlinear equations relating the power injections and voltages in a power system. A plethora of methods have been devised to solve these equations, from Newton-based methods to homotopy continuation and other optimization-based methods. While many of these methods efficiently find a high-voltage, stable solution thanks to its large basin of attraction, most struggle to find low-voltage solutions, which play a significant role in certain stability-related computations. While we do not claim to have exhausted the existing literature on all related methods, this tutorial paper introduces some of the recent advances in methods for solving power flow equations to the wider power systems community, and also aims to draw the attention of the computational mathematics and optimization communities to power systems problems. After briefly reviewing some of the traditional computational methods used to solve the power flow equations, we focus on three emerging methods: the numerical polynomial homotopy continuation method, Groebner basis techniques, and moment/sum-of-squares relaxations using semidefinite programming. In passing, we also emphasize the importance of an upper bound on the number of solutions of the power flow equations and review the current status of research in this direction. Comment: 13 pages, 2 figures. Submitted to the Tutorial Session at the IEEE 2016 American Control Conference.
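    As a toy illustration of the basin-of-attraction issue mentioned above, the sketch below (not from the paper) solves a two-bus power flow from two different starting points, finding both the high-voltage and the low-voltage solution. All system data are invented, and scipy's fsolve (a Newton-like hybrid method) stands in for a textbook Newton-Raphson iteration.

        import numpy as np
        from scipy.optimize import fsolve

        y = 1 / 0.1j                          # admittance of the single line (x = 0.1 pu)
        Ybus = np.array([[y, -y], [-y, y]])   # bus admittance matrix
        S2 = -0.5 - 0.2j                      # specified injection at bus 2 (a load)

        def mismatch(x):
            """Active/reactive power mismatch at the PQ bus; x = (theta2, v2)."""
            theta2, v2 = x
            V = np.array([1.0 + 0j, v2 * np.exp(1j * theta2)])   # bus 1 is the slack
            S_calc = V[1] * np.conj(Ybus[1] @ V)
            return [S_calc.real - S2.real, S_calc.imag - S2.imag]

        hi = fsolve(mismatch, x0=[0.0, 1.0])    # start near nominal voltage
        lo = fsolve(mismatch, x0=[-1.0, 0.1])   # deliberately low-voltage start
        print("high-voltage solution (theta2, v2):", hi)   # approx (-0.051, 0.978)
        print("low-voltage  solution (theta2, v2):", lo)   # approx (-1.139, 0.055)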

    An Approximately Optimal Algorithm for Scheduling Phasor Data Transmissions in Smart Grid Networks

    In this paper, we devise a scheduling algorithm for ordering the transmission of synchrophasor data from substations to the control center in as short a time frame as possible, within the real-time hierarchical communications infrastructure of the electric grid. The problem is cast in the framework of classic job scheduling with precedence constraints. The optimization setup comprises the number of phasor measurement units (PMUs) to be installed on the grid, a weight associated with each PMU, processing times at the control center for the PMUs, and precedence constraints between the PMUs. The solution to the PMU placement problem yields the optimal number of PMUs to be installed on the grid, while the processing times are picked uniformly at random from a predefined set. The weight associated with each PMU and the precedence constraints are both assumed known. The scheduling problem is provably NP-hard, so we resort to approximation algorithms, which provide suboptimal solutions in polynomial time. A lower bound on the optimal schedule is derived using branch and bound techniques, and the algorithm's performance is evaluated using standard IEEE test bus systems. The scheduling policy is power-grid-centric, since it takes into account the electrical properties of the network under consideration. Comment: 8 pages, published in IEEE Transactions on Smart Grid, October 201
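    The sketch below conveys the flavor of list scheduling under precedence constraints; it is a simple greedy heuristic, not the paper's approximation algorithm or its branch-and-bound bound. Among the jobs whose predecessors are finished, it picks the one with the largest weight-to-processing-time ratio (the WSPT rule). The PMU ids, weights, and processing times are made up.

        def greedy_schedule(jobs, prec):
            """jobs: {id: (weight, proc_time)}; prec: iterable of (before, after)."""
            preds = {j: set() for j in jobs}
            for a, b in prec:
                preds[b].add(a)
            done, order, t, weighted_ct = set(), [], 0.0, 0.0
            while len(done) < len(jobs):
                ready = [j for j in jobs if j not in done and preds[j] <= done]
                j = max(ready, key=lambda j: jobs[j][0] / jobs[j][1])
                t += jobs[j][1]                  # completion time of job j
                weighted_ct += jobs[j][0] * t
                done.add(j)
                order.append(j)
            return order, weighted_ct

        # Toy instance: 4 PMUs with invented weights/processing times;
        # PMU 1 must be processed before PMU 3, and PMU 2 before PMU 4.
        jobs = {1: (3.0, 2.0), 2: (1.0, 1.0), 3: (4.0, 2.0), 4: (2.0, 3.0)}
        print(greedy_schedule(jobs, prec=[(1, 3), (2, 4)]))   # ([1, 3, 2, 4], 43.0)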

    The Maximal Positively Invariant Set: Polynomial Setting

    This note considers the maximal positively invariant set for polynomial discrete-time dynamics subject to constraints specified by a basic semialgebraic set. The note utilizes a relatively direct, but apparently overlooked, fact: the related preimage map preserves basic semialgebraic structure. This property propagates to the underlying set dynamics induced by the associated restricted preimage map in general, and to its maximal trajectory in particular. The finite-time convergence of the corresponding maximal trajectory to the maximal positively invariant set is verified under reasonably mild conditions. The analysis is complemented with a discussion of computational aspects and a prototype implementation based on existing toolboxes for polynomial optimization.
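    A minimal sketch of the substitution step behind this observation, assuming the standard preimage iteration: writing a basic semialgebraic set as a list of polynomial inequalities g(x) >= 0, the preimage under polynomial dynamics f is described simply by g(f(x)) >= 0, so every iterate stays basic semialgebraic. The dynamics and constraint set below are toy choices, not from the note, and the hard step of detecting redundant constraints (which the note delegates to polynomial-optimization toolboxes) is omitted.

        import sympy as sp

        x1, x2 = sp.symbols('x1 x2')
        f = (sp.Rational(1, 2) * x1 + sp.Rational(1, 10) * x2**2,   # polynomial map
             sp.Rational(1, 2) * x2)
        X0 = [1 - x1**2, 1 - x2**2]                                 # box |xi| <= 1

        def preimage(gs):
            """Constraints describing f^{-1}({x : g(x) >= 0 for all g in gs})."""
            return [sp.expand(g.subs({x1: f[0], x2: f[1]})) for g in gs]

        # Accumulate descriptions of O_k = X0 ∩ f^{-1}(X0) ∩ ... ∩ f^{-k}(X0).
        cons, layer = list(X0), list(X0)
        for k in range(3):
            layer = preimage(layer)
            cons += layer
        print(len(cons), "polynomial inequalities describe the iterate O_3")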

    The Inflation Technique for Causal Inference with Latent Variables

    The problem of causal inference is to determine if a given probability distribution on observed variables is compatible with some causal structure. The difficult case is when the causal structure includes latent variables. We here introduce the inflation technique for tackling this problem. An inflation of a causal structure is a new causal structure that can contain multiple copies of each of the original variables, but where the ancestry of each copy mirrors that of the original. To every distribution of the observed variables that is compatible with the original causal structure, we assign a family of marginal distributions on certain subsets of the copies that are compatible with the inflated causal structure. It follows that compatibility constraints for the inflation can be translated into compatibility constraints for the original causal structure. Even if the constraints at the level of the inflation are weak, such as observable statistical independences implied by disjoint causal ancestry, the translated constraints can be strong. We apply this method to derive new inequalities whose violation by a distribution witnesses that distribution's incompatibility with the causal structure (of which Bell inequalities and Pearl's instrumental inequality are prominent examples). We describe an algorithm for deriving all such inequalities for the original causal structure that follow from ancestral independences in the inflation. For three observed binary variables with pairwise common causes, it yields inequalities that are stronger in at least some aspects than those obtainable by existing methods. We also describe an algorithm that derives a weaker set of inequalities but is more efficient. Finally, we discuss which inflations are such that the inequalities one obtains from them remain valid even for quantum (and post-quantum) generalizations of the notion of a causal model. Comment: Minor final corrections, updated to match the published version as closely as possible.
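    A schematic sketch, not the paper's construction: the marginal constraints an inflation imposes on a joint distribution Q over the inflated copies form a linear feasibility problem. Which marginals are prescribed, and their target values, encode a particular inflation; both are placeholders here. Infeasibility of such an LP would witness that the observed distribution is incompatible with the causal structure.

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        n = 4                                    # inflated observed binary variables (toy)
        atoms = list(itertools.product([0, 1], repeat=n))

        def marginal_row(subset, values):
            """Indicator row selecting atoms whose coordinates in subset equal values."""
            return [1.0 if all(a[i] == v for i, v in zip(subset, values)) else 0.0
                    for a in atoms]

        A_eq, b_eq = [[1.0] * len(atoms)], [1.0]     # Q must sum to 1
        # Placeholder constraint: copies 0 and 3 have disjoint ancestry, so their
        # joint marginal must factor as P(a) * P(b); P is an invented observed marginal.
        P = np.array([0.6, 0.4])
        for a in (0, 1):
            for b in (0, 1):
                A_eq.append(marginal_row((0, 3), (a, b)))
                b_eq.append(P[a] * P[b])

        res = linprog(c=np.zeros(len(atoms)), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * len(atoms), method="highs")
        print("compatible with these inflation constraints:", res.status == 0)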

    Métodos matemáticos e computacionais para modelagem e edição de deformações (Mathematical and computational methods for modeling and editing deformations)

    Advisor: Jorge Stolfi. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    In this thesis, we present the ECLES algorithm (Editing by Constrained LEast Squares), a general method for interactive editing of objects that are defined by parameters subject to linear or affine constraints. In this method, the constraints and the user's editing actions are combined using constrained least squares instead of the usual finite-element approach. We use exact integer arithmetic to detect and eliminate redundancies in the set of constraints and to avoid failures due to rounding errors. The ECLES algorithm has various applications; among them, we can cite the editing of C¹-continuous spline deformations. In this thesis, we describe an interactive editing method for deformations of the plane, the 2DSD algorithm (2D Spline Deformation). The deformations are defined by splines of degree 5 on an arbitrary triangular mesh and are edited by changing the positions of the mesh's control points. The ECLES algorithm is first used in each user editing action to detect, in a robust and efficient way, the set of relevant C¹-continuity constraints, ensuring that there are no redundancies. Then, as the parameters are changed by the user, ECLES is called to compute the new positions of the control points satisfying both the constraints and the positions specified by the user. To validate our 2DSD algorithm, we used it as part of an interactive editor for 2.5D space deformations, the PrisMystic editor. This editor has been used mainly to deform 3D models of non-rigid living microscopic organisms to match actual optical-microscope images. We also used the editor to edit terrain models. Doctorate in Computer Science. Funding: CNPq (140780/2013-0) and CAPES (01-P-04554-2013).
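    A minimal sketch of the constrained-least-squares step at the heart of an ECLES-style editing action, assuming the textbook equality-constrained formulation: move the parameter vector x as close as possible to the user's target t while exactly satisfying the affine constraints C x = d (for example, C¹-continuity conditions), by solving the KKT system. The matrices are toy data; the thesis's exact-arithmetic redundancy elimination is omitted, so C is assumed to have full row rank.

        import numpy as np

        def constrained_lsq_step(t, C, d):
            """Minimize ||x - t||^2 subject to C x = d via the KKT system."""
            n, m = t.size, d.size
            K = np.block([[np.eye(n), C.T],
                          [C, np.zeros((m, m))]])
            sol = np.linalg.solve(K, np.concatenate([t, d]))
            return sol[:n]                      # updated parameters (multipliers dropped)

        t = np.array([1.0, 2.0, 4.0])           # where the user dragged the parameters
        C = np.array([[1.0, -2.0, 1.0]])        # one affine constraint: x0 - 2*x1 + x2 = 0
        d = np.array([0.0])
        print(constrained_lsq_step(t, C, d))    # approx [0.833, 2.333, 3.833]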

    High compression image and image sequence coding

    The digital representation of an image requires a very large number of bits, and this number is even larger for an image sequence. The goal of image coding is to reduce this number as much as possible while reconstructing a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods, but the compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanisms of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway, combined with the separate processing of contours and textures, has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second-generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics, and scene analysis.
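    As a toy illustration of the contour/texture separation underlying these second-generation methods, the sketch below (not from the paper) splits a synthetic image into a binary edge map and a smoothed remainder, the two parts such coders encode separately. Real coders use far more elaborate contour and texture models; the threshold and blur here are arbitrary.

        import numpy as np

        def contour_texture_split(img, thresh=0.2):
            """Split an image into a binary contour map and a smoothed texture part."""
            gy, gx = np.gradient(img.astype(float))
            contours = np.hypot(gx, gy) > thresh      # edge locations
            pad = np.pad(img.astype(float), 1, mode='edge')
            texture = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                          for i in range(3) for j in range(3)) / 9.0   # 3x3 box blur
            return contours, texture

        img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # bright square on dark background
        contours, texture = contour_texture_split(img)
        print("contour pixels:", int(contours.sum()))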