
    Bilevel optimal active and reactive power dispatch model based on the normalized Fritz-John necessary conditions

    Dissertation (master's degree) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2015. This work presents an optimal active and reactive power dispatch model based on the minimization of the sum of the squared deviations of the active power provided by the generators, when they supply reactive power to the system, with respect to the active power generated by those same generators when no reactive power generation is needed. The squared deviations are weighted by the bus marginal prices, which depend on the offers made by the generators, and by the generator capacities. The model is expressed as a bilevel nonlinear Optimal Power Flow problem. The objectives of this work are to adapt the model proposed by SENNA (2009) to the Brazilian case, to analyze the results obtained with this model, and to seek an alternative way of solving the bilevel optimization problem. As the solution technique, the lower-level problem is modeled as a set of constraints of the upper-level problem, so that the bilevel problem becomes a single-level optimization problem. A normalized set of Fritz-John (FJ) necessary conditions is used to represent the optimal solutions of this single-level problem, and the primal-dual interior point method is employed to obtain its solutions. Simulations were carried out with 2-bus and 5-bus examples and with the IEEE 30-bus and 118-bus test systems, and the numerical results indicate the good performance of the method.
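    As background for the terminology above (this is the textbook form of the conditions, not necessarily the exact normalization adopted in the dissertation), the Fritz-John conditions for a generic nonlinear program min f(x) subject to g(x) <= 0 and h(x) = 0 can be written as:

\begin{align*}
\mu_0 \nabla f(x) + \sum_i \mu_i \nabla g_i(x) + \sum_j \lambda_j \nabla h_j(x) &= 0,\\
\mu_i \, g_i(x) &= 0 \quad \text{for all } i,\\
g(x) \le 0, \qquad h(x) &= 0,\\
\mu_0 \ge 0, \quad \mu \ge 0, \quad (\mu_0, \mu, \lambda) &\ne 0,\\
\mu_0 + \sum_i \mu_i + \sum_j |\lambda_j| &= 1 \quad \text{(one possible normalization).}
\end{align*}

    Unlike the Karush-Kuhn-Tucker conditions, the objective gradient carries its own multiplier, and the normalization in the last line is what fixes the scale of the otherwise homogeneous multiplier vector.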

    Parametric programming: An illustrative mini encyclopedia

    Parametric programming is one of the broadest areas of applied mathematics. Practical problems that can be described by parametric programming were recorded in rock art about thirty millennia ago. As a scientific discipline, parametric programming began emerging only in the 1950s. In this tutorial we introduce, briefly study, and illustrate some of the elementary notions of parametric programming. This is done using a limited theory (mainly for linear and convex models) and by means of examples, figures, and solved real-life case studies. Among the topics discussed are stable and unstable models, such as a projectile motion model (maximizing the range of a projectile), bilevel decision-making models and von Stackelberg games of market economy, the law of refraction and Snell's law for a ray of light, duality, Zermelo's underwater navigation problems, restructuring in a textile mill, ranking of efficient DMUs (university libraries) in DEA, minimal resistance to a gas flow, and semi-abstract parametric programming models. Some numerical methods of input optimization are mentioned and several open problems are posed.
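    To make the projectile example concrete (a standard textbook computation, not necessarily the tutorial's own presentation): with the launch speed v_0 treated as the parameter and the launch angle theta as the decision variable, the range on flat ground and its maximizer are

\[
R(\theta; v_0) = \frac{v_0^2 \sin(2\theta)}{g},
\qquad
\theta^*(v_0) = \operatorname*{arg\,max}_{0 \le \theta \le \pi/2} R(\theta; v_0) = \frac{\pi}{4},
\qquad
R^*(v_0) = \frac{v_0^2}{g}.
\]

    The optimal angle does not move as the parameter v_0 changes, while the optimal value varies smoothly with it; this is the flavor of stability question that a parametric-programming treatment of such models examines.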

    Consumer load modeling and fair mechanisms in the efficient transactive energy market

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Sanjoy Das. Two significant and closely related issues pertaining to the grid-constrained transactive distribution system market are investigated in this research. First, the problem of spatial fairness in the allocation of energy among energy consumers is addressed, where consumer agents located at large distances from the substation, in terms of grid layout, are charged at higher rates than those close to it. This phenomenon, arising from the grid's voltage and flow limits, is aggravated during demand peaks. Using Jain's index to quantify fairness, two auction mechanisms are proposed. Both approaches are bilevel, with aggregators acting as interface agents between the consumers and the upstream distribution system operator (DSO). Furthermore, despite maximizing social welfare, neither mechanism makes use of the agents' utility functions. The first mechanism is cost-setting, with the DSO determining unit costs; it incorporates Jain's index as a second term added to the social welfare. Next, a power-setting auction mechanism is put forth in which the DSO's role is to allocate energy in response to market equilibrium unit costs established at each aggregator through an iterative bidding process among its consumers. The Augmented Lagrangian Multigradient Approach (ALMA), which is based on vector gradient descent, is proposed in this research for implementation at the upper level. The mechanism's lower level comprises multiple auctions realized by the aggregators. The quasi-concavity of Jain's index is established theoretically, and it is shown that ALMA converges to the Pareto front representing tradeoffs between social welfare and fairness. The effectiveness of both mechanisms is demonstrated through simulations carried out on a modified IEEE 37-bus system platform. The second phase of this research focuses on extracting patterns of energy usage from time-series energy-use profiles of individual consumers. Two novel approaches for non-intrusive load disaggregation based on non-negative matrix factorization (NMF) are proposed. Both algorithms distinguish between fixed and shiftable load classes, with the latter characterized by binary OFF and ON cycles. Fixed loads are represented as linear combinations of a set of basis vectors learned by NMF. One approach imposes L0-norm constraints on each shiftable load using a new method called binary load decomposition. The other approach models shiftable loads as Gaussian mixture models (GMMs) and therefore uses expectation-maximization for unsupervised learning. This hybrid NMF-GMM algorithm enjoys the theoretical advantage of being interpretable as a maximum-likelihood procedure within a probabilistic framework. Numerical studies with real load profiles demonstrate that both algorithms can effectively disaggregate total loads into the energy used by individual appliances. Using the disaggregated loads, a maximum-margin regression approach is proposed to derive more elaborate, temperature-dependent utility functions of the consumers. The research concludes by identifying the various ways in which gleaning such information can lead to more effective auction mechanisms for multi-period operation.
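    Jain's index itself has a simple closed form, J(x) = (sum_i x_i)^2 / (n * sum_i x_i^2), which lies in (0, 1] and equals 1 only for perfectly equal allocations. The sketch below shows how a fairness-augmented welfare objective of the kind described above could be computed; the log utilities and the weight kappa are placeholder choices for illustration, not values from the dissertation.

```python
import numpy as np

def jains_index(x):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * np.square(x).sum())

def fairness_augmented_welfare(allocations, utilities, kappa=1.0):
    """Sum of consumer utilities plus a Jain's-index term (kappa is a placeholder weight)."""
    welfare = sum(u(a) for u, a in zip(utilities, allocations))
    return welfare + kappa * jains_index(allocations)

# Toy usage: three consumers with concave (log) utilities -- purely illustrative.
utilities = [np.log1p] * 3
print(jains_index([1.0, 1.0, 1.0]))   # 1.0: perfectly equal allocation
print(jains_index([3.0, 0.5, 0.5]))   # ~0.56: skewed toward one consumer
print(fairness_augmented_welfare([1.0, 1.2, 0.8], utilities, kappa=0.5))
```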

    Numerical Methods for Mixed-Integer Optimal Control with Combinatorial Constraints

    This thesis is concerned with numerical methods for Mixed-Integer Optimal Control Problems with Combinatorial Constraints. We establish an approximation theorem relating a Mixed-Integer Optimal Control Problem with Combinatorial Constraints to a continuous relaxed convexified Optimal Control Problem with Vanishing Constraints, which provides the basis for numerical computations. We develop a Vanishing-Constraint-respecting rounding algorithm to exploit this correspondence computationally. Direct discretization of the Optimal Control Problem with Vanishing Constraints yields a subclass of Mathematical Programs with Equilibrium Constraints. Mathematical Programs with Equilibrium Constraints constitute a class of challenging problems due to their inherent non-convexity and non-smoothness. We develop an active-set algorithm for Mathematical Programs with Equilibrium Constraints and prove global convergence of this algorithm to Bouligand stationary points under suitable technical conditions. For efficient computation of Newton-type steps of Optimal Control Problems, we establish the Generalized Lanczos Method for trust region problems in a Hilbert space context. To ensure real-time feasibility in online Optimal Control applications with a tracking-type Lagrangian objective, we develop a Gauß-Newton preconditioner for the iterative solution method of the trust region problem. We implement the proposed methods and demonstrate their applicability and efficacy on several benchmark problems.
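    As background for the terminology above, a mathematical program with vanishing constraints is commonly written in the following generic form; the discretized optimal control problems in the thesis of course carry additional dynamics, control, and combinatorial structure on top of it.

\begin{align*}
\min_{x \in \mathbb{R}^n}\quad & f(x)\\
\text{s.t.}\quad & g(x) \le 0, \quad h(x) = 0,\\
& H_i(x) \ge 0, \quad G_i(x)\,H_i(x) \le 0, \qquad i = 1,\dots,m.
\end{align*}

    The constraint G_i(x) <= 0 is enforced only where H_i(x) > 0 and "vanishes" when H_i(x) = 0, a structure closely related to the complementarity conditions of the Mathematical Programs with Equilibrium Constraints mentioned above.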

    Co-evolutionary Hybrid Bi-level Optimization

    Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as "bi-level optimization", occurs when two decision makers each control only part of the decision variables but affect each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the "upper level") is constrained by another one (the "lower level"). The nested structure is a real obstacle that can be highly time consuming when the lower level is NP-hard. Consequently, classical nested optimization should be avoided. Some surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) in order to reduce the number of times the lower level is globally optimized. Unfortunately, such a methodology is not applicable to large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems that can benefit from a bi-level reformulation are investigated. A first contribution of this work is a novel bi-level clustering approach. Extending the well-known "uncapacitated k-median problem", it is shown that clustering can easily be modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem, offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables it to be tackled with classical nested approaches: its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson's, Alzheimer's). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments produce new knowledge compared to classical clustering techniques that combine distance metrics in the usual manner. The bi-level clustering model above has the advantage that its lower level can be solved in polynomial time, although the global problem is by definition NP-hard. Therefore, further investigations were undertaken to tackle more general bi-level problems in which the lower-level problem does not present any specific advantageous properties. Since the lower-level problem can be very expensive to solve, the focus turned to surrogate-based approaches and hyper-parameter optimization techniques, with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. By adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the number of expensive lower-level optimizations is dramatically reduced while very accurate solutions are obtained. The resulting solutions and the number of spared lower-level optimizations are compared to the results of the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version requires fewer lower-level objective function calls.
    Surrogate-based approaches are restricted to small-scale and continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study was performed to apply machine learning strategies. Instead of approximating the lower-level solution value, new approximation algorithms were designed for the discrete/combinatorial case. Using the principle employed in GP hyper-heuristics, heuristics are trained in order to tackle efficiently the NP-hard lower level of bi-level problems. This automatic generation of heuristics makes it possible to break the nested structure into two separate phases: training lower-level heuristics and solving the upper-level problem with the new heuristics. On this occasion, a second modeling contribution is introduced through a novel large-scale and mixed-integer bi-level problem dealing with pricing in the cloud, i.e., the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consisted in training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the obtained results are compared to the "cooperative coevolutionary algorithm for bi-level optimization" (COBRA). Although training heuristics makes it possible to break the nested structure, a two-phase optimization is still required. Therefore, the emphasis is put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme used by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to reach it can cope with these epistatic link issues. The "CARBON" algorithm developed in this thesis is a competitive and hybrid co-evolutionary algorithm designed for this purpose. In order to validate the potential of CARBON, numerical experiments were designed and the results were compared to state-of-the-art algorithms. These results demonstrate that CARBON makes it possible to address nested optimization efficiently.
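    As a point of reference for the nested structure discussed above, the toy sketch below evaluates a leader's objective by globally re-solving a (here trivial) follower problem at every query; the specific functions are hypothetical and chosen only to show why this pattern becomes prohibitive once the lower level is NP-hard.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical toy bilevel problem (not from the thesis): the leader chooses x,
# the follower responds with y(x) = argmin_y (y - x)^2 + 0.1*y, and the leader
# minimizes F(x, y(x)) = (x - 1)^2 + y(x)^2.

def follower_best_response(x):
    """Globally solve the lower-level problem for a fixed upper-level decision x."""
    res = minimize_scalar(lambda y: (y - x) ** 2 + 0.1 * y,
                          bounds=(-10.0, 10.0), method="bounded")
    return res.x

def leader_objective(x):
    """Nested evaluation: every call triggers a full lower-level optimization."""
    y = follower_best_response(x)
    return (x - 1.0) ** 2 + y ** 2

# Naive nested scheme: a coarse grid search at the upper level. This is exactly
# the pattern that becomes intractable when the lower level is NP-hard, which is
# why surrogates and trained heuristics are used instead.
xs = np.linspace(-2.0, 3.0, 101)
best_x = min(xs, key=leader_objective)
print("leader decision:", round(float(best_x), 3),
      "leader objective:", round(leader_objective(best_x), 4))
```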

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
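    A minimal illustration of the sparse coding idea described above, using scikit-learn's DictionaryLearning on synthetic data; the data, the number of atoms, and the sparsity settings are arbitrary choices for the sketch rather than anything prescribed by the monograph.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Represent each signal as a linear combination of a few learned dictionary atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))          # 200 synthetic signals of dimension 64

dico = DictionaryLearning(
    n_components=32,                        # number of dictionary atoms
    alpha=1.0,                              # sparsity penalty for dictionary learning
    max_iter=100,
    transform_algorithm="omp",              # sparse coding step
    transform_n_nonzero_coefs=5,            # at most 5 atoms per signal
    random_state=0,
)
codes = dico.fit_transform(X)               # sparse codes, shape (200, 32)
D = dico.components_                        # learned dictionary, shape (32, 64)

reconstruction = codes @ D
print("avg nonzeros per code:", np.count_nonzero(codes) / codes.shape[0])
print("relative reconstruction error:",
      np.linalg.norm(X - reconstruction) / np.linalg.norm(X))
```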

    Annual Research Report 2021
