
    Recursive convex approximations for optimal power flow solution in direct current networks

    The optimal power flow problem in direct current (DC) networks with dispersed generation is addressed in this paper from the recursive programming point of view. The nonlinear programming model is transformed into two quadratic programming approximations that are convex, since the power balance constraint is approximated by affine equivalents. These models are solved recursively (iteratively) from the initial point v^t equal to 1.0 pu with t equal to 0, until the error between two consecutive voltage iterations meets the desired convergence criterion. The main advantage of the proposed quadratic programming models is that finding the global optimum is guaranteed by the convexity of the solution space around v^t. Numerical results on the DC version of the IEEE 69-bus system demonstrate the effectiveness and robustness of both proposals when compared with classical metaheuristic approaches such as particle swarm and antlion optimizers, among others. All numerical validations are carried out in the MATLAB programming environment (version 2021b) with the disciplined convex programming software known as the CVX tool in conjunction with the Gurobi solver (version 9.0), while the metaheuristic optimizers are implemented directly in MATLAB scripts.
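    A minimal sketch of the recursive scheme described above, assuming a Python/cvxpy setup in place of the paper's MATLAB/CVX + Gurobi workflow; the quadratic generation-cost objective, slack-bus handling, and all names here are illustrative assumptions, not the authors' code:

        import numpy as np
        import cvxpy as cp

        def recursive_dc_opf(G, p_load, p_max, v_min=0.9, v_max=1.1,
                             tol=1e-8, max_iter=50):
            """G: nodal conductance matrix (pu); p_load: demand per bus (pu);
            p_max: generation capacity per bus (0 where no generator)."""
            n = G.shape[0]
            vt = np.ones(n)                      # initial point v^t = 1.0 pu, t = 0
            for _ in range(max_iter):
                v, pg = cp.Variable(n), cp.Variable(n)
                # Affine (first-order Taylor) equivalent of the bilinear balance
                # term v .* (G v) around vt, making each subproblem a convex QP:
                p_inj = (cp.multiply(vt, G @ v) + cp.multiply(G @ vt, v)
                         - vt * (G @ vt))
                cons = [p_inj == pg - p_load,
                        v[0] == 1.0,             # substation (slack) bus
                        v[1:] >= v_min, v[1:] <= v_max,
                        pg[1:] >= 0, pg[1:] <= p_max[1:]]
                # Quadratic cost assumed here only to keep the model a QP.
                prob = cp.Problem(cp.Minimize(cp.sum_squares(pg)), cons)
                prob.solve()
                if np.max(np.abs(v.value - vt)) < tol:   # voltage error criterion
                    return v.value, pg.value
                vt = v.value
            return vt, pg.value

    Each pass solves one convex QP around the previous voltage profile, so the loop terminates when two consecutive voltage solutions agree to within the tolerance, mirroring the convergence test described in the abstract.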

    Convex Relaxation of Optimal Power Flow, Part I: Formulations and Equivalence

    This tutorial summarizes recent advances in the convex relaxation of the optimal power flow (OPF) problem, focusing on structural properties rather than algorithms. Part I presents two power flow models, formulates OPF and their relaxations in each model, and proves equivalence relations among them. Part II presents sufficient conditions under which the convex relaxations are exact. (Citation: IEEE Transactions on Control of Network Systems, 1(1):15-27, March 2014. This is an extended version with Appendices VIII and IX, which provide some mathematical preliminaries and proofs of the main results.)
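    To give the flavor of the relaxations the tutorial studies, the bus injection model can be lifted to the matrix variable $W = vv^{\mathsf H}$, after which the only nonconvexity is a rank constraint; the notation below is assumed for illustration, not copied from the paper:

        \begin{align*}
          \min_{W}\quad & \operatorname{tr}(CW)\\
          \text{s.t.}\quad & \underline{p}_j \le \operatorname{tr}(\Phi_j W) \le \overline{p}_j, \quad j = 1,\dots,n,\\
          & W \succeq 0, \qquad \operatorname{rank} W = 1.
        \end{align*}

    Dropping the constraint $\operatorname{rank} W = 1$ yields a semidefinite program whose optimal value lower-bounds the OPF optimum.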

    Convex Relaxation of Optimal Power Flow, Part II: Exactness

    This tutorial summarizes recent advances in the convex relaxation of the optimal power flow (OPF) problem, focusing on structural properties rather than algorithms. Part I presents two power flow models, formulates OPF and their relaxations in each model, and proves equivalence relations among them. Part II presents sufficient conditions under which the convex relaxations are exact. (Citation: IEEE Transactions on Control of Network Systems, June 2014. This is an extended version with Appendix VI, which proves the main results in this tutorial.)
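    Exactness here means the relaxation attains its optimum at a rank-one matrix, so a solution of the original nonconvex OPF can be recovered; schematically (same assumed notation as above):

        \operatorname{rank} W^{\star} = 1 \;\Longrightarrow\; W^{\star} = v^{\star}(v^{\star})^{\mathsf H},

    with $v^{\star}$ read off from the leading eigenvector of $W^{\star}$ and globally optimal for the original problem.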

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (232 pages.)
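    As one concrete instance of the machinery surveyed here, the standard TT-SVD construction factors a dense tensor into a train of third-order cores by repeated reshape-and-truncated-SVD steps; the following is a minimal illustrative sketch (function and parameter names are assumptions, not taken from the monograph):

        import numpy as np

        def tt_svd(tensor, max_rank):
            """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            dims = tensor.shape
            cores, r_prev = [], 1
            mat = np.asarray(tensor)
            for n_k in dims[:-1]:
                mat = mat.reshape(r_prev * n_k, -1)     # unfold remaining modes
                U, s, Vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(s))               # truncate to the TT rank
                cores.append(U[:, :r].reshape(r_prev, n_k, r))
                mat = s[:r, None] * Vt[:r]              # carry remainder forward
                r_prev = r
            cores.append(mat.reshape(r_prev, dims[-1], 1))
            return cores

    Contracting the cores back together (e.g., with successive np.einsum calls) recovers the tensor up to the truncation error, while storage drops from the product of all mode sizes to a sum of r_{k-1} * n_k * r_k terms, which is the super-compression the text refers to.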

    Inference of the Kinetic Ising Model with Heterogeneous Missing Data

    We consider the problem of inferring a causality structure from multiple binary time series using the kinetic Ising model in datasets where a fraction of the observations is missing. We build on recent work on mean-field methods for the inference of the model with hidden spins and develop a pseudo-expectation-maximization algorithm that works even under severe data sparsity. The methodology relies on the Martin-Siggia-Rose path integral method with a second-order saddle-point solution, which makes it possible to calculate the log-likelihood in polynomial time, giving as output a maximum-likelihood estimate of the coupling matrix and of the missing observations. We also propose a recursive version of the algorithm, in which at every iteration some missing values are replaced by their maximum-likelihood estimates, and show that the method can be combined with sparsification schemes such as LASSO regularization or decimation. We test the performance of the algorithm on synthetic data and find interesting properties regarding its dependence on the heterogeneity of the observation frequency across spins, and on violations of the hypotheses required by the saddle-point approximation, such as the small-couplings limit and the assumption of statistical independence between couplings.
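    For orientation, when no observations are missing the likelihood of the kinetic Ising model factorizes into independent logistic regressions, one per spin; that fully observed baseline is what the pseudo-EM machinery above generalizes. A minimal sketch under that assumption (this is not the paper's MSR-based algorithm, and all names are illustrative):

        import numpy as np

        def simulate_kinetic_ising(J, T, rng):
            """Glauber dynamics: P(s_i(t+1) = +1 | s(t)) = sigmoid(2 * (J s(t))_i)."""
            n = J.shape[0]
            s = np.empty((T, n))
            s[0] = rng.choice([-1.0, 1.0], n)
            for t in range(T - 1):
                p_up = 1.0 / (1.0 + np.exp(-2.0 * J @ s[t]))
                s[t + 1] = np.where(rng.random(n) < p_up, 1.0, -1.0)
            return s

        def infer_couplings(s, lr=0.1, steps=2000):
            """Gradient ascent on the exact log-likelihood (logistic form)."""
            T, n = s.shape
            J = np.zeros((n, n))
            X, Y = s[:-1], s[1:]
            for _ in range(steps):
                grad = (Y - np.tanh(X @ J.T)).T @ X / (T - 1)   # dL/dJ
                J += lr * grad
            return J

    On sufficiently long series this recovers the coupling matrix up to sampling noise; a LASSO-style sparsification of the kind mentioned above would simply add an L1 penalty to the gradient update.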