
    A Hybrid MILP and IPM for Dynamic Economic Dispatch with Valve Point Effect

    Dynamic economic dispatch with valve-point effect (DED-VPE) is a non-convex and non-differentiable optimization problem that is difficult to solve efficiently. In this paper, a hybrid mixed integer linear programming (MILP) and interior point method (IPM), denoted MILP-IPM, is proposed to solve the DED-VPE problem, with the complicated transmission loss also included. Because DED-VPE is non-differentiable, classical derivative-based optimization methods cannot be applied directly. With the help of model reformulation, a differentiable non-linear programming (NLP) formulation is derived which can be solved directly by IPM. However, if DED-VPE is solved by IPM in a single step, the optimization easily becomes trapped in a poor local optimum, since the problem is non-convex with multiple local minima. To obtain a better solution, an MILP method is first used to solve DED-VPE without transmission loss, yielding a good initial point from which IPM improves the quality of the solution. Simulation results demonstrate the validity and effectiveness of the proposed MILP-IPM in solving DED-VPE.
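
    The non-differentiable term in question is the rectified-sinusoid valve-point component of the fuel cost. Below is a minimal sketch of that cost model, assuming the standard valve-point formulation from the economic dispatch literature; the coefficient values are hypothetical, not taken from the paper.

```python
import numpy as np

def fuel_cost_vpe(p, a, b, c, e, f, p_min):
    """Fuel cost with valve-point effect. The |e*sin(f*(p_min - p))| term
    is what makes the objective non-convex and non-differentiable."""
    return a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))

# Hypothetical coefficients for a single generating unit.
a, b, c, e, f, p_min = 150.0, 7.0, 0.008, 300.0, 0.035, 100.0
grid = np.linspace(p_min, 500.0, 1000)
costs = fuel_cost_vpe(grid, a, b, c, e, f, p_min)
print("cheapest sampled output:", grid[np.argmin(costs)])
```

    In a hybrid scheme of the kind described, the absolute-sine term is the part an MILP stage can approximate piecewise-linearly, while the smooth remainder suits a derivative-based IPM.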

    Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization

    Bilevel optimization problems are a challenging class of optimization problems containing two levels of optimization tasks, in which the optimal solutions to the lower-level problem become feasible candidates for the upper-level problem. This requirement makes such problems difficult to solve and has kept researchers busy devising methodologies that can handle them efficiently. Despite these efforts, there hardly exists an effective methodology capable of handling complex bilevel problems. In this paper, we introduce a bilevel evolutionary algorithm based on quadratic approximations (BLEAQ) of the optimal lower-level variables with respect to the upper-level variables. The approach can handle bilevel problems with different kinds of complexities in a relatively small number of function evaluations. Ideas from classical optimization are hybridized with evolutionary methods to produce an efficient optimization algorithm for generic bilevel problems. The efficacy of the algorithm is shown on two sets of test problems: the first is the recently proposed SMD test set, which contains problems with controllable complexities, and the second contains standard test problems collected from the literature. The proposed method is evaluated against two benchmarks, and the observed performance gain is significant.
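
    The central device, approximating the optimal lower-level variables as a quadratic function of the upper-level variables, can be sketched as a least-squares fit. The following one-dimensional toy (data and helper names are illustrative, not BLEAQ's actual implementation) fits xl*(xu) ≈ w0 + w1·xu + w2·xu².

```python
import numpy as np

def fit_quadratic_map(xu_samples, xl_samples):
    """Least-squares fit of xl* ~ w0 + w1*xu + w2*xu**2, approximating
    the lower-level optimal reaction from observed (xu, xl*) pairs."""
    xu = np.asarray(xu_samples)
    features = np.column_stack([np.ones_like(xu), xu, xu**2])
    coef, *_ = np.linalg.lstsq(features, np.asarray(xl_samples), rcond=None)
    return coef

def predict(coef, xu):
    return coef[0] + coef[1] * xu + coef[2] * xu**2

# Hypothetical data: the true lower-level optimum is xl*(xu) = xu**2.
xu = np.linspace(-2, 2, 20)
xl = xu**2 + 0.01 * np.random.default_rng(0).normal(size=20)
coef = fit_quadratic_map(xu, xl)
print(predict(coef, 1.5))  # close to 2.25
```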

    Neural Network Architecture Search with Differentiable Cartesian Genetic Programming for Regression

    The ability to design complex neural network architectures which enable effective training by stochastic gradient descent has been key to many achievements in the field of deep learning. However, developing such architectures remains a challenging and resource-intensive process full of trial-and-error iterations. All in all, the relation between the network topology and its ability to model the data remains poorly understood. We propose to encode neural networks with a differentiable variant of Cartesian Genetic Programming (dCGPANN) and present a memetic algorithm for architecture design: local searches with gradient descent learn the network parameters while evolutionary operators act on the dCGPANN genes, shaping the network architecture towards faster learning. Studying a particular instance of such a learning scheme, we are able to improve the starting feed-forward topology by learning how to rewire and prune links, adapt activation functions and introduce skip connections for chosen regression tasks. The evolved network architectures require less space for network parameters and reach, given the same amount of time, a significantly lower error on average.
    Comment: a short version of this was accepted as poster paper at GECCO 201
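
    The memetic scheme described, gradient-based local search for parameters interleaved with evolutionary variation of topology, fits a generic loop like the sketch below. The callables are assumed to be supplied by the caller; this is not the authors' dCGPANN code.

```python
import copy
import random

def memetic_search(population, local_train, mutate, fitness, generations=50):
    """Generic memetic loop: `local_train` refines an individual's weights
    (e.g. a few gradient-descent steps), `mutate` alters its topology genes,
    and `fitness` returns an error to minimize. All three are assumptions."""
    for _ in range(generations):
        population = [local_train(ind) for ind in population]  # lifetime learning
        scored = sorted(population, key=fitness)               # lower error first
        elite = scored[: len(scored) // 2]
        children = [mutate(copy.deepcopy(random.choice(elite))) for _ in elite]
        population = elite + children
    return min(population, key=fitness)
```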

    A Sequential Quadratic Programming Method for Constrained Multi-objective Optimization Problems

    In this article, a globally convergent sequential quadratic programming (SQP) method is developed for multi-objective optimization problems with inequality constraints. A feasible descent direction is obtained using a linear approximation of all objective functions as well as the constraint functions. The sub-problem at every iteration of the sequence has a feasible solution. A non-differentiable penalty function is used to deal with constraint violations. The method generates a descent sequence that converges to a critical point under the Mangasarian-Fromovitz constraint qualification along with some other mild assumptions. The method is compared with a selection of existing methods on a suitable set of test problems.
    Comment: 19 pages, 11 figures
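
    A common form of the per-iteration subproblem behind such feasible descent directions (a sketch consistent with the description above; the paper's exact formulation may differ) is the quadratic program

```latex
\min_{t \in \mathbb{R},\; d \in \mathbb{R}^n} \; t + \tfrac{1}{2}\lVert d \rVert^2
\quad \text{s.t.} \quad
\nabla f_i(x_k)^\top d \le t, \;\; i = 1,\dots,m, \qquad
g_j(x_k) + \nabla g_j(x_k)^\top d \le 0, \;\; j = 1,\dots,p,
```

    whose solution d is a descent direction for every objective simultaneously whenever t < 0, while the linearized constraints keep the step feasible to first order.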

    Embryo staging with weakly-supervised region selection and dynamically-decoded predictions

    To optimize clinical outcomes, fertility clinics must strategically select which embryos to transfer. Common selection heuristics are formulas expressed in terms of the durations required to reach various developmental milestones, quantities historically annotated manually by experienced embryologists based on time-lapse EmbryoScope videos. We propose a new method for automatic embryo staging that exploits several sources of structure in this time-lapse data. First, noting that in each image the embryo occupies only a small subregion, we jointly train a region proposal network with the downstream classifier to isolate the embryo. Notably, because we lack ground-truth bounding boxes, we weakly supervise the region proposal network, optimizing its parameters via reinforcement learning to improve the downstream classifier's loss. Moreover, noting that embryos reaching the blastocyst stage progress monotonically through earlier stages, we develop a dynamic-programming-based decoder that post-processes our predictions to select the most likely monotonic sequence of developmental stages. Our methods outperform vanilla residual networks and rival the best numbers in contemporary papers, as measured by both per-frame accuracy and transition prediction error, despite operating on less data than many.
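
    The monotonic decoding step is a small dynamic program. A minimal sketch, assuming per-frame stage log-probabilities and non-decreasing stages (the paper's decoder may differ in details):

```python
import numpy as np

def monotonic_decode(log_probs):
    """Given per-frame stage log-probabilities of shape (T, S), return the
    highest-scoring stage sequence that never moves backward through stages."""
    T, S = log_probs.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_probs[0]
    for t in range(1, T):
        for s in range(S):
            prev = int(np.argmax(dp[t - 1, : s + 1]))  # stay or advance, never regress
            dp[t, s] = dp[t - 1, prev] + log_probs[t, s]
            back[t, s] = prev
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```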

    Variational Optimization

    We discuss a general technique that can be used to form a differentiable bound on the optima of non-differentiable or discrete objective functions. We form a unified description of these methods and consider under which circumstances the bound is concave. In particular, we consider two concrete applications of the method, namely sparse learning and support vector classification.
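
    Concretely, the technique replaces min_x f(x) with the differentiable bound U(θ) = E_{x~p(x|θ)}[f(x)] ≥ min_x f(x) and minimizes U over the distribution parameters θ. A minimal sketch with a Gaussian search distribution and the score-function gradient (step size and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def vo_gradient_step(f, mu, sigma, lr=0.05, n_samples=256):
    """One descent step on U(mu) = E_{x~N(mu, sigma^2)}[f(x)], which is
    differentiable in mu even when f is not. Gradient via the
    log-derivative trick; mean subtraction is a variance-reducing baseline."""
    x = rng.normal(mu, sigma, n_samples)
    fx = np.vectorize(f)(x)
    grad = np.mean((fx - fx.mean()) * (x - mu) / sigma**2)
    return mu - lr * grad

f = lambda x: abs(x - 3.0)  # non-differentiable at its optimum
mu = 0.0
for _ in range(200):
    mu = vo_gradient_step(f, mu, sigma=1.0)
print(mu)  # approaches 3.0
```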

    Positional Cartesian Genetic Programming

    Cartesian Genetic Programming (CGP) has many modifications across a variety of implementations, such as recursive connections and node weights. Alternative genetic operators have also been proposed for CGP but have not been fully studied. In this work, we present a new form of genetic programming based on a floating-point representation. In this new form of CGP, called Positional CGP (PCGP), node positions are evolved. This allows many different genetic operators to be evaluated while retaining previous CGP improvements such as recurrency. Using nine benchmark problems from three different classes, we evaluate the optimal parameters for CGP and PCGP, including novel genetic operators.
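
    As a rough illustration only, a positional genome might carry a floating-point position per node, with connection genes constrained to point "earlier" in position. The encoding below is a hypothetical sketch for intuition, not the paper's actual representation.

```python
import random

def random_pcgp_genome(n_nodes, n_functions):
    """Hypothetical Positional-CGP-style genome: each node has an evolved
    floating-point position; its input genes are positions strictly smaller
    than its own, later resolved to the nearest preceding node."""
    genome = []
    for _ in range(n_nodes):
        pos = random.random()  # evolved position in [0, 1)
        genome.append({"pos": pos,
                       "fn": random.randrange(n_functions),
                       "inputs": [random.random() * pos for _ in range(2)]})
    return sorted(genome, key=lambda node: node["pos"])
```

    Because positions are real-valued, standard real-coded operators (blend crossover, Gaussian mutation) become applicable to the structure itself, which is what enables comparing many genetic operators in one framework.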

    Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search

    Universal induction relies on some general search procedure that is doomed to be inefficient. One possibility to achieve both generality and efficiency is to specialize this procedure with respect to any given narrow task. However, complete specialization, which implies a direct mapping from task parameters to solutions (discriminative models) without search, is not always possible. In this paper, partial specialization of general search is considered in the form of genetic algorithms (GAs) with a specialized crossover operator. We perform a feasibility study of this idea, implementing such an operator in the form of a deep feedforward neural network. GAs with trainable crossover operators are compared with the result of complete specialization, which is also represented as a deep neural network. Experimental results show that specialized GAs can be more efficient than both general GAs and discriminative models.
    Comment: AGI 2017 proceedings. The final publication is available at link.springer.co
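
    The trainable operator itself can be pictured as a small feedforward net that maps two parent genotypes to a child genotype. The sketch below is a toy under assumed shapes and an offline training signal; it is not the paper's exact architecture.

```python
import numpy as np

class NeuralCrossover:
    """Toy feedforward net mapping two parent genotypes to a child genotype.
    Dimensions, depth, and how (parents -> good child) training pairs are
    collected are all assumptions for illustration."""
    def __init__(self, dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (2 * dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, dim))

    def __call__(self, parent_a, parent_b):
        h = np.tanh(np.concatenate([parent_a, parent_b]) @ self.w1)
        return h @ self.w2  # child genotype

cx = NeuralCrossover(dim=8)
child = cx(np.zeros(8), np.ones(8))
```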

    Estimating the Region of Attraction Using Polynomial Optimization: a Converse Lyapunov Result

    In this paper, we propose an iterative method for using SOS programming to estimate the region of attraction of a polynomial vector field, the conjectured convergence of which necessitates the existence of polynomial Lyapunov functions whose sublevel sets approximate the true region of attraction arbitrarily well. The main technical result of the paper is the proof of existence of such a Lyapunov function. Specifically, we use the Hausdorff distance metric to analyze convergence and, in the main theorem, demonstrate that the existence of an n-times continuously differentiable maximal Lyapunov function implies that for any ε > 0, there exists a polynomial Lyapunov function and an associated sublevel set which together prove stability of a set within ε Hausdorff distance of the true region of attraction. The proposed iterative method and its convergence are illustrated with a numerical example.
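
    For orientation, the textbook sublevel-set certificate that SOS programming searches for in this setting (a sketch of the standard conditions, not the paper's exact formulation): if a polynomial V satisfies

```latex
V(0) = 0, \qquad V(x) > 0 \;\; \forall x \neq 0, \qquad
\nabla V(x)^{\top} f(x) < 0 \;\; \forall x \in \{\, x : V(x) \le \gamma \,\} \setminus \{0\},
```

    then the sublevel set {x : V(x) ≤ γ} is an invariant inner estimate of the region of attraction of the system ẋ = f(x); a typical iteration alternates between enlarging γ and re-fitting V to grow the estimate.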

    Deep Learning: Our Miraculous Year 1990-1991

    In 2020, we will celebrate that many of the basic ideas behind the deep learning revolution were published three decades ago, within fewer than 12 months, in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich. Back then, few people were interested, but a quarter of a century later, neural networks based on these ideas were running on over 3 billion devices such as smartphones, used many billions of times per day and consuming a significant fraction of the world's compute.
    Comment: 37 pages, 188 references, based on work of 4 Oct 201