
    Optimal Phase Swapping in Low Voltage Distribution Networks Based on Smart Meter Data and Optimization Heuristics

    In this paper, a modified version of the Harmony Search (HS) algorithm is proposed as a novel tool for phase swapping in low voltage distribution networks, where the objective is to determine the phase to which each load should be connected in order to reduce the unbalance when all phases are added into the neutral conductor. Unbalanced loads deteriorate power quality and increase investment and operation costs. A correct assignment is a direct, effective way to prevent voltage peaks and network outages. The main contribution of this paper is an optimization model that allocates consumers to phases according to their individual consumption in the low-voltage distribution network, considering mono- and bi-phase connections and using real hourly load patterns, which heavily increases the computational complexity of the resulting combinatorial optimization problem. For this purpose, a novel metric function is defined in the proposed scheme. The performance of the HS algorithm has been compared with a classical Genetic Algorithm (GA). The presented results show that HS outperforms GA not only in terms of solution quality but also in convergence rate, reducing the computational complexity of the proposed scheme while supporting mono- and bi-phase connections. This paper includes partial results of the UPGRID project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 646.531 (for further information see http://upgrid.eu), as well as from the Basque Government through the ELKARTEK programme (BID3A and BID3ABI projects).
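    The phase-swapping idea above can be sketched with a minimal Harmony Search loop. This is an illustrative reconstruction, not the paper's method: the unbalance metric here (hourly spread between the most and least loaded phase) is a simplified stand-in for the paper's metric function, only mono-phase connections are modelled, and all parameter values are assumptions.

    ```python
    import random

    def unbalance(assignment, loads):
        """Sum over hours of the spread between the most and least loaded
        phase -- a simplified stand-in for the paper's metric function."""
        total = 0.0
        for hour in range(len(loads[0])):
            phase_sum = [0.0, 0.0, 0.0]
            for i, phase in enumerate(assignment):
                phase_sum[phase] += loads[i][hour]
            total += max(phase_sum) - min(phase_sum)
        return total

    def harmony_search(loads, memory_size=10, iterations=500,
                       hmcr=0.9, par=0.3, seed=0):
        """Minimal Harmony Search over phase assignments (one phase per load).
        loads[i][h] is the hourly consumption of consumer i at hour h."""
        rng = random.Random(seed)
        n = len(loads)
        memory = [[rng.randrange(3) for _ in range(n)]
                  for _ in range(memory_size)]
        for _ in range(iterations):
            new = []
            for i in range(n):
                if rng.random() < hmcr:            # memory consideration
                    value = rng.choice(memory)[i]
                    if rng.random() < par:         # pitch adjustment
                        value = rng.randrange(3)
                else:                              # random consideration
                    value = rng.randrange(3)
                new.append(value)
            # Replace the worst harmony if the new one is better.
            worst = max(range(memory_size),
                        key=lambda k: unbalance(memory[k], loads))
            if unbalance(new, loads) < unbalance(memory[worst], loads):
                memory[worst] = new
        return min(memory, key=lambda m: unbalance(m, loads))
    ```

    With six identical consumers, for example, the search drives the assignment toward two consumers per phase, reducing the unbalance relative to placing every load on a single phase.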

    Assessing hyper parameter optimization and speedup for convolutional neural networks

    The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks composed of many layers. Convolutional neural networks (CNNs) are one such architecture, offering further opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyperparameters need to be specified, which is challenging and complex due to their large number. A substantial amount of computational power and processing time is required to determine the optimal hyperparameters that define a model yielding good results. This article provides a survey of hyperparameter search and optimization methods for CNN architectures.
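    One of the simplest methods such surveys cover is random search over a hyperparameter space. The sketch below assumes a hypothetical search space (the parameter names and ranges are illustrative, not from the article) and a caller-supplied `evaluate` function standing in for training and validating a CNN.

    ```python
    import math
    import random

    # Hypothetical search space for a small CNN; names/ranges are illustrative.
    SPACE = {
        "learning_rate": (1e-4, 1e-1),        # sampled log-uniformly
        "batch_size":    [32, 64, 128, 256],
        "num_filters":   [16, 32, 64],
        "dropout":       (0.0, 0.5),
    }

    def sample_config(rng):
        """Draw one random configuration from SPACE."""
        lo, hi = SPACE["learning_rate"]
        return {
            "learning_rate": math.exp(rng.uniform(math.log(lo), math.log(hi))),
            "batch_size": rng.choice(SPACE["batch_size"]),
            "num_filters": rng.choice(SPACE["num_filters"]),
            "dropout": rng.uniform(*SPACE["dropout"]),
        }

    def random_search(evaluate, trials=20, seed=0):
        """Evaluate `trials` random configurations; return (best_score, config).
        `evaluate` would train the CNN and return validation accuracy."""
        rng = random.Random(seed)
        best = None
        for _ in range(trials):
            cfg = sample_config(rng)
            score = evaluate(cfg)
            if best is None or score > best[0]:
                best = (score, cfg)
        return best
    ```

    Sampling the learning rate log-uniformly rather than uniformly is the usual choice, since its useful values span several orders of magnitude.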

    Metaheuristic Algorithms for Convolution Neural Network

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolutional neural networks (CNNs), a prominent deep learning method, are still rarely investigated. Deep learning is a type of machine learning that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing a CNN for classifying the MNIST and CIFAR datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, they also improve accuracy (by up to 7.14 percent). Comment: Article ID 1537325, 13 pages. Received 29 January 2016; Revised 15 April 2016; Accepted 10 May 2016. Academic Editor: Martin Hagan. Published by Hindawi in Computational Intelligence and Neuroscience, Volume 2016.
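    Of the three metaheuristics named above, simulated annealing is the easiest to sketch in isolation. The generic loop below is an assumption-laden illustration, not the paper's implementation: in the paper's setting `loss` would be the CNN's training error and `x` a subset of its weights, whereas here both are arbitrary.

    ```python
    import math
    import random

    def simulated_annealing(loss, x0, step=0.1, t0=1.0, cooling=0.99,
                            iterations=1000, seed=0):
        """Generic simulated annealing over a real-valued parameter vector.
        Returns the best vector found and its loss."""
        rng = random.Random(seed)
        x, fx, t = list(x0), loss(x0), t0
        best, fbest = list(x), fx
        for _ in range(iterations):
            # Neighbour move: perturb one coordinate with Gaussian noise.
            cand = list(x)
            i = rng.randrange(len(cand))
            cand[i] += rng.gauss(0.0, step)
            fc = loss(cand)
            # Accept improvements always; worse moves with Boltzmann probability.
            if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                     # geometric cooling schedule
        return best, fbest
    ```

    The temperature schedule controls the trade-off the abstract reports: more iterations and slower cooling cost computation time but improve the quality of the minimum found.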

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest of researchers and practitioners across multiple disciplines. FNN optimization is viewed from various perspectives: optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
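    The gradient-free weight optimization the review surveys can be illustrated with a toy example: evolving the weights of a tiny 2-2-1 FNN to fit XOR with a simple elitist evolutionary loop. Everything here (network size, encoding, mutation scheme, parameter values) is an assumption for illustration, not a method from the review.

    ```python
    import math
    import random

    def fnn(weights, x):
        """Tiny 2-2-1 tanh network; `weights` is a flat 9-vector:
        [w11, w12, w21, w22, b1, b2, v1, v2, b_out]."""
        h1 = math.tanh(weights[0] * x[0] + weights[1] * x[1] + weights[4])
        h2 = math.tanh(weights[2] * x[0] + weights[3] * x[1] + weights[5])
        return math.tanh(weights[6] * h1 + weights[7] * h2 + weights[8])

    # XOR with +/-1 targets to match the tanh output range.
    XOR = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], -1)]

    def mse(weights):
        return sum((fnn(weights, x) - y) ** 2 for x, y in XOR) / len(XOR)

    def evolve(pop_size=30, generations=200, sigma=0.3, seed=1):
        """Elitist evolution of the weight vector: keep the better half,
        refill with Gaussian mutations of random survivors. No gradients."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-1, 1) for _ in range(9)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=mse)
            parents = pop[:pop_size // 2]
            children = [[w + rng.gauss(0, sigma)
                         for w in rng.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=mse)
    ```

    Because the parents always survive, the best error is monotonically non-increasing across generations, which is the property that makes such population methods usable where gradients are unavailable or unreliable.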