Recent trends of the most used metaheuristic techniques for distribution network reconfiguration
Distribution network reconfiguration (DNR) remains a good option for reducing technical losses in a
distribution power grid. However, this non-linear combinatorial problem is hard to solve with exact
methods on large distribution networks, which require long computation times. For this type of problem,
many researchers prefer metaheuristic techniques because of their convergence speed, near-optimal
solutions, and simple programming. Some literature reviews specialize in the optimization of power
network reconfiguration and try to cover most techniques. This breadth, however, does not allow a
detailed account of how each technique is used, which is important for identifying trends. The
contributions of this paper are three-fold. First, it presents the objective functions and constraints
used in DNR with the most used metaheuristics. Second, it reviews the most important techniques, such as
particle swarm optimization (PSO), genetic algorithms (GA), simulated annealing (SA), ant colony
optimization (ACO), immune algorithms (IA), and tabu search (TS). Finally, it presents the trend of each
technique from 2011 to 2016. This paper will be useful for researchers interested in recent advances in
these metaheuristics applied to DNR who wish to continue developing better algorithms and improved
solutions for the topic.
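To make the kind of metaheuristic this abstract surveys concrete, here is a minimal sketch of one of the listed techniques, simulated annealing, applied to a toy binary configuration problem. Everything here is illustrative: the function names (`simulated_annealing`, `flip_one`), the cooling schedule, and the toy cost are assumptions, not from the paper; a real DNR objective would evaluate power-flow losses over switch states instead.

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing loop (hypothetical helper, not from the paper)."""
    best = state
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling  # geometric cooling schedule
    return best

random.seed(0)

# Toy stand-in for a DNR objective: match a target open/closed switch pattern.
target = [1, 0, 1, 1, 0, 1]
cost = lambda s: sum(a != b for a, b in zip(s, target))

def flip_one(s):
    """Neighbor move: toggle one randomly chosen switch."""
    j = random.randrange(len(s))
    return s[:j] + [1 - s[j]] + s[j + 1:]

best = simulated_annealing(cost, [0] * 6, flip_one)
print(cost(best))  # 0: the toy optimum is found
```

The accept-worse-moves step is what distinguishes SA from greedy search and is why the abstract groups it with the other stochastic metaheuristics: early high-temperature moves let the search escape local minima of the combinatorial landscape.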
Optimization in Networks
The recent surge in the network modeling of complex systems has set the stage
for a new era in the study of fundamental and applied aspects of optimization
in collective behavior. This Focus Issue presents an extended view of the state
of the art in this field and includes articles from a large variety of domains
where optimization manifests itself, including physical, biological, social,
and technological networked systems.

Comment: Opening article of the CHAOS Focus Issue "Optimization in Networks",
available at http://link.aip.org/link/?CHA/17/2/htmlto
Finding community structure in networks using the eigenvectors of matrices
We consider the problem of detecting communities or modules in networks,
groups of vertices with a higher-than-average density of edges connecting them.
Previous work indicates that a robust approach to this problem is the
maximization of the benefit function known as "modularity" over possible
divisions of a network. Here we show that this maximization process can be
written in terms of the eigenspectrum of a matrix we call the modularity
matrix, which plays a role in community detection similar to that played by the
graph Laplacian in graph partitioning calculations. This result leads us to a
number of possible algorithms for detecting community structure, as well as
several other results, including a spectral measure of bipartite structure in
networks and a new centrality measure that identifies those vertices that
occupy central positions within the communities to which they belong. The
algorithms and measures proposed are illustrated with applications to a variety
of real-world complex networks.

Comment: 22 pages, 8 figures, minor corrections in this version.
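The spectral method this abstract describes can be sketched directly: build the modularity matrix B = A - k kᵀ / 2m (where k is the degree vector and m the number of edges) and split the vertices by the sign of its leading eigenvector. The method follows the abstract; the example graph, two triangles joined by a single edge, is my own illustration.

```python
import numpy as np

def leading_eigenvector_split(A):
    """Two-way community split by the sign of the leading eigenvector
    of the modularity matrix B = A - k k^T / 2m."""
    k = A.sum(axis=1)
    m2 = k.sum()                    # 2m: twice the number of edges
    B = A - np.outer(k, k) / m2     # modularity matrix
    vals, vecs = np.linalg.eigh(B)
    v = vecs[:, np.argmax(vals)]    # eigenvector of the largest eigenvalue
    return (v >= 0).astype(int)     # community label per vertex

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = leading_eigenvector_split(A)
print(labels)  # the two triangles receive different labels
```

The sign of the eigenvector is arbitrary, so the labels may swap between runs of different LAPACK builds, but the partition itself is stable; this is the sense in which the modularity matrix plays the role the graph Laplacian plays in conventional graph partitioning.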
On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks
Theoretical analysis of the error landscape of deep neural networks has
garnered significant interest in recent years. In this work, we theoretically
study the importance of noise in the trajectories of gradient descent towards
optimal solutions in multi-layer neural networks. We show that adding noise (in
different ways) to a neural network while training increases the rank of the
product of weight matrices of a multi-layer linear neural network. We thus
study how adding noise can assist reaching a global optimum when the product
matrix is full-rank (under certain conditions). We establish a theoretical
connection between the noise injected into the neural network - whether into
the gradient, into the architecture, or into the input/output of the network -
and the rank of the product of weight matrices. We corroborate our theoretical
findings with empirical results.

Comment: 4 pages + 1 figure (main, excluding references), 5 pages + 4 figures
(appendix).
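The abstract's central object, the rank of the product of weight matrices in a deep linear network, can be illustrated numerically. This is a toy sketch of the general phenomenon, not the paper's construction: the dimensions, noise scale, and rank-deficient first layer are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# A deep linear network W3 @ W2 @ W1 whose first layer is rank-1,
# so the whole product is rank-1.
u = rng.standard_normal((d, 1))
W1 = u @ u.T                       # rank-1 weight matrix
W2 = rng.standard_normal((d, d))
W3 = rng.standard_normal((d, d))
product = W3 @ W2 @ W1
print(np.linalg.matrix_rank(product))   # 1

# Perturbing each weight matrix with small Gaussian noise makes the
# product full-rank almost surely, mirroring the rank-increase effect.
noise = lambda: 1e-3 * rng.standard_normal((d, d))
noisy = (W3 + noise()) @ (W2 + noise()) @ (W1 + noise())
print(np.linalg.matrix_rank(noisy))     # 8
```

Generic perturbations destroy the exact linear dependence among rows, which is why noise added to weights (and, per the abstract, to gradients or inputs/outputs) raises the rank of the product matrix almost surely.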