Neural networks in geophysical applications
Neural networks are increasingly popular in geophysics.
Because they are universal approximators, these
tools can approximate any continuous function to
arbitrary precision. Hence, they may yield important
contributions to solving a variety of geophysical problems.
However, knowledge of many methods and techniques
recently developed to increase the performance
and to facilitate the use of neural networks does not seem
to be widespread in the geophysical community. Therefore,
the power of these tools has not yet been explored to
their full extent. In this paper, techniques are described
for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size
and architecture.
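One of the generalization techniques such surveys typically cover is early stopping: halt training once validation loss stops improving. The sketch below is a generic illustration (class name, patience value, and loss curve are all assumptions, not taken from the paper):

```python
class EarlyStopping:
    """Stop training once validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Simulated validation-loss curve: improves, then plateaus.
losses = [1.0, 0.8, 0.7, 0.69, 0.70, 0.71, 0.72]
stopper = EarlyStopping(patience=3)
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
```

Here training would be cut off at the epoch where the loss has failed to improve for three consecutive epochs, keeping the weights from the best epoch rather than the last one.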
Basic Enhancement Strategies When Using Bayesian Optimization for Hyperparameter Tuning of Deep Neural Networks
Compared to traditional machine learning models, deep neural networks (DNN) are known to be highly sensitive to the choice of hyperparameters. While the time and effort required for manual tuning have been rapidly decreasing for well-developed and commonly used DNN architectures, DNN hyperparameter optimization will undoubtedly continue to be a major burden whenever a new DNN architecture needs to be designed, a new task needs to be solved, a new dataset needs to be addressed, or an existing DNN needs to be improved further. For hyperparameter optimization of general machine learning problems, numerous automated solutions have been developed, some of the most popular of which are based on Bayesian Optimization (BO). In this work, we analyze four fundamental strategies for enhancing BO when it is used for DNN hyperparameter optimization. Specifically, diversification, early termination, parallelization, and cost function transformation are investigated. Based on the analysis, we provide a simple yet robust algorithm for DNN hyperparameter optimization - DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO mostly outperformed well-known solutions including GP-Hedge, BOHB, and the speed-up variants that use the Median Stopping Rule or Learning Curve Extrapolation. In fact, DEEP-BO consistently provided the top, or at least close to the top, performance over all the benchmark types that we tested. This indicates that DEEP-BO is a robust solution compared to the existing solutions. The DEEP-BO code is publicly available at https://github.com/snu-adsl/DEEP-BO.
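One of the early-termination baselines the abstract mentions, the Median Stopping Rule, is simple enough to sketch: a running trial is stopped if its best result so far is worse than the median of the completed trials' results at the same step. This is a generic illustration (function names and the toy curves are assumptions, not the DEEP-BO implementation):

```python
import statistics

def should_stop(curve, completed_curves, step):
    """Stop a trial at `step` if its best loss so far is worse than the
    median of the completed trials' losses at the same step."""
    if not completed_curves:
        return False                       # nothing to compare against yet
    best_so_far = min(curve[: step + 1])
    median_at_step = statistics.median(c[step] for c in completed_curves)
    return best_so_far > median_at_step

# Three finished trials (loss per epoch) and one clearly lagging trial.
done = [
    [0.9, 0.7, 0.5],
    [0.8, 0.6, 0.4],
    [1.0, 0.9, 0.8],
]
running = [1.1, 1.0]
lagging = should_stop(running, done, step=1)   # True for this toy example
```

The appeal of the rule is that it is model-free: unlike learning-curve extrapolation, it needs no assumptions about the shape of the loss curve.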
Nature-inspired algorithms for solving some hard numerical problems
Optimisation is a branch of mathematics that was developed to find the optimal solutions,
among all the possible ones, for a given problem. Applications of optimisation techniques
are currently employed in engineering, computing, and industrial problems. Therefore, optimisation is a very active research area, leading to the publication of a large number of
methods for solving specific problems to optimality.
This dissertation focuses on the adaptation of two nature-inspired algorithms that, based
on optimisation techniques, are able to compute approximations for zeros of polynomials
and roots of non-linear equations and systems of non-linear equations.
Although many iterative methods for finding all the roots of a given function already
exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results
due to the accumulation of rounding errors; (b) good initial approximations to the
roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives,
which, besides being computationally intensive, is not always possible.
The drawbacks previously mentioned served as motivation for the use of Particle Swarm
Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are
known, respectively, for their ability to explore high-dimensional spaces (not requiring good
initial approximations) and for their capability to model complex problems. Moreover,
neither method requires repeated deflations or derivative information.
The algorithms were described throughout this document and tested using a suite of
hard numerical problems in science and engineering. The results were compared with
several results available in the literature and with the well-known Durand–Kerner method,
showing that both algorithms are effective at solving the numerical problems considered.
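The PSO-based root-finding described above can be sketched in a few lines: a root of f is found by minimizing |f(x)|, which requires neither derivatives nor a good initial guess. This is a minimal one-dimensional illustration; the inertia and acceleration coefficients (0.7, 1.5) are common textbook values, not the dissertation's settings:

```python
import random

def pso_root(f, lo=-10.0, hi=10.0, n=30, iters=200, seed=0):
    """Approximate a root of f by minimizing |f(x)| with a basic PSO."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]    # particle positions
    vs = [0.0] * n                                  # particle velocities
    pbest = xs[:]                                   # personal bests
    gbest = min(xs, key=lambda x: abs(f(x)))        # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if abs(f(xs[i])) < abs(f(pbest[i])):
                pbest[i] = xs[i]
                if abs(f(xs[i])) < abs(f(gbest)):
                    gbest = xs[i]
    return gbest

root = pso_root(lambda x: x**2 - 2.0)   # approximates +sqrt(2) or -sqrt(2)
```

Note that the swarm only ever evaluates f itself, never f', which is exactly the property that motivates the approach for functions whose derivatives are expensive or unavailable.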
Multilocal programming and applications
Multilocal programming aims to identify all local minimizers of unconstrained
or constrained nonlinear optimization problems. The multilocal programming
theory relies on global optimization strategies combined with simple ideas
inspired by deflection or stretching techniques to avoid convergence to
already detected local minimizers. The most widely used methods for solving this type of problem
are based on stochastic procedures and a population of solutions. In general,
population-based methods are computationally expensive but rather reliable in identifying
all local solutions. In this chapter, a review on recent techniques for multilocal
programming is presented. Some real-world multilocal programming problems
based on chemical engineering process design applications are described.
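The deflection idea mentioned above can be illustrated in a few lines: once a minimizer x* is found, the objective is divided by a factor that vanishes at x*, so the transformed function diverges there and a search method is repelled from it while the remaining minimizers stay intact. This is a one-dimensional sketch; `lam` is an illustrative tuning parameter, not a value from the chapter:

```python
import math

def deflect(f, found, lam=1.0):
    """Return a deflected objective that diverges near already-found minimizers."""
    def g(x):
        val = f(x)
        for xstar in found:
            # tanh(lam*|x - xstar|) -> 0 as x -> xstar, so g blows up there,
            # while far from xstar the factor tends to 1 and g(x) ~ f(x).
            val /= math.tanh(lam * abs(x - xstar))
        return val
    return g

f = lambda x: (x**2 - 1.0) ** 2        # minimizers at x = -1 and x = 1
g = deflect(f, found=[1.0])
# g is inflated near the detected minimizer x = 1,
# but still vanishes at the undetected one, x = -1.
```

A subsequent global search on g can therefore rediscover only the minimizers that have not been found yet, which is the mechanism population-based multilocal methods exploit.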
Hybrid metaheuristic for combinatorial optimization based on immune network for optimization and VNS
Metaheuristics for optimization based on immune network theory are often highlighted for their ability to maintain the diversity of candidate solutions in the population, allowing greater coverage of the search space. This work, however, shows that algorithms derived from the aiNET family for solving combinatorial problems may not provide an adequate strategy for search-space exploration, leading to premature convergence to local minima. To address this issue, a hybrid metaheuristic called VNS-aiNET is proposed, integrating aspects of the COPT-aiNET algorithm with characteristics of the trajectory metaheuristic Variable Neighborhood Search (VNS), as well as a new fitness function, which makes it possible to escape from local minima and enables greater exploration of the search space. The proposed metaheuristic is evaluated on a scheduling problem widely studied in the literature. The experiments show that the proposed hybrid metaheuristic exhibits convergence superior to two approaches of the aiNET family and to the reference algorithms in the literature. In contrast, the solutions present in the resulting immunological memory have less diversity when compared to the aiNET family approaches.
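The VNS loop that the hybrid borrows can be sketched compactly: shake the incumbent within neighborhood k, run a local search, and reset k to 1 only on improvement, otherwise widen the neighborhood. The toy objective below (sorting a permutation by swaps) is purely illustrative; it is not the scheduling problem from the paper:

```python
import random

def cost(perm):
    """Toy objective: total displacement of each element from its index."""
    return sum(abs(v - i) for i, v in enumerate(perm))

def shake(perm, k, rng):
    """Random jump in neighborhood k: apply k random swaps."""
    p = perm[:]
    for _ in range(k):
        i, j = rng.randrange(len(p)), rng.randrange(len(p))
        p[i], p[j] = p[j], p[i]
    return p

def local_search(perm):
    """Simple swap-descent until no improving swap exists."""
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                cand = perm[:]
                cand[i], cand[j] = cand[j], cand[i]
                if cost(cand) < cost(perm):
                    perm, improved = cand, True
    return perm

def vns(start, k_max=3, iters=50, seed=0):
    rng = random.Random(seed)
    best = local_search(start)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k, rng))
            if cost(cand) < cost(best):
                best, k = cand, 1        # improvement: restart at k = 1
            else:
                k += 1                   # no luck: widen the neighborhood
    return best
```

The systematic change of neighborhood size is what lets the trajectory escape local minima that a fixed-neighborhood descent would be stuck in, which is the property VNS-aiNET grafts onto the immune-network population.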
Chaotic Sand Cat Swarm Optimization
In this study, a new hybrid metaheuristic algorithm named Chaotic Sand Cat Swarm Optimization (CSCSO) is proposed for constrained and complex optimization problems. This algorithm
combines the features of the recently introduced SCSO with the concept of chaos. The basic aim of
the proposed algorithm is to integrate the chaos feature of non-recurring locations into SCSO’s core
search process to improve global search performance and convergence behavior. Thus, the randomness
in SCSO can be replaced by a chaotic map with similar randomness but better statistical
and dynamic properties. Beyond these advantages, issues of low search consistency, trapping in local optima,
inefficient search, and low population diversity are also addressed. In the proposed CSCSO,
several chaotic maps are implemented for more efficient behavior in the exploration and exploitation
phases. Experiments are conducted on a wide variety of well-known test functions to increase the
reliability of the results, as well as on real-world problems. In this study, the proposed algorithm was
applied to a total of 39 functions and multidisciplinary problems. It found 76.3% better responses
compared to the best-developed SCSO variant and the other chaotic-based metaheuristics tested. This
extensive experiment indicates that the CSCSO algorithm excels at providing acceptable results.
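The core idea above, replacing uniform random draws with a chaotic sequence, can be sketched with the logistic map, a common choice in chaotic metaheuristics: deterministic, non-repeating, and with different statistical structure than a uniform RNG. The values below are illustrative, not SCSO parameters:

```python
def logistic_stream(x0=0.7, r=4.0):
    """Yield a chaotic sequence in (0, 1) from the logistic map x <- r*x*(1-x).

    At r = 4 the map is fully chaotic: nearby seeds diverge exponentially
    and the orbit never settles into a cycle for typical seeds.
    """
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

gen = logistic_stream()
vals = [next(gen) for _ in range(5)]
# Each value could stand in for a rand() call in a position-update rule,
# e.g. new_pos = best_pos - chaotic_value * |candidate - current|.
```

Because the sequence is fully determined by the seed x0, runs are reproducible, while the heavy visitation of the interval's edges (the logistic map's invariant density peaks near 0 and 1) is the statistical property usually credited with improving exploration.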