865 research outputs found

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it outlines research challenges for future work to cope with the present information-processing era.
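    The review above contrasts gradient-based training with metaheuristic weight optimization. As a purely illustrative, minimal sketch of the latter idea (not any method from the review), the following snippet trains a tiny one-hidden-layer FNN with a (1+1) evolution strategy instead of backpropagation; the network size, mutation step, and toy data are all assumptions.

```python
# Minimal sketch (not from the review): optimizing the weights of a tiny
# feedforward network with a (1+1) evolution strategy instead of backpropagation.
# All sizes and settings below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

def unpack(theta, n_hidden=8):
    """Split a flat parameter vector into the weights of a 1-8-1 FNN."""
    w1 = theta[:n_hidden].reshape(1, n_hidden)
    b1 = theta[n_hidden:2 * n_hidden]
    w2 = theta[2 * n_hidden:3 * n_hidden].reshape(n_hidden, 1)
    b2 = theta[3 * n_hidden]
    return w1, b1, w2, b2

def mse(theta):
    """Mean squared error of the network defined by theta on the toy data."""
    w1, b1, w2, b2 = unpack(theta)
    hidden = np.tanh(X @ w1 + b1)          # hidden activation nodes
    pred = hidden @ w2 + b2
    return float(np.mean((pred - y) ** 2))

# (1+1)-ES: keep a single parent, mutate it, accept the offspring if it is better.
theta = rng.normal(0.0, 0.5, size=3 * 8 + 1)
best = mse(theta)
sigma = 0.3                                 # fixed mutation step (an assumption)
for _ in range(5000):
    candidate = theta + rng.normal(0.0, sigma, size=theta.shape)
    cand_mse = mse(candidate)
    if cand_mse <= best:
        theta, best = candidate, cand_mse

print("final training MSE:", best)
```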

    Hybrid ant colony system algorithm for static and dynamic job scheduling in grid computing

    Grid computing is a distributed system with heterogeneous infrastructures. The resource management system (RMS) is one of its most important components and has a great influence on grid computing performance. The main part of the RMS is the scheduling algorithm, which is responsible for mapping submitted tasks to available resources. The scheduling problem is NP-complete, and therefore an intelligent algorithm is required to achieve a better scheduling solution. One prominent intelligent algorithm is the ant colony system (ACS), which is widely implemented to solve various types of scheduling problems. However, ACS suffers from stagnation in medium- and large-size grid computing systems. ACS is based on exploitation and exploration mechanisms; its exploitation is sufficient, but its exploration is deficient, since it relies on a random approach without any strategy. This study proposed four hybrid algorithms combining ACS, the Genetic Algorithm (GA), and Tabu Search (TS) to enhance ACS performance: ACS(GA), ACS+GA, ACS(TS), and ACS+TS. The proposed hybrids enhance ACS in terms of the exploration mechanism and solution refinement by implementing low- and high-level hybridization of ACS, GA, and TS. The proposed algorithms were evaluated against twelve metaheuristic algorithms in static (expected-time-to-compute model) and dynamic (distribution-pattern) grid computing environments. A simulator called ExSim was developed to mimic the static and dynamic nature of grid computing. Experimental results show that the proposed algorithms outperform ACS in terms of best makespan values. ACS(GA), ACS+GA, ACS(TS), and ACS+TS outperform ACS by 0.35%, 2.03%, 4.65%, and 6.99%, respectively, in the static environment. In the dynamic environment, ACS(GA), ACS+GA, ACS+TS, and ACS(TS) outperform ACS by 0.01%, 0.56%, 1.16%, and 1.26%, respectively. The proposed algorithms can be used to schedule tasks in grid computing with better makespan performance.
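    As a rough illustration of the kind of low-level hybridization discussed above (explicitly not the study's ACS(GA), ACS+GA, ACS(TS), or ACS+TS algorithms), the sketch below builds task-to-resource assignments with a basic ant colony system over an expected-time-to-compute (ETC) matrix and adds a GA-style crossover/mutation step as an extra exploration operator; all parameter values and helper names are assumptions.

```python
# Hedged sketch of hybridizing an ant colony system (ACS) with GA-style
# refinement for task-to-resource scheduling; problem sizes and parameters
# are assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_resources = 30, 5
etc = rng.uniform(5, 50, size=(n_tasks, n_resources))   # expected time to compute

def makespan(assign):
    """Makespan = finishing time of the busiest resource."""
    loads = np.zeros(n_resources)
    for t, r in enumerate(assign):
        loads[r] += etc[t, r]
    return loads.max()

pheromone = np.ones((n_tasks, n_resources))
heuristic = 1.0 / etc                                    # prefer faster resources
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 10
best, best_cost = None, np.inf

for it in range(100):
    # Each ant picks a resource per task with probability ~ pheromone^alpha * heuristic^beta.
    probs = (pheromone ** alpha) * (heuristic ** beta)
    probs /= probs.sum(axis=1, keepdims=True)
    ants = []
    for _ in range(n_ants):
        assign = np.array([rng.choice(n_resources, p=probs[t]) for t in range(n_tasks)])
        ants.append((makespan(assign), assign))
    ants.sort(key=lambda a: a[0])

    # GA-style exploration step: uniform crossover + mutation of the two best ants.
    _, p1 = ants[0]
    _, p2 = ants[1]
    mask = rng.random(n_tasks) < 0.5
    child = np.where(mask, p1, p2)
    mut = rng.random(n_tasks) < 0.05
    child[mut] = rng.integers(0, n_resources, size=mut.sum())
    candidates = ants + [(makespan(child), child)]

    cost, sol = min(candidates, key=lambda a: a[0])
    if cost < best_cost:
        best_cost, best = cost, sol

    # Global pheromone update along the best-so-far assignment.
    pheromone *= (1 - rho)
    for t, r in enumerate(best):
        pheromone[t, r] += 1.0 / best_cost

print("best makespan:", round(best_cost, 2))
```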

    Adaptação automática de algoritmos de otimização metaheurística [Automatic adaptation of metaheuristic optimization algorithms]

    Most real-world problems have a multitude of possible solutions, and resource and time constraints often make it impossible to apply a deterministic method to solve them. For this reason, metaheuristics have gained significant relevance over deterministic methods for solving combinatorial optimization problems. Although metaheuristic approaches are problem-agnostic, their optimization results are strongly influenced by the parameters with which they are configured, and the best parameterizations are in turn strongly influenced by the metaheuristic and by the objective function being addressed. Consequently, with each new development it is necessary to optimize the metaheuristic's parameters practically from scratch. Given the increasing complexity of metaheuristics and of the problems to which they are normally applied, there has been growing interest in the problem of optimally configuring these algorithms. This work presents a new approach for the automatic optimization of metaheuristic algorithm parameters. Rather than statically pre-selecting a single set of parameters to be used throughout the search, as is the common approach, it creates a dynamic process in which the parameterization changes during the optimization. The solution divides the optimization process into three stages: a first stage forcing a high level of exploration of the search space, an intermediate exploration stage, and a final stage favouring local search focused on the points of greatest potential. To allow an efficient and effective solution, two modules were developed: a Training Module and an Optimization Module. In the Training Module, the fine-tuning process is automated, which in turn facilitates the integration of a new metaheuristic or a new objective function. In the Optimization Module, a multi-agent system optimizes a given function following the proposed search approach. Based on results obtained by applying particle swarm optimization and genetic algorithms to several benchmark functions and to a real problem in the area of power and energy systems, the Training Module made it possible to automate the fine-tuning process and, consequently, to facilitate the introduction into the system of a new metaheuristic or of a new function related to a new problem to be solved. Using the proposed optimization approach through the Optimization Module, greater generalization is obtained and the results are improved without compromising the maximum time allowed for the optimization.
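    A minimal sketch of the three-stage idea described above, under the assumption that the metaheuristic is particle swarm optimization and that the stage switch is driven simply by iteration count (the work's Training and Optimization Modules are not reproduced here); the stage parameter values and the benchmark function are illustrative.

```python
# Minimal sketch (an assumption, not the work's modules): a PSO run whose
# parameters switch across three stages, from high exploration to focused
# local search.
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Simple benchmark objective: sum of squares."""
    return np.sum(x ** 2, axis=-1)

dim, n_particles, n_iters = 10, 30, 300
# (inertia w, cognitive c1, social c2) per stage -- illustrative values.
stages = [(0.9, 2.0, 1.0),   # stage 1: explore the search space
          (0.7, 1.5, 1.5),   # stage 2: intermediate exploration
          (0.4, 1.0, 2.5)]   # stage 3: exploit around the best points

x = rng.uniform(-5, 5, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(n_iters):
    w, c1, c2 = stages[min(it * 3 // n_iters, 2)]    # pick stage by progress
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = sphere(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", float(pbest_val.min()))
```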

    The buttressed walls problem: An application of a hybrid clustering particle swarm optimization algorithm

    The design of reinforced earth retaining walls is a combinatorial optimization problem of interest due to practical applications regarding the cost savings involved in the design and the reduction of the CO2 emissions generated in its construction. At the same time, this problem presents important challenges in computational complexity, since it involves 32 design variables and therefore on the order of 10^20 possible combinations. In this article, we propose a hybrid algorithm that integrates the particle swarm optimization method, which solves optimization problems in continuous spaces, with the db-scan clustering technique, with the aim of addressing the combinatorial problem of the design of reinforced earth retaining walls. This algorithm optimizes two objective functions: the embedded carbon emissions and the economic cost of the reinforced concrete walls. To assess the contribution of the db-scan operator to the optimization process, a random operator was designed, and the best solutions, the averages, and the interquartile ranges of the obtained distributions are compared. The db-scan algorithm was then compared with a hybrid version that uses k-means as the discretization method and with a discrete implementation of the harmony search algorithm. The results indicate that the db-scan operator significantly improves the quality of the solutions and that the proposed metaheuristic shows competitive results with respect to the harmony search algorithm.
    The first author was supported by Grant CONICYT/FONDECYT/INICIACION/11180056; the other two authors were supported by the Spanish Ministry of Economy and Competitiveness, along with FEDER funding (Project BIA2017-85098-R).
    Garcia, J.; Martí Albiñana, J. V.; Yepes, V. (2020). The buttressed walls problem: An application of a hybrid clustering particle swarm optimization algorithm. Mathematics, 8(6), 862. https://doi.org/10.3390/math8060862
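    As a hedged illustration of how a clustering step can be embedded in a continuous particle swarm optimizer to handle discrete design variables (the general idea only, not the operator defined in the paper above), the sketch below clusters particle positions with scikit-learn's DBSCAN and snaps each particle to the discrete levels nearest to its cluster centroid before evaluation; the discrete levels and the objective function are placeholders.

```python
# Hedged sketch: continuous PSO + a DBSCAN-based discretization step.
# The objective and discrete levels are placeholders, not the paper's model.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
dim, n_particles = 6, 20
levels = np.linspace(0.0, 1.0, 5)        # allowed discrete values per variable

def discretize(positions):
    """Cluster continuous positions, then snap each particle to the discrete
    levels nearest to its cluster centroid (noise points snap individually)."""
    labels = DBSCAN(eps=0.3, min_samples=2).fit_predict(positions)
    discrete = np.empty_like(positions)
    for i, lab in enumerate(labels):
        ref = positions[i] if lab == -1 else positions[labels == lab].mean(axis=0)
        discrete[i] = levels[np.argmin(np.abs(levels[None, :] - ref[:, None]), axis=1)]
    return discrete

def cost(design):
    # Placeholder objective standing in for wall cost / embedded CO2.
    return np.sum((design - 0.25) ** 2, axis=-1)

x = rng.uniform(0, 1, size=(n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = cost(discretize(x))
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    val = cost(discretize(x))                 # evaluate the snapped, discrete design
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best (discrete) objective:", float(pbest_val.min()))
```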

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone wishing to pursue research in artificial intelligence, machine learning, and their widespread applications.

    Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare

    Nature-Inspired Computing (NIC) is a relatively young field that seeks new methods of computing by studying how natural phenomena work in order to solve complicated problems in many contexts. As a consequence, ground-breaking research has been conducted in a variety of domains, including artificial immune systems, neural networks, swarm intelligence, and evolutionary computing. NIC techniques are used in biology, physics, engineering, economics, and management. Metaheuristic algorithms are successful, efficient, and resilient in real-world classification, optimization, forecasting, and clustering tasks, as well as in engineering and science problems. Two active NIC algorithms are the Gravitational Search Algorithm (GSA) and the Krill Herd (KH) algorithm. This publication gives a worldwide and historical review of the use of KH and GSA in medicine and healthcare. Comprehensive surveys have been conducted on several nature-inspired algorithms, including KH and GSA; nonetheless, no survey has focused on KH and GSA in the healthcare field. The present article therefore thoroughly reviews the various versions of the KH and GSA algorithms and their applications in healthcare, to assist researchers in applying them in diverse domains or hybridizing them with other popular algorithms, and provides an in-depth examination of KH and GSA in terms of application, modification, and hybridization. The goal of the study is to offer a viewpoint on GSA and KH, particularly for academics interested in investigating the capabilities and performance of these algorithms in the healthcare and medical domains.
    Comment: 35 pages

    A search algorithm for constrained engineering optimization and tuning the gains of controllers

    In this work, the application of an optimization algorithm to static and dynamic engineering problems is investigated. The methodology is to generate random solutions, find a zone around the initial answer, and keep reducing the zones. The solution generated in each loop is independent of the previous answer, which makes the method powerful. Its simplicity, as its main advantage, and its interlaced use of intensification and diversification mechanisms -- to refine the solution and avoid local minima/maxima -- enable users to apply it to a variety of problems. The proposed approach has been validated on several previously solved examples in structural optimization and achieved good results. The method is also employed for dynamic problems in vibration and control. The method was additionally modified for high-dimensional test functions (functions with very large search domains) so that it converges quickly to the global minimum or maximum, and was simulated successfully on several well-known benchmarks. For validation, nine static and four dynamic constrained optimization benchmark applications and 32 benchmark test functions are solved and provided, 45 in total. All the codes of this work are available as supplementary material in the online version of the paper on the journal website.
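    A minimal sketch of the zone-shrinking random search described above, written as an assumption rather than a reproduction of the paper's algorithm: random candidates are sampled inside a box, the box is re-centred on the best candidate found so far and contracted, and the process repeats; the bounds, shrink rate, and test function are illustrative.

```python
# Hedged sketch of a zone-shrinking random search; only illustrates the idea
# summarized in the abstract above, with assumed bounds and shrink rate.
import numpy as np

rng = np.random.default_rng(4)

def rosenbrock(x):
    """Classic benchmark objective, evaluated row-wise on a batch of points."""
    return np.sum(100.0 * (x[:, 1:] - x[:, :-1] ** 2) ** 2 + (1 - x[:, :-1]) ** 2, axis=1)

dim = 5
lo = np.full(dim, -5.0)
hi = np.full(dim, 5.0)
shrink = 0.9                       # how quickly the search zone contracts
best_x, best_f = None, np.inf

for _ in range(300):
    # Candidates are drawn independently of previous candidates; only the
    # zone itself carries memory, which is the trait highlighted above.
    cand = rng.uniform(lo, hi, size=(50, dim))
    f = rosenbrock(cand)
    i = int(np.argmin(f))
    if f[i] < best_f:
        best_f, best_x = f[i], cand[i].copy()
    # Re-centre and shrink the zone around the best-so-far point.
    width = (hi - lo) * shrink
    lo = np.clip(best_x - width / 2, -5.0, 5.0)
    hi = np.clip(best_x + width / 2, -5.0, 5.0)

print("best value:", float(best_f), "at", np.round(best_x, 3))
```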