Tax or Monetary Stimulus? Evolutionary Arguments in Favour of Tax Reforms
The article deals with the problem of substantiating the regulatory measures (fiscal and/or monetary) for the development of emergent economies, using evolutionary modelling methods. For this purpose, a mathematical model was constructed that simulates the co-evolution of an advanced and a developing country linked by global value chains. In this model, each country is characterized by its own initial structure of economic entities, defined by the ratio of egoistic enterprises (predisposed to conservative behaviour) to altruistic enterprises (predisposed to innovation), as well as by its specific population and demographic processes. The results of the computational experiments show that the success of either mode of economic regulation fundamentally depends on the initial state of the institutional environment. In an institutional environment with «transparent», long-lived rules of the game and, accordingly, a long economic planning horizon, the best result in terms of production growth in the emergent economy is provided by a cheap-money policy combined with high «European» taxes. A different situation is observed in the more realistic setting of short-lived rules of the game and, accordingly, a short (under 5 years) planning horizon. In this case, any tax policy (low or high taxes) combined with any money (cheap or expensive) largely loses its significance, since the initially backward innovation system cannot deliver good results quickly, and the long-term benefits of potential economic growth are not taken into consideration. However, low taxes and cheap money remain important, as they create better conditions for the survival of the altruistic enterprises and facilitate their investment activities, which can bring a manifold increase in technical productivity and economic efficiency. In any case, in the context of evolutionary economics and following the computational experiments conducted, tax policy in emergent markets retains its regulatory capacity and therefore requires further reform in the context of the «new reality» based on global value chains.
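The abstract describes the simulation setup only qualitatively, so the following is a minimal toy sketch of the kind of experiment it reports: a population of conservative "egoist" firms and innovative "altruist" firms whose survival and productivity depend on a tax rate, a cost of credit, and a planning horizon. Every rule, parameter name, and number here is an illustrative assumption, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(tax=0.2, interest=0.05, horizon=20, n_firms=200, periods=100):
    """Toy co-evolution of 'egoist' (conservative) and 'altruist' (innovative) firms.

    Illustrative only: all rules and numbers are assumptions, not the article's model.
    """
    altruist = rng.random(n_firms) < 0.5          # half the firms start as innovators
    productivity = np.ones(n_firms)
    wealth = np.ones(n_firms)

    for _ in range(periods):
        # Innovators borrow one unit of capital if the expected productivity gain
        # over the planning horizon covers taxes and interest.
        gain = 0.10 * productivity * horizon * (1.0 - tax)
        cost = 1.0 + interest * horizon
        invest = altruist & (gain > cost)

        profit = productivity * (1.0 - tax) - np.where(invest, cost / horizon, 0.0)
        wealth += profit
        # Successful investment raises productivity stochastically.
        productivity += np.where(invest & (rng.random(n_firms) < 0.5),
                                 0.10 * productivity, 0.0)

        # Selection: bankrupt firms are replaced by copies of random survivors.
        dead = wealth <= 0.0
        if dead.any() and (~dead).any():
            donors = rng.choice(np.flatnonzero(~dead), size=dead.sum())
            altruist[dead] = altruist[donors]
            productivity[dead] = productivity[donors]
            wealth[dead] = 1.0

    return productivity.mean(), altruist.mean()

for tax, interest, horizon in [(0.4, 0.01, 25), (0.4, 0.01, 5), (0.1, 0.01, 5), (0.1, 0.10, 5)]:
    prod, share = simulate(tax, interest, horizon)
    print(f"tax={tax:.1f} interest={interest:.2f} horizon={horizon:2d} "
          f"-> mean productivity {prod:5.2f}, altruist share {share:.2f}")
```

Varying the tax rate, interest rate, and planning horizon in the final loop mimics the policy scenarios compared in the article, but only as a schematic illustration.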
Optimal distribution of incentives for public cooperation in heterogeneous interaction environments
In the framework of evolutionary games with institutional reciprocity,
limited incentives are at our disposal for rewarding cooperators and punishing
defectors. In the simplest case, it can be assumed that, depending on their
strategies, all players receive equal incentives from the common pool. The
question arises, however, what is the optimal distribution of institutional
incentives? How should we best reward and punish individuals for cooperation to
thrive? We study this problem for the public goods game on a scale-free
network. We show that if the synergetic effects of group interactions are weak,
the level of cooperation in the population can be maximized simply by adopting
the simplest "equal distribution" scheme. If synergetic effects are strong,
however, it is best to reward high-degree nodes more than low-degree nodes.
These distribution schemes for institutional rewards are independent of payoff
normalization. For institutional punishment, however, the same optimization
problem is more complex, and its solution depends on whether absolute or
degree-normalized payoffs are used. We find that degree-normalized payoffs
require that high-degree nodes be punished more leniently than low-degree nodes.
Conversely, if absolute payoffs count, then high-degree nodes should be
punished more severely than low-degree nodes.

Comment: 19 pages, 8 figures; accepted for publication in Frontiers in Behavioral Neuroscience
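The abstract does not spell out the game parameters, so the sketch below is only a schematic reconstruction of the setting it studies: a public goods game on a Barabási-Albert scale-free network in which an institutional reward budget is shared among cooperators either equally or in proportion to a power of their degree. The synergy factor, budget, Fermi imitation rule, and all numbers are assumptions for illustration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def run(alpha=0.0, r=3.0, budget=0.5, n=300, steps=100, beta=2.0):
    """Public goods game on a scale-free network with an institutional reward
    budget split among cooperators in proportion to degree**alpha.
    alpha = 0 is the 'equal distribution' scheme; alpha > 0 favours hubs.
    Illustrative sketch only: update rules and numbers are assumptions."""
    g = nx.barabasi_albert_graph(n, m=4, seed=1)
    degree = np.array([g.degree(i) for i in range(n)], dtype=float)
    groups = [[i] + list(g.neighbors(i)) for i in range(n)]   # one group per focal node
    coop = rng.random(n) < 0.5

    for _ in range(steps):
        payoff = np.zeros(n)
        for members in groups:
            m_arr = np.array(members)
            pot = r * coop[m_arr].sum() / len(members)         # synergized public good
            payoff[m_arr] += pot
            payoff[m_arr[coop[m_arr]]] -= 1.0                  # cost of contributing
        # Institutional rewards: total budget shared among cooperators by degree weight.
        if coop.any():
            w = degree[coop] ** alpha
            payoff[coop] += budget * n * w / w.sum()
        # Imitation: each node copies a random neighbour with Fermi probability.
        new_coop = coop.copy()
        for i in range(n):
            j = rng.choice(list(g.neighbors(i)))
            diff = float(np.clip(payoff[j] - payoff[i], -50, 50))
            if rng.random() < 1.0 / (1.0 + np.exp(-beta * diff)):
                new_coop[i] = coop[j]
        coop = new_coop
    return coop.mean()

for alpha in (0.0, 1.0):
    print(f"alpha={alpha:.1f}: cooperation level {run(alpha):.2f}")
```

Dividing each node's accumulated payoff by the number of groups it takes part in would give the degree-normalized payoff variant that the abstract contrasts with absolute payoffs.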
A Brief Review on Mathematical Tools Applicable to Quantum Computing for Modelling and Optimization Problems in Engineering
Since its emergence, quantum computing has enabled a wide spectrum of new possibilities and advantages, including its efficiency in accelerating computational processes exponentially. This has directed much research towards completely novel ways of solving a wide variety of engineering problems, especially through describing quantum versions of many mathematical tools such as Fourier and Laplace transforms, differential equations, systems of linear equations, and optimization techniques, among others. Exploration and development in this direction will revolutionize the world of engineering. In this manuscript, we review the state of the art of these emerging techniques from the perspective of quantum computer development and performance optimization, with a focus on the most common mathematical tools that support engineering applications. The review also identifies the challenges and limitations related to the exploitation of quantum computing and outlines the main opportunities for future contributions. It aims to offer a valuable reference for researchers in fields of engineering that are likely to turn to quantum computing for solutions.

Doi: 10.28991/ESJ-2023-07-01-020 Full Text: PDF
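As one concrete instance of the "quantum versions" of classical tools mentioned in the abstract, the quantum Fourier transform on n qubits is simply the unitary discrete Fourier transform matrix of size 2^n applied to the amplitude vector. A minimal sketch in plain NumPy (no quantum SDK assumed):

```python
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Unitary matrix of the quantum Fourier transform on n_qubits qubits:
    F[j, k] = omega**(j*k) / sqrt(N), with N = 2**n_qubits and omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
state = np.zeros(8); state[5] = 1.0              # computational basis state |101>
print(np.allclose(F.conj().T @ F, np.eye(8)))    # True: F is unitary
print(np.round(np.abs(F @ state) ** 2, 3))       # uniform probabilities; phases encode '5'
```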
Learning to Coordinate with Anyone
In open multi-agent environments, the agents may encounter unexpected
teammates. Classical multi-agent learning approaches train agents that can only
coordinate with seen teammates. Recent studies attempted to generate diverse
teammates to enhance the generalizable coordination ability, but were
restricted by pre-defined teammates. In this work, our aim is to train agents
with strong coordination ability by generating teammates that fully cover the
teammate policy space, so that agents can coordinate with any teammate. Since the teammate policy space is too large to enumerate, we seek only dissimilar teammates that are incompatible with the controllable agents, which greatly reduces the number of teammates that need to be trained with. However, it is hard to
determine the number of such incompatible teammates beforehand. We therefore
introduce a continual multi-agent learning process, in which the agent learns
to coordinate with different teammates until no more incompatible teammates can
be found. The above idea is implemented in the proposed Macop (Multi-agent
compatible policy learning) algorithm. We conduct experiments in 8 scenarios
from 4 environments that have distinct coordination patterns. Experiments show
that Macop generates training teammates with much lower compatibility than
previous methods. As a result, in all scenarios Macop achieves the best overall
coordination ability while never performing significantly worse than the baselines, showing strong generalization ability.
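The abstract only outlines the continual learning loop, so the skeleton below is one reading of that loop rather than the published Macop algorithm; train_agent, train_incompatible_teammate, compatibility, and threshold are hypothetical placeholders for components the paper itself defines.

```python
from typing import Callable, List

# Hypothetical placeholder type: a policy maps an observation to an action.
Policy = Callable[[object], object]

def continual_coordination_training(
    train_agent: Callable[[List[Policy]], Policy],
    train_incompatible_teammate: Callable[[Policy], Policy],
    compatibility: Callable[[Policy, Policy], float],
    threshold: float,
    max_rounds: int = 20,
) -> Policy:
    """Sketch of the continual loop: keep generating teammates the current agent
    cannot coordinate with, retrain against the growing pool, and stop once no
    sufficiently incompatible teammate can be found."""
    teammate_pool: List[Policy] = []
    agent = train_agent(teammate_pool)             # initial agent (e.g. via self-play)
    for _ in range(max_rounds):
        candidate = train_incompatible_teammate(agent)
        if compatibility(agent, candidate) >= threshold:
            break                                  # no incompatible teammate found: converged
        teammate_pool.append(candidate)
        agent = train_agent(teammate_pool)         # adapt to the enlarged pool
    return agent

if __name__ == "__main__":
    # Dummy stand-ins just to show the control flow; real training code would go here.
    agent = continual_coordination_training(
        train_agent=lambda pool: ("agent", len(pool)),
        train_incompatible_teammate=lambda a: ("teammate", a[1]),
        compatibility=lambda a, t: 0.2 * t[1],     # grows as the pool grows
        threshold=0.5,
    )
    print(agent)
```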
Evolutionary Computation 2020
Intelligent optimization draws on mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring the quality, efficiency, and robustness of global optimization. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
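As a small illustration of one of the intelligent algorithms named above, here is a textbook DE/rand/1/bin differential evolution run on a toy sphere function; the control parameters are common defaults, not values taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross over with the parent, keep the better vector."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = fitness.argmin()
    return pop[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 10)
print(x_best.round(3), f_best)
```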
A review of population-based metaheuristics for large-scale black-box global optimization: Part B
This paper is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely decomposition methods and hybridization methods such as memetic algorithms and local search. In this part we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multi-objective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The paper also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.
Monte Carlo Tree Descent for Black-Box Optimization
The key to Black-Box Optimization is to efficiently search through input
regions with potentially widely-varying numerical properties, to achieve
low-regret descent and fast progress toward the optima. Monte Carlo Tree Search
(MCTS) methods have recently been introduced to improve Bayesian optimization
by computing better partitioning of the search space that balances exploration
and exploitation. Extending this promising framework, we study how to further
integrate sample-based descent for faster optimization. We design novel ways of
expanding Monte Carlo search trees, with new descent methods at vertices that
incorporate stochastic search and Gaussian Processes. We propose the
corresponding rules for balancing progress and uncertainty, branch selection,
tree expansion, and backpropagation. The designed search process puts more
emphasis on sampling for faster descent and uses localized Gaussian Processes
as auxiliary metrics for both exploitation and exploration. We show empirically
that the proposed algorithms can outperform state-of-the-art methods on many
challenging benchmark problems.

Comment: 17 pages, published in NeurIPS 202
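The abstract does not give the paper's concrete rules for branch selection, expansion, descent, or the Gaussian-process metrics, so the sketch below only illustrates the general shape of the idea: a tree of nested search boxes, UCB-style selection of a leaf box, a few random local-descent steps inside it, and splitting a box once it has been visited often. All names and rules here are assumption-laden stand-ins, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

class Box:
    """A node of the search tree: an axis-aligned region of the input space."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.array(lo, float), np.array(hi, float)
        self.children, self.visits, self.best = [], 0, np.inf

def local_descent(f, box, steps=10, step_frac=0.1):
    """Crude sample-based descent inside a box (stand-in for the paper's descent methods)."""
    x = rng.uniform(box.lo, box.hi)
    fx = f(x)
    scale = step_frac * (box.hi - box.lo)
    for _ in range(steps):
        cand = np.clip(x + rng.normal(0, scale), box.lo, box.hi)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def mcts_descent(f, lo, hi, budget=60, c=0.5, split_after=5):
    root = Box(lo, hi)

    def score(parent, child):                      # lower is better (we minimize f)
        if child.visits == 0:
            return -np.inf                         # always try an unvisited child first
        return child.best - c * np.sqrt(np.log(parent.visits + 1) / child.visits)

    for _ in range(budget):
        node, path = root, [root]
        while node.children:                       # UCB-style branch selection
            node = min(node.children, key=lambda ch: score(node, ch))
            path.append(node)
        value = local_descent(f, node)             # "rollout" = local descent in the leaf box
        for n in path:                             # backpropagate visits and best value
            n.visits += 1
            n.best = min(n.best, value)
        if node.visits >= split_after:             # expansion: split leaf along its widest axis
            d = int(np.argmax(node.hi - node.lo))
            mid = 0.5 * (node.lo[d] + node.hi[d])
            left_hi, right_lo = node.hi.copy(), node.lo.copy()
            left_hi[d], right_lo[d] = mid, mid
            node.children = [Box(node.lo, left_hi), Box(right_lo, node.hi)]
    return root.best

rosenbrock = lambda x: float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
print(mcts_descent(rosenbrock, lo=[-2] * 4, hi=[2] * 4))
```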
- β¦