98 research outputs found
Ensemble Differential Evolution with Simulation-Based Hybridization and Self-Adaptation for Inventory Management Under Uncertainty
This study proposes an Ensemble Differential Evolution with Simulation-Based
Hybridization and Self-Adaptation (EDESH-SA) approach for inventory management
(IM) under uncertainty. DE with multiple runs is combined with a
simulation-based hybridization method that includes a self-adaptive mechanism,
which dynamically alters mutation and crossover rates based on the success or
failure of each iteration. This adaptability enables the algorithm to
handle the complexity and uncertainty present in IM. Utilizing Monte Carlo
Simulation (MCS), the continuous review (CR) inventory strategy is examined
while accounting for stochasticity and various demand scenarios. This
simulation-based approach enables a realistic assessment of the proposed
algorithm's applicability in resolving the challenges faced by IM in practical
settings. The empirical findings demonstrate the potential of the proposed
method to improve the financial performance of IM and to optimize over large search
spaces. The study makes use of performance testing with the Ackley function and
Sensitivity Analysis with Perturbations to investigate how changes in variables
affect the objective value. This analysis provides valuable insights into the
behavior and robustness of the algorithm.
Comment: 15 pages, 6 figures, AsiaSIM 2023 (Springer)
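The self-adaptive mechanism described above, with mutation and crossover rates resampled on failure and a simulation-based fitness, can be sketched roughly as follows. This is a minimal illustration, not the authors' EDESH-SA: the jDE-style resampling rule, the toy inventory cost model, and all constants are assumptions.

```python
import random

def mc_inventory_cost(reorder_point, order_qty, n_sims=200, seed=0):
    """Monte Carlo estimate of the average cost of a continuous-review (r, Q)
    policy under stochastic daily demand (illustrative model, not the paper's)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        stock, cost = order_qty, 0.0
        for _day in range(30):
            demand = rng.gauss(20, 5)
            stock -= max(demand, 0.0)
            if stock < reorder_point:          # place a replenishment order
                stock += order_qty
                cost += 50.0                   # fixed ordering cost (assumed)
            cost += 0.1 * max(stock, 0.0)      # holding cost (assumed)
            cost += 2.0 * max(-stock, 0.0)     # shortage penalty (assumed)
            stock = max(stock, 0.0)
        total += cost
    return total / n_sims

def self_adaptive_de(fitness, bounds, pop_size=20, gens=50, seed=1):
    """DE/rand/1/bin where each individual carries its own (F, CR); the pair is
    resampled after a failed trial (a jDE-style self-adaptation rule)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    params = [[rng.uniform(0.4, 0.9), rng.uniform(0.1, 0.9)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            F, CR = params[i]
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]), bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = fitness(trial)
            if f_trial <= fit[i]:              # success: keep trial and its parameters
                pop[i], fit[i] = trial, f_trial
            else:                              # failure: resample F and CR
                params[i] = [rng.uniform(0.4, 0.9), rng.uniform(0.1, 0.9)]
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Wiring the two together, e.g. `self_adaptive_de(lambda v: mc_inventory_cost(v[0], v[1]), [(0, 100), (10, 200)])`, searches over the reorder point and order quantity of the simulated policy.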
Differential Evolution and Deterministic Chaotic Series: A Detailed Study
This research presents a detailed insight into the modern and popular hybridization of deterministic chaotic dynamics and evolutionary computation. It examines the influence of chaotic sequences on the performance of four selected Differential Evolution (DE) variants: the original DE/rand/1 and DE/best/1 mutation schemes, the simple parameter-adaptive jDE, and the recent state-of-the-art version SHADE. The experiments focus on an extensive investigation of different randomization schemes for the selection of individuals in the DE algorithm, driven by nine different two-dimensional discrete deterministic chaotic systems used as chaotic pseudorandom number generators. The performances of the DE variants and their chaotic/non-chaotic versions are recorded in the 10D setting on 15 test functions from the CEC 2015 benchmark and further statistically analyzed.
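The core idea of replacing the uniform PRNG with a chaotic map in DE's parent selection can be sketched as follows. The Hénon map and the normalization used here are illustrative assumptions; the study covers nine different 2-D chaotic systems.

```python
def henon_stream(a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Generator yielding pseudo-random values in [0, 1) from the 2-D Henon map.
    The chaotic trajectory is folded onto [0, 1) by taking the fractional part
    of a stretched coordinate (one common normalization; others exist)."""
    x, y = x0, y0
    while True:
        x, y = 1 - a * x * x + y, b * x
        yield (abs(x) * 1000.0) % 1.0

def chaotic_indices(n_pop, current, stream):
    """Select three mutually distinct parent indices (!= current) using the
    chaotic sequence in place of a uniform PRNG, as in chaos-driven DE."""
    picked = []
    while len(picked) < 3:
        idx = int(next(stream) * n_pop) % n_pop
        if idx != current and idx not in picked:
            picked.append(idx)
    return picked
```

A chaos-driven DE variant simply calls `chaotic_indices` wherever the canonical algorithm would draw parent indices uniformly at random.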
Information driven self-organization of complex robotic behaviors
Information theory is a powerful tool to express principles to drive
autonomous systems because it is domain invariant and allows for an intuitive
interpretation. This paper studies the use of the predictive information (PI),
also called excess entropy or effective measure complexity, of the sensorimotor
process as a driving force to generate behavior. We study nonlinear and
nonstationary systems and introduce the time-local predictive information
(TiPI) which allows us to derive exact results together with explicit update
rules for the parameters of the controller in the dynamical systems framework.
In this way the information principle, formulated at the level of behavior, is
translated to the dynamics of the synapses. We underpin our results with a
number of case studies with high-dimensional robotic systems. We show the
spontaneous cooperativity in a complex physical system with decentralized
control. Moreover, a jointly controlled humanoid robot develops a high
behavioral variety depending on its physics and the environment it is
dynamically embedded into. The behavior can be decomposed into a succession of
low-dimensional modes that increasingly explore the behavior space. This is a
promising way to avoid the curse of dimensionality, which prevents learning
systems from scaling well.
Comment: 29 pages, 12 figures
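As a rough illustration of the underlying quantity, the plain (global, not time-local) one-step predictive information of a discretized scalar sensor stream can be estimated with a plug-in mutual-information estimator. The uniform binning scheme here is an assumption; the paper's TiPI is a time-local variant derived analytically rather than estimated from counts.

```python
import math
from collections import Counter

def predictive_information(series, n_bins=8):
    """Plug-in estimate (in bits) of the one-step predictive information
    I(x_t; x_{t+1}) of a scalar sensor stream, after uniform binning."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0          # guard against a constant stream
    sym = [min(int((v - lo) / width), n_bins - 1) for v in series]
    pairs = list(zip(sym, sym[1:]))            # (past, future) symbol pairs
    p_joint = Counter(pairs)
    p_past = Counter(s for s, _ in pairs)
    p_future = Counter(t for _, t in pairs)
    n = len(pairs)
    pi = 0.0
    for (s, t), c in p_joint.items():          # sum p(s,t) log2 p(s,t)/(p(s)p(t))
        pi += (c / n) * math.log2(c * n / (p_past[s] * p_future[t]))
    return pi
```

A perfectly predictable alternating stream yields about 1 bit, while a constant stream carries no predictive information.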
Analyzing Adaptive Parameter Landscapes in Parameter Adaptation Methods for Differential Evolution
Since the scale factor and the crossover rate significantly influence the
performance of differential evolution (DE), parameter adaptation methods (PAMs)
for the two parameters have been well studied in the DE community. Although
PAMs can substantially improve the effectiveness of DE, they remain poorly
understood (e.g., their working principles). One of the difficulties in
understanding PAMs is the lack of insight into the parameter space spanned
by the scale factor and the crossover rate. This paper addresses this
issue by analyzing adaptive parameter landscapes in PAMs for DE. First, we
propose a concept of an adaptive parameter landscape, which captures a moment
in a parameter adaptation process. For each iteration, each individual in the
population has its adaptive parameter landscape. Second, we propose a method of
analyzing adaptive parameter landscapes using a 1-step-lookahead greedy
improvement metric. Third, we examine adaptive parameter landscapes in PAMs by
using the proposed method. Results provide insightful information about PAMs in
DE.
Comment: This is an accepted version of a paper published in the proceedings
of GECCO 2020.
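One way to picture such an adaptive parameter landscape is to sample, for a single individual, how often DE/rand/1/bin trials generated under each (F, CR) pair would improve it. This sketch is only loosely inspired by the paper's 1-step-lookahead greedy improvement metric; the grid resolution, trial counts, and success criterion are assumptions.

```python
import random

def adaptive_parameter_landscape(pop, fit, i, fitness, grid=5, trials=10, seed=0):
    """Sample a landscape over (F, CR) for individual i: each cell holds the
    fraction of DE/rand/1/bin trial vectors that would improve on pop[i]
    (a rough 1-step-lookahead greedy improvement measure)."""
    rng = random.Random(seed)
    dim = len(pop[i])
    landscape = {}
    for fi in range(grid):
        for ci in range(grid):
            F = 0.1 + 0.9 * fi / (grid - 1)    # F sampled in [0.1, 1.0]
            CR = ci / (grid - 1)               # CR sampled in [0.0, 1.0]
            wins = 0
            for _ in range(trials):
                a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
                jrand = rng.randrange(dim)
                trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                         if (rng.random() < CR or j == jrand) else pop[i][j]
                         for j in range(dim)]
                if fitness(trial) <= fit[i]:   # 1-step greedy success test
                    wins += 1
            landscape[(round(F, 2), round(CR, 2))] = wins / trials
    return landscape
```

Each individual in each iteration gets its own such landscape, which is what makes the parameter space of a PAM a moving target.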
Resilience for large ensemble computations
With the increasing power of supercomputers, ever more detailed models of physical systems can be simulated, and ever larger problem sizes can be considered for any kind of numerical system. During the last twenty years the performance of the fastest clusters went from the teraFLOPS domain (ASCI RED: 2.3 teraFLOPS) to the pre-exaFLOPS domain (Fugaku: 442 petaFLOPS), and we will soon have the first supercomputer with a peak performance breaking the exaFLOPS barrier (El Capitan: 1.5 exaFLOPS). Ensemble techniques experience a renaissance with the availability of these extreme scales; recent techniques in particular, such as particle filters, will benefit from them. Current ensemble methods in climate science, such as ensemble Kalman filters, exhibit a linear dependency between the problem size and the ensemble size, while particle filters show an exponential dependency. Nevertheless, with the prospect of massive computing power come challenges such as power consumption and fault tolerance. The mean time between failures shrinks with the number of components in the system, and failures are expected every few hours at exascale.

In this thesis, we explore and develop techniques to protect large ensemble computations from failures. We present novel approaches in differential checkpointing, elastic recovery, fully asynchronous checkpointing, and checkpoint compression. Furthermore, we design and implement a fault-tolerant particle filter with pre-emptive particle prefetching and caching. And finally, we design and implement a framework for the automatic validation and application of lossy compression in ensemble data assimilation. Altogether, we present five contributions in this thesis: the first two improve state-of-the-art checkpointing techniques, and the last three address the resilience of ensemble computations. The contributions represent stand-alone fault-tolerance techniques; however, they can also be combined to improve each other's properties.
For instance, we utilize elastic recovery (2nd contribution) to improve resilience in an online ensemble data assimilation framework (3rd contribution), and we build our validation framework (5th contribution) on top of our particle filter implementation (4th contribution). We further demonstrate that our contributions improve resilience and performance with experiments on various architectures such as Intel, IBM, and ARM processors.
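Of the checkpointing ideas mentioned, differential checkpointing is the easiest to illustrate: persist only the state blocks that changed since the last checkpoint. This toy sketch (block size, hashing scheme, in-memory "store" standing in for disk) is an assumption and not the thesis's implementation.

```python
import hashlib

class DifferentialCheckpointer:
    """Toy differential checkpointing: split the state into fixed-size blocks
    and write only blocks whose content hash changed since the last checkpoint
    (real systems apply this to protected memory regions)."""

    def __init__(self, block_size=64):
        self.block_size = block_size
        self.hashes = {}      # block index -> hash of last written content
        self.store = {}       # block index -> block bytes (stands in for disk)

    def checkpoint(self, state: bytes) -> int:
        written = 0
        for idx in range(0, len(state), self.block_size):
            block = state[idx:idx + self.block_size]
            h = hashlib.sha256(block).digest()
            if self.hashes.get(idx) != h:      # dirty block: persist it
                self.hashes[idx] = h
                self.store[idx] = block
                written += 1
        return written                         # number of blocks written

    def restore(self) -> bytes:
        return b"".join(self.store[k] for k in sorted(self.store))
```

After the first full checkpoint, subsequent checkpoints cost only as much I/O as the application actually dirtied, which is the point of the technique.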
Analysing knowledge transfer in SHADE via complex network
In this research paper a hybridization of two computational intelligence fields, evolutionary computation techniques and complex networks (CNs), is presented. During the optimization run of the success-history based adaptive differential evolution (SHADE), a CN is built and one of its features, node degree centrality, is extracted for each node. Nodes here represent the individual solutions from the SHADE population. Edges in the network mirror the knowledge transfer between individuals in SHADE's population, and therefore the node degree centrality can be used to measure the knowledge transfer capability of each individual. The correlation between an individual's quality and its knowledge transfer capability is recorded and analyzed on the CEC2015 benchmark set in three different dimensionality settings: 10D, 30D and 50D. Results of the analysis are discussed, and possible directions for future research are suggested.
Funding: Ministry of Education, Youth and Sports of the Czech Republic within the National Sustainability Programme [LO1303 (MSMT-7778/2014)]; Internal Grant Agency of Tomas Bata University [IGA/CebiaTech/2018/003]; COST (European Cooperation in Science & Technology), Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO) [CA15140]; COST (European Cooperation in Science & Technology), High-Performance Modelling and Simulation for Big Data Applications (cHiPSet) [IC1406]; European Regional Development Fund under the Project CEBIA-Tech [CZ.1.05/2.1.00/03.0089]
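The centrality measure involved is simple to reproduce: whenever a successful trial replaces a target individual, add edges from the parents used to build it to the target, then compute normalized node degrees. A minimal sketch follows; the edge-recording convention is an assumption about how the network is built.

```python
from collections import defaultdict

def degree_centrality(edges, n_nodes):
    """Normalized degree centrality for an undirected multigraph given as a
    list of (u, v) index pairs; nodes are SHADE population slots 0..n_nodes-1."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {i: deg[i] / (n_nodes - 1) for i in range(n_nodes)}

# During a SHADE run one would record, after each successful trial for
# target i built from parents a, b, c:
#     edges += [(a, i), (b, i), (c, i)]
```

Individuals that repeatedly donate to or receive successful trials accumulate degree, which is the proxy for knowledge transfer capability analyzed in the paper.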
Predicting effective control parameters for differential evolution using cluster analysis of objective function features
A methodology is introduced which uses three simple objective-function features to predict effective control parameters for differential evolution. This is achieved using cluster analysis techniques to classify objective functions using these features. Information on the prior performance of various control parameters for each classification is then used to determine which control parameters to use in future optimisations. Our approach is compared to state-of-the-art adaptive and non-adaptive techniques. Two accepted benchmark suites are used to compare performance, and in all cases we show that the improvement resulting from our approach is statistically significant. The majority of the computational effort of this methodology is performed off-line; however, even when taking into account the additional on-line cost, our approach outperforms other adaptive techniques. We also study the key tuning parameters of our methodology, such as the number of clusters, which further support the finding that the simple features selected are predictors of effective control parameters. The findings presented in this paper are significant because they show that simple-to-calculate features of objective functions can help to select control parameters for optimisation algorithms. This can have an immediate positive impact on the application of these optimisation algorithms to real-world problems, where it is often difficult to select effective control parameters.
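The cluster-then-lookup step can be sketched as follows: cluster problems by their feature vectors, store the control parameters that performed best per cluster, and pick parameters for a new problem from its nearest centroid. The tiny k-means and the lookup-table shape are illustrative assumptions, not the paper's exact pipeline.

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on feature vectors (e.g., three cheap objective-function
    features per problem); returns centroids and one cluster label per point."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):        # assignment step
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        for c in range(k):                    # update step
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

def predict_params(features, centroids, params_per_cluster):
    """Return the control parameters recorded as best for the nearest cluster."""
    c = min(range(len(centroids)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(features, centroids[j])))
    return params_per_cluster[c]
```

The expensive part (benchmarking control parameters per cluster) happens off-line; on-line, a new problem only pays for feature extraction and one nearest-centroid lookup.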
A combined experimental and computational approach to investigate emergent network dynamics based on large-scale neuronal recordings
Development of an integrated computational-experimental approach for the study of neuronal networks via electrophysiological recordings