12 research outputs found

    Denoising Autoencoders for fast Combinatorial Black Box Optimization

    Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs.
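
    The abstract describes the general DAE-EDA loop: select promising solutions, fit a denoising autoencoder to them, and sample new candidates by reconstructing corrupted copies of the selected parents. Below is a minimal, self-contained Python sketch of that loop on a toy OneMax problem; the network size, corruption rate, selection scheme, and all function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def onemax(x):
    """Toy objective: number of ones (maximization)."""
    return x.sum()

def train_dae(data, corruption=0.1, hidden=32, epochs=50, lr=0.1, rng=None):
    """Train a tiny single-hidden-layer denoising autoencoder with SGD.
    Illustrative only; the paper's architecture and training details may differ."""
    rng = rng or np.random.default_rng(0)
    n, d = data.shape
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        noisy = np.where(rng.random(data.shape) < corruption, 1 - data, data)
        h = sigmoid(noisy @ W1 + b1)      # encode the corrupted input
        out = sigmoid(h @ W2 + b2)        # reconstruct the clean input
        err = out - data                  # cross-entropy gradient w.r.t. output logits
        dh = err @ W2.T * h * (1 - h)
        W2 -= lr * h.T @ err / n;    b2 -= lr * err.mean(0)
        W1 -= lr * noisy.T @ dh / n; b1 -= lr * dh.mean(0)
    return W1, b1, W2, b2

def sample_dae(params, seeds, corruption=0.1, rng=None):
    """Generate new candidates by denoising corrupted copies of selected parents."""
    rng = rng or np.random.default_rng(1)
    W1, b1, W2, b2 = params
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    noisy = np.where(rng.random(seeds.shape) < corruption, 1 - seeds, seeds)
    probs = sigmoid(sigmoid(noisy @ W1 + b1) @ W2 + b2)
    return (rng.random(probs.shape) < probs).astype(int)

def dae_eda(fitness, dim=30, pop_size=100, generations=50, truncation=0.5):
    rng = np.random.default_rng(42)
    pop = rng.integers(0, 2, (pop_size, dim))
    for _ in range(generations):
        fit = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(-fit)[: int(truncation * pop_size)]]  # selection
        params = train_dae(elite.astype(float), rng=rng)             # model building
        parents = elite[rng.integers(0, len(elite), pop_size)]
        pop = sample_dae(params, parents, rng=rng)                   # sampling
    return pop[np.argmax([fitness(x) for x in pop])]

best = dae_eda(onemax)
```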

    A Novel Deep Belief Network Architecture with Interval Type-2 Fuzzy Set Based Uncertain Parameters Towards Enhanced Learning

    This paper proposes a novel Deep Belief Network (DBN) architecture, the ‘Interval Type-2 Fuzzy DBN (IT2FDBN)’, which models the weights and biases with IT2 FSs. It thereby introduces a novel algorithm for augmented deep learning, which has the capability to address all the limitations of the classical DBN (CDBN) and the T1 fuzzy DBN (T1FDBN). We comparatively evaluate the performance of the IT2FDBN by conducting experiments on the popular MNIST handwritten digit recognition dataset. Additionally, to demonstrate its robustness and generalization capabilities, we also conduct experiments on two noisy variants of the MNIST dataset, viz. MNIST with AWGN (additive white Gaussian noise) and MNIST with motion blur. We conduct extensive simulations considering different combinations of nodes in the hidden layers of the DBN for better model selection. We thoroughly compare the results using well-known performance measures such as root mean square error (RMSE) and error rate. We show that, in terms of RMSE values and error rates, the proposed IT2FDBN outperforms both T1FDBN and CDBN across all three datasets. Further, we also provide results on convergence, a runtime-based comparison, and a statistical analysis in support of our proposal.
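
    As a rough illustration of the core idea, the layer below keeps each weight and bias as an interval [lower, upper] (a footprint of uncertainty) and type-reduces by averaging the lower and upper activations. This is a minimal sketch of interval-valued parameters in a feed-forward pass, assuming non-negative inputs; the paper's actual membership functions, learning rules, and type-reduction method are not reproduced here, and all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class IntervalType2Layer:
    """One layer whose weights and biases are intervals [lower, upper].
    Sketch of the general idea of interval type-2 fuzzy parameters only."""
    def __init__(self, n_in, n_out, spread=0.05):
        centre = rng.normal(0, 0.1, (n_in, n_out))
        self.W_lo, self.W_hi = centre - spread, centre + spread  # footprint of uncertainty
        b = np.zeros(n_out)
        self.b_lo, self.b_hi = b - spread, b + spread

    def forward(self, x):
        # For non-negative inputs (e.g. sigmoid activations in [0, 1]), the lower
        # activation bound uses the lower weights and the upper bound the upper weights.
        a_lo = sigmoid(x @ self.W_lo + self.b_lo)
        a_hi = sigmoid(x @ self.W_hi + self.b_hi)
        # Simple type reduction: average of the two bounds.
        return 0.5 * (a_lo + a_hi)

# Stacked layers mimic the feed-forward pass of a DBN after layer-wise pretraining.
x = rng.random((4, 784))             # e.g. a small batch of MNIST-sized inputs
h1 = IntervalType2Layer(784, 256).forward(x)
h2 = IntervalType2Layer(256, 64).forward(h1)
```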

    Variational Autoencoder Based Estimation Of Distribution Algorithms And Applications To Individual Based Ecosystem Modeling Using EcoSim

    Individual based modeling provides a bottom-up approach wherein interactions give rise to high-level phenomena in patterns equivalent to those found in nature. This method generates an immense amount of data through simulation, which can be made tractable by machine learning, where multidimensional data is optimized and transformed. Using the individual based modeling platform EcoSim, we modeled the abilities of elitist sexual selection and communication of fear. Data from these experiments was reduced in dimension through a novel algorithm proposed by us: Variational Autoencoder based Estimation of Distribution Algorithms with Population Queue and Adaptive Variance Scaling (VAE-EDA-Q AVS). We constructed a novel Estimation of Distribution Algorithm (EDA) by extending generative models known as variational autoencoders (VAE). VAE-EDA-Q, proposed by us, smooths the data generation process using an iteratively updated queue (Q) of populations. Adaptive Variance Scaling (AVS) dynamically updates the variance at which models are sampled based on fitness. The combination of VAE-EDA-Q with AVS demonstrates high computational efficiency and requires few fitness evaluations. We extended VAE-EDA-Q AVS to act as a feature-reducing wrapper method in conjunction with C4.5 decision trees to reduce the dimensionality of data. The relationship between sexual selection, random selection, and speciation is a contested topic. Supporting evidence suggests that sexual selection drives speciation. Opposing evidence contends that the correlation is either negative or absent. We utilized EcoSim to model elitist and random mate selection. Our results demonstrated a significantly lower speciation rate, a significantly lower extinction rate, and a significantly higher turnover rate for sexual selection groups. Species diversification was found to display no significant difference. The relationship between communication and foraging behavior similarly features opposing hypotheses, claiming both increases and decreases in foraging behavior in response to alarm communication. Through modeling with EcoSim, we found alarm communication to decrease foraging activity in most cases, yet gradually increase foraging activity in some other cases. Furthermore, we found both outcomes resulting from alarm communication to increase fitness as compared to non-communication.
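
    To make the two named add-ons concrete, the sketch below shows where a population queue (Q) and adaptive variance scaling (AVS) sit in an EDA loop. For brevity a plain Gaussian model stands in for the VAE, and the toy objective, scaling constants, and function names are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from collections import deque

def sphere(x):
    """Toy continuous objective (minimization)."""
    return float(np.sum(x ** 2))

def vae_eda_q_avs(fitness, dim=10, pop_size=100, generations=100,
                  queue_len=5, truncation=0.3):
    """Sketch of an EDA loop with a population queue and adaptive variance scaling.
    A Gaussian model replaces the VAE so the example stays short; the queue and
    AVS mechanics are the point here."""
    rng = np.random.default_rng(0)
    pop = rng.normal(0, 1, (pop_size, dim))
    queue = deque(maxlen=queue_len)          # Q: the most recent selected populations
    scale, best_so_far = 1.0, np.inf         # AVS state
    for _ in range(generations):
        fit = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(fit)[: int(truncation * pop_size)]]
        queue.append(elite)
        train = np.vstack(queue)             # the model is built on the whole queue
        mu, sigma = train.mean(0), train.std(0) + 1e-8
        # Adaptive variance scaling: widen the sampling variance when the best
        # fitness improved, shrink it otherwise.
        if fit.min() < best_so_far:
            best_so_far, scale = fit.min(), min(scale * 1.1, 10.0)
        else:
            scale = max(scale * 0.9, 0.1)
        pop = rng.normal(mu, scale * sigma, (pop_size, dim))
    return best_so_far

print(vae_eda_q_avs(sphere))
```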

    Machine Learning based Hypergraph Pruning for Partitioning


    A Comprehensive Review of Bio-Inspired Optimization Algorithms Including Applications in Microelectronics and Nanophotonics

    The application of artificial intelligence in everyday life is becoming all-pervasive and unavoidable. Within that vast field, a special place belongs to biomimetic/bio-inspired algorithms for multiparameter optimization, which find use in a large number of areas. Novel methods and advances are being published at an accelerated pace. Because of that, despite the many surveys and reviews in the field, they quickly become dated, so it is important to keep pace with current developments. In this review, we first consider a possible classification of bio-inspired multiparameter optimization methods, because papers dedicated to that area are relatively scarce and often contradictory. We proceed by describing in some detail the more prominent approaches, as well as those most recently published. Finally, we consider the use of biomimetic algorithms in two related wide fields, namely microelectronics (including circuit design optimization) and nanophotonics (including inverse design of structures such as photonic crystals, nanoplasmonic configurations and metamaterials). We have attempted to keep this broad survey self-contained so that it can be of use not only to scholars in the related fields, but also to all those interested in the latest developments in this attractive area.

    Scalability of using Restricted Boltzmann Machines for combinatorial optimization

    Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model requiring less computational effort for training. Although RBM-EDA requires larger population sizes and a larger number of fitness evaluations than BOA, it outperforms BOA in terms of CPU times, in particular if the problem is large or complex. This is because RBM-EDA requires less time for model building than BOA. DTA with its restricted model is a good choice for small problems but fails for larger and more difficult problems. These results highlight the potential of using generative neural networks for combinatorial optimization.
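
    For comparison with the DAE sketch earlier in this listing, the following minimal Python sketch shows the RBM model-building and sampling steps that would slot into the same select/train/resample EDA loop: one-step contrastive divergence (CD-1) for training, and short Gibbs chains seeded from selected parents for generating offspring. Hyperparameters and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden=32, epochs=50, lr=0.05):
    """Train a binary RBM with one-step contrastive divergence (CD-1).
    Illustrative sketch; the paper's hyperparameters are not reproduced here."""
    n, d = data.shape
    W = rng.normal(0, 0.1, (d, n_hidden))
    b_v, b_h = np.zeros(d), np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        ph = sigmoid(data @ W + b_h)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: one Gibbs step (reconstruct visibles, then hiddens).
        pv = sigmoid(h @ W.T + b_v)
        ph_neg = sigmoid(pv @ W + b_h)
        W += lr * (data.T @ ph - pv.T @ ph_neg) / n
        b_v += lr * (data - pv).mean(0)
        b_h += lr * (ph - ph_neg).mean(0)
    return W, b_v, b_h

def sample_rbm(params, seeds, gibbs_steps=10):
    """Draw new candidate solutions by Gibbs sampling started from selected parents."""
    W, b_v, b_h = params
    v = seeds.astype(float)
    for _ in range(gibbs_steps):
        h = (rng.random((len(v), W.shape[1])) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(seeds.shape) < sigmoid(h @ W.T + b_v)).astype(float)
    return v.astype(int)

# These two functions replace the DAE model in the EDA loop sketched further above.
elite = rng.integers(0, 2, (50, 30)).astype(float)
offspring = sample_rbm(train_rbm(elite), elite)
```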