
    Training a Feed-forward Neural Network with Artificial Bee Colony Based Backpropagation Method

    The back-propagation algorithm is one of the most widely used techniques for training feed-forward neural networks. Nature-inspired meta-heuristic algorithms offer a derivative-free alternative for optimizing such complex problems. The artificial bee colony (ABC) algorithm is one such meta-heuristic: it mimics the foraging (food-source searching) behaviour of bees in a colony and has been applied in several domains with improved optimization outcomes. This paper proposes an improved artificial bee colony algorithm combined with back-propagation for training neural networks, aiming at a faster and improved convergence rate for the hybrid learning method. The results are compared against a genetic-algorithm-based back-propagation method, another hybridized procedure of the same kind. Analysis over standard data sets demonstrates the efficiency of the proposed method in terms of convergence speed and rate. Comment: 14 pages, 11 figures
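    The abstract does not spell out the update rules, so the following is a minimal sketch of the ABC half of such a hybrid: a simplified artificial bee colony search over a flattened weight vector of a one-hidden-layer network. The network shape, the names (forward, mse, abc_train), and the employed/onlooker/scout phases shown here are illustrative assumptions; the paper's method additionally interleaves back-propagation updates, which are omitted here.

        import numpy as np

        def forward(X, w, n_in, n_hidden):
            # Unpack a flat weight vector into a 1-hidden-layer network (assumed shape).
            W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
            b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
            W2 = w[n_in * n_hidden + n_hidden:-1].reshape(n_hidden, 1)
            b2 = w[-1]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def mse(w, X, y, n_in, n_hidden):
            return np.mean((forward(X, w, n_in, n_hidden).ravel() - y) ** 2)

        def abc_train(X, y, n_in, n_hidden=4, n_food=20, iters=200, limit=20, seed=0):
            # Simplified ABC: each "food source" is a candidate weight vector.
            rng = np.random.default_rng(seed)
            dim = n_in * n_hidden + n_hidden + n_hidden + 1
            food = rng.uniform(-1, 1, (n_food, dim))
            fit = np.array([mse(w, X, y, n_in, n_hidden) for w in food])
            trials = np.zeros(n_food)
            for _ in range(iters):
                # Employed bees visit every source; onlookers revisit sources
                # in proportion to fitness (lower error -> higher probability).
                probs = 1.0 / (1.0 + fit)
                probs /= probs.sum()
                visits = list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs))
                for i in visits:
                    k = (i + 1 + rng.integers(n_food - 1)) % n_food  # random peer, k != i
                    j = rng.integers(dim)                            # dimension to perturb
                    cand = food[i].copy()
                    cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                    f = mse(cand, X, y, n_in, n_hidden)
                    if f < fit[i]:
                        food[i], fit[i], trials[i] = cand, f, 0
                    else:
                        trials[i] += 1
                # Scout bees: abandon sources that stopped improving.
                for i in np.where(trials > limit)[0]:
                    food[i] = rng.uniform(-1, 1, dim)
                    fit[i] = mse(food[i], X, y, n_in, n_hidden)
                    trials[i] = 0
            best = int(np.argmin(fit))
            return food[best], fit[best]

    On a toy regression task, abc_train(X, y, n_in=X.shape[1]) returns the best weight vector found and its training error; a faithful hybrid in the paper's spirit would alternate such ABC sweeps with gradient-based back-propagation steps.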

    Adapting Quality Assurance to Adaptive Systems: The Scenario Coevolution Paradigm

    From formal and practical analysis, we identify new challenges that self-adaptive systems pose to the process of quality assurance. When tackling these, the effort spent on various tasks in the process of software engineering is naturally re-distributed. We claim that all steps related to testing need to become self-adaptive to match the capabilities of the self-adaptive system-under-test. Otherwise, the adaptive system's behavior might elude traditional variants of quality assurance. We thus propose the paradigm of scenario coevolution, which describes a pool of test cases and other constraints on system behavior that evolves in parallel to the (in part autonomous) development of behavior in the system-under-test. Scenario coevolution offers a simple structure for the organization of adaptive testing that allows for both human-controlled and autonomous intervention, supporting software engineering for adaptive systems on a procedural as well as technical level. Comment: 17 pages, published at ISOLA 201
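    As a concrete, toy illustration of the coevolution loop (not the paper's formal construction): the scenario pool below holds input/tolerance test cases, the system-under-test self-adapts against the failures exposed each round, and the failing (i.e., still-discriminating) scenarios survive and mutate so the pool keeps pace with the system. Every class and update rule here (Scenario, AdaptiveSystem, the mutation scheme) is a hypothetical stand-in.

        import random

        def target(x):
            # Ground-truth behavior the adaptive system is supposed to approach.
            return x * x

        class Scenario:
            # One test case: an input and a tolerance the system must meet.
            def __init__(self, x, tol):
                self.x, self.tol = x, tol
            def check(self, system):
                return abs(system(self.x) - target(self.x)) <= self.tol
            def mutate(self, rng):
                # Drift the input and tighten the tolerance: scenarios get harder.
                return Scenario(self.x + rng.uniform(-0.5, 0.5),
                                max(0.05, self.tol * rng.uniform(0.7, 1.0)))

        class AdaptiveSystem:
            # Toy system-under-test: a linear model that adapts on exposed failures.
            def __init__(self):
                self.a, self.b = 0.0, 0.0
            def __call__(self, x):
                return self.a * x + self.b
            def adapt(self, failures, lr=0.05):
                for s in failures:
                    err = self(s.x) - target(s.x)
                    self.a -= lr * err * s.x
                    self.b -= lr * err

        rng = random.Random(0)
        system = AdaptiveSystem()
        pool = [Scenario(rng.uniform(-2, 2), 1.0) for _ in range(20)]
        for step in range(100):
            failures = [s for s in pool if not s.check(system)]
            system.adapt(failures)               # system side of the coevolution
            parents = failures or pool           # scenario side: keep discriminators
            pool = [rng.choice(parents).mutate(rng) for _ in range(len(pool))]

    The point of the alternating structure is that either side of the loop can be driven by a human (hand-written scenarios) or by an autonomous process, matching the paradigm's claim of supporting both human-controlled and autonomous intervention.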

    Uncertainty-Integrated Surrogate Modeling for Complex System Optimization

    Approximation models such as surrogate models provide a tractable substitute for expensive physical simulations and an effective solution to the potential lack of quantitative models of system behavior. These capabilities not only enable the efficient design of complex systems, but are also essential for the effective analysis of physical phenomena and characteristics across Engineering, Material Science, Biomedical Science, and various other disciplines. Since these models provide an abstraction of the real system behavior (often a low-fidelity representation), it is important to quantify the accuracy and reliability of such approximation models without investing additional expensive system evaluations (simulations or physical experiments). Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, however, provide limited (often inadequate) information regarding the accuracy of the final surrogate model, while other, more effective dedicated error measures are tailored to only one class of surrogate models. This lack of accuracy information and of the ability to compare and test diverse surrogate models reduces confidence in model application, restricts appropriate model selection, and undermines the effectiveness of surrogate-based optimization.

    A key contribution of this dissertation is the development of a new model-independent approach to quantify the fidelity of a trained surrogate model in a given region of the design domain, called Predictive Estimation of Model Fidelity (PEMF). The PEMF method is derived from the hypothesis that the accuracy of an approximation model is related to the amount of data resources leveraged to train the model. In PEMF, intermediate surrogate models are iteratively constructed over heuristic subsets of sample points. The median and the maximum errors estimated over the remaining points are used to determine the respective error distributions at each iteration. The estimated modes of the error distributions are represented as functions of the density of intermediate training points through nonlinear regression, assuming a smooth decreasing trend of errors with increasing sample density. These regression functions are then used to predict the expected median and maximum errors in the final surrogate model. The model fidelities estimated by PEMF are observed to be up to two orders of magnitude more accurate and statistically more stable than those based on the popular leave-one-out cross-validation method when applied to a variety of benchmark problems.

    Leveraging this new paradigm for quantifying the fidelity of surrogate models, a novel automated surrogate model selection framework is also developed: the PEMF-based Concurrent Surrogate Model Selection (COSMOS). Unlike existing model selection methods, COSMOS coherently operates at all three levels necessary to facilitate optimal selection: (1) selecting the model type, (2) selecting the kernel function type, and (3) determining the optimal values of the typically user-prescribed parameters. The selection criteria that guide optimal model selection are determined by PEMF, and the search process is performed using a MINLP solver.
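    To make the PEMF loop concrete, here is a minimal sketch under stated assumptions: an off-the-shelf RBF surrogate stands in for the intermediate models, the mode of each error distribution is approximated by the median over repeated subsamples, and the smooth decay is fit as a single exponential. None of these specific choices (nor the function name pemf_median_error) come from the dissertation itself.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.interpolate import RBFInterpolator

        def pemf_median_error(X, y, fractions=(0.5, 0.6, 0.7, 0.8), n_reps=20, seed=0):
            # Train intermediate surrogates on growing subsets, record held-out
            # median error vs. training-set size, then extrapolate to the full set.
            # X has shape (n, d); y has shape (n,).
            rng = np.random.default_rng(seed)
            n = len(X)
            sizes, med_errors = [], []
            for frac in fractions:
                m = int(frac * n)
                reps = []
                for _ in range(n_reps):
                    idx = rng.permutation(n)
                    tr, te = idx[:m], idx[m:]
                    model = RBFInterpolator(X[tr], y[tr])   # intermediate surrogate
                    reps.append(np.median(np.abs(model(X[te]) - y[te])))
                sizes.append(m)
                med_errors.append(np.median(reps))          # mode approximated by median
            # Smooth decreasing error trend, e(m) = a * exp(-b * m), via regression.
            (a, b), _ = curve_fit(lambda m, a, b: a * np.exp(-b * m),
                                  sizes, med_errors, p0=(med_errors[0], 1.0 / n))
            return a * np.exp(-b * n)   # predicted median error of the final surrogate

    The same construction with np.max in place of the inner np.median yields a maximum-error predictor, mirroring the two error distributions the method tracks.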
    The effectiveness of COSMOS is demonstrated by successfully applying it to different benchmark and practical engineering problems, where it offers a first-of-its-kind globally competitive model selection. In this dissertation, the knowledge about the accuracy of a surrogate estimated using PEMF is also applied to develop a novel model management approach for engineering optimization. This approach adaptively selects computational models (both physics-based models and surrogate models) of differing levels of fidelity and computational cost to be used during optimization, with the overall objective of yielding optimal designs with high-fidelity function estimates at a reasonable computational expense. In this technique, a new adaptive model switching (AMS) metric is defined to guide the switch from one model to the next-higher-fidelity model during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model's output dominates the latest improvement of the relative fitness function, where both the model output uncertainty and the function improvement (across the population) are expressed as probability distributions. This adaptive model switching technique is applied to two practical problems through Particle Swarm Optimization to illustrate: (i) the computational advantage of this method over purely high-fidelity model-based optimization, and (ii) the accuracy advantage of this method over purely low-fidelity model-based optimization.

    Motivated by the unique capabilities of the model switching concept, a new model refinement approach is also developed in this dissertation. It can be perceived as an adaptive sequential sampling approach applied in surrogate-based optimization: decisions regarding when to perform additional system evaluations to refine the model are guided by the same model-uncertainty principles as in the adaptive model switching technique. The effectiveness of this new model refinement technique is illustrated through application to practical surrogate-based optimization in the area of energy sustainability.
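    The model-uncertainty trigger underlying both the switching and the refinement decisions can be sketched as a comparison of two empirical distributions. The dominance test below, a Monte Carlo estimate of P(model error > fitness improvement) against a threshold, is an assumed stand-in for the dissertation's own criterion, and should_switch is a hypothetical name.

        import numpy as np

        def should_switch(error_samples, improvement_samples, alpha=0.95):
            # Switch to the next-higher-fidelity model when the current model's
            # output-uncertainty distribution dominates the distribution of the
            # latest relative-fitness improvements across the population.
            err = np.asarray(error_samples)
            imp = np.asarray(improvement_samples)
            # Fraction of sample pairs where the model error exceeds the improvement.
            p_dominates = np.mean(err[:, None] > imp[None, :])
            return p_dominates >= alpha

    Inside a particle swarm optimization loop, such a check would run once per generation, with error_samples drawn from a PEMF-style error distribution for the current model and improvement_samples taken from the population's generation-to-generation fitness gains.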