6,787 research outputs found
CFA optimizer: A new and powerful algorithm inspired by Franklin's and Coulomb's laws theory for solving the economic load dispatch problems
Copyright © 2018 John Wiley & Sons, Ltd. This paper presents a new, efficient algorithm inspired by Franklin's and Coulomb's laws, referred to as the CFA algorithm, for finding global solutions of optimal economic load dispatch problems in power systems. CFA is based on the impact of electrically charged particles on each other through electrical attraction and repulsion forces. The effectiveness of the CFA is first tested on basic benchmark problems. Its ability to achieve accurate results is then examined on economic load dispatch problems of four different sizes: 6-, 10-, 15-, and 110-unit test systems. Finally, the results are compared with those of other nature-inspired algorithms as well as results reported in the literature. The simulation results provide evidence of the well-organized and efficient performance of the CFA algorithm in solving a great diversity of nonlinear optimization problems.
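The abstract does not give the paper's exact update rule, so the sketch below only illustrates the general idea of a Coulomb-style move: charges derived from fitness drive forces between candidate solutions, with better solutions exerting stronger pull. All names, the charge normalisation, and the force law details are assumptions for illustration.

```python
import numpy as np

def charged_particle_step(pop, fit, k=1.0, rng=None):
    """One illustrative move of a Coulomb-style optimiser (minimisation):
    each particle carries a charge proportional to its fitness quality,
    and particles exert Coulomb-like forces on each other."""
    rng = rng or np.random.default_rng()
    # Normalise fitness into charges in (0, 1]: the best solution gets charge ~1
    worst, best = fit.max(), fit.min()
    q = (worst - fit + 1e-12) / (worst - best + 1e-12)
    new = np.empty_like(pop)
    for i in range(len(pop)):
        force = np.zeros(pop.shape[1])
        for j in range(len(pop)):
            if i == j:
                continue
            d = pop[j] - pop[i]
            r = np.linalg.norm(d) + 1e-9
            # Coulomb-like magnitude k*q_i*q_j / r^2, directed along d
            force += k * q[i] * q[j] * d / r**3
        new[i] = pop[i] + rng.random(pop.shape[1]) * force
    return new
```

In a full optimiser this step would be iterated with a greedy accept/reject rule; the real CFA presumably also distinguishes attraction from repulsion, which this sketch does not attempt to reproduce.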
A hybrid Jaya algorithm for reliability–redundancy allocation problems
© 2017 Informa UK Limited, trading as Taylor & Francis Group. This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching–learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability–redundancy allocation problems (RRAPs) and standard real-parameter test functions. The RRAPs include series, series–parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30–100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of better, acceptable optimization performance compared with the original Jaya algorithm and other reported optimal results.
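The LJaya-TVAC hybrid itself (with the TLBO learning phase) is not reproduced here; the sketch below shows the plain Jaya update rule with a linearly decaying coefficient as one possible TVAC-style schedule. The function name, parameters, and the schedule are illustrative assumptions.

```python
import numpy as np

def jaya_tvac(f, bounds, pop_size=20, iters=200, c_start=2.5, c_end=0.5, seed=0):
    """Minimise f over box bounds with the Jaya update, scaled by a
    coefficient that decays linearly over iterations (TVAC-style)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.apply_along_axis(f, 1, pop)
    for t in range(iters):
        c = c_start + (c_end - c_start) * t / iters  # time-varying coefficient
        best, worst = pop[fit.argmin()], pop[fit.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        # Jaya rule: move towards the best and away from the worst solution
        cand = pop + c * r1 * (best - np.abs(pop)) - c * r2 * (worst - np.abs(pop))
        cand = np.clip(cand, lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        improved = cfit < fit            # greedy selection: keep only improvements
        pop[improved], fit[improved] = cand[improved], cfit[improved]
    return pop[fit.argmin()], fit.min()
```

A typical call minimises the sphere function, e.g. `jaya_tvac(lambda x: float((x**2).sum()), [(-5, 5)] * 2)`.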
Novel models and algorithms for systems reliability modeling and optimization
Recent growth in the scale and complexity of products and technologies in the defense and other industries is challenging product development, realization, and sustainment costs. Uncontrolled costs and routine budget overruns are causing all parties involved to seek lean product development processes and treatment of the reliability, availability, and maintainability of the system as a true design parameter. To this effect, accurate estimation and management of the system reliability of a design during the earliest stages of new product development is critical not only for managing product development and manufacturing costs but also for controlling life cycle costs (LCC). In this regard, the overall objective of this research study is to develop an integrated framework for design for reliability (DFR) during upfront product development by treating reliability as a design parameter. The aim here is to develop the theory, methods, and tools necessary for: 1) accurate assessment of system reliability and availability and 2) optimization of the design to meet system reliability targets. In modeling system reliability and availability, we aim to address the limitations of existing methods, in particular the Markov chains method and the Dynamic Bayesian Network approach, by incorporating a Continuous Time Bayesian Network framework for more effective modeling of sub-system/component interactions, dependencies, and various repair policies. We also propose a multi-objective optimization scheme to aid the designer in obtaining optimal design(s) with respect to system reliability/availability targets and other system design requirements. In particular, the optimization scheme would entail optimal selection of sub-system and component alternatives. The theory, methods, and tools to be developed will be extensively tested and validated using simulation test-bed data and actual case studies from our industry partners.
An approach for solving constrained reliability-redundancy allocation problems using cuckoo search algorithm
Abstract: The main goal of the present paper is to present a penalty-based cuckoo search (CS) algorithm for obtaining the optimal solution of reliability–redundancy allocation problems (RRAP) with nonlinear resource constraints. The reliability–redundancy allocation problem involves the selection of components' reliability in each subsystem and the corresponding redundancy levels that produce maximum benefits subject to the system's cost, weight, volume and reliability constraints. Numerical results for five benchmark problems are reported and compared. The solutions obtained by the proposed approach are all superior to the best solutions obtained by the typical approaches in the literature, and the improvements are shown to be statistically significant by means of an unpaired pooled t-test.
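The paper's exact penalty scheme is not reproduced here; the sketch below shows the two generic ingredients of a penalty-based cuckoo search: Lévy-flight steps generated with Mantegna's algorithm, and a static penalty that folds constraint violations into the objective. The function names and the penalty weight `mu` are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for a Levy-distributed step with index beta."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def penalised(objective, constraints, x, mu=1e6):
    """Static penalty in maximisation form (as in RRAP, where system
    reliability is maximised): subtract mu times the total violation.
    Each constraint g is feasible when g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) - mu * violation
```

In a full CS loop, each cuckoo would propose `x + alpha * levy_step(dim)`, the `penalised` value would rank nests, and a fraction of the worst nests would be abandoned each generation.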
Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm
The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if large numbers of alternatives, with different levels of reliability and associated cost, are considered for each component. Furthermore, the complexity of this problem is magnified when the relationship between cost and reliability is assumed to be nonlinear and non-monotone. An optimisation algorithm (OA) is therefore developed in this research to demonstrate the solution of such difficult problems.
The core design of the OA originates from the fundamental concepts of basic Evolutionary Algorithms, which are well known for emulating the natural process of evolution in solving complex optimisation problems through computer simulation of the key genetic operations such as 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared with the classical genetic model, to ensure rapid and efficient convergence of the search process towards an optimum solution. The vital features of this OA model are generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and application of prudently designed genetic operators on the elite chromosomes, with extra emphasis on the mutation operation. For each possible combination of alternatives, both system reliability and cost of failure are computed by means of the Monte Carlo simulation technique.
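The Monte Carlo evaluation described above can be sketched as a simple sampling loop over component up/down states; the structure function and the component reliabilities below are illustrative, not taken from the thesis.

```python
import random

def system_reliability_mc(component_rel, structure, n_trials=100_000, seed=42):
    """Estimate system reliability by Monte Carlo: draw each component's
    up/down state from its reliability, then ask the structure function
    whether the system as a whole survives."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        states = [rng.random() < r for r in component_rel]  # True = component up
        if structure(states):
            survived += 1
    return survived / n_trials
```

For example, a 2-out-of-3 system with component reliabilities 0.9 has exact reliability 3(0.9)^2(0.1) + (0.9)^3 = 0.972, which `system_reliability_mc([0.9] * 3, lambda s: sum(s) >= 2)` approaches as the number of trials grows.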
For validation purposes, the optimisation algorithm is first applied to solve an already published reliability optimisation problem with a constraint on a target level of system reliability, which is required to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems in view of the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data set, showing a monotonically increasing cost and reliability relationship among the alternatives, and a fixed amount associated with the cost of failure. While this optimisation process is the main objective of the research study, two variations are also introduced for the purpose of undertaking parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation uses a different choice of discrete data set, exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation introduces a different cost-of-failure amount associated with a given non-repairable system failure.
The optimisation processes show very interesting relationships between system reliability and total loss. For instance, it is observed that while maximum reliability can generally be associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives. The first parametric study shows that the second (non-monotone) data set creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probabilities. In the second parametric study, it can be seen that reducing the cost-of-failure amount reduces the size of the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability, hence producing lower values of the loss function.
The research study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting various levels of system reliability with associated total losses for any given system under consideration. This notion can be further extended to selecting the optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the problem and, due to its unique model of evolution, the algorithm appears to conduct a very detailed multi-directional search across the solution space in fewer generations - a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions due to its interdisciplinary applications.
Cloud engineering is search based software engineering too
Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud; 'SBSE in the cloud'. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of 'SBSE for the cloud', formulating cloud computing challenges in ways that can be addressed using SBSE.
Energy management in communication networks: a journey through modelling and optimization glasses
The widespread proliferation of Internet and wireless applications has produced a significant increase in the ICT energy footprint. As a response, in the last five years, significant efforts have been undertaken to include energy-awareness in network management. Several green networking frameworks have been proposed, carefully managing the network routing and the power states of network devices.
Even though the proposed approaches differ based on network technologies and the sleep modes of nodes and interfaces, they all aim at tailoring the active network resources to the varying traffic needs in order to minimize energy consumption. From a modeling point of view, this has several commonalities with classical network design and routing problems, even if with different objectives and in a dynamic context.
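To illustrate that commonality, a minimal energy-aware network design model (with generic symbols, not the survey's unified notation) couples routing with binary on/off power variables: devices and links consume power only when active, and traffic must still be routed within capacity.

```latex
\begin{aligned}
\min \quad & \sum_{v \in V} P_v\, y_v \;+\; \sum_{(i,j) \in A} p_{ij}\, x_{ij} \\
\text{s.t.} \quad
& \sum_{j:(i,j)\in A} f^{st}_{ij} \;-\; \sum_{j:(j,i)\in A} f^{st}_{ji} \;=\; d^{st}_i
  && \forall i \in V,\ \forall (s,t) \\
& \sum_{(s,t)} f^{st}_{ij} \;\le\; c_{ij}\, x_{ij}
  && \forall (i,j) \in A \\
& x_{ij} \le y_i, \qquad x_{ij} \le y_j
  && \forall (i,j) \in A \\
& y_v,\, x_{ij} \in \{0,1\}, \qquad f^{st}_{ij} \ge 0
\end{aligned}
```

Here $y_v$ and $x_{ij}$ switch nodes and links on or off, $P_v$ and $p_{ij}$ are their power consumptions, $c_{ij}$ are link capacities, and $f^{st}_{ij}$ carries the demand $d^{st}$ from $s$ to $t$; the sleep-mode schemes surveyed essentially re-solve variants of this model as traffic varies.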
With most researchers focused on addressing the complex and crucial technological aspects of green networking schemes, there has so far been little attention to understanding the modeling similarities and differences of the proposed solutions. This paper fills the gap by surveying the literature through optimization modeling glasses, following a tutorial approach that guides the reader through the different components of the models with a unified symbolism. A detailed classification of the previous work based on the modeling issues included is also proposed.