
    Robust and Efficient Swarm Communication Topologies for Hostile Environments

    Swarm Intelligence-based optimization techniques combine systematic exploration of the search space with information available from neighbors and rely strongly on communication among agents. These algorithms are typically employed to solve problems where the function landscape is not adequately known and there are multiple local optima that could result in premature convergence for other algorithms. Applications of such algorithms can be found in communication systems involving the design of networks for efficient information dissemination to a target group, targeted drug delivery where drug molecules search for the affected site before diffusing, and high-value target localization with a network of drones. In several such applications, the agents face a hostile environment that can result in loss of agents during the search. Such a loss changes the communication topology of the agents and hence the information available to them, ultimately influencing the performance of the algorithm. In this paper, we present a study of the impact of loss of agents on the performance of such algorithms as a function of the initial network configuration. We use particle swarm optimization to optimize an objective function with multiple sub-optimal regions in a hostile environment and study its performance for a range of network topologies with loss of agents. The results reveal interesting trade-offs between efficiency, robustness, and performance for different topologies that are subsequently leveraged to discover general properties of networks that maximize performance. Moreover, networks with small-world properties are seen to maximize performance under hostile conditions.
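    The setup described in this abstract can be illustrated with a short sketch: a particle swarm optimizer whose social term uses the neighbourhood best drawn from a configurable communication graph (a ring rewired toward a small-world topology), with agents removed at random to model the hostile environment. Everything below, including the Rastrigin test function, the topology builders, the loss rate and the PSO coefficients, is an illustrative assumption and not the paper's actual experimental code.

```python
# Minimal sketch (assumptions throughout): PSO with a configurable neighbourhood
# graph and random agent loss modelling a hostile environment.
import numpy as np

def ring_topology(n, k=2):
    """Each agent is linked to its k nearest neighbours on either side of a ring."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

def rewire_small_world(neighbours, p=0.1, rng=None):
    """Watts-Strogatz-style rewiring: each link is redirected with probability p."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(neighbours)
    out = {i: list(v) for i, v in neighbours.items()}
    for i in out:
        for idx in range(len(out[i])):
            if rng.random() < p:
                out[i][idx] = int(rng.integers(n))
    return out

def pso_with_agent_loss(f, dim, n=50, iters=200, loss_rate=0.005,
                        w=0.7, c1=1.5, c2=1.5, rewire_p=0.1, seed=0):
    rng = np.random.default_rng(seed)
    topo = rewire_small_world(ring_topology(n), rewire_p, rng)
    x = rng.uniform(-5.12, 5.12, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_val = np.array([f(xi) for xi in x])
    alive = np.ones(n, dtype=bool)
    for _ in range(iters):
        # Hostile environment: each surviving agent may be lost this iteration.
        alive &= rng.random(n) > loss_rate
        for i in np.flatnonzero(alive):
            live_nb = [j for j in topo[i] if alive[j]] or [i]
            # Social attractor: best personal best among surviving neighbours.
            lbest = pbest[min(live_nb, key=lambda j: pbest_val[j])]
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (lbest - x[i])
            x[i] = x[i] + v[i]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())

# Rastrigin: a standard multimodal test function with many sub-optimal regions.
def rastrigin(z):
    return 10 * len(z) + float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))

best_x, best_val = pso_with_agent_loss(rastrigin, dim=5)
```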

    Hybrid Genetic Bees Algorithm applied to Single Machine Scheduling with Earliness and Tardiness Penalties

    This paper presents a hybrid Genetic-Bees Algorithm based optimised solution for the single machine scheduling problem. The enhancement of the Bees Algorithm (BA) is conducted using the Genetic Algorithm's (GA's) operators during the global search stage. The proposed enhancement aims to increase the global search capability of the BA gradually with new additions. Although the BA has very successful implementations on various types of optimisation problems, it has been found that the algorithm suffers from weak global search ability, which increases the computational complexity of NP-hard optimisation problems, e.g. combinatorial/permutational optimisation problems. This weakness occurs due to the use of a simple global random search operation during the search process. To reinforce the global search process in the BA, the proposed enhancement increases exploration capability by expanding the number of fittest solutions through genetic variations of promising solutions. The hybridisation process is realised by including two strategies in the basic BA, named the "reinforced global search" and "jumping function" strategies. The reinforced global search strategy is the first stage of the hybridisation process and contains the mutation operator of the GA. The second strategy, the jumping function strategy, consists of four GA operators: single-point crossover, multi-point crossover, mutation and randomisation. To demonstrate the strength of the proposed solution, several experiments were carried out on 280 well-known single machine benchmark instances, and the results are compared with those of other well-known heuristic algorithms. According to the experiments, the proposed enhancements provide the basic BA with a better capability to escape local minima, and the GBA performed better than the BA in terms of convergence and quality of results. The convergence time was reduced by about 60%, with about 30% better results for highly constrained jobs.
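    A rough reading of the hybridisation idea is sketched below: a Bees-Algorithm-style permutation search over job sequences whose scout phase is reinforced with GA-style operators (order crossover, swap mutation and full randomisation), loosely echoing the "reinforced global search" and "jumping function" strategies. The toy earliness/tardiness cost, the six-job instance and all parameter values are assumptions for illustration, not the paper's implementation or benchmark data.

```python
# Minimal sketch (assumptions throughout): BA-style search for single machine
# scheduling, with GA operators replacing purely random global scouts.
import random

def total_earliness_tardiness(perm, proc, due, w_e=1.0, w_t=1.0):
    """Sum of weighted earliness and tardiness for a job sequence."""
    t, cost = 0, 0.0
    for j in perm:
        t += proc[j]
        cost += w_e * max(0, due[j] - t) + w_t * max(0, t - due[j])
    return cost

def swap_mutation(perm):
    """GA-style mutation: swap two randomly chosen positions."""
    a, b = random.sample(range(len(perm)), 2)
    p = list(perm)
    p[a], p[b] = p[b], p[a]
    return p

def order_crossover(p1, p2):
    """Single-cut order crossover that preserves permutation validity."""
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [j for j in p2 if j not in head]

def hybrid_bees(proc, due, n_bees=30, n_elite=5, iters=300):
    jobs = list(range(len(proc)))
    pop = [random.sample(jobs, len(jobs)) for _ in range(n_bees)]
    for _ in range(iters):
        pop.sort(key=lambda p: total_earliness_tardiness(p, proc, due))
        elites = pop[:n_elite]
        new_pop = list(elites)                       # keep the best sites
        # Local (neighbourhood) search around elite sites.
        for e in elites:
            new_pop.append(swap_mutation(e))
        # Reinforced global search / jumping-function-style fill: mix crossover,
        # mutation of elites and full randomisation instead of random scouts only.
        while len(new_pop) < n_bees:
            op = random.random()
            if op < 0.4:
                new_pop.append(order_crossover(*random.sample(elites, 2)))
            elif op < 0.8:
                new_pop.append(swap_mutation(random.choice(elites)))
            else:
                new_pop.append(random.sample(jobs, len(jobs)))
        pop = new_pop
    pop.sort(key=lambda p: total_earliness_tardiness(p, proc, due))
    return pop[0]

# Toy instance: processing times and due dates for six jobs (made up for the demo).
proc = [4, 2, 6, 3, 5, 1]
due  = [6, 4, 18, 9, 14, 3]
best_sequence = hybrid_bees(proc, due)
```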

    Learning automata and sigma imperialist competitive algorithm for optimization of single and multi-objective functions

    Evolutionary Algorithms (EAs) consist of several heuristics which are able to solve optimisation tasks by imitating some aspects of natural evolution. Two widely-used EAs, namely Harmony Search (HS) and the Imperialist Competitive Algorithm (ICA), are considered for improving single-objective EA and Multi-Objective EA (MOEA), respectively. HS is popular because of its speed, and ICA has the ability to escape local optima, which is an important criterion for a MOEA. In contrast, both algorithms suffer from some shortcomings. The HS algorithm can be trapped in local optima if its parameters are not tuned properly; this shortcoming causes a low convergence rate and high computational time. In ICA, there is a big obstacle that impedes ICA from becoming a MOEA: ICA cannot be matched with the crowding distance method, which produces a qualitative value for MOEAs, while ICA needs a quantitative value to determine the power of each solution. This research proposes a learnable EA, named Learning Automata Harmony Search (LAHS). The EA employs a learning automata (LA) based approach to ensure that the HS parameters are learnable. This research also proposes a new MOEA based on ICA and the Sigma method, named the Sigma Imperialist Competitive Algorithm (SICA). The Sigma method provides a mechanism to measure the solutions' power based on their quantitative value. The proposed LAHS and SICA algorithms are tested on well-known single-objective and multi-objective benchmarks, respectively. Both LAHS and SICA show improvements in convergence rate and computational time in comparison to well-known single-objective EAs and MOEAs.
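    How a quantitative power measure can stand in for the crowding distance is easiest to see with the Sigma method's two-objective formula, sigma = (f1^2 - f2^2) / (f1^2 + f2^2), sketched below. The ranking-by-sigma-closeness step and the toy objective values are assumptions added for illustration; the actual SICA assignment and selection rules in this research may differ.

```python
# Minimal sketch (assumptions throughout): Sigma values as a quantitative score
# for ranking solutions in a multi-objective ICA-style setting.
import numpy as np

def sigma_value(f):
    """Sigma value of a two-objective vector f = (f1, f2)."""
    f1, f2 = f
    denom = f1**2 + f2**2
    return 0.0 if denom == 0 else (f1**2 - f2**2) / denom

def sigma_distance(f, ref_sigma):
    """Closeness of a solution's sigma value to a reference sigma
    (e.g. an imperialist's), usable as a quantitative power/assignment score."""
    return abs(sigma_value(f) - ref_sigma)

# Toy population of two-objective values (minimisation), made up for the demo.
population = np.array([[1.0, 4.0], [2.0, 2.0], [3.5, 1.0], [0.5, 3.0]])
imperialist = population[np.argmin(population.sum(axis=1))]  # crude pick for the demo
ref = sigma_value(imperialist)

# Colonies ranked by how close their sigma value is to the imperialist's.
ranking = sorted(range(len(population)),
                 key=lambda i: sigma_distance(population[i], ref))
```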