17 research outputs found

    Optimal Path Finding With Beetle Antennae Search Algorithm by Using Ant Colony Optimization Initialization and Different Searching Strategies

    Intelligent algorithms are among the most important approaches to the path planning problem. To address the poor real-time performance and low accuracy of heuristic optimization algorithms in 3D path planning, this paper proposes a novel heuristic intelligent algorithm derived from the Beetle Antennae Search (BAS) algorithm. The proposed algorithm offers a wide search range and high search accuracy, and maintains low time complexity even when multiple mechanisms are introduced. It combines the BAS algorithm with three non-trivial mechanisms: local fast search, ACO initial path generation, and searching information orientation. First, the local fast search mechanism defines a specific bounded area and adds fast iterative exploration to speed up the convergence of path finding. Second, the ACO initial path generation mechanism uses a path produced by Ant Colony Optimization (ACO) as a pruning basis; ACO initialization quickly yields an effective path, and by following this path's exploration trend the algorithm can quickly obtain a locally optimal path. Third, the searching information orientation mechanism guides the BAS algorithm to guarantee stable path finding, thereby avoiding blind exploration and reducing wasted computing resources. Simulation results show that the proposed algorithm achieves higher search accuracy and exploration speed than other intelligent algorithms and improves the adaptability of path planning algorithms to different environments. The effectiveness of the proposed algorithm is verified in simulation
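    The core BAS update that this abstract builds on can be sketched as follows. This is a generic, textbook-style BAS step (sample a random antenna direction, compare the objective at the two antenna tips, move toward the better tip, and shrink the antenna length and step size over time), not the paper's full three-mechanism algorithm; the decay rate, step sizes, and test function are illustrative assumptions.

```python
import math
import random

def bas_minimize(f, x0, steps=200, d=1.0, delta=1.0, seed=0):
    """Minimal Beetle Antennae Search sketch: at each step the beetle
    samples f at two antennae tips along a random unit direction and
    moves toward the better-smelling side."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # random unit direction b
        b = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]
        left = [xi + d * bi for xi, bi in zip(x, b)]
        right = [xi - d * bi for xi, bi in zip(x, b)]
        sign = 1.0 if f(left) < f(right) else -1.0
        x = [xi + sign * delta * bi for xi, bi in zip(x, b)]
        # shrink antenna length and step size over time
        d *= 0.95
        delta *= 0.95
    return x

# minimize the sphere function from a distant start
best = bas_minimize(lambda p: sum(v * v for v in p), [5.0, -3.0])
```

    The shrinking antenna is what lets a single-agent method like BAS settle near an optimum; the paper's contribution is in bounding and orienting this search rather than changing the step itself.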

    Hybrid multi-strategy chaos somersault foraging chimp optimization algorithm research

    To address the problems of slow convergence speed and low accuracy of the chimp optimization algorithm (ChOA), and to prevent falling into the local optimum, a chaos somersault foraging ChOA (CSFChOA) is proposed. First, the cat chaotic sequence is introduced to generate the initial solutions, and then opposition-based learning is used to select better solutions to form the initial population, which can ensure the diversity of the algorithm at the beginning and improve the convergence speed and optimum searching accuracy. Considering that the algorithm is likely to fall into local optimum in the final stage, by taking the optimal solution as the pivot, chimps with better adaptation at the mirror image position replace chimps from the original population using the somersault foraging strategy, which can increase the population diversity and expand the search scope. The optimization search tests were performed on 23 standard test functions and CEC2019 test functions, and the Wilcoxon rank sum test was used for statistical analysis. The CSFChOA was compared with the ChOA and other improved intelligent optimization algorithms. The experimental results show that the CSFChOA outperforms most of the other algorithms in terms of mean and standard deviation, which indicates that the CSFChOA performs well in terms of the convergence accuracy, convergence speed and robustness of global optimization in both low-dimensional and high-dimensional experiments. Finally, through the test and analysis comparison of two complex engineering design problems, the CSFChOA was shown to outperform other algorithms in terms of optimal cost. 
For the design of the speed reducer, the performance of the CSFChOA is 100% better than other algorithms in terms of optimal cost; and, for the design of a three-bar truss, the performance of the CSFChOA is 6.77% better than other algorithms in terms of optimal cost, which verifies the feasibility, applicability and superiority of the CSFChOA in practical engineering problems
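    The initialization stage described above (a chaotic sequence followed by opposition-based learning) can be sketched roughly as below. The paper uses a cat chaotic map; a logistic map is substituted here as a stand-in, and the objective, population size, and bounds are illustrative assumptions.

```python
import random

def obl_chaotic_init(f, n_pop, lb, ub, seed=0):
    """Opposition-based chaotic initialization sketch: each chaotic
    candidate x is paired with its opposite lb + ub - x, and the fitter
    half of the combined pool becomes the initial population."""
    rng = random.Random(seed)
    dim = len(lb)
    z = rng.random()
    pool = []
    for _ in range(n_pop):
        x = []
        for d in range(dim):
            z = 4.0 * z * (1.0 - z)          # logistic map, stays in (0, 1)
            x.append(lb[d] + z * (ub[d] - lb[d]))
        opposite = [lb[d] + ub[d] - x[d] for d in range(dim)]
        pool.extend([x, opposite])
    pool.sort(key=f)                          # keep the fitter half
    return pool[:n_pop]

pop = obl_chaotic_init(lambda p: sum(v * v for v in p),
                       n_pop=10, lb=[-5.0] * 3, ub=[5.0] * 3)
```

    Pairing each candidate with its opposite doubles the chance that at least one sample of each pair lands near a promising region, which is what drives the faster early convergence the abstract reports.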

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC

    Connections between individual dispersal behavior and the multi-scale distribution of a saproxylic beetle

    Species incidence results from a complex interaction among species traits (e.g., mobility and behavior), intra- and inter-specific interactions, quality and configuration of the landscape, and historical events. Determining which factors are most important to incidence is difficult because the multiple processes affecting incidence operate at different temporal and spatial scales. I conducted an empirically-based study relating individual behavior (dispersal, habitat selection, and intra-specific interactions) with hierarchically-organized environmental filters to predict the incidence of Odontotaenius disjunctus (Passalidae), a saproxylic (=decayed wood dependent) beetle common to eastern North American forests, at multiple spatial scales. In dispersal experiments, O. disjunctus movement was faster and more linear in suitable habitat than in unsuitable matrix (non-forest), and O. disjunctus exhibited a strong response to a high-contrast boundary between forest and open-field. A hierarchically-organized (log-section < log < subplot < forest plot) survey of incidence across 22 forest plots in Louisiana showed that patchiness in incidence was greatest at fine scales (log-section and log), partly in relation to two environmental variables: decay state and log surface area. In fine-scale habitat selection experiments, resettlement distances were usually less than 5-10 meters, and immigration was positively influenced by log size and the presence of conspecifics, although aggregation associated with conspecific attraction did not occur because emigration balanced immigration. Additionally, population growth rate showed negative density dependence in post-settlement experiments. Finally, I developed an individual-based, spatially-explicit simulation model to relate fine-scale response to cues (habitat, mate, and conspecific density) and dispersal limitation to the density-area relationship. 
Unlike conspecific search, mate search did not result in large aggregations of individuals on large patches, but instead resulted in almost even density among patches. Both habitat and mate search led to high overall incidence even when dispersal limitation was high. I conclude that O. disjunctus is a low-mobility species for which incidence is primarily determined by fine-scale interactions with conspecifics and the environment, and for whom high incidence can be explained in part by efficient use of cues during habitat search. Although sensitivity to large-scale habitat loss is a consistent pattern across taxa, this study emphasizes the overriding importance of fine-scale processes in predicting incidence

    Deep neural network generation for image classification within resource-constrained environments using evolutionary and hand-crafted processes

    Constructing Convolutional Neural Network (CNN) models is a manual process requiring expert knowledge and trial and error. Background research highlights the following knowledge gaps: 1) existing efficiency-focused CNN models make design choices that impact model performance, and better ways are needed to construct accurate models for resource-constrained environments that lack graphics processing units (GPUs) to speed up model inference, such as CCTV cameras and IoT devices; 2) existing methods for automatically designing CNN architectures do not explore the search space effectively for the best solution; and 3) existing methods for automatically designing CNN architectures do not exploit modern architecture design patterns such as residual connections. The lack of residual connections limits model depth owing to the vanishing gradient problem. Furthermore, existing methods for automatically designing CNN architectures adopt search strategies that make them vulnerable to local minima traps. Better techniques for constructing efficient CNN models, and automated approaches that produce accurate deep model constructions, advance many areas such as hazard detection, medical diagnosis and robotics in both academia and industry. The contributions of this research are 1) an efficient and accurate CNN architecture for resource-constrained environments, owing to a novel block structure containing 1x3 and 3x1 convolutions that save computational cost; 2) a particle swarm optimization (PSO) method of automatically constructing efficient deep CNN architectures with greater accuracy, based on a novel encoding and search strategy; and 3) a PSO-based method of automatically constructing deeper CNN models with improved accuracy, based on a novel encoding scheme that employs residual connections and a novel search mechanism that follows the global and neighbouring best leaders. 
    The main findings of this research are 1) the proposed efficiency-focused CNN model outperformed MobileNetV2 by 13.43% with respect to accuracy and by 39.63% with respect to efficiency, measured in floating-point operations. A reduction in floating-point operations means the model has the potential for faster inference, which benefits applications within resource-constrained environments without GPUs, such as CCTV cameras. 2) The proposed automatic CNN generation technique outperformed existing methods by 7.58% with respect to accuracy, with a 63% improvement in search-time efficiency owing to the proposal of more efficient architectures speeding up the search process. 3) The proposed automatic deep residual CNN generation method improved model accuracy by 4.43% compared with related studies, owing to deeper model construction and improvements in the search process. The proposed search process embeds human knowledge of constructing deep residual networks and provides constraint settings which can be used to limit a proposed model's depth and width. The ability to constrain a model's depth and width is important as it ensures the upper bounds of a proposed model will fit within the constraints of resource-constrained environments
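    As a rough illustration of what an encoding for PSO-based architecture search can look like, the sketch below decodes a fixed-length particle position vector into a variable-depth layer list. The gene layout, layer types, filter choices, and early-stop rule are hypothetical choices for illustration, not the thesis's actual encoding scheme.

```python
def decode_particle(position, max_layers=6):
    """Hypothetical decoding of a PSO particle into a CNN architecture.
    Each layer uses two genes: one selects the layer type and one
    selects a parameter (filter count). A type gene below zero
    truncates the network, giving variable depth."""
    filter_choices = [16, 32, 64, 128]
    layers = []
    for i in range(max_layers):
        type_gene = position[2 * i]
        param_gene = position[2 * i + 1]
        if type_gene < 0:                 # early-stop gene: truncate depth
            break
        if type_gene < 0.5:               # convolutional layer
            idx = int(param_gene * len(filter_choices)) % len(filter_choices)
            layers.append(("conv3x3", filter_choices[idx]))
        else:                             # pooling layer
            layers.append(("maxpool", 2))
    layers.append(("dense", 10))          # fixed classifier head
    return layers

arch = decode_particle([0.2, 0.8, 0.7, 0.1, 0.3, 0.5, -1.0, 0.0,
                        0.0, 0.0, 0.0, 0.0])
```

    Because standard PSO updates operate on continuous vectors, a decoding step like this is what bridges the continuous search space and the discrete space of architectures; the quality of that bridge is exactly what novel encodings compete on.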

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning and their widespread applications

    Optimisation, Optimal Control and Nonlinear Dynamics in Electrical Power, Energy Storage and Renewable Energy Systems

    The electrical power system is undergoing a revolution enabled by advances in telecommunications, computer hardware and software, measurement, metering systems, IoT, and power electronics. Furthermore, the increasing integration of intermittent renewable energy sources, energy storage devices, and electric vehicles and the drive for energy efficiency have pushed power systems to modernise and adopt new technologies. The resulting smart grid is characterised, in part, by a bi-directional flow of energy and information. The evolution of the power grid, as well as its interconnection with energy storage systems and renewable energy sources, has created new opportunities for optimising not only their techno-economic aspects at the planning stages but also their control and operation. However, new challenges emerge in the optimization of these systems due to their complexity and nonlinear dynamic behaviour as well as the uncertainties involved. This volume is a selection of 20 papers carefully made by the editors from the MDPI topic “Optimisation, Optimal Control and Nonlinear Dynamics in Electrical Power, Energy Storage and Renewable Energy Systems”, which was closed in April 2022. The selected papers address the above challenges and exemplify the significant benefits that optimisation and nonlinear control techniques can bring to modern power and energy systems

    An investigation into the utilization of swarm intelligence for the control of the doubly fed induction generator under the influence of symmetrical and asymmetrical voltage dips.

    Doctoral Degree. University of KwaZulu-Natal, Durban. The rapid depletion of fossil fuels, increase in population, and birth of various industries have put a severe strain on conventional electrical power generation systems. Because of this, Wind Energy Conversion Systems have recently come under intense investigation. Among all topologies, the Doubly Fed Induction Generator (DFIG) is the preferred choice, owing to its direct grid connection and variable-speed nature. However, this connection has disadvantages. Wind turbines are generally placed in areas where the national grid is weak. In the case of asymmetrical voltage dips, which are a common occurrence near wind farms, the operation of the DFIG is negatively affected. Further, in the case of symmetrical voltage dips, as with a three-phase short circuit, this direct grid connection poses a severe threat to the health and subsequent operation of the machine. Owing to these risks, various approaches are utilized to mitigate the effect of such occurrences. For asymmetrical voltage dips, symmetrical component theory allows for decomposition and subsequent elimination of negative sequence components. The proportional resonant controller, which introduces an infinite gain at the synchronous frequency, is another viable option. For symmetrical voltage dips, the crowbar is an established method to expedite the rate of decay of the rotor current and DC link voltage. However, this requires the DFIG to be disconnected from the grid, which is against the rules of recent grid codes. To overcome this, the Linear Quadratic Regulator may be utilized. As is evident, there have been various approaches to these issues; however, they all require optimized gain values. Whilst these controllers work well, poor optimization of gain quantities may result in sub-optimal performance of the controllers. 
This work provides an investigation into the utilization of metaheuristic optimization techniques for these purposes. This research focuses on swarm intelligence techniques, which have proven to provide good results. Various swarm techniques from across the timeline, beginning with the well-known Particle Swarm Optimization and extending to the recently proposed African Vultures Optimization Algorithm, have been applied and analysed
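    The gain-tuning loop described above can be sketched with a minimal global-best PSO. The first-order plant and PI controller below are toy stand-ins for the DFIG and its controllers (which are far more complex), and all coefficients (inertia 0.7, acceleration factors 1.5, gain bounds) are illustrative assumptions; the point is only the shape of the tuning loop.

```python
import random

def step_cost(gains, steps=200, dt=0.01):
    """Integral of squared error for a PI controller driving a
    first-order plant dy/dt = -y + u toward a unit step reference."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)                # forward-Euler plant update
        cost += e * e * dt
    return cost

def pso_tune(f, bounds, n_particles=15, iters=40, seed=1):
    """Minimal global-best PSO sketch for controller gain tuning."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

gains, cost = pso_tune(step_cost, bounds=[(0.0, 20.0), (0.0, 20.0)])
```

    Swapping `step_cost` for a simulation of the actual controller under a voltage dip is, in outline, all that distinguishes this sketch from the tuning problems studied in the thesis; other swarm algorithms replace only the velocity update.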

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
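    The MR streaming pattern for Life can be sketched in-process as a map, shuffle, and reduce over live-cell records. The key-value scheme below (each live cell emits a marker for itself and one vote per neighbour) is a common formulation used here as an assumption, not necessarily the authors' exact algorithm, and strip partitioning is omitted. In a real Hadoop Streaming job the mapper and reducer would read and write key-value lines on stdin/stdout.

```python
from collections import defaultdict

def life_map(live_cell):
    """Mapper sketch: a live cell announces itself and contributes one
    'neighbour vote' to each of its eight neighbours."""
    x, y = live_cell
    yield (live_cell, "LIVE")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                yield ((x + dx, y + dy), 1)

def life_reduce(cell, values):
    """Reducer sketch: applies Conway's rules to the grouped votes."""
    alive = "LIVE" in values
    votes = sum(v for v in values if v != "LIVE")
    if votes == 3 or (votes == 2 and alive):
        yield cell

def next_generation(live_cells):
    groups = defaultdict(list)               # emulates the shuffle phase
    for cell in live_cells:
        for key, value in life_map(cell):
            groups[key].append(value)
    return {c for key, vals in groups.items() for c in life_reduce(key, vals)}

# a blinker oscillates between horizontal and vertical
blinker = {(0, 0), (1, 0), (2, 0)}
```

    Because only live cells are ever emitted, the representation is sparse, which is what makes the pattern viable for large lattices; partitioning strategies such as the strip partitioning evaluated in the paper then control how much of the board each worker touches.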