
    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method proposed by Kennedy and Eberhart in 1995, and it is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO itself, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with the genetic algorithm, simulated annealing, tabu search, artificial immune systems, the ant colony algorithm, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (on multicore, multiprocessor, GPU, and cloud-computing platforms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, and chemistry and biology. It is hoped that this survey will be useful to researchers studying PSO algorithms.
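As a concrete reference point for the canonical algorithm the survey covers, a minimal global-best (fully connected topology) PSO can be sketched as follows; the parameter values `w`, `c1`, and `c2` are typical illustrative choices, not ones prescribed by the survey:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over a box using canonical global-best PSO."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # personal best positions
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # global best position/value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Move and clamp to the search box
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

# Sphere function in 2-D; the optimum is at the origin.
best, best_f = pso(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

Swapping the `gbest` term for a neighbourhood best gives the ring, von Neumann, and other topologies the survey lists.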

    Towards COP27: The Water-Food-Energy Nexus in a Changing Climate in the Middle East and North Africa

    Due to its low adaptability to climate change, the MENA region has become a climate "hot spot". Water scarcity, extreme heat, drought, and crop failure will worsen as the region becomes more urbanized and industrialized. Both water and food scarcity are made worse by civil wars, terrorism, and political and social unrest. It is still unclear how climate change will affect the MENA water–food–energy nexus. All of these concerns need to be empirically evaluated and quantified for a full climate change assessment in the region. Policymakers in the MENA region need to be aware of the interconnections between population growth, rapid urbanization, food security, climate change, and the global goal of lowering greenhouse gas emissions (as planned in COP27). Researchers from a wide range of disciplines have come together in this Special Issue (SI) to investigate the connections between water, food, energy, and climate in the region. By assessing the impacts of climate change on hydrological processes, natural disasters, water supply, energy production and demand, and environmental impacts in the region, this SI will aid the implementation of sustainable solutions to these challenges across multiple spatial scales.

    A Hierarchal Planning Framework for AUV Mission Management in a Spatio-Temporal Varying Ocean

    The purpose of this paper is to provide a hierarchical dynamic mission planning framework for a single autonomous underwater vehicle (AUV) to accomplish the task-assignment process within a limited time interval while operating in an uncertain undersea environment, where the spatio-temporal variability of the operating field is taken into account. To this end, a high-level reactive mission planner and a low-level motion planning system are constructed. The high-level system is responsible for task priority assignment and for guiding the vehicle toward targets of interest while ensuring on-time termination of the mission. The lower layer is in charge of generating optimal trajectories based on the sequence of tasks and the dynamics of the operating terrain. The mission planner is able to reactively re-arrange the tasks based on mission/terrain updates, while the low-level planner is capable of coping with unexpected changes of the terrain by correcting the old path and re-generating a new trajectory. As a result, the vehicle is able to undertake the maximum number of tasks with a certain degree of maneuverability while maintaining situational awareness of the operating field. The computational engine of this framework is the biogeography-based optimization (BBO) algorithm, which is capable of providing efficient solutions. To evaluate the performance of the proposed framework, a realistic model of the undersea environment is first built from real map data, and then several scenarios, treated as real experiments, are designed in a simulation study. Additionally, to show the robustness and reliability of the framework, Monte Carlo simulation is carried out and statistical analysis is performed. The simulation results indicate the significant potential of the two-level hierarchical mission planning system for mission success and its applicability to real-time implementation.
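BBO, the computational engine named above, evolves a population of "habitats" by migrating solution features between them according to fitness-dependent immigration and emigration rates. A minimal sketch of such a loop follows; the population size, iteration count, mutation rate, and the linear migration model are illustrative assumptions, not the paper's actual settings:

```python
import random

def bbo(f, bounds, n_habitats=20, iters=150, p_mut=0.05):
    """Minimise f with a bare-bones biogeography-based optimisation loop."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_habitats)]
    for _ in range(iters):
        pop.sort(key=f)  # best habitat first
        # Linear migration model: good habitats emigrate features,
        # poor habitats immigrate them.
        mu = [1 - i / (n_habitats - 1) for i in range(n_habitats)]   # emigration
        lam = [i / (n_habitats - 1) for i in range(n_habitats)]      # immigration
        new_pop = [h[:] for h in pop]
        for i in range(n_habitats):
            for d in range(dim):
                if random.random() < lam[i]:
                    # Roulette-wheel choice of an emigrating habitat.
                    j = random.choices(range(n_habitats), weights=mu)[0]
                    new_pop[i][d] = pop[j][d]
                if random.random() < p_mut:
                    new_pop[i][d] = random.uniform(*bounds[d])
        new_pop[0] = pop[0][:]  # elitism: keep the current best habitat
        pop = new_pop
    return min(pop, key=f)

# Toy usage on a 2-D sphere function (optimum at the origin).
best = bbo(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

In the mission-planning setting described above, `f` would score a candidate task ordering or trajectory rather than a toy benchmark.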

    Study on Parametric Optimization of Fused Deposition Modelling (FDM) Process

    Rapid prototyping (RP) is a generic term for a number of technologies that enable the fabrication of physical objects directly from CAD data sources. In contrast to classical manufacturing methods such as milling and forging, which are based on subtractive and formative principles respectively, these processes are based on an additive principle of part fabrication. The biggest advantage of RP processes is that an entire three-dimensional (3-D) consolidated assembly can be fabricated in a single setup without any tooling or human intervention; furthermore, the part fabrication methodology is independent of the complexity of the part geometry. Owing to these advantages, RP has attracted considerable attention from manufacturing industries seeking to meet customer demands for continuous and rapid changes in manufacturing in the shortest possible time and to gain an edge over competitors. Of all commercially available RP processes, fused deposition modelling (FDM) uses a heated thermoplastic filament that is extruded from the tip of a nozzle in a prescribed manner in a temperature-controlled environment, building the part through layer-by-layer deposition. Simplicity of operation, together with the ability to fabricate parts with locally controlled properties, has resulted in its widespread application not only for prototyping but also for making functional parts. However, the FDM process has its own demerits related to accuracy, surface finish, strength, etc. Hence, it is absolutely necessary to understand the shortcomings of the process and identify the controllable factors for improving part quality. In this direction, the present study focuses on the improvement of the part-build methodology by properly controlling the process parameters.
The thesis deals with various part-quality measures: improvement in dimensional accuracy, minimization of surface roughness, and improvement in mechanical properties measured in terms of tensile, compressive, flexural, and impact strength and sliding wear. The understanding generated in this work not only explains the complex build mechanism but also presents in detail the influence of processing parameters such as layer thickness, orientation, raster angle, raster width, and air gap on the studied responses, with the help of statistically validated models, microphotographs, and non-traditional optimization methods. For improving the dimensional accuracy of the part, Taguchi's experimental design is adopted, and it is found that the measured dimension is oversized along the thickness direction and undersized along the length, width, and diameter of the hole. It is observed that different factors and interactions control the part dimensions along different directions. Shrinkage of the semi-molten material extruded from the deposition nozzle is the major cause of dimension reduction, while the oversized dimension is attributed to uneven layer-surface generation and slicing constraints. For recommending an optimal factor setting that improves the overall dimensions of the part, the grey Taguchi method is used. Prediction models based on an artificial neural network and on fuzzy inference are also proposed and compared with the Taguchi predictive model; the fuzzy inference system shows better prediction capability than the artificial neural network model. In order to minimize surface roughness, a process improvement strategy based on a central composite design (CCD) is employed for effective control of the process parameters. Empirical models relating the responses to the process parameters are developed, and their validity is established using analysis of variance (ANOVA) and residual analysis.
Experimental results indicate that the significant process parameters and their interactions differ for roughness minimization on different surfaces. The surface roughness responses along three surfaces are combined into a single response known as the multi-response performance index (MPI) using principal component analysis. The bacterial foraging optimisation algorithm (BFOA), a recent evolutionary approach, is adopted to find the process parameter setting that maximizes the MPI. The effect of the process parameters on the mechanical properties, viz. tensile, flexural, impact, and compressive strength, of parts fabricated using FDM is assessed using a CCD, and the effect of each process parameter on each mechanical property is analyzed. The major reason for weak strength is attributed to distortion within or between the layers. In practice, parts are subjected to various types of loading, and a fabricated part must withstand more than one type of loading simultaneously. To address this issue, all the studied strengths are combined into a single response known as the composite desirability, and the optimum parameter setting that maximizes the composite desirability is determined using quantum-behaved particle swarm optimization (QPSO). Resistance to wear is an important consideration for enhancing the service life of functional parts; hence, the present work also includes an extensive study of the effect of process parameters on the sliding wear of test specimens. The study not only provides insight into the complex dependency of wear on the process parameters but also develops a statistically validated predictive equation that process planners can use for accurate wear prediction in practice. Finally, a comparative evaluation of the two swarm-based optimization methods, QPSO and BFOA, is presented.
It is shown that BFOA, because of its biologically motivated structure, has better exploration and exploitation ability but requires more time to converge than QPSO. The methodology adopted in this study is quite general and can be used for other related or allied processes, especially multi-input, multi-output systems. The proposed study can be used by industries such as aerospace, automotive, and medical device manufacturing for identifying process capability and for further improving the FDM process or developing new processes based on similar principles.
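The composite desirability mentioned above combines several responses into a single objective by taking the geometric mean of individual desirability scores (the Derringer-Suich approach). A minimal sketch for larger-is-better responses follows; the strength values and limits are hypothetical, not the thesis's actual data:

```python
import math

def desirability_larger_is_better(y, low, target, weight=1.0):
    """Derringer-Suich desirability for a larger-is-better response.

    0 below `low`, 1 at or above `target`, a power curve in between.
    """
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def composite_desirability(responses):
    """Geometric mean of individual desirabilities (0 if any response fails)."""
    d = [desirability_larger_is_better(y, lo, t) for y, lo, t in responses]
    return math.prod(d) ** (1 / len(d))

# Hypothetical strengths in MPa: (measured, worst acceptable, best observed).
D = composite_desirability([(32.0, 20.0, 40.0),   # e.g. tensile
                            (55.0, 40.0, 60.0)])  # e.g. flexural
```

An optimizer such as QPSO would then search the process parameter space for the setting whose predicted responses maximize `D`.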

    Cluster Heads Selection and Cooperative Nodes Selection for Cluster-based Internet of Things Networks

    Clustering and cooperative transmission are key enablers in power-constrained Internet of Things (IoT) networks. The challenges for power-constrained devices in IoT networks are to reduce energy consumption and to guarantee Quality of Service (QoS) provision. In this thesis, optimal node selection algorithms based on clustering and cooperative communication are proposed for different network scenarios, in particular:
    • A QoS-aware, energy-efficient cluster head (CH) selection algorithm for one-hop capillary networks. This algorithm selects the optimum set of CHs and constructs clusters accordingly, based on the location and residual energy of the devices.
    • Cooperative node selection algorithms for cluster-based capillary networks. By utilising the spatial diversity of cooperative communication, these algorithms select the optimum set of cooperative nodes to assist the CHs in the long-haul transmission. In addition, with regard to even energy distribution in one-hop cluster-based capillary networks, the CH selection is taken into consideration when developing the cooperative device selection algorithms.
    The performance of the proposed selection algorithms is evaluated via comprehensive simulations. Simulation results show that the proposed algorithms can extend network lifetime by up to 20% and decrease the overall packet error rate (PER) by up to 50%. Furthermore, the simulation results also show that an optimal tradeoff between energy efficiency and QoS provision can be achieved in one-hop and multi-hop cluster-based scenarios. This work was supported by the Chinese Scholarship Council.
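To illustrate the kind of criterion described above, which weighs residual energy against node location, a toy CH scoring rule might look like the following; the weighting scheme and node data are hypothetical and are not the thesis's actual algorithm:

```python
import math

def select_cluster_head(nodes, alpha=0.7):
    """Pick a CH favouring high residual energy and a central position.

    nodes: list of dicts with 'id', 'x', 'y', and 'energy' keys.
    alpha: weight on residual energy vs. closeness to the cluster centroid.
    """
    cx = sum(n["x"] for n in nodes) / len(nodes)
    cy = sum(n["y"] for n in nodes) / len(nodes)
    e_max = max(n["energy"] for n in nodes)
    d_max = max(math.hypot(n["x"] - cx, n["y"] - cy) for n in nodes) or 1.0

    def score(n):
        e = n["energy"] / e_max                           # normalised energy
        d = math.hypot(n["x"] - cx, n["y"] - cy) / d_max  # normalised distance
        return alpha * e + (1 - alpha) * (1 - d)          # central + energetic

    return max(nodes, key=score)

nodes = [
    {"id": 0, "x": 0.0, "y": 0.0, "energy": 0.9},
    {"id": 1, "x": 5.0, "y": 5.0, "energy": 0.5},
    {"id": 2, "x": 1.0, "y": 1.0, "energy": 0.8},
]
ch = select_cluster_head(nodes)
```

Rotating the CH role by re-running such a selection as energies deplete is one common way to spread load, in the spirit of the even energy distribution the thesis targets.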

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also produced breakthroughs in solving complex problems, including the green shop scheduling problem, severe nonlinear problems in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Towards Bayesian System Identification: With Application to SHM of Offshore Structures

    Within the offshore industry, Structural Health Monitoring remains a growing area of interest. The oil and gas sectors are faced with ageing infrastructure and are driven by the desire for reliable lifetime extension, whereas the wind energy sector is investing heavily in a large number of new structures. This leads to a number of distinct challenges for Structural Health Monitoring which are brought together by one unifying theme: uncertainty. The offshore environment is highly uncertain; existing structures have not been monitored from construction, and the loading and operational conditions they have experienced (among other factors) are not known. For the wind energy sector, the large number of structures makes traditional inspection methods costly and, in some cases, dangerous due to the inaccessibility of many wind farms. Structural Health Monitoring attempts to address these issues by providing tools that allow automated online assessment of the condition of structures to aid decision making. This thesis presents a number of Bayesian methods that allow system identification, for Structural Health Monitoring, under uncertainty. The Bayesian approach explicitly incorporates the prior knowledge that is available and combines it with evidence from observed data to form updated beliefs. This is a natural way to approach Structural Health Monitoring, or indeed many engineering problems. It is reasonable to assume that some knowledge is available to the engineer before attempting to detect, locate, classify, or model damage on a structure. A framework in which this knowledge can be exploited, and in which the uncertainty in that knowledge is handled rigorously, is a powerful methodology. The problem is that the actual computation of Bayesian results can pose a significant challenge, both computationally and in terms of specifying appropriate models.
This thesis aims to present a number of Bayesian tools, each of which leverages the power of the Bayesian paradigm to address a different Structural Health Monitoring challenge. Within this work, Gaussian Process models are presented as a flexible nonparametric Bayesian approach to regression, extended to handle dynamic models within the Gaussian Process NARX framework. The challenge of training Gaussian Process models is seldom discussed, and the work shown here offers a quantitative assessment of different learning techniques, including discussions of the choice of cost function for optimising the hyperparameters and the choice of the optimisation algorithm itself. Although rarely considered, the effects of these choices are demonstrated to be important, and they inform the use of a Gaussian Process NARX model for wave load identification on offshore structures. The work is not restricted to Gaussian Process models; Bayesian state-space models are also used. The novel use of Particle Gibbs for the identification of nonlinear oscillators is shown, and modifications to this algorithm are applied to handle its specific use in Structural Health Monitoring. Alongside this, the Bayesian state-space model is used to perform joint input-state-parameter inference for Operational Modal Analysis, where the use of priors over the parameters and the forcing function (in the form of a Gaussian Process transformed into a state-space representation) provides a methodology for output-only identification under parameter uncertainty. Interestingly, this method is shown to recover the parameter distributions of the model without compromising the recovery of the loading time-series signal when compared to the case where the parameters are known. Finally, a novel use of an online Bayesian clustering method is presented for performing Structural Health Monitoring in the absence of any available training data.
This online method does not require a pre-collected training dataset, nor a model of the structure, and is capable of detecting and classifying a range of operational and damage conditions while in service. This leaves the reader with a toolbox of methods which can be applied, where appropriate, to the identification of dynamic systems, with a view to Structural Health Monitoring problems within the offshore industry and across engineering.
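Gaussian Process regression, the building block behind the GP-NARX models discussed above, can be sketched in a few lines; a NARX variant would simply use lagged inputs and outputs as the regressors. The kernel and hyperparameter values here are illustrative, not those of the thesis:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP regression."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_ss = rbf_kernel(X_test, X_test)
    # Cholesky factorisation for a numerically stable solve of K^{-1} y.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Fit a noisy-free sine and predict at a held-out point.
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([np.pi / 2]))
```

Training, as the thesis discusses, then amounts to choosing the kernel hyperparameters, e.g. by maximising the marginal likelihood under some chosen optimiser, and those choices materially affect the result.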

    Cell Production System Design: A Literature Review

    Purpose: In a cell production system, a number of machines that differ in function are housed in the same cell. The task of these cells is to complete operations on similar parts that belong to the same group. Determining the families of parts and the machine cells is one of the major design problems for production cells. Cell production system design methods include clustering, graph theory, artificial intelligence, meta-heuristics, simulation, and mathematical programming. This article reviews these methods and the research in the field of cell production system design. Methodology: From 187 articles published in this field by authoritative scientific sources, articles were selected based on the year of publication and on the number of realistic constraints considered, searched using the keywords of those constraints; among them, articles that simultaneously address various aspects of production and design problems, such as machine costs, cell size, and process routing, were chosen. Findings: The distribution of the use of these methods, and of the constraints considered by their researchers, shows the usage and efficiency of each method; by examining them, more effective design approaches for this type of production system can be identified. Originality/Value: In this article, the literature on cell production systems from 1972 to 2021 has been reviewed.