586 research outputs found

    Chaotic multi-objective optimization based design of fractional order PIλDμ controller in AVR system

    In this paper, a fractional order (FO) PIλDμ controller is designed to handle several conflicting objective functions for an Automatic Voltage Regulator (AVR) system. An improved evolutionary Non-dominated Sorting Genetic Algorithm II (NSGA-II), augmented with a chaotic map for greater effectiveness, is used for the multi-objective optimization problem. Pareto fronts showing the trade-offs between the different design criteria are obtained for the PIλDμ and PID controllers. A comparative analysis against the standard PID controller demonstrates the merits and demerits of the fractional order PIλDμ controller. Comment: 30 pages, 14 figures
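The chaotic-map augmentation mentioned above can be illustrated with a minimal sketch. A logistic map is a common choice for chaotic sequences in evolutionary algorithms (the abstract does not state which map the paper uses); with suitable parameters it generates a deterministic, non-repeating sequence in (0, 1) that can stand in for uniform random draws inside the GA:

```python
def logistic_map_sequence(x0, n, r=4.0):
    """Generate n values from the logistic map x <- r * x * (1 - x).

    With r = 4.0 and a seed in (0, 1) away from the map's fixed points,
    the sequence is chaotic and stays inside (0, 1).
    """
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# Example: use chaotic values wherever the GA would call a uniform RNG.
seq = logistic_map_sequence(0.7, 5)
```

In a chaos-augmented NSGA-II these values typically drive initialization or mutation; the seed 0.7 and parameter r = 4.0 are illustrative assumptions.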

    Develop an autonomous product-based reconfigurable manufacturing system

    With an ever-changing market that includes mass customization and product variety, reconfigurable manufacturing systems (RMS) have been presented as the solution: a manufacturing system that combines the benefits of the two classic manufacturing systems to increase responsiveness and reduce production time and costs. To cope with the lack of physical systems, an RMS has been built at UiT Narvik. Today, both reconfiguration and layout decisions must be executed manually by a human, a task that is both time-consuming and far from optimal. A method of automating the layout generation, and thus the manufacturing system, is presented in this thesis. To the author’s knowledge, such an experiment has not been performed previously. Layouts are generated with an NSGA-II algorithm in Python by minimizing objectives from a developed mathematical model. The results have been tested with a MiR-100 mobile robot placing five modules in two different layouts and compared with a digital visualization for validation. In addition to the visualization, videos of the physical system's automated layout generation are presented. The results show that the method both generates feasible layouts and enhances the automation of the system.
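The thesis's mathematical model is not reproduced in the abstract, but the kind of objective an NSGA-II would minimize for a layout can be sketched as a flow-weighted travel distance between module positions. The module count, coordinates, and flow matrix below are invented for illustration:

```python
import itertools
import math

def total_travel(positions, flow):
    """Flow-weighted sum of Euclidean distances between all module pairs.

    positions: list of (x, y) tuples, one per module.
    flow: symmetric matrix, flow[i][j] = transport intensity between i and j.
    """
    cost = 0.0
    for (i, pi), (j, pj) in itertools.combinations(enumerate(positions), 2):
        cost += flow[i][j] * math.dist(pi, pj)
    return cost

# Two hypothetical layouts for three modules; lower cost is better.
flow = [[0, 2, 1],
        [2, 0, 0],
        [1, 0, 0]]
compact = total_travel([(0, 0), (1, 0), (0, 1)], flow)
spread = total_travel([(0, 0), (5, 0), (0, 5)], flow)
```

An NSGA-II would evolve the position vectors against this and the model's other objectives simultaneously.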

    Optimization as a design strategy. Considerations based on building simulation-assisted experiments about problem decomposition

    In this article, the most fundamental decomposition-based optimization method - block coordinate search, based on the sequential decomposition of problems into subproblems - and building performance simulation programs are used to reason about a building design process at the micro-urban scale, and strategies are defined to make the search more efficient. Cyclic overlapping block coordinate search is considered here in its double nature as an optimization method and as a surrogate model (and metaphor) of a sequential design process. Heuristic indicators apt to support the design of search structures suited to that method are developed from building-simulation-assisted computational experiments aimed at choosing the form and position of a small building in a plot. Those indicators link the sharing of structure between subspaces ("commonality") to recursive recombination, measured as the freshness of the search wake and the novelty of the search moves. The aim of these indicators is to measure the relative effectiveness of decomposition-based design moves and to create efficient block searches. Implications of a possible use of these indicators in genetic algorithms are also highlighted. Comment: 48 pages, 12 figures, 3 tables
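As a concrete, if toy, illustration of the method named above: a cyclic block coordinate search optimizes one block of variables at a time while holding the others fixed, then cycles through the blocks. The quadratic objective and per-block grid search below are stand-ins for the article's building-simulation objective:

```python
def cyclic_block_search(f, x0, blocks, candidates, sweeps=3):
    """Cyclically minimize f by exhaustive search over each block in turn.

    blocks: list of index lists, e.g. [[0], [1]] for two one-variable blocks.
    candidates: tuples of trial values, each matching the block size.
    """
    x = list(x0)
    for _ in range(sweeps):
        for block in blocks:
            best_score = f(x)
            best_vals = [x[i] for i in block]
            for vals in candidates:
                for i, v in zip(block, vals):
                    x[i] = v
                score = f(x)
                if score < best_score:
                    best_score, best_vals = score, list(vals)
            # Keep the best values found for this block, then move on.
            for i, v in zip(block, best_vals):
                x[i] = v
    return x, f(x)

# Toy objective: minimize (x - 2)^2 + (y + 1)^2, one variable per block.
grid = [(-2,), (-1,), (0,), (1,), (2,)]
x_best, f_best = cyclic_block_search(
    lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
    [0, 0], [[0], [1]], grid)
```

The article's "overlapping" variant would let blocks share variables; that sharing is exactly what the commonality indicators are meant to structure.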

    Learning to decompose: a paradigm for decomposition-based multiobjective optimization

    This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record. Decomposition-based evolutionary multiobjective optimization (EMO) algorithms have become an increasingly popular choice for a posteriori multiobjective optimization. However, recent studies have shown that their performance strongly depends on the shape of the Pareto front (PF). This can be attributed to the decomposition method, whose reference points and subproblem formulation settings do not adapt well to varying problem characteristics. In this paper, we develop a learning-to-decompose (LTD) paradigm that adaptively sets the decomposition method by learning the characteristics of the estimated PF. Specifically, it consists of two interdependent parts, i.e., a learning module and an optimization module. Given the current nondominated solutions from the optimization module, the learning module periodically learns an analytical model of the estimated PF. Thereafter, useful information is extracted from the learned model to set the decomposition method for the optimization module: 1) reference points compliant with the PF shape and 2) subproblem formulations whose contours and search directions are appropriate for the current status. Accordingly, the optimization module, which in principle can be any decomposition-based EMO algorithm, decomposes the multiobjective optimization problem into a number of subproblems and optimizes them simultaneously. To validate our proposed LTD paradigm, we integrate it with two decomposition-based EMO algorithms and compare them with four state-of-the-art algorithms on a series of benchmark problems with various PF shapes. Royal Society
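The subproblem formulations referred to above are scalarizations of the multiobjective problem. A standard one in decomposition-based EMO (e.g., MOEA/D-style algorithms; the paper's adaptive variants are not shown here) is the Tchebycheff function, whose contours depend on the weight vector and the ideal point:

```python
def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.

    Minimizing g pulls the solution toward the ideal point z* along a
    search direction set by the weight vector w.
    """
    return max(w * abs(f - z) for w, f, z in zip(weights, objectives, ideal))

# Two solutions compared under the same subproblem (weights, ideal point);
# the objective values below are invented for illustration.
g_a = tchebycheff([0.2, 0.8], [0.5, 0.5], [0.0, 0.0])
g_b = tchebycheff([0.6, 0.6], [0.5, 0.5], [0.0, 0.0])
```

The LTD paradigm's point is that the weights (reference points) and the scalarizing function itself can be re-set adaptively from the learned PF model rather than fixed in advance.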

    A Linear Combination of Heuristics Approach to Spatial Sampling Hyperspectral Data for Target Tracking

    Persistent surveillance of the battlespace yields better battlespace awareness, which aids in obtaining air superiority, winning battles, and saving friendly lives. Although hyperspectral imagery (HSI) data has proven useful for discriminating targets, it presents many challenges as a tool for persistent surveillance. A new sensor under development has the potential to overcome these challenges and transform persistent surveillance capability by providing HSI data for a limited number of pixels and grayscale video for the remainder. The challenge in exploiting this new sensor is determining where in the sensor's field of view the HSI data will be most useful. The approach taken is to use a utility function with components of equal dispersion, periodic polling, missed measurements, and predictive probability of association error (PPAE). The relative importance, or optimal weighting, of the different types of targets of interest (TOI) is determined by a genetic algorithm using a multi-objective problem formulation. Experiments show that using the utility function with equal weighting results in superior target tracking compared to any individual component by itself, and that the equal weighting is close to the optimal solution. The new sensor is successfully exploited, resulting in improved persistent surveillance.
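The linear combination described above can be sketched as a weighted sum of per-component heuristic scores. The component names follow the abstract; the scores and weights below are invented for illustration:

```python
def utility(scores, weights):
    """Combined utility: weighted sum of heuristic component scores."""
    return sum(w * s for w, s in zip(weights, scores))

# Components: equal dispersion, periodic polling, missed measurements, PPAE.
equal_w = [0.25, 0.25, 0.25, 0.25]
pixel_a = [0.9, 0.1, 0.0, 0.2]   # hypothetical component scores
pixel_b = [0.4, 0.5, 0.6, 0.5]

# The sensor's limited HSI pixels go to the highest-utility candidates;
# the GA's role is to tune the weight vector rather than use equal weights.
best = max([pixel_a, pixel_b], key=lambda s: utility(s, equal_w))
```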

    Higher-order Knowledge Transfer for Dynamic Community Detection with Great Changes

    Network structure evolves with time in the real world, and the discovery of changing communities in dynamic networks is an important research topic that poses challenging tasks. Most existing methods assume that no significant change occurs in the network; namely, that the difference between adjacent snapshots is slight. However, great changes usually do exist in the real world. A great change in the network makes it difficult for community detection algorithms to obtain valuable information from the previous snapshot, leading to negative transfer at the next time steps. This paper focuses on dynamic community detection with substantial changes by integrating higher-order knowledge from the previous snapshots to aid the subsequent snapshots. Moreover, to improve search efficiency, a higher-order knowledge transfer strategy is designed to choose between first-order and higher-order knowledge by detecting the similarity of the adjacency matrices of snapshots. In this way, our proposal can better retain the advantages of previous community detection results and transfer them to the next task. We conduct experiments on four real-world networks, including networks with great or minor changes. Experimental results on the low-similarity datasets demonstrate that higher-order knowledge is more valuable than first-order knowledge when the network changes significantly, and that it keeps this advantage even when handling the high-similarity datasets. Our proposal can also guide other dynamic optimization problems with great changes. Comment: Submitted to IEEE TEV
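The similarity test described above can be sketched by comparing consecutive snapshots and switching the transfer source when the network change is "great". A Jaccard similarity over edge sets and the 0.5 threshold are illustrative assumptions, not the paper's exact criterion:

```python
def edge_jaccard(edges_prev, edges_next):
    """Jaccard similarity between two snapshots' edge sets."""
    union = edges_prev | edges_next
    if not union:
        return 1.0  # two empty snapshots are trivially identical
    return len(edges_prev & edges_next) / len(union)

def transfer_order(edges_prev, edges_next, threshold=0.5):
    """Use first-order knowledge for similar snapshots, higher-order otherwise."""
    sim = edge_jaccard(edges_prev, edges_next)
    return "first-order" if sim >= threshold else "higher-order"
```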

    A Parallel Genetic Algorithm for Optimizing Multicellular Models Applied to Biofilm Wrinkling

    Multiscale computational models integrating sub-cellular, cellular, and multicellular levels can be powerful tools that help researchers replicate, understand, and predict multicellular biological phenomena. To leverage their potential, these models need correct parameter values, which specify cellular physiology and affect multicellular outcomes. This work presents a robust parameter optimization method, utilizing a parallel and distributed genetic-algorithm software package. A genetic algorithm was chosen because of its superiority in fitting complex functions for which mathematical techniques are less suited. The search for optimal parameters proceeds by comparing the multicellular behavior of a simulated system to that of a real biological system on the basis of features extracted from each that capture high-level, emergent multicellular outcomes. The goal is to find the set of parameters which minimizes the discrepancy between the two sets of features. The method is first validated by demonstrating its effectiveness on synthetic data, and then applied to calibrating a simple mechanical model of biofilm wrinkling, a common type of morphology observed in biofilms. Spatiotemporal convergence of cellular movement derived from experimental observations of different strains of Bacillus subtilis colonies is used as the basis of comparison.
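The comparison step described above reduces, in sketch form, to a discrepancy measure between feature vectors extracted from the simulated and observed systems. The feature names and the squared-error form here are illustrative assumptions, not the work's exact measure:

```python
def discrepancy(sim_features, real_features):
    """Sum of squared differences over the features of the real system."""
    return sum((sim_features[k] - real_features[k]) ** 2
               for k in real_features)

# Hypothetical high-level features of biofilm wrinkling morphology.
real = {"wrinkle_wavelength": 12.0, "radial_speed": 0.8}
candidate = {"wrinkle_wavelength": 10.0, "radial_speed": 1.0}
score = discrepancy(candidate, real)  # lower is better; the GA minimizes this
```

The GA evolves candidate parameter sets, runs the multicellular simulation for each, and uses this discrepancy (or one like it) as the fitness to minimize.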

    Optimization of Thermo-mechanical Conditions in Friction Stir Welding
