
    Many-Objective Hybrid Optimization Under Uncertainty With Applications

    A novel method for solving many-objective optimization problems under uncertainty was developed. It is well known that no single optimization algorithm performs best for all problems. Therefore, the developed method, a many-objective hybrid optimizer (MOHO), uses five constituent algorithms and actively switches between them throughout the optimization process, allowing for robust optimization. MOHO monitors the progress made by each of the five algorithms and allows the best-performing algorithm more attempts at finding the optimum. This removes the need for user input in selecting an algorithm, since the best-performing algorithm is selected automatically, thereby increasing the probability of converging to the optimum. An uncertainty quantification framework based on sparse polynomial chaos expansion was also developed and validated to propagate the uncertainties in the input parameters to the objective functions. Whereas the samples and analysis runs needed for standard polynomial chaos expansion increase exponentially with dimensionality, the presented sparse polynomial chaos approach efficiently propagates the uncertainty with only a few samples, thereby greatly reducing the computational cost. The performance of MOHO was investigated on a total of 65 analytical test problems from the DTLZ and WFG test suites, for which the analytical solution is known. MOHO was also applied to two additional real-life cases of aerodynamic shape design of subsonic and hypersonic bodies. Aerodynamic shape optimization is often computationally expensive and is, therefore, a good test case to investigate MOHO's ability to reduce the computational time through robust optimization and accelerated convergence. The subsonic design optimization had three objectives: maximize lift and minimize drag and moment. The hypersonic design optimization had two objectives: maximize volume and minimize drag. Two accelerated solvers, based on the fast multipole method and Newtonian impact theory, were developed for simulating subsonic and hypersonic flows. The results show that MOHO performed, on average, better than all five constituent algorithms in 52% of the DTLZ+WFG problems. The results of robust optimization of the subsonic and hypersonic bodies were in good agreement with theory. The MOHO developed is capable of solving many-objective, multi-objective, and single-objective, constrained and unconstrained optimization problems, with and without uncertainty, with little user input.
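As a rough illustration of the switching idea described above, the sketch below keeps a credit score per constituent optimizer and gives the currently best-performing one more attempts. It is a minimal, single-objective stand-in written for this summary: the credit rule, parameters, and the `optimizers` interface are assumptions, not MOHO's actual five-algorithm, many-objective machinery.

```python
import random

def hybrid_optimize(optimizers, population, evaluate, generations=100):
    """Illustrative hybrid loop: the better an optimizer has performed recently,
    the more likely it is to be given the next attempt."""
    credits = {opt.__name__: 1.0 for opt in optimizers}
    best = min(evaluate(ind) for ind in population)

    for _ in range(generations):
        # Pick an optimizer with probability proportional to its credit.
        weights = [credits[opt.__name__] for opt in optimizers]
        chosen = random.choices(optimizers, weights=weights, k=1)[0]

        population = chosen(population, evaluate)
        new_best = min(evaluate(ind) for ind in population)

        # Reward recent improvement, decay stale credit, and keep a small floor
        # so that no optimizer is ever excluded permanently.
        improvement = max(0.0, best - new_best)
        credits[chosen.__name__] = 0.5 * credits[chosen.__name__] + improvement + 0.1
        best = min(best, new_best)

    return population, best
```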

    Many-Objective Genetic Programming for Job-Shop Scheduling

    The Job Shop Scheduling (JSS) problem is considered a challenging one due to practical requirements such as multiple objectives and the complexity of production flows. JSS has received great attention because of its broad applicability in real-world situations. One of the prominent solution approaches to handling JSS problems is to design effective dispatching rules. Dispatching rules are investigated broadly in both academic and industrial environments because they are easy to implement (by computers and shop floor operators) with a low computational cost. However, the manual development of dispatching rules is time-consuming and requires expert knowledge of the scheduling environment. The hyper-heuristic approach that uses genetic programming (GP) to solve JSS problems is known as GP-based hyper-heuristic (GP-HH). GP-HH is a very useful approach for discovering dispatching rules automatically. Although it is technically simple to consider only single-objective optimization for JSS, it is now widely evidenced in the literature that JSS by nature presents several potentially conflicting objectives, including maximal flowtime, mean flowtime, and mean tardiness. A few studies in the literature attempt to solve many-objective JSS with more than three objectives, but existing studies have some major limitations. First, many-objective JSS problems have been solved by multi-objective evolutionary algorithms (MOEAs). However, recent studies have suggested that conventional MOEAs scale poorly and that their performance degrades dramatically on many-objective optimization problems (MaOPs). Many-objective JSS using MOEAs inherits the same challenge, so applying MOEAs to many-objective JSS problems often fails to select quality dispatching rules. Second, although the reference point method is one of the most prominent and efficient methods for diversity maintenance in many-objective problems, it uses a uniform distribution of reference points, which is only appropriate for a regular Pareto front. JSS problems, however, often have an irregular Pareto front, and uniformly distributed reference points do not match it well, resulting in many useless points during evolution. These useless points can significantly affect the performance of reference point-based algorithms and cannot help to enhance the solution diversity of the evolved Pareto front in many-objective JSS problems. Third, Pareto Local Search (PLS) is a prominent and effective local search method for handling multi-objective JSS optimization problems, but no existing studies use PLS in GP-HH. To address these limitations, the overall goal of this thesis is to develop GP-HH approaches to evolving effective rules that handle many conflicting objectives simultaneously in JSS problems. To achieve the first goal, this thesis proposes the first many-objective GP-HH method for JSS problems to find the Pareto fronts of non-dominated dispatching rules. Decision-makers can utilize this GP-HH method to select appropriate rules based on their preferences over multiple conflicting objectives. This study combines GP with the fitness evaluation scheme of a many-objective reference point-based approach. The experimental results show that the proposed algorithm significantly outperforms MOEAs such as NSGA-II and SPEA2. To achieve the second goal, this thesis proposes two adaptive reference point approaches (model-free and model-driven). In both approaches, the reference points are generated according to the distribution of the evolved dispatching rules. The model-free reference point adaptation approach is inspired by Particle Swarm Optimization (PSO). The model-driven approach constructs a density model and estimates the density of solutions in each defined sub-location of the whole objective space. Furthermore, the model-driven approach smooths the model by applying a Gaussian process model and calculating the area under the mean function; this area helps to determine the required number of reference points. The experimental results demonstrate that both adaptive approaches are significantly better than several state-of-the-art MOEAs. To achieve the third goal, the thesis proposes the first algorithm that combines GP as a global search with PLS as a local search in many-objective JSS. The proposed algorithm introduces an effective fitness-based selection strategy for selecting initial individuals for neighborhood exploration, defines a proper neighborhood structure for GP, and introduces a new selection mechanism for selecting effective dispatching rules during the local search. The experimental results on JSS benchmark problems show that the newly proposed algorithm significantly outperforms its baseline algorithm (GP-NSGA-III).
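The idea of adapting reference points to the distribution of the evolved rules, rather than using a fixed uniform lattice, can be sketched as below. This is only a generic illustration under stated assumptions (normalizing objective vectors and clustering them with a plain k-means so that reference directions follow the front's shape); it is not the thesis's model-free PSO-inspired or Gaussian-process-driven procedure.

```python
import numpy as np

def adapt_reference_points(objectives, n_points, iters=20):
    """Place reference points where the current non-dominated set actually lies.

    `objectives` is an (n_solutions, n_objectives) array, with
    n_points <= n_solutions assumed.  Vectors are normalized onto the unit
    simplex and clustered so reference directions follow an irregular front.
    """
    obj = np.asarray(objectives, dtype=float)
    obj = obj - obj.min(axis=0)                       # shift to the ideal point
    directions = obj / (obj.sum(axis=1, keepdims=True) + 1e-12)

    # Plain k-means on the simplex; the centroids become the reference points.
    rng = np.random.default_rng(0)
    centroids = directions[rng.choice(len(directions), n_points, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((directions[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2), axis=1)
        for k in range(n_points):
            members = directions[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)

    # Re-project the centroids onto the simplex before returning them.
    return centroids / (centroids.sum(axis=1, keepdims=True) + 1e-12)
```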

    Optimal allocation of distributed generation for power loss reduction and voltage profile improvement

    Distributed generation (DG) integration in distribution systems has increased to high penetration levels. There is a need to improve the technical benefits of DG integration through optimal allocation in the power system network; these benefits include reduced electrical power losses and an improved voltage profile. Optimal DG location and sizing in a power system distribution network, with the aim of reducing system power losses and improving the voltage profile, still remains a major problem. Much research has addressed it using techniques such as numerical computation, artificial intelligence, and analytical approaches, but the existing works still suffer from several drawbacks. As a result, much can still be done in developing new algorithms, or improving existing ones, to address this important issue more efficiently and effectively. The majority of the proposed algorithms emphasize real power losses only in their formulations and ignore the reactive power losses, which are key to the operation of power systems. Hence, there is a need for an approach that incorporates reactive power and the voltage profile in the optimization process, so that the effects of high power losses and a poor voltage profile can be mitigated. This research used a hybrid Genetic Algorithm and Improved Particle Swarm Optimization (GA-IPSO) method for optimal placement and sizing of DG for power loss reduction and voltage profile improvement. GA-IPSO is used to optimize DG location and size while considering both real and reactive power losses. The real and reactive power, as well as the power loss sensitivity factors, were utilized to identify candidate buses for DG allocation. The GA-IPSO algorithm was programmed in Matlab. The algorithm reduces the search space for the search process, increases its rate of convergence, and eliminates the possibility of being trapped in local minima. The new approach thus helps reduce power losses and improve the voltage profile through optimal DG placement and sizing.
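For context, a commonly used form of the power loss sensitivity factors for shortlisting candidate buses (not necessarily the exact formulation adopted in this work) relates the total real power loss to the effective real and reactive power P_i, Q_i delivered beyond bus i through a branch k of resistance R_k:

```latex
\frac{\partial P_{\mathrm{loss}}}{\partial P_i} = \frac{2\,P_i\,R_k}{V_i^{2}},
\qquad
\frac{\partial P_{\mathrm{loss}}}{\partial Q_i} = \frac{2\,Q_i\,R_k}{V_i^{2}}
```

Buses with the highest sensitivities become the candidate locations, and the hybrid GA-IPSO search then selects, among them, the DG site and size that best reduce the combined real and reactive losses and the voltage deviation.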

    Development of a multi-objective optimization algorithm based on Lichtenberg figures

    This doctoral dissertation presents the most important concepts of multi-objective optimization and a systematic review of the most cited articles on this subject in mechanical engineering in recent years. The state of the art shows a trend toward the use of metaheuristics and of a posteriori decision-making techniques to solve engineering problems. This fact increases the demand for algorithms, which compete to deliver the most accurate answers at the lowest possible computational cost. In this context, a new hybrid multi-objective metaheuristic inspired by lightning and Lichtenberg figures is proposed. The Multi-objective Lichtenberg Algorithm (MOLA) is tested using complex test functions and constrained, explicit engineering problems and is compared with other metaheuristics. MOLA outperformed the most used algorithms in the literature: NSGA-II, MOPSO, MOEA/D, MOGWO, and MOGOA. After initial validation, it was applied to two complex problems that cannot be evaluated analytically. The first was a design case: the multi-objective optimization of CFRP isogrid tubes using the finite element method. The optimizations were carried out using two methodologies: i) a metamodel, and ii) finite element model updating. The latter proved to be the better methodology, finding solutions that reduced the mass by at least 45.69%, the instability coefficient by 18.4%, and the Tsai-Wu failure index by 61.76%, while increasing the natural frequency by at least 52.57%. In the second application, MOLA was internally modified and combined with feature selection techniques to become the Multi-objective Sensor Selection and Placement Optimization based on the Lichtenberg Algorithm (MOSSPOLA), an unprecedented Sensor Placement Optimization (SPO) algorithm that maximizes the acquired modal response and minimizes the number of sensors for any structure. Although this is a structural health monitoring principle, it had not been done before. MOSSPOLA was applied to a real helicopter's main rotor blade using the 7 best-known metrics in SPO. Pareto fronts and sensor configurations were generated and compared for the first time. Better sensor distributions were associated with higher hypervolume, and the algorithm found a sensor configuration for each number of sensors and each metric, including one with 100% accuracy in identifying delamination considering triaxial modal displacements, a minimum number of sensors, and noise for all blade sections.
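Since the resulting Pareto fronts are compared by hypervolume, the following small sketch shows the 2-D hypervolume indicator for minimization against a fixed reference point; it is an illustration only, not the metric implementation used in the dissertation.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front for minimization problems.

    `front` is a list of (f1, f2) points and `ref` a reference point dominated
    by every front member.  The front is swept in increasing f1 and the
    dominated rectangles are accumulated.
    """
    pts = sorted(front)                      # ascending f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                     # skip points dominated in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: a larger value means the front covers more of the objective space.
print(hypervolume_2d([(1, 4), (2, 2), (4, 1)], ref=(5, 5)))
```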

    Intelligent Navigational Strategies For Multiple Wheeled Mobile Robots Using Artificial Hybrid Methodologies

    At present, mobile robots are applied in almost every field of science and engineering. Applications are not limited to industry but also include household, medical, defense, transportation, space, and many other domains. Mobile robots can perform tasks that human beings cannot do efficiently and accurately, such as working in hazardous and high-risk conditions or in space research. Hence, the autonomous navigation of mobile robots in uncertain environments is a highly discussed topic today. The present work concentrates on the implementation of artificial intelligence approaches for mobile robot navigation in an uncertain environment. Obstacle avoidance and optimal path planning are the key issues in autonomous navigation, and they are solved in the present work using artificial intelligence approaches. The methods used for navigational accuracy and efficiency are the Firefly Algorithm (FA), Probability-Fuzzy Logic (PFL), Matrix-based Genetic Algorithm (MGA), and hybrid controllers (FA-PFL, FA-MGA, FA-PFL-MGA). The proposed work provides effective navigation of single and multiple mobile robots in both static and dynamic environments. The simulation analysis was carried out in Matlab, and the controllers were then implemented on a mobile robot for real-time navigation analysis. During the analysis of the proposed controllers, it was noticed that the Firefly Algorithm performs well compared to the fuzzy and genetic algorithm controllers. It also plays an important role in building the successful hybrid approaches FA-PFL, FA-MGA, and FA-PFL-MGA. The proposed hybrid methodologies perform better than the individual controllers, especially in terms of path optimality and navigational time. The developed controllers also prove to be efficient when compared with other navigational controllers such as Neural Network, Ant Colony Algorithm, Particle Swarm Optimization, and Neuro-Fuzzy controllers.
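As a rough sketch of the attraction mechanism the Firefly Algorithm contributes to these controllers, the code below implements the standard firefly move (attractiveness decaying with squared distance plus a small random walk) over candidate positions; the fitness function and parameter values are placeholders, not the navigation controller developed in this work.

```python
import numpy as np

def firefly_step(positions, fitness, beta0=1.0, gamma=1.0, alpha=0.2):
    """One iteration of the standard firefly update.

    Each firefly (a candidate robot position or waypoint) moves toward every
    brighter firefly; brightness is taken as -fitness (lower cost is brighter).
    """
    rng = np.random.default_rng()
    n, dim = positions.shape
    brightness = -np.array([fitness(p) for p in positions])
    new_positions = positions.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((positions[i] - positions[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)           # attractiveness
                step = alpha * (rng.random(dim) - 0.5)       # random walk term
                new_positions[i] += beta * (positions[j] - positions[i]) + step
    return new_positions
```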

    Contribution to the optimization of power flow using improved artificial intelligence methods

    Optimal reactive power dispatch (ORPD) is an important task for achieving a more economical, secure, and stable operating state of the electric power system. It is a complex optimization problem that aims to identify the optimal control variables of the network's various regulating devices in order to minimize an objective function subject to constraints. Many meta-heuristic techniques, characterized by the exploration and exploitation behavior of their search mechanism, have been proposed to overcome the various complexities of solving the ORPD problem. Balancing these two characteristics is a challenge that must be met to reach better solution quality. The Artificial Bee Colony (ABC) algorithm, a well-known meta-heuristic method, has proven strong in exploration but weak in exploitation, which makes it necessary to improve the basic version of the ABC algorithm. The Salp Swarm Algorithm (SSA) is a newly developed swarm-based meta-heuristic with good local search capability, since it uses the global best solution at each iteration to discover promising solutions. In this research, a new hybrid approach based on the ABC and SSA algorithms (ABC-SSA) is developed and applied to solve the ORPD problem. The proposed approach attempts to improve the exploitation capability of the ABC algorithm by using SSA. The effectiveness of ABC-SSA is examined on four standard test power systems, the IEEE 30-bus, IEEE 57-bus, IEEE 118-bus, and large-scale IEEE 300-bus networks, considering well-known objective functions of the ORPD problem, namely the total active transmission power losses (Ploss), the total voltage deviation (TVD) from the nominal voltage magnitude, and the voltage stability index (VSI) of the load buses. The simulation results obtained show that the proposed ABC-SSA is more effective than ABC, SSA, and other recently developed meta-heuristic optimization techniques reported in the literature of this application domain.
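One possible way to embed an SSA-style exploitation move inside the ABC loop, in the spirit of the hybrid described above but not necessarily the author's exact scheme, is sketched below; the update formulas, the coefficient c1, and the bounds handling are assumptions.

```python
import numpy as np

def abc_ssa_step(foods, evaluate, lb, ub, rng=np.random.default_rng()):
    """One illustrative hybrid iteration: an ABC employed-bee phase for
    exploration, followed by SSA-style leader/follower moves toward the
    current best solution to strengthen exploitation."""
    n, dim = foods.shape
    fitness = np.array([evaluate(x) for x in foods])
    best = foods[np.argmin(fitness)].copy()

    # ABC employed-bee phase: perturb one random dimension toward a random partner.
    for i in range(n):
        partner, d = rng.integers(n), rng.integers(dim)
        candidate = foods[i].copy()
        candidate[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[partner, d])
        candidate = np.clip(candidate, lb, ub)
        if evaluate(candidate) < fitness[i]:
            foods[i], fitness[i] = candidate, evaluate(candidate)

    # SSA-style exploitation: the leader samples around the best solution
    # (c1 would normally shrink with the iteration count; a random stand-in
    # is used here), and followers average with their predecessor, pulling
    # the chain toward promising regions.
    c1 = 2.0 * np.exp(-(4.0 * rng.random()) ** 2)
    sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
    foods[0] = np.clip(best + sign * c1 * ((ub - lb) * rng.random(dim) + lb), lb, ub)
    for i in range(1, n):
        foods[i] = np.clip(0.5 * (foods[i] + foods[i - 1]), lb, ub)
    return foods
```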

    Vibration Analysis of Cracked Beam using Intelligent Technique

    Structural systems in a wide range of aeronautical, mechanical, and civil engineering fields are prone to damage and deterioration during their service life. An effective and reliable damage assessment methodology is therefore a valuable tool for the timely detection of damage and deterioration in structural members. Interest in various damage detection methods has increased considerably over the past two decades, and during this time many detection methods based on modal analysis techniques have been developed. Non-destructive inspection techniques are generally used to investigate critical changes in the structural parameters so that an unexpected failure can be prevented. These methods concentrate on a part of the structure, and the structure needs to be taken out of service in order to perform the inspection. Since these damage identification techniques require a large amount of human intervention, they are passive and costly methods.

    Toward Accurate, Efficient, and Robust Hybridized Discontinuous Galerkin Methods

    Computational science, including computational fluid dynamics (CFD), has become an indispensable tool for scientific discovery and engineering design, yet a key remaining challenge is to simultaneously ensure accuracy, efficiency, and robustness of the calculations. This research focuses on advancing a class of high-order finite element methods and develops a set of algorithms to increase the accuracy, efficiency, and robustness of calculations involving convection and diffusion, with application to the inviscid Euler and viscous Navier-Stokes equations. In particular, it addresses high-order discontinuous Galerkin (DG) methods, especially hybridized (HDG) methods, and develops adjoint-based methods for simultaneous mesh and order adaptation to reduce the error in a scalar functional of the approximate solution to the discretized equations. Contributions are made in key aspects of these methods applied to general systems of equations, addressing the scalability and memory requirements, the accuracy of HDG methods, and efficiency and robustness with new adaptation methods. First, this work generalizes existing HDG methods to systems of equations and, in so doing, creates a new primal formulation by applying DG stabilization methods as the viscous stabilization for HDG. The primal formulation is shown to be even more computationally efficient than the existing methods. Second, by instead keeping existing viscous stabilization methods and developing a new convection stabilization, this work shows that additional accuracy can be obtained, even in the case of purely convective systems. Both HDG methods are compared to DG in the same computational framework and are shown to be more efficient. Finally, a set of adaptation frameworks is developed for combined mesh and order refinement suitable for both DG and HDG discretizations. The first of these frameworks uses hanging-node-based mesh adaptation and develops a novel local approach for evaluating the refinement options. The second framework, intended for simplex meshes, extends the mesh optimization via error sampling and synthesis (MOESS) method to incorporate order adaptation. Collectively, the results from this research address a number of key issues currently at the forefront of high-order CFD methods, particularly output-based hp-adaptation for DG and HDG methods.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137150/1/jdahm_1.pd
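For reference, output-based adaptation of this kind is typically driven by an adjoint-weighted residual estimate of the output error; a generic form (not necessarily the exact estimator derived in the thesis) is

```latex
\delta J \;\equiv\; J_h\!\left(u_H^{h}\right) - J_h(u_h)
\;\approx\; \Psi_h^{T}\, R_h\!\left(u_H^{h}\right),
\qquad
\left(\frac{\partial R_h}{\partial u_h}\right)^{\!T}\!\Psi_h
  = \left(\frac{\partial J_h}{\partial u_h}\right)^{\!T},
```

where u_H^h is the coarse (H) solution injected into the finer space h, R_h is the fine-space residual, and Psi_h is the fine-space adjoint associated with the output J; localizing the adjoint-weighted residual element by element yields the indicators that decide where to refine the mesh or raise the polynomial order.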

    An improved data classification framework based on fractional particle swarm optimization

    Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique consisting of particles that move collectively over iterations to search for optimal solutions. However, conventional PSO is prone to a lack of convergence and even stagnation in complex, high-dimensional search problems with multiple local optima. Therefore, this research proposes an improved Mutually-Optimized Fractional PSO (MOFPSO) algorithm based on fractional derivatives and small step lengths to ensure convergence to global optima by providing a fine balance between exploration and exploitation. The proposed algorithm is tested and verified on ten benchmark functions against six established algorithms in terms of Mean of Error and Standard Deviation values. The proposed MOFPSO algorithm demonstrated the lowest Mean of Error values on all benchmark functions across all 30 runs (Ackley = 0.2, Rosenbrock = 0.2, Bohachevsky = 9.36E-06, Easom = -0.95, Griewank = 0.01, Rastrigin = 2.5E-03, Schaffer = 1.31E-06, Schwefel 1.2 = 3.2E-05, Sphere = 8.36E-03, Step = 0). Furthermore, the proposed MOFPSO algorithm is hybridized with Back-Propagation (BP), Elman Recurrent Neural Network (ERNN), and Levenberg-Marquardt (LM) Artificial Neural Networks (ANNs) to propose an enhanced data classification framework. The proposed classification framework is then evaluated for classification accuracy, computational time, and Mean Squared Error on five benchmark datasets against seven existing techniques. It can be concluded from the simulation results that the proposed MOFPSO-ERNN classification algorithm demonstrated good classification performance in terms of classification accuracy (Breast Cancer = 99.01%, EEG = 99.99%, PIMA Indian Diabetes = 99.37%, Iris = 99.6%, Thyroid = 99.88%) compared to existing hybrid classification techniques. Hence, the proposed technique can be employed to improve the overall classification accuracy and reduce the computational time in data classification applications.
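A common way fractional derivatives enter PSO, shown here only as an illustration that may differ in detail from the MOFPSO proposed in this work, is to replace the inertia term of the velocity update with a truncated Grünwald-Letnikov expansion over the last few velocities:

```python
import numpy as np

def fractional_velocity(v_hist, x, p_best, g_best, alpha=0.6, c1=1.5, c2=1.5,
                        rng=np.random.default_rng()):
    """Fractional-order PSO velocity update (Grünwald-Letnikov, 4-term truncation).

    `v_hist` holds the last four velocity vectors, most recent first.
    """
    v1, v2, v3, v4 = v_hist
    frac_inertia = (alpha * v1
                    + 0.5 * alpha * (1 - alpha) * v2
                    + (1 / 6) * alpha * (1 - alpha) * (2 - alpha) * v3
                    + (1 / 24) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v4)
    return (frac_inertia
            + c1 * rng.random(x.shape) * (p_best - x)
            + c2 * rng.random(x.shape) * (g_best - x))
```

With alpha = 1 the memory terms vanish and the update reduces to a standard PSO step; fractional alpha < 1 blends in older velocities, which is what gives the smoother, small-step behavior referred to above.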

    Analysis of heat pumps potential in demand response programs for residential buildings in Belgium and their impact on grid flexibility with thermal comfort consideration

    This study investigates the potential of heat pumps in demand response (DR) programs to provide flexibility to power grids, with a focus on residential buildings in Belgium. The research highlights the interplay between grid flexibility, energy efficiency, and thermal comfort, presenting a multi-dimensional analysis of sustainable practices within the residential sector through analytical simulations and the analysis of three case studies of demonstration projects in this research domain. Through strategic heat pump management, the study explores pathways for enhancing energy efficiency without significantly sacrificing occupants' thermal comfort. The core strategy of this work relies on two distinct building types with different insulation levels, defined by Belgian building standards as K15 and K45, each with a 180 m² floor area, as the backdrop for the investigation. These buildings are equipped with aero-thermal heat pumps that supply either radiators or a floor heating system, and the building insulation serves as a proxy for thermal mass storage. The uniqueness of the study lies in the deployment of a genetic algorithm that optimizes the heat pump operation according to day-ahead pricing signals. In a winter scenario set in February 2022, the findings reveal a 13% difference in heating energy demand between the two building types, attributable to their different insulation levels. The application of the genetic algorithm brought about notable cost savings, reducing peak demand by 28.56% for the K45 building and 14.52% for the K15 building. Flexibility is quantified in terms of heat pump consumption shifted away from peak demand periods. These numbers highlight the benefits of strategic heat pump operation and reflect the potential of DR programs to shift substantial energy demand from peak to off-peak periods.
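A minimal sketch of the kind of genetic algorithm described above, scheduling a heat pump's hourly operation against day-ahead prices with a crude comfort constraint, is given below; the price profile, heat pump rating, comfort proxy, and GA settings are invented placeholders rather than the study's building models or tariff data.

```python
import random

HOURS = 24
PRICES = [0.30 if 17 <= h <= 20 else 0.12 for h in range(HOURS)]  # hypothetical day-ahead EUR/kWh
HP_POWER_KW = 3.0          # electrical power drawn when the heat pump runs
MIN_ON_HOURS = 10          # crude comfort proxy: minimum daily runtime

def cost(schedule):
    """Energy cost of a 24-hour on/off schedule plus a penalty for comfort violation."""
    energy_cost = sum(on * HP_POWER_KW * p for on, p in zip(schedule, PRICES))
    comfort_penalty = 10.0 * max(0, MIN_ON_HOURS - sum(schedule))
    return energy_cost + comfort_penalty

def genetic_schedule(pop_size=50, generations=200, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(HOURS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, HOURS)
            child = a[:cut] + b[cut:]                        # one-point crossover
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = genetic_schedule()
print(f"daily cost = {cost(best):.2f}, runtime hours = {sum(best)}")
```

The genetic algorithm naturally pushes the heat pump's runtime toward low-price hours, which is the same mechanism by which the study's optimization shifts consumption away from peak periods.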