6 research outputs found

    A Review of Geophysical Modeling Based on Particle Swarm Optimization

    This paper reviews the application of the particle swarm optimization (PSO) algorithm to stochastic inverse modeling of geophysical data. The main features of PSO are summarized, and the most important contributions in several geophysical fields are analyzed. The aim is to trace the fundamental steps in the evolution of the PSO methodologies that have been adopted to model the Earth's subsurface, and then to critically evaluate their benefits and limitations. Original works have been selected from the geophysical literature to illustrate successful applications of PSO to the interpretation of electromagnetic (magnetotelluric and time-domain), gravimetric, magnetic, self-potential, direct-current and seismic data. These case studies are critically described and compared. In addition, joint optimization of multiple geophysical data sets by means of multi-objective PSO is presented to highlight the advantage of using a single solver that deploys Pareto optimality to handle different data sets without conflicting solutions. Finally, we propose best practices for implementing a customized algorithm from scratch to perform stochastic inverse modeling of any kind of geophysical data set, for the benefit of PSO practitioners and inexperienced researchers.
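A minimal global-best PSO of the kind the review surveys can be sketched as follows; the `misfit` callable stands in for the data-misfit functional of a geophysical inverse problem. Function names, default parameter values (inertia `w`, cognitive/social weights `c1`, `c2`) and the sphere-function usage below are illustrative assumptions, not taken from the paper.

```python
import random

def pso_minimize(misfit, dim, bounds, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: minimise `misfit` (model vector -> scalar) in a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [misfit(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g] # swarm's best position so far
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated position to the search box
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = misfit(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimise a 3-D sphere function as a stand-in for a data misfit.
best, val = pso_minimize(lambda m: sum(x * x for x in m), dim=3, bounds=(-5.0, 5.0))
```

In a real geophysical setting, `misfit` would run a forward model and compare its predicted response against observed data.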

    A more realistic genetic algorithm

    Genetic Algorithms (GAs) are loosely based on the natural cycle of reproduction, with selective pressure favouring the individuals best suited to their environment (i.e. the fitness function). However, many features of natural reproduction are not replicated in GAs, such as population members taking some time to reach puberty. This thesis describes a programme of research that set out to investigate the impact on the performance of a GA of introducing additional features that more closely replicate real-life processes. The motivation for the work was curiosity. The approach has been tested using various standard test functions. The results are interesting and show that, compared with a canonical GA, introducing features such as the need to reach puberty before reproduction can occur, and the risk of illness, can enhance the effectiveness of GAs in terms of the overall effort needed to find a solution. As a method that simulates the rules of nature, the Cardiff Genetic Algorithm (CGA) gives each individual several features that model the real world. Each individual in the population is given a life-span and an age; the population size is allowed to vary; and rather than generations, the concept of time steps is introduced, with each individual living for a number of time steps. An additional feature is also discussed involving multiple populations that have to compete for a limited resource, which can be thought of as "water". This, together with an illness parameter and accidental death, is used to study the behaviour of these populations.
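The life-like features described above (life-span, age, puberty, time steps, illness, and a capacity-limited shared resource) can be sketched roughly as below. All parameter values, the bit-counting objective and the capacity mechanism are assumptions for illustration, not the thesis's actual settings.

```python
import random

rng = random.Random(1)

GENOME_LEN = 16
PUBERTY_AGE = 2   # time steps an individual must live before reproducing (assumed)
CAPACITY = 60     # the limited shared resource ("water") caps the population (assumed)

def fitness(genome):            # toy objective: maximise the number of 1-bits
    return sum(genome)

def make_individual():
    return {"genome": [rng.randint(0, 1) for _ in range(GENOME_LEN)],
            "age": 0,
            "life_span": rng.randint(5, 15)}   # assumed range, not from the thesis

def pick_parent(mature):        # size-3 tournament supplies the selective pressure
    cand = rng.sample(mature, min(3, len(mature)))
    return max(cand, key=lambda ind: fitness(ind["genome"]))

def time_step(pop, illness_rate=0.05):
    survivors = []
    for ind in pop:             # everyone ages; illness or old age removes some
        ind["age"] += 1
        if ind["age"] <= ind["life_span"] and rng.random() > illness_rate:
            survivors.append(ind)
    mature = [ind for ind in survivors if ind["age"] >= PUBERTY_AGE]
    offspring = []
    while mature and len(survivors) + len(offspring) < CAPACITY:
        a, b = pick_parent(mature), pick_parent(mature)
        cut = rng.randrange(1, GENOME_LEN)           # one-point crossover
        child = make_individual()
        child["genome"] = a["genome"][:cut] + b["genome"][cut:]
        if rng.random() < 0.1:                       # occasional point mutation
            j = rng.randrange(GENOME_LEN)
            child["genome"][j] ^= 1
        offspring.append(child)
    return survivors + offspring

pop = [make_individual() for _ in range(30)]
for _ in range(40):             # time steps rather than generations
    pop = time_step(pop)
```

Note that the population size here genuinely varies from step to step, bounded only by the capacity of the shared resource, which is the key departure from a fixed-size canonical GA.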

    The Maximin Fitness Function; Multi-objective City and Regional Planning

    No full text

    Advanced Data Mining and Machine Learning Algorithms for Integrated Computer-Based Analyses of Big Environmental Databases

    Insight into the spatial distribution of geotechnical and hydrological subsurface properties, as well as of reservoir and environmental parameters, is fundamental to geoscientific research. Developments in geophysical exploration and remote sensing have made a wide range of methods available for non-invasive, spatially continuous data acquisition within high-resolution measurement campaigns. In this work I developed various methods for the analysis of geoscientific databases, based on knowledge-discovery techniques. An important database is geophysical tomography, the only geoscientific exploration technique able to provide 2D and 3D images of the subsurface. Using various approaches from intelligent data analysis and machine learning (e.g. feature extraction, artificial neural networks), I developed a data-analysis method based on artificial neural networks that enables spatially continuous 2D or 3D prediction, in the form of probability statements, of subsurface properties measured at only a few points. The prediction method is based on geophysical tomography and takes the ambiguity of tomographic imaging into account. The measurement uncertainty in acquiring the subsurface properties at the few sampled points is also accounted for in the prediction. Furthermore, I investigated whether the training results of the artificial neural networks used for prediction can also yield statements about how realistic the mathematically equivalent solutions of geophysical tomographic imaging are. Prediction methods such as the one I propose can contribute substantially to improved solutions of hydrological and geotechnical problems.
    A further important problem is the mapping of the Earth's surface, which is of fundamental importance for addressing various economic and ecological questions, e.g. the identification of mineral deposits, the protection of soils, or ecosystem management. Mapping data result either from technical (objective) measurements or from visual (subjective) investigations by experienced experts. In this work I present first developments towards an automated and fast integration of technical and visual (subjective) data, based on various intelligent data-analysis methods (e.g. graph analysis, automatic contour detection, cluster analysis). Such methods are intended to produce hard- or soft-classified maps that optimally segment the study area so as to achieve the highest possible conformity with all available data.
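The prediction idea described above (a neural network linking tomographic attributes to sparsely sampled subsurface properties, with probabilistic output) can be caricatured in miniature as follows. This toy sketch uses a single-hidden-layer network on synthetic one-dimensional data, and an ensemble of independently initialised networks as a crude stand-in for the thesis's probability statements; every name, value and data set below is invented for illustration.

```python
import math, random

rng = random.Random(0)

# Tiny one-hidden-layer network, trained by stochastic gradient descent
# on squared error (pure Python, no external libraries).
def make_net(n_hidden=8):
    return {"w1": [rng.uniform(-1, 1) for _ in range(n_hidden)],
            "b1": [0.0] * n_hidden,
            "w2": [rng.uniform(-1, 1) for _ in range(n_hidden)],
            "b2": 0.0}

def forward(net, x):
    h = [math.tanh(w * x + b) for w, b in zip(net["w1"], net["b1"])]
    y = sum(w * hj for w, hj in zip(net["w2"], h)) + net["b2"]
    return y, h

def train(net, xs, ys, lr=0.05, epochs=2000):
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            yhat, h = forward(net, x)
            err = yhat - y                        # gradient of 0.5*err**2 w.r.t. yhat
            for j in range(len(h)):
                dh = err * net["w2"][j] * (1 - h[j] ** 2)
                net["w2"][j] -= lr * err * h[j]
                net["w1"][j] -= lr * dh * x
                net["b1"][j] -= lr * dh
            net["b2"] -= lr * err

# Synthetic "borehole" calibration: a tomographic attribute x is linked to a
# target property y = 2x + 1 plus a little measurement noise (all invented).
xs = [0.1 * i for i in range(10)]
ys = [2 * x + 1 + rng.gauss(0, 0.02) for x in xs]

# Ensemble of independently initialised networks: the spread of their
# predictions is a crude proxy for predictive uncertainty.
nets = []
for _ in range(5):
    net = make_net()
    train(net, xs, ys)
    nets.append(net)

preds = [forward(n, 0.45)[0] for n in nets]      # predict at an unsampled location
mean = sum(preds) / len(preds)
spread = max(preds) - min(preds)
```

In the thesis's setting, `x` would be a (2D or 3D) tomographic image attribute and `y` a property measured at a few boreholes; the probabilistic treatment there is considerably richer than this ensemble spread.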

    User-preference based evolutionary algorithms for many-objective optimisation

    Evolutionary Algorithms (EAs) have enjoyed great success over the past decade in finding solutions to multi-objective problems with two or three objectives. The majority of these Evolutionary Multi-objective Optimisation (EMO) algorithms explore the decision space using selection pressure governed by the dominance relation. Although these algorithms are effective at locating solutions to multi-objective problems, they have not been very successful on problem instances with more than three objectives, usually termed many-objective problems. The main reason for this shortcoming is that dominance comparison becomes ineffective as the number of objectives increases. In this thesis, we incorporate user-preference methods into EMO algorithms to enhance their ability to handle many-objective problems. To this end, we introduce a distance metric derived from user-preference schemes found in multi-criteria decision making, such as the reference point method and light beam search. This distance metric is used to guide the EMO algorithm towards solutions within certain areas of the objective space known as preferred regions. In our distance-metric approach, the decision maker can also specify the spread of solutions along the solution front. We name this distance-metric-based EMO algorithm d-EMO; it is a generalised framework that can be constructed using any EA. The distance-metric approach is computationally inexpensive, as it does not rely on dominance-ranking methods, yet it is very effective in solving many-objective problems. One key issue that remains is the lack of suitable metrics for comparing the performance of these user-preference EMO algorithms. We therefore introduce a variation of the normalised Hyper-Volume (HV) metric suitable for comparing user-preference EMO algorithms.
    The key feature of our HV calculation is that only the solutions within each preferred region are considered. This methodology favours user-preference EMO algorithms that have converged closely to the Pareto front within a preferred region. We have identified two real-world engineering design problems, aerofoil and lens design, and formulated them as many-objective problems. The optimisation of these many-objective problems is computationally expensive, so we use a reference-point PSO algorithm named MDEPSO to locate solutions effectively in fewer function evaluations. This PSO algorithm is less prone to getting stuck on locally optimal fronts while retaining its fast convergence; in MDEPSO this is achieved by generating leader particles with a differential-evolution rule, rather than picking particles directly from the population or an external archive. The main feature of the optimisation process for the aerofoil and lens design problems is the derivation of reference points from existing designs. We illustrate how existing designs can be used to obtain better or new design solutions corresponding to various requirements. This process of deriving reference points from existing design models and integrating them into a user-preference EMO framework is a novel approach to the optimisation of such computationally expensive engineering design problems.
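The distance-metric idea at the heart of the approach described above can be illustrated schematically: rank candidate solutions by their objective-space distance to a decision-maker reference point, and keep those near the best-ranked one as the preferred region. The `epsilon` tolerance below loosely plays the role of the spread control; it, and all the numbers in the usage example, are assumptions for illustration, not the thesis's actual formulation.

```python
import math

def reference_point_rank(objectives, ref, epsilon=0.1):
    """Rank solutions by Euclidean distance (in objective space) to a
    decision-maker reference point; solutions whose distance is within
    `epsilon` of the best form the currently preferred set. This is a toy
    simplification of a reference-point distance metric, not d-EMO itself."""
    dists = [math.dist(f, ref) for f in objectives]
    order = sorted(range(len(objectives)), key=lambda i: dists[i])
    best = dists[order[0]]
    preferred = [i for i in order if dists[i] <= best + epsilon]
    return order, preferred

# Toy bi-objective front (to be minimised) and an aspiration (reference) point.
front = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1)]
order, preferred = reference_point_rank(front, ref=(0.4, 0.4))
```

Because this ranking needs no dominance comparisons, its cost does not degrade as the number of objectives grows, which is the property the thesis exploits for many-objective problems; a preferred-region HV comparison would then evaluate hypervolume over `preferred` only.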