233 research outputs found

    Genetic algorithms for designing digital filters

    This thesis presents ZGA, a method of adapting IIR filters implemented as lattice structures using a Genetic Algorithm (GA). The method addresses some of the difficulties encountered with existing methods of adaptation, providing guaranteed filter stability and the ability to search multi-modal error surfaces. ZGA focuses mainly on improving convergence through its crossover and mutation operators: four kinds of crossover are applied to scan as much of the potential solution area as possible, and only the best result is taken as the crossover offspring; likewise, mutation takes the best of three mutation results as the final mutation offspring. Simulation results are presented, demonstrating the suitability of ZGA for IIR system identification and comparing it with the Standard GA, Genitor, and NGA.
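The "best-of-several-operators" idea described above can be sketched as follows; this is a hypothetical illustration with generic real-coded operators and a caller-supplied error function, not the thesis' actual lattice-coefficient encoding or operator set:

```python
import random

# Sketch of ZGA-style operator selection: several candidate crossover
# children are generated and only the fittest (lowest-error) one is kept;
# mutation likewise keeps the best of three candidates. The four operators
# below are generic placeholders, assumed for illustration only.

def one_point(a, b):
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def two_point(a, b):
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def uniform(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def arithmetic(a, b):
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def best_of_crossover(a, b, error):
    """Generate four candidate children, keep the one with lowest error."""
    children = [op(a, b) for op in (one_point, two_point, uniform, arithmetic)]
    return min(children, key=error)

def best_of_mutation(child, error, sigma=0.1):
    """Generate three Gaussian mutants, keep the one with lowest error."""
    mutants = [[g + random.gauss(0.0, sigma) for g in child] for _ in range(3)]
    return min(mutants, key=error)
```

In an actual GA loop the `error` callable would evaluate the IIR filter's identification error for the candidate's lattice coefficients; here any function of the genome works.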

    Exploration of Neural Structures for Dynamic System Control

    Biological neural systems are powerful mechanisms for controlling biological systems. While the complexity of biological neural networks makes exact simulation intractable, several key aspects lend themselves to implementation on computational systems. This thesis constructs a discrete event neural network simulation that implements aspects of biological neural networks. A combined genetic programming/simulated annealing approach is utilized to design network structures that function as regulators for continuous time dynamic systems in the presence of process noise when simulated using a discrete event neural simulation. Methods of constructing such networks are analyzed, including examination of the final network structure and the algorithm used to construct the networks. The parameters of the network simulation are also analyzed, as well as the interface between the network and the dynamic system. This analysis provides insight into the construction of networks for more complicated control applications.

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events distant in the past input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training a growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behaviour of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches, and against our work applying conventional LSTM to the task at hand.
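The weight-freezing idea mentioned above can be illustrated with a toy update rule; the block names, gradients, and plain SGD step below are placeholders, not the actual GLSTM/LSTM training procedure:

```python
# Toy illustration of freezing when a network grows: blocks trained
# earlier can be excluded from further updates while a newly added
# block keeps learning. All names and values here are hypothetical.

def sgd_step(blocks, grads, frozen, lr=0.01):
    """One update step; weights of blocks listed in `frozen` stay fixed."""
    updated = {}
    for name, weights in blocks.items():
        if name in frozen:
            updated[name] = list(weights)  # frozen block: unchanged copy
        else:
            updated[name] = [w - lr * g for w, g in zip(weights, grads[name])]
    return updated

# Growing in cascade: add a new hidden block, freeze the previous one.
net = {"block0": [0.5, -0.2]}
net["block1"] = [0.0, 0.0]  # newly added block, trainable
grads = {"block0": [1.0, 1.0], "block1": [1.0, 1.0]}
net = sgd_step(net, grads, frozen={"block0"})
```

The two "levels of freezing" compared in the work would then correspond to how much of the previously trained structure ends up in the `frozen` set after each growth step.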

    Optimal Control of the Strong-Field Laser Ionization of Clusters in Helium Droplets

    The strong-field ionization dynamics of Ag and Xe clusters are studied using femtosecond (fs) pulse shaping. By tailoring the temporal shapes of the laser pulses, the coupling of the energy into the Ag clusters can be controlled, leading to a maximum yield of highly charged atomic ions and an enhancement of the highest atomic charge states. For Xe, fitness scans of the laser parameters show that a two-step ionization scheme gives rise to extreme charging of the clusters. Three-pulse trains obtained in an optimization experiment are even more effective and result in maximum yields of different chosen charge states.

    Theoretical development and experimental validation of a method to reconstruct forces on the TBM structure during operation

    Blanket systems in a fusion reactor are subject to strong static and transient electromagnetic forces. To reconstruct these forces on the Test Blanket Module (TBM) in ITER (International Thermonuclear Experimental Reactor), a suitable system and accompanying methods are developed. A sensor system and a method for optimizing the sensor positions are proposed. The applicability of the method is demonstrated on an experimental setup with test models.

    MODELLING AND OPTIMIZATION TECHNIQUES FOR ACOUSTIC FULL WAVEFORM INVERSION IN SEISMIC EXPLORATION

    Full Waveform Inversion (FWI) has become an important research field in seismic exploration because it makes it possible to estimate a high-resolution model of the subsurface in terms of acoustic and elastic parameters. To this aim, issues such as an efficient implementation of the wave-equation solution for the forward problem, and both local and global optimization algorithms for this highly non-linear inverse problem, must be tackled. In this thesis, in the framework of the 2D acoustic approximation, I implemented an efficient numerical solution of the wave equation based on a local order of approximation of the spatial derivatives, reducing both the computational time and the approximation error. Concerning the inversion, I studied two global optimization algorithms (Simulated Annealing and Genetic Algorithms) on analytic functions representing different possible scenarios of the misfit function, in order to estimate an initial model, within the basin of attraction of the global minimum, for the local optimization algorithm. Because of the high number of unknowns in the seismic exploration context, of the order of some thousands or more, strategies based on the adjoint method must be used to compute the gradient of the misfit function: by this procedure, only three wave-equation solutions are required to compute the gradient, instead of a number of solutions proportional to the number of unknown parameters. The FWI approach developed in this thesis was applied first to a synthetic inverse problem on the Marmousi model, to validate the whole procedure, and then to two real seismic datasets. The first is a land profile with two expanding-spread experiments, characterized by a low S/N ratio. In this case, the main variations of the estimated P-wave velocity model correspond well to the shallow events observed on the post-stack depth-migrated section.
The second is a marine profile extracted from a 3D volume, where the local optimization, based on the adjoint method, makes it possible to estimate a high-resolution velocity model whose reliability has been checked by the alignment of the CIGs computed by pre-stack depth migration.
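The claim that a handful of wave-equation solutions suffice, regardless of the number of model parameters, follows from the standard adjoint-state formulation of acoustic FWI; the textbook sketch below is generic, and the thesis' exact discretization and sign conventions may differ:

```latex
% Misfit over shots s, with modelling operator A(m), sources f_s,
% restriction R onto the receivers, and observed data d_s:
J(m) = \tfrac{1}{2} \sum_{s} \lVert R\, u_s(m) - d_s \rVert^{2},
\qquad A(m)\, u_s = f_s \quad \text{(one forward solve per shot)}

% Adjoint wavefield, driven by the back-propagated data residual:
A(m)^{\top} \lambda_s = R^{\top}\bigl( d_s - R\, u_s \bigr)
\quad \text{(one adjoint solve per shot)}

% Gradient by zero-lag correlation of the two wavefields:
\nabla_m J = - \sum_{s} \lambda_s^{\top}\, \frac{\partial A}{\partial m}\, u_s
```

Per shot, one forward and one adjoint solve thus yield the full gradient (a third solution is typically spent on a trial model for step-length estimation), independent of the number of model parameters.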

    Machine Learning and Its Application to Reacting Flows

    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems, and their application to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This matters for the world’s total primary energy supply (TPES): more than 90% of this supply comes from combustion technologies, and combustion has non-negligible effects on the environment. Although alternative technologies based on renewable energies are coming up, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical or not is entirely a different question, and the answer depends on the respondent. However, a pragmatic analysis suggests that the combustion share of the TPES is likely to remain above 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology so that efficient and “greener” combustion systems, friendlier to the environment, can be designed. The book covers the current state of the art in these two topics and outlines the challenges involved, as well as the merits and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. This book is unique in its coverage of topics, ranging from big-data analysis and machine-learning algorithms to their applications in combustion science and system design for energy generation.