294 research outputs found

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. Unlike GA, however, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book collects contributions from top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary area.
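The canonical update just described, in which each particle is steered toward its own best record and the swarm's global best, can be sketched in a few lines. This is a generic illustration, not code from the book; the function name, search bounds, and the inertia and acceleration coefficients (w, c1, c2) are common default assumptions.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with canonical (gbest) PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity = inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function sum(x_i^2); its optimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=5)
```

Note that, as the abstracts below discuss, this scheme converges fast on unimodal functions like the sphere but can stall on sub-optimal points in multimodal landscapes.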

    Towards Swarm Diversity: Random Sampling in Variable Neighborhoods Procedure Using a Lévy Distribution

    Abstract. Particle Swarm Optimization (PSO) is a non-direct search method for numerical optimization. The key advantages of this metaheuristic are its simplicity, few parameters, and high convergence rate. In the canonical PSO with a fully connected topology, a particle adjusts its position using two attractors: the best record stored by the particle itself, and the best point discovered by the entire swarm. This leads to a high convergence rate, but also progressively deteriorates swarm diversity. As a result, the particle swarm is frequently attracted by sub-optimal points. Once the particles have been drawn to a local optimum, they continue the search within a small region of the solution space, reducing the algorithm's exploration. To deal with this issue, this paper presents a variant of the Random Sampling in Variable Neighborhoods (RSVN) procedure using a Lévy distribution, which notably improves the PSO search ability on multimodal problems. Keywords. Swarm diversity, local optima, premature convergence, RSVN procedure, Lévy distribution.
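As an illustration of the Lévy-based sampling idea, the sketch below draws heavy-tailed steps via Mantegna's algorithm, a standard way to approximate Lévy-stable increments. The function names, the neighborhood scale, and the resampling scheme are assumptions for illustration; the paper's actual RSVN procedure may differ.

```python
import math, random

def levy_step(beta=1.5, rng=random):
    """One heavy-tailed step via Mantegna's algorithm (approximate Lévy draw)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def resample_around(center, scale=0.1, beta=1.5, rng=random):
    """Sample a candidate near `center`: mostly small moves, with occasional
    long jumps that can restore swarm diversity after premature convergence."""
    return [x + scale * levy_step(beta, rng) for x in center]
```

The heavy tail is the point: most resampled particles stay in the local neighborhood, but rare long jumps let the swarm escape a basin it has collapsed into.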

    MECHANICAL ENERGY HARVESTER FOR POWERING RFID SYSTEMS COMPONENTS: MODELING, ANALYSIS, OPTIMIZATION AND DESIGN

    Finding alternative power sources has been an important topic of study worldwide. It is vital to find substitutes for finite fossil fuels. Such substitutes, renewable energy sources, draw on effectively limitless ambient energy such as wind, solar, and sea-wave energy. At the same time, smart-city megaprojects have been receiving enormous amounts of funding to transition our lives into smart lives. Smart cities rely heavily on smart devices and electronics, which use small amounts of energy to run. Using batteries as the power source for such devices raises environmental and labor-cost issues. Moreover, smart devices are often located in hard-to-access places, making disposal and replacement difficult. Finally, battery waste harms the environment. To overcome these issues, vibration-based energy harvesters have been proposed and implemented. They convert the dynamic or kinetic energy generated by the motion of an object into electric energy. The energy transduction mechanism can be piezoelectric, electromagnetic, or electrostatic; the piezoelectric method is generally preferred, particularly when frequency fluctuations are considerable. Accordingly, piezoelectric vibration-based energy harvesters (PVEHs) have been modeled and analyzed widely. However, PVEHs face two challenges: the maximum amount of extractable voltage and the effective (operational) frequency bandwidth are often insufficient. In this dissertation, a new type of integrated system comprising a cantilever and a spring-oscillator is proposed to improve the performance of the energy harvester in terms of extractable voltage and effective frequency bandwidth. The new model is intended to supply sufficient energy to power low-power electronic devices such as RFID components.
    Because of temperature fluctuations, the thermal effect on the performance of the harvester is studied first. To alter the resonance frequency of the harvester structure, a rotating-element system is considered and analyzed. In the analytical-numerical analysis, Hamilton's principle along with Galerkin's decomposition approach is adopted to derive the governing equations of the harvester motion and the corresponding electric circuit. Integration of the spring-oscillator subsystem alters the boundary condition of the cantilever and reshapes the resulting characteristic equation into a more complicated nonlinear transcendental equation. To find the resonance frequencies, this equation is solved numerically in MATLAB. The inertial effects of the oscillator, transmitted to the cantilever through the restoring force of the spring, significantly alter the vibrational features of the harvester. Finally, the voltage frequency response function is derived analytically and numerically in a closed-form expression. Varying the parameter values lets the designer tune the resonance frequencies and mode-shape functions as desired. This is particularly important, since the energy generated by a PVEH is significant only if the excitation frequency coming from an external source matches a resonance (natural) frequency of the harvester structure. In subsequent sections, the oscillator mass and spring stiffness are taken as the design parameters for maximizing the harvestable voltage and the effective frequency bandwidth, respectively. For the optimization, a genetic algorithm is adopted to find the optimal values. Since the voltage frequency response function cannot be implemented directly in a computer-algorithm script, a suitable function approximator (regressor) is designed using fuzzy logic and neural networks.
    The voltage function requires manual assistance to find the resonance frequency, so the search cannot be automated directly: to apply the numerical root solver, one must manually provide an initial guess, estimated from a plot of the characteristic equation by human visual inference. Moreover, the voltage function encompasses several coefficients, making its evaluation computationally expensive. Training a supervised machine-learning regressor therefore becomes essential. The regressor, trained with an adaptive neuro-fuzzy inference system (ANFIS), is used inside the genetic optimization procedure. The optimization problem is solved first to maximize the voltage and second to maximize the widened effective frequency bandwidth, yielding the optimal oscillator mass and the optimal spring stiffness. As there is often no control over the external excitation frequency, it is helpful to design an adaptive energy harvester: for a given excitation frequency, the system parameters (oscillator mass and spring stiffness) are adjusted so that the resulting natural (resonance) frequency of the system aligns with that excitation frequency. To do so, the given excitation frequency is taken as the input and the system parameters as outputs, estimated via the neural-network fuzzy-logic regressor. Finally, an experimental setup is implemented for a simple pure-cantilever energy harvester triggered by impact excitations. Unlike the theoretical sections, the experimental excitation is an impact excitation, a random process, since in the real world the external source is a random trigger.
    Harmonic base excitations are used in the theoretical chapters to assess the performance of the energy harvester against standard criteria. In summary, this dissertation discusses several case studies and addresses key issues in the design of optimized piezoelectric vibration-based energy harvesters (PVEHs). First, an advanced model of the integrated system is presented with equation derivations. Second, the proposed model is decomposed and analyzed in terms of mechanical and electrical frequency response functions, using analytic-numeric methods. Influential parameters of the integrated system are then identified, and the proposed model is optimized with respect to the two vital criteria of maximum extractable voltage and widened effective (operational) frequency bandwidth. The corresponding design (influential) parameters are found using neural-network fuzzy logic together with genetic optimization algorithms, i.e., a soft-computing method. The accuracy of the trained integrated algorithms is verified against the analytical-numerical closed-form expression of the voltage function. An adaptive PVEH is then designed for the case where the excitation (driving) frequency is given and constant, the goal being to match the natural frequency of the system with the given driving frequency; a regressor using neural-network fuzzy logic finds the proper design parameters. Finally, the experimental setup is implemented and tested to report the maximum voltage harvested in each test execution.
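The characteristic equation of a plain clamped-free cantilever, cos(λ)·cosh(λ) + 1 = 0, is a simpler relative of the transcendental equation described above. The sketch below shows how root brackets can be found by sign-change scanning and refined by bisection, the kind of automation that would replace the manual initial guess; it is an illustrative stand-in, not the dissertation's equation or code, and the function names are assumptions.

```python
import math

def char_eq(lam):
    # Clamped-free (cantilever) beam characteristic equation:
    # cos(lam)*cosh(lam) + 1 = 0. The spring-oscillator system in the
    # dissertation yields a more complicated transcendental equation,
    # but the same bracketing-plus-bisection workflow applies.
    return math.cos(lam) * math.cosh(lam) + 1.0

def find_roots(f, lo, hi, n_scan=2000, tol=1e-10):
    """Scan [lo, hi] for sign changes, then bisect each bracket."""
    roots = []
    xs = [lo + (hi - lo) * i / n_scan for i in range(n_scan + 1)]
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) < 0:                      # root bracketed in (a, b)
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# First eigenvalues of a uniform cantilever: ~1.8751, 4.6941, 7.8548.
roots = find_roots(char_eq, 0.1, 10.0)
```

Each eigenvalue λ maps to a resonance frequency through the beam's geometry and material properties, which is why solving this equation is the first step toward matching the harvester to an excitation frequency.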

    Optimisation par essaim de particules : application au clustering des données de grandes dimensions (Particle Swarm Optimization Applied to Clustering High-Dimensional Data)

    Clustering high-dimensional data is an important but difficult task in various data mining applications. A fundamental starting point for data mining is the assumption that a data object, such as a text document, can be represented as a high-dimensional feature vector. Traditional clustering algorithms struggle with high-dimensional data because the quality of results deteriorates due to the curse of dimensionality. As the number of features increases, the data become very sparse and distance measures in the whole feature space become meaningless. Usually, in a high-dimensional data set, some features may be irrelevant or redundant for clusters, and different sets of features may be relevant for different clusters. Thus, clusters can often be found in different feature subsets rather than in the whole feature space. Clustering such data sets is called subspace clustering or projected clustering, and aims at finding clusters in different feature subspaces. On the other hand, the performance of many subspace/projected clustering algorithms drops quickly with the size of the subspaces in which the clusters are found. Many of them also require domain knowledge from the user to select and tune their settings, such as the maximum distance between dimensional values, thresholds on input parameters, and the minimum density, all of which are difficult to set. Developing effective particle swarm optimization (PSO) algorithms for clustering high-dimensional data is the main focus of this thesis. First, to improve the performance of the conventional PSO algorithm, we analyze the main causes of premature convergence and propose a novel PSO algorithm, called InformPSO, based on principles of adaptive diffusion and hybrid mutation. Inspired by the physics of information diffusion, we design a function to achieve better particle diversity by taking into account the particles' distribution and the number of evolutionary generations, and by adjusting their "social cognitive" abilities.
    Based on genetic self-organization and chaos evolution, we build clonal selection into InformPSO to implement local evolution of the best particle candidate, gBest, and use a logistic sequence to control the random drift of gBest. These techniques greatly help the swarm break away from local optima. The global convergence of the algorithm is proved using Markov chain theory. Experiments on unimodal and multimodal benchmark functions show that, compared with other PSO variants, InformPSO converges faster, reaches better optima, is more robust, and more effectively prevents premature convergence. Then, special treatments of objective functions and encoding schemes are proposed to tailor PSO to two problems commonly encountered in high-dimensional data clustering. The first is the variable weighting problem in soft projected clustering with a known number of clusters k. Given k, the problem is to find a set of variable weights for each cluster, and it is formulated as a nonlinear continuous optimization problem subject to bound constraints. A new algorithm, called PSOVW, is proposed to find optimal variable weights for clusters. In PSOVW, we design a suitable k-means objective weighting function in which a change of variable weights is reflected exponentially. We also transform the original constrained variable weighting problem into one with bound constraints, using a non-normalized representation of variable weights, and employ a particle swarm optimizer to minimize the objective function and obtain global optima for the variable weighting problem in clustering. Our experimental results on both synthetic and real data show that the proposed algorithm greatly improves cluster quality. In addition, the results of the new algorithm are much less dependent on the initial cluster centroids.
    The second problem aims at automatically determining the number of clusters k as well as identifying the clusters, and is likewise formulated as a nonlinear optimization problem with bound constraints. For the automatic determination of k, which is troublesome for most clustering algorithms, a PSO algorithm called autoPSO is proposed. A special coding of particles is introduced into autoPSO to represent partitions with different numbers of clusters in the same population. The Davies-Bouldin (DB) index is employed as the objective function to measure the quality of partitions with similar or different numbers of clusters. autoPSO is evaluated on both synthetic high-dimensional datasets and handcrafted low-dimensional datasets, and its performance is compared to other selected clustering techniques. Experimental results indicate the promising potential of autoPSO for clustering high-dimensional data without a preset number of clusters k.
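The Davies-Bouldin index used as the objective above rewards compact, well-separated clusters (lower is better): for each cluster it takes the worst ratio of summed within-cluster scatter to between-centroid distance, then averages. A minimal sketch, with hypothetical function names:

```python
import math

def db_index(points, labels):
    """Davies-Bouldin index of a partition; lower is better.
    points: list of vectors; labels: cluster id per point.
    Assumes no two cluster centroids coincide."""
    clusters = sorted(set(labels))

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    centroids, scatters = {}, {}
    for c in clusters:
        members = [p for p, l in zip(points, labels) if l == c]
        centroids[c] = [sum(col) / len(members) for col in zip(*members)]
        # mean distance of members to their centroid (within-cluster scatter)
        scatters[c] = sum(dist(p, centroids[c]) for p in members) / len(members)

    total = 0.0
    for i in clusters:
        # worst similarity ratio of cluster i against any other cluster
        total += max((scatters[i] + scatters[j]) / dist(centroids[i], centroids[j])
                     for j in clusters if j != i)
    return total / len(clusters)
```

Because the index compares partitions with different numbers of clusters on a common scale, it suits a population where particles encode different k values, as in autoPSO.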

    Advances in Evolutionary Algorithms

    With recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of this book is to present recent improvements, innovative ideas, and concepts from a part of the huge field of evolutionary algorithms.

    A review of population-based metaheuristics for large-scale black-box global optimization: Part B

    This paper is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely decomposition methods and hybridization methods such as memetic algorithms and local search. In this part we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multi-objective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The paper also discusses pitfalls and challenges of current research and identifies several potential areas of future research.

    Combining Prior Information for the Prediction of Transcription Factor Binding Sites

    Although each cell in an organism carries the same genetic information, cells can differ fundamentally in their function. The molecular basis for this functional diversity is governed by biochemical processes that regulate the expression of genes. Key to this regulatory process are proteins called transcription factors, which recognize and bind specific DNA sequences of a few nucleotides. Here we tackle the problem of identifying the binding sites of a given transcription factor. Predicting binding preferences from the structure of a transcription factor is still an unsolved problem. For that reason, binding sites are commonly identified by searching for overrepresented sites in a given collection of nucleotide sequences. Such sequences might be known regulatory regions of genes assumed to be coregulated, or they may be obtained from so-called ChIP-seq experiments, which identify approximately the sites bound by a given transcription factor. In both cases, the observed nucleotide sequences are much longer than the actual binding sites, and computational tools are required to uncover the actual binding preferences of a factor. Because transcription factors recognize not just a single nucleotide sequence, the search for overrepresented patterns in a given collection of sequences has proven to be a challenging problem. Most computational methods rely merely on the given set of sequences, but additional information is required to make reliable predictions. Here, this information is obtained by looking at the evolution of nucleotide sequences: each nucleotide sequence in the observed data is augmented by its orthologs, i.e. sequences from related species in which the same transcription factor is present.
    By constructing multiple sequence alignments of the orthologous sequences, it is possible to identify functional regions that are under selective pressure and therefore appear more conserved than others. Processing the additional information carried by ortholog sequences relies on a phylogenetic tree equipped with a nucleotide substitution model, which carries information not only about ancestry but also about the expected similarity of functional sites. As a result, a Bayesian method for the identification of transcription factor binding sites is presented. The method relies on a phylogenetic tree that agrees with the assumptions of the nucleotide substitution process; therefore, the problem of estimating phylogenetic trees is discussed first. The computation of point estimates relies on recent developments in Hadamard spaces. Second, a statistical model is presented that captures the enrichment and conservation of binding sites and other functional regions in the observed data. The performance of the method is evaluated on ChIP-seq data of transcription factors whose binding preferences have been estimated in previous studies.
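As background to the motif-finding setting above, a factor's binding preferences are conventionally summarized as a position weight matrix (PWM), one nucleotide distribution per motif position, and candidate sites are scored by their log-odds against a background model. The PWM values and function names below are hypothetical; this sketch illustrates only the standard representation, not the thesis's Bayesian phylogenetic model.

```python
import math

# Hypothetical 3-position PWM: probability of each base at each motif position.
PWM = [
    {'A': 0.80, 'C': 0.05, 'G': 0.10, 'T': 0.05},
    {'A': 0.05, 'C': 0.05, 'G': 0.85, 'T': 0.05},
    {'A': 0.10, 'C': 0.70, 'G': 0.10, 'T': 0.10},
]
BACKGROUND = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}  # uniform background

def log_odds(site):
    """Log-odds score of a candidate site under the PWM vs. the background."""
    return sum(math.log2(PWM[i][b] / BACKGROUND[b]) for i, b in enumerate(site))

def best_site(sequence):
    """Scan a longer sequence and return the best-scoring window,
    mirroring the fact that observed sequences are much longer than sites."""
    w = len(PWM)
    return max((sequence[i:i + w] for i in range(len(sequence) - w + 1)),
               key=log_odds)

# The planted motif AGC scores highest in this toy sequence.
hit = best_site("TTAGCTTT")
```

The thesis goes further: instead of scoring one sequence, each column is backed by an alignment of orthologs, so conserved high-scoring windows are favored over chance matches.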

    Embedding Approaches for Relational Data

    Embedding methods for searching latent representations of data are very important tools for unsupervised and supervised machine learning as well as information visualisation. Over the years, such methods have continually progressed towards capturing and analysing the structure and latent characteristics of larger and more complex data. In this thesis, we examine the problem of developing efficient and reliable embedding methods for revealing, understanding, and exploiting the different aspects of relational data. We split our work into three parts, each dealing with a different relational data structure. In the first part, we handle the weighted bipartite relational structure. Based on the relational measurements between two groups of heterogeneous objects, our goal is to generate low-dimensional representations of these two different types of objects in a unified common space. We propose a novel method that models the embedding of each object type symmetrically to the other type, subject to flexible scale constraints and weighting parameters. The embedding generation relies on an efficient optimisation based on matrix decomposition. We also propose a simple way of measuring the conformity between the original object relations and the ones re-estimated from the embeddings, in order to achieve model selection by identifying the optimal model parameters with a simple search procedure. We show that our proposed method achieves consistently better or on-par results compared with existing embedding generation approaches on multiple synthetic datasets and real-world ones from the text mining domain. In the second part of this thesis, we focus on multi-relational data, where objects are interlinked by various relation types.
    Embedding approaches are very popular in this field: they typically encode objects and relation types as hidden representations and use operations between them to compute positive scalars corresponding to a linkage's likelihood score. In this work, we aim to further improve existing embedding techniques by taking into account the multiple facets of the different patterns and behaviours of each relation type. To the best of our knowledge, this is the first latent representation model in this field that considers relational representations to be dependent on the objects they relate. The multi-modality of a relation type over different objects is formulated as a projection matrix over the space spanned by the object vectors. Two large benchmark knowledge bases are used to evaluate performance on the link prediction task, and a new test data partition scheme is proposed to offer a better understanding of the behaviour of a link prediction model. In the last part of this thesis, a much more complex relational structure is considered. In particular, we aim at developing novel embedding methods for jointly modelling the linkage structure and objects' attributes. Traditionally, the link prediction task is carried out on either the linkage structure or the objects' attributes alone, which ignores their semantic connections and is insufficient for handling complex link prediction tasks. Thus, our goal in this work is to build a reliable model that fuses both sources of information to improve link prediction. The key idea of our approach is to encode both the linkage validities and the nodes' neighbourhood information into embedding-based conditional probabilities.
    Another important aspect of our proposed algorithm is that we use a margin-based contrastive training process for encoding the linkage structure, which relies on a more appropriate assumption and dramatically reduces the number of training links. In the experiments, our proposed method indeed improves link prediction performance on three citation/hyperlink datasets, compared with methods relying on only the nodes' attributes or the linkage structure, and it also performs much better than the state of the art.
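The margin-based contrastive objective mentioned above can be sketched generically: a true link should score higher than a corrupted (negative) link by at least a fixed margin. The TransE-style distance score below is a common choice used here only for illustration; the thesis's model, with its object-dependent relation projections, is more elaborate.

```python
import math

def score(h, r, t):
    """TransE-style plausibility of a (head, relation, tail) triple:
    negative Euclidean distance ||h + r - t|| (higher is more plausible)."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def margin_loss(pos_triple, neg_triple, margin=1.0):
    """Margin-based contrastive loss: zero once the true link outscores
    the corrupted link by at least `margin`."""
    return max(0.0, margin - score(*pos_triple) + score(*neg_triple))

# A perfectly placed positive triple (h + r == t) against a distant corruption.
pos = ((0.0, 0.0), (1.0, 1.0), (1.0, 1.0))
neg = ((0.0, 0.0), (1.0, 1.0), (5.0, 5.0))
```

In training, each observed link is paired with a few sampled corruptions and the embeddings are updated by gradient descent on this loss, which is why the scheme needs far fewer explicit negative links than exhaustive enumeration.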

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also produced breakthroughs in solving complex problems, including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.