22 research outputs found

    Finite element model updating using estimation of distribution algorithm

    Finite Element (FE) model updating has attracted research attention in structural engineering for over 20 years. Its importance to the design, construction and maintenance of civil and mechanical structures is widely recognised. However, many sources of uncertainty may affect the updating results; these uncertainties may stem from FE modelling errors, measurement noise, signal processing techniques, and so on. Research efforts on model updating have therefore long focused on tackling uncertainties. Recently, a new class of evolutionary algorithms, known as Estimation of Distribution Algorithms (EDAs), has been developed to address uncertainty problems. EDAs are evolutionary algorithms based on estimating and sampling from probabilistic models, and they are able to overcome some of the drawbacks exhibited by traditional genetic algorithms (GAs). In this paper, a numerical model of a simple steel beam is constructed in the commercial software ANSYS. Various damage scenarios are simulated, and EDAs are employed to identify the damage via the FE model updating process. The results show that EDAs perform efficiently and reliably for model updating.
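    As a rough illustration of the EDA mechanism described above, the sketch below runs a univariate marginal distribution algorithm with Gaussian marginals on a placeholder objective. The objective function, parameter dimension, and all numerical settings are hypothetical stand-ins; the paper itself updates an ANSYS beam model rather than this toy function.

```python
import numpy as np

def objective(x):
    # Hypothetical stand-in for the discrepancy between model-predicted and
    # "measured" quantities (e.g. natural frequencies); the paper instead
    # evaluates an ANSYS model of a steel beam.
    target = np.array([1.0, 0.5, -0.3])
    return float(np.sum((x - target) ** 2))

def umda(obj, dim=3, pop_size=50, n_select=15, generations=100, seed=0):
    """Univariate marginal distribution algorithm with Gaussian marginals."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    best_x, best_f = None, np.inf
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = rng.normal(mu, sigma, size=(pop_size, dim))
        fitness = np.apply_along_axis(obj, 1, pop)
        # Truncation selection: keep the best individuals.
        elite = pop[np.argsort(fitness)[:n_select]]
        # Re-estimate the model from the selected individuals.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        if fitness.min() < best_f:
            best_f, best_x = fitness.min(), pop[np.argmin(fitness)]
    return best_x, best_f

if __name__ == "__main__":
    x, f = umda(objective)
    print("estimated parameters:", x, "objective:", f)
```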

    A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm

    We compare 27 modifications of the original particle swarm optimization (PSO) algorithm. The analysis evaluated nine basic PSO types, which differ in how the swarm evolution is controlled by various inertia weights and the constriction factor. Each of the basic PSO modifications was analyzed using three different distributed strategies. In the first strategy, the entire swarm population is treated as one unit (OC-PSO); the second strategy periodically partitions the population into equally large complexes according to the particles' function values (SCE-PSO); and the final strategy periodically splits the swarm population into complexes using random permutation (SCERand-PSO). All variants are tested on 11 benchmark functions prepared for the CEC 2005 special session on real-parameter optimization. The best modification of the PSO algorithm was found to be a variant with adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than OC-PSO and SCERand-PSO on seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process.
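    For readers unfamiliar with the inertia-weight mechanism the comparison revolves around, here is a minimal global-best PSO with a linearly decreasing inertia weight, one common adaptive scheme, applied to the sphere benchmark. This is not the authors' implementation, and the swarm size, bounds, and coefficients are assumed defaults.

```python
import numpy as np

def sphere(x):
    # Sphere benchmark, one of the simplest CEC'05-style test functions.
    return float(np.sum(x ** 2))

def pso(obj, dim=10, swarm=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bound=100.0, seed=0):
    """Global-best PSO with a linearly decreasing inertia weight
    (one common adaptive scheme; the paper compares many variants)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(obj, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters          # inertia schedule
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        f = np.apply_along_axis(obj, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

if __name__ == "__main__":
    best, fbest = pso(sphere)
    print("best sphere value found:", fbest)
```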

    Compact NSGA-II for Multi-objective Feature Selection

    Feature selection is an expensive, challenging task in machine learning and data mining, aimed at removing irrelevant and redundant features. This improves classification accuracy as well as the computational budget and memory requirements of classification, or of any other post-processing task conducted after feature selection. In this regard, we define feature selection as a multi-objective binary optimization task with the objectives of maximizing classification accuracy and minimizing the number of selected features. To select optimal features, we propose a binary Compact NSGA-II (CNSGA-II) algorithm. Compactness represents the population as a probability distribution, which not only makes evolutionary algorithms more memory-efficient but also reduces the number of fitness evaluations. Instead of holding two populations during the optimization process, our proposed method uses several Probability Vectors (PVs) to generate new individuals. Each PV efficiently explores a region of the search space to find non-dominated solutions, instead of generating candidate solutions from a small population as is the common approach in most evolutionary algorithms. To the best of our knowledge, this is the first compact multi-objective algorithm proposed for feature selection. The reported results for expensive optimization cases with a limited budget on five datasets show that CNSGA-II performs more efficiently than the well-known NSGA-II method in terms of the hypervolume (HV) performance metric while requiring less memory. The proposed method and experimental results are explained and analyzed in detail.
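    To illustrate how a probability vector can stand in for an explicit population, the sketch below runs a single-objective compact GA on a toy feature selection objective. The dataset-free fitness function, feature count, and update step are all hypothetical; the actual CNSGA-II is multi-objective and maintains several PVs.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 20
# Hypothetical ground truth: only the first 5 features are relevant.
TRUE_MASK = np.zeros(N_FEATURES, dtype=bool)
TRUE_MASK[:5] = True

def fitness(mask):
    # Stand-in for classification accuracy minus a feature-count penalty;
    # the real CNSGA-II keeps these as two separate objectives instead.
    return np.sum(mask & TRUE_MASK) / TRUE_MASK.sum() - 0.02 * mask.sum()

def sample(pv):
    # Draw one binary individual from the probability vector.
    return rng.random(pv.size) < pv

def compact_ga(iters=2000, step=1.0 / 50):
    """Compact GA: a probability vector (PV) replaces the explicit population."""
    pv = np.full(N_FEATURES, 0.5)
    for _ in range(iters):
        a, b = sample(pv), sample(pv)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # Shift the PV toward the winner wherever the two samples disagree.
        differ = winner != loser
        pv[differ] = np.clip(pv[differ] + np.where(winner[differ], step, -step),
                             0.0, 1.0)
    return pv

if __name__ == "__main__":
    print("selection probabilities:", np.round(compact_ga(), 2))
```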

    A survey of cost-sensitive decision tree induction algorithms

    The past decade has seen significant interest in the problem of inducing decision trees that take account of the costs of misclassification and the costs of acquiring the features used for decision making. This survey identifies over 50 algorithms, including direct adaptations of accuracy-based methods as well as approaches that use genetic algorithms, anytime methods, boosting, and bagging. The survey brings together these different studies and novel approaches to cost-sensitive decision tree learning, provides a useful taxonomy and a historical timeline of how the field has developed, and should serve as a useful reference point for future research in this field.
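    As a concrete example of the kind of "direct adaptation" such surveys cover, the sketch below scores a candidate split by its children's expected misclassification cost plus a feature acquisition cost, with lower scores preferred. The cost matrix, labels, and feature cost are invented for illustration and are not taken from any particular surveyed algorithm.

```python
import numpy as np

def expected_misclassification_cost(y, cost_matrix):
    """Cost of assigning every example in a node the cheapest single label.
    cost_matrix[i, j] is the cost of predicting class j when the truth is class i."""
    counts = np.bincount(y, minlength=cost_matrix.shape[0])
    return (counts @ cost_matrix).min()

def split_score(y_left, y_right, cost_matrix, feature_cost):
    """Score of a candidate split: children's misclassification cost plus the
    cost of acquiring the tested feature (lower is better)."""
    return (expected_misclassification_cost(y_left, cost_matrix)
            + expected_misclassification_cost(y_right, cost_matrix)
            + feature_cost)

if __name__ == "__main__":
    # Two classes; predicting 0 when the truth is 1 costs five times more.
    C = np.array([[0.0, 1.0],
                  [5.0, 0.0]])
    y_left = np.array([0, 0, 0, 1])
    y_right = np.array([1, 1, 1, 0])
    print(split_score(y_left, y_right, C, feature_cost=0.5))
```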

    Gene Tree Labeling Using Nonnegative Matrix Factorization on Biomedical Literature

    Identifying functional groups of genes is a challenging problem for biological applications. Text mining approaches can be used to build hierarchical clusters or trees from the information in the biological literature. In particular, nonnegative matrix factorization (NMF) is examined as one approach to labeling hierarchical trees. A generic labeling algorithm as well as an evaluation technique is proposed, and the effects of different NMF parameters on convergence and labeling accuracy are discussed. The primary goals of this study are to provide a qualitative assessment of NMF and its various parameters and initializations, to provide an automated way to classify biomedical data, and to provide a method for evaluating labeled data assuming a static input tree. As a byproduct, a method for generating gold standard trees is proposed.
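    A minimal sketch of the general NMF-based labeling idea, assuming scikit-learn is available: factorize a TF-IDF document-term matrix and take each factor's top-weighted terms as candidate labels. The miniature corpus is hypothetical, and the procedure is a generic simplification rather than the paper's exact labeling algorithm or evaluation technique.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical mini-corpus standing in for biomedical abstracts linked to genes.
docs = [
    "dna repair damage response checkpoint",
    "cell cycle mitosis checkpoint spindle",
    "immune response cytokine inflammation signaling",
    "inflammation cytokine receptor immune",
]

# Document-term matrix with TF-IDF weighting.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Factorize X ~ W H with k latent factors; rows of H weight the vocabulary.
k = 2
model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Label each factor (and, by extension, a tree node dominated by it)
# with its top-weighted terms.
terms = np.array(vectorizer.get_feature_names_out())
for topic in range(k):
    top = terms[np.argsort(H[topic])[::-1][:3]]
    print(f"factor {topic}: {', '.join(top)}")
```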

    Feature Selection using Tabu Search with Learning Memory: Learning Tabu Search

    Feature selection in classification can be modeled as a combinatorial optimization problem. One of the main particularities of this problem is the large amount of time that may be needed to evaluate the quality of a subset of features. In this paper, we propose to solve this problem with a tabu search algorithm that integrates a learning mechanism. To do so, we adapt to the feature selection problem a learning tabu search algorithm originally designed for a railway network problem in which the evaluation of a solution is time-consuming. Experiments are conducted and show the benefit of using a learning mechanism to solve hard instances from the literature.
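    The sketch below shows one way a tabu search over feature subsets can carry a simple learning memory: features that participated in improving moves accumulate credit that biases future move selection. The toy evaluation function, tabu tenure, and credit weighting are assumptions; the paper's learning mechanism, adapted from a railway network problem, is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 15
RELEVANT = set(range(5))  # hypothetical ground truth for the toy objective

def evaluate(subset):
    # Placeholder for an expensive wrapper evaluation (e.g. cross-validated
    # classifier accuracy); here it simply rewards the toy 'relevant' features.
    return len(subset & RELEVANT) - 0.1 * len(subset)

def learning_tabu_search(iters=200, tabu_tenure=7):
    current = set(rng.choice(N_FEATURES, size=3, replace=False).tolist())
    best, best_f = set(current), evaluate(current)
    tabu = {}                       # feature -> iteration until which it is tabu
    score = np.zeros(N_FEATURES)    # learning memory: credit for past improvements
    for it in range(iters):
        # Candidate moves: flip one non-tabu feature, biased by learned credit.
        candidates = []
        for f in range(N_FEATURES):
            if tabu.get(f, -1) >= it:
                continue
            neighbour = set(current) ^ {f}
            if neighbour:
                candidates.append((evaluate(neighbour) + 0.01 * score[f], f, neighbour))
        if not candidates:
            continue
        _, f, neighbour = max(candidates)
        if evaluate(neighbour) > evaluate(current):
            score[f] += 1.0         # reinforce features involved in improvements
        current = neighbour
        tabu[f] = it + tabu_tenure
        if evaluate(current) > best_f:
            best, best_f = set(current), evaluate(current)
    return best, best_f

if __name__ == "__main__":
    print(learning_tabu_search())
```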

    Dichotomous Binary Differential Evolution for Knapsack Problems

    Differential evolution (DE) is one of the most popular and powerful evolutionary algorithms for real-parameter global continuous optimization problems. However, adapting DE to combinatorial optimization problems without sacrificing its original evolution mechanism makes designing an efficient binary differential evolution (BDE) difficult. To tackle this problem, this paper presents a novel BDE based on a dichotomous mechanism for knapsack problems, called DBDE, which employs two newly proposed operators: dichotomous mutation and dichotomous crossover. DBDE differs very little from the original DE, and no additional module or computation is introduced. Experimental studies have been conducted on a suite of 0-1 knapsack problems and multidimensional knapsack problems, and the results verify the quality and effectiveness of DBDE. Comparisons with three state-of-the-art BDE variants and two state-of-the-art binary particle swarm optimization (PSO) algorithms show that DBDE is a competitive new algorithm.
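    The following is a generic binary DE sketch on a toy 0-1 knapsack instance, in which the 'mutant' keeps bits where two randomly chosen vectors agree and randomizes the rest. This agreement-based operator is only loosely inspired by the dichotomous mutation and crossover named above; the exact DBDE operators, the knapsack data, and the random repair scheme here are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 0-1 knapsack instance (hypothetical data).
values  = np.array([10, 13,  7,  8, 12,  9,  5, 11])
weights = np.array([ 5,  7,  3,  4,  6,  5,  2,  6])
CAPACITY = 20

def fitness(x):
    # Repair infeasible solutions by randomly dropping items, then score them.
    x = x.copy()
    while weights[x.astype(bool)].sum() > CAPACITY:
        x[rng.choice(np.flatnonzero(x))] = 0
    return values[x.astype(bool)].sum(), x

def binary_de(pop_size=30, iters=200, cr=0.5):
    """Generic binary DE for knapsack: mutation based on bitwise (dis)agreement."""
    dim = len(values)
    pop = (rng.random((pop_size, dim)) < 0.5).astype(int)
    fits = np.zeros(pop_size)
    for i in range(pop_size):
        fits[i], pop[i] = fitness(pop[i])
    for _ in range(iters):
        for i in range(pop_size):
            a, b = pop[rng.choice(pop_size, 2, replace=False)]
            # Keep bits where a and b agree, flip a coin where they disagree.
            agree = a == b
            mutant = np.where(agree, a, (rng.random(dim) < 0.5).astype(int))
            # Binomial crossover with the current target vector.
            trial = np.where(rng.random(dim) < cr, mutant, pop[i])
            f_trial, trial = fitness(trial)
            if f_trial >= fits[i]:
                pop[i], fits[i] = trial, f_trial
    best = np.argmax(fits)
    return pop[best], fits[best]

if __name__ == "__main__":
    sol, val = binary_de()
    print("selected items:", np.flatnonzero(sol), "value:", val)
```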

    Multi-Objective Genetic Algorithm for Multi-View Feature Selection

    Multi-view datasets offer diverse forms of data that can enhance prediction models by providing complementary information. However, using multi-view data increases the dimensionality of the input, which poses significant challenges for prediction models and can lead to poor generalization. Selecting relevant features from multi-view datasets is therefore important, as it not only addresses poor generalization but also enhances the interpretability of the models. Despite their success, traditional feature selection methods have limitations in leveraging intrinsic information across modalities, lack generalizability, and are tailored to specific classification tasks. We propose a novel genetic algorithm strategy to overcome these limitations for multi-view data. Our proposed approach, called the multi-view multi-objective feature selection genetic algorithm (MMFS-GA), simultaneously selects the optimal subset of features within a view and between views under a unified framework. The MMFS-GA framework demonstrates superior performance and interpretability for feature selection on multi-view datasets in both binary and multiclass classification tasks. The results of our evaluations on three benchmark datasets, including synthetic and real data, show improvement over the best baseline methods. This work provides a promising solution for multi-view feature selection and opens up new possibilities for further research on multi-view datasets.
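    The sketch below illustrates only the encoding and objectives such a method optimizes: one bit per feature across concatenated views, scored by cross-validated error and feature count, with a naive Pareto filter over random masks. The synthetic views, the classifier, and the random sampling are assumptions; MMFS-GA's actual genetic operators and unified framework are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two synthetic 'views' of the same 200 samples (stand-ins for real modalities).
X1, y = make_classification(n_samples=200, n_features=15, n_informative=4,
                            random_state=0)
X2 = X1[:, :10] + rng.normal(scale=0.5, size=(200, 10))  # correlated second view
X = np.hstack([X1, X2])          # chromosome = one bit per feature across views

def objectives(mask):
    """Bi-objective evaluation: (classification error, number of selected features)."""
    if mask.sum() == 0:
        return 1.0, 0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask], y, cv=3).mean()
    return 1.0 - acc, int(mask.sum())

def non_dominated(points):
    """Indices of points not dominated by any other point (both objectives minimized)."""
    return [i for i, p in enumerate(points)
            if not any(q != p and all(q[k] <= p[k] for k in range(2)) for q in points)]

if __name__ == "__main__":
    masks = [rng.random(X.shape[1]) < 0.3 for _ in range(20)]
    objs = [objectives(m) for m in masks]
    for i in non_dominated(objs):
        print("features:", objs[i][1], "error:", round(objs[i][0], 3))
```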

    Active learning of link specifications using decision tree learning

    In this work we present an implementation that uses decision trees to learn highly accurate link specifications. We compared our approach with three state-of-the-art classifiers on nine datasets and showed that our approach gives comparable results in a reasonable amount of time. It was also shown that we outperform the state of the art on four datasets by up to 30%, while still being slightly behind on average. The effect of user feedback on the active learning variant was inspected with respect to the number of iterations needed to deliver good results. It was shown that we can reach F-scores above 0.8 on most datasets after 14 iterations.
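    As a generic illustration of the kind of feedback loop evaluated here, the sketch below runs 14 rounds of pool-based uncertainty sampling with a scikit-learn decision tree, using the known labels as a stand-in for user feedback. The synthetic 'link candidate' features, tree depth, and query strategy are assumptions and do not reproduce the link-specification learner itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for candidate link pairs described by similarity features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           weights=[0.7, 0.3], random_state=0)
labeled = rng.choice(len(X), size=10, replace=False).tolist()
pool = [i for i in range(len(X)) if i not in labeled]

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
for _ in range(14):                                # the abstract reports ~14 iterations
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool example the tree is least sure about.
    proba = clf.predict_proba(X[pool])
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)                          # oracle label stands in for user feedback
    pool.remove(query)

# Toy evaluation on the full dataset (includes queried points, for illustration only).
print("F-score:", round(f1_score(y, clf.predict(X)), 3))
```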