100 research outputs found

    Gene pool recombination, genetic algorithm, and the onemax function

    Get PDF
    In this paper we present an analysis of gene pool recombination in genetic algorithms in the context of the onemax function. We develop a Markov chain framework for computing the probability of convergence and show how the analysis can be used to estimate the critical population size. The Markov model is also used to investigate drift in the multiple-loci case. Additionally, we estimate the minimum population size needed for optimality and derive recurrence relations describing the growth of the advantageous allele in the infinite-population case. Simulation results are presented.
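
    The setup in this abstract can be illustrated with a minimal sketch (not the paper's actual model): gene pool recombination draws each offspring allele from the pooled parental alleles at that locus, here combined with truncation selection on onemax. All parameter values are illustrative assumptions.

```python
import random

def onemax(x):
    # Onemax fitness: the number of ones in the bit string.
    return sum(x)

def gene_pool_recombination(parents):
    # Each offspring allele is drawn uniformly from the parental
    # alleles at that locus (the "gene pool"), rather than from two
    # fixed parents as in standard two-parent crossover.
    n = len(parents[0])
    return [random.choice([p[i] for p in parents]) for i in range(n)]

def run(pop_size=50, n_bits=20, n_gens=30, trunc=0.5, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=onemax, reverse=True)
        parents = pop[: int(trunc * pop_size)]  # truncation selection
        pop = [gene_pool_recombination(parents) for _ in range(pop_size)]
    return max(onemax(x) for x in pop)
```

    Too small a population lets drift fix the disadvantageous allele at some loci before selection can rescue it, which is the effect a Markov model of allele frequencies can quantify.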

    Quantum estimation via minimum Kullback entropy principle

    Full text link
    We address quantum estimation in situations where one has at one's disposal data from the measurement of an incomplete set of observables, together with some a priori information on the state itself. By expressing the a priori information as a bias toward a given state, the problem may be addressed by minimizing the quantum relative entropy (Kullback entropy) under the constraint of reproducing the data. We exploit the resulting minimum Kullback entropy principle for the estimation of a quantum state from the measurement of a single observable, either from the sole mean value or from the complete probability distribution, and apply it as a tool for the estimation of weak Hamiltonian processes. Qubit and harmonic oscillator systems are analyzed in some detail.
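
    As a rough numerical illustration of the principle (not the paper's method), the sketch below estimates a qubit state from the mean value of a single observable, picking among all compatible states the one closest in relative entropy to a bias state. The brute-force Bloch-vector grid search and the grid size are assumptions made purely for illustration.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
I2 = np.eye(2, dtype=complex)

def rho_from_bloch(r):
    # Qubit density matrix from a Bloch vector r = (rx, ry, rz).
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    return 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

def rel_entropy(rho, sigma):
    # Quantum relative (Kullback) entropy S(rho||sigma)
    # = Tr[rho (log rho - log sigma)], via eigendecomposition.
    def logm(m):
        w, v = np.linalg.eigh(m)
        w = np.clip(w, 1e-12, None)  # avoid log(0) on boundary states
        return v @ np.diag(np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def mke_estimate(mean_z, sigma_bias, grid=81):
    # Among all qubit states reproducing the measured mean
    # <sigma_z> = mean_z, return the one of minimum relative
    # entropy with respect to the bias state sigma_bias.
    best, best_s = None, np.inf
    r_max = np.sqrt(max(0.0, 1 - mean_z**2))
    for rx in np.linspace(-r_max, r_max, grid):
        for ry in np.linspace(-r_max, r_max, grid):
            if rx**2 + ry**2 > r_max**2:
                continue  # outside the Bloch ball
            rho = rho_from_bloch((rx, ry, mean_z))
            s = rel_entropy(rho, sigma_bias)
            if s < best_s:
                best, best_s = rho, s
    return best
```

    With a maximally mixed bias state the principle reduces to maximum entropy under the data constraint, so the estimate has no off-diagonal coherence.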

    A simple two-module problem to exemplify building-block assembly under crossover

    No full text
    Theoretically and empirically it is clear that a genetic algorithm with crossover will outperform a genetic algorithm without crossover in some fitness landscapes, and vice versa in other landscapes. Despite an extensive literature on the subject, and recent proofs of a principled distinction in the abilities of crossover and non-crossover algorithms for a particular theoretical landscape, building general intuitions about when and why crossover performs well is a different matter. In particular, the proposal that crossover might enable the assembly of good building-blocks has been difficult to verify despite many attempts at idealized building-block landscapes. Here we present the first example of a two-module problem that exhibits a principled advantage for crossover. This allows us to understand building-block assembly under crossover quite straightforwardly and to build intuition about more general landscape classes that favor or disfavor crossover.
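
    A toy landscape of the kind described (two modules, each rewarded only when complete) can be sketched as follows. This is an illustrative construction, not the landscape analysed in the paper, and all parameters are arbitrary; the point is that one-point crossover can splice a finished module from each parent into a single offspring, while a mutation-only search must finish both modules in one lineage.

```python
import random

def module_fitness(x):
    # Two-module landscape: each half scores a bonus only when it is
    # complete (all ones), so crossover can assemble one finished
    # module from each parent.
    h = len(x) // 2
    return sum(x) + 10 * all(x[:h]) + 10 * all(x[h:])

def evolve(crossover, n_bits=16, pop_size=40, gens=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=module_fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if crossover:
                cut = rng.randrange(1, n_bits)  # one-point crossover
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if rng.random() < 0.5:  # light mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = children
    return max(module_fitness(x) for x in pop)
```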

    Optimizing Classification Ensembles via a Genetic Algorithm for a Web-Based Educational System

    Full text link
    Classification fusion combines multiple classifications of data into a single classification solution of greater accuracy. Feature extraction aims to reduce the computational cost of feature measurement, increase classifier efficiency, and allow greater classification accuracy by deriving new features from the original ones. This paper presents an approach for classifying students in order to predict their final grades based on features extracted from logged data in an educational web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. By weighting feature vectors to reflect feature importance using a Genetic Algorithm (GA), we can optimize the prediction accuracy and obtain a marked improvement over raw classification. We further show that when the number of features is small, feature weighting and transformation into a new space work more efficiently than feature subset selection. This approach is easily adaptable to different types of courses and different population sizes, and allows different features to be analyzed.
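
    The feature-weighting idea can be sketched generically: the GA below evolves one weight per feature and scores a candidate by the accuracy of an arbitrary classifier on the re-scaled data. The operators, parameter values, and the `classify_acc` callback are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def ga_feature_weights(X, y, classify_acc, pop=20, gens=30, seed=0):
    # Evolve a real-valued weight per feature; fitness is the accuracy
    # of a classifier run on the re-scaled feature vectors.
    # `classify_acc` is any function (weighted_X, y) -> score,
    # higher meaning better (a hypothetical placeholder here).
    rng = random.Random(seed)
    n = len(X[0])

    def fitness(w):
        Xw = [[wi * xi for wi, xi in zip(w, row)] for row in X]
        return classify_acc(Xw, y)

    popn = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]  # elitist truncation selection
        popn = elite + [
            [rng.choice(pair) + rng.gauss(0, 0.05)  # uniform crossover
             for pair in zip(*rng.sample(elite, 2))]  # + Gaussian mutation
            for _ in range(pop - len(elite))
        ]
    return max(popn, key=fitness)
```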

    EA/G-GA for Single Machine Scheduling Problems with Earliness/Tardiness Costs

    Get PDF
    An Estimation of Distribution Algorithm (EDA), which generates new solutions by explicitly sampling probabilistic models built from information extracted from the parental solutions, has constituted one of the major research areas in the field of evolutionary computation. The fact that no genetic operators are used is a major characteristic differentiating EDAs from other genetic algorithms (GAs). This advantage, however, can lead to premature convergence once the probabilistic models no longer generate diversified solutions. In our previous research [1], we presented evidence that EDAs suffer from premature convergence and provided several important guidelines for the design of effective EDAs. In this paper, we validate one such guideline: incorporating other meta-heuristics into EDAs. An algorithm named "EA/G-GA" is proposed by selecting a well-known EDA, EA/G, to work with GAs. The proposed algorithm was tested on NP-hard single machine scheduling problems with total weighted earliness/tardiness cost in a just-in-time environment. The experimental results indicate that EA/G-GA outperforms the compared algorithms in a statistically significant manner across different stopping criteria, demonstrating the robustness of the proposed algorithm. Consequently, this paper is of interest to the field of EDAs.
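
    A minimal sketch of the hybridization idea (not the actual EA/G-GA algorithm): a univariate EDA whose sampling loop is supplemented by a GA-style crossover step among the elites, counteracting the loss of diversity the abstract describes. All parameters, and the simple univariate model, are illustrative assumptions.

```python
import random

def eda_ga(fitness, n_bits, pop_size=50, gens=40, lr=0.2, seed=0):
    # Univariate EDA with a GA crossover step: sample from per-bit
    # marginal probabilities, then add a uniform-crossover child of
    # two elites before updating the model, so the model keeps seeing
    # recombined (more diverse) solutions.
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # univariate marginal model
    best, best_f = None, float("-inf")
    for _ in range(gens):
        pop = [[int(rng.random() < pi) for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_f:
            best, best_f = pop[0][:], fitness(pop[0])
        elites = pop[: pop_size // 5]
        a, b = rng.sample(elites, 2)        # GA step: uniform crossover
        elites.append([rng.choice(g) for g in zip(a, b)])
        for i in range(n_bits):             # model update toward elites
            freq = sum(e[i] for e in elites) / len(elites)
            p[i] = (1 - lr) * p[i] + lr * freq
            p[i] = min(0.95, max(0.05, p[i]))  # keep model from fixing
    return best, best_f
```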

    Market segmentation and ideal point identification for new product design using fuzzy data compression and fuzzy clustering methods

    Get PDF
    In product design, various methodologies have been proposed for market segmentation, which groups consumers with similar customer requirements into clusters. The centre points of market segments are commonly used as ideal points of customer requirements for product design, reflecting competitive strategies that aim to reach all consumers' interests effectively. However, existing methodologies ignore the fuzziness of consumers' customer requirements. In this paper, a new methodology is proposed to perform market segmentation based on consumers' customer requirements, which exhibit fuzziness. The methodology integrates a fuzzy data compression technique for dimensionality reduction with a fuzzy clustering technique. It first compresses the fuzzy data regarding customer requirements from high dimensions into two dimensions. After the fuzzy data is clustered into market segments, the centre points of the segments are used as ideal points for new product development. The effectiveness of the proposed methodology in market segmentation and in identifying ideal points for new product design is demonstrated using a case study of new digital camera design.
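
    The clustering stage can be illustrated with a plain fuzzy c-means sketch (the paper's actual methodology additionally compresses the fuzzy requirement data to two dimensions first). The returned centres play the role of the "ideal points"; parameters are illustrative.

```python
import random

def fuzzy_c_means(data, c=2, m=2.0, iters=50, seed=0):
    # Minimal fuzzy c-means: each point holds a membership degree in
    # every cluster; centres are membership-weighted means and can
    # serve as ideal points for the segments.
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    U = []                       # random memberships, rows sum to 1
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(iters):
        centres = []
        for j in range(c):       # centres: membership^m weighted means
            w = [U[i][j] ** m for i in range(n)]
            tot = sum(w)
            centres.append([sum(w[i] * data[i][k] for i in range(n)) / tot
                            for k in range(d)])
        for i in range(n):       # memberships from inverse sq. distances
            dists = [max(1e-12, sum((data[i][k] - cj[k]) ** 2
                                    for k in range(d)))
                     for cj in centres]
            for j in range(c):
                U[i][j] = 1.0 / sum((dists[j] / dk) ** (1 / (m - 1))
                                    for dk in dists)
    return centres, U
```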

    Using Datamining Techniques to Help Metaheuristics: A Short Survey

    Get PDF
    Hybridizing metaheuristic approaches has become a common way to improve the efficiency of optimization methods. Many hybridizations combine several optimization methods. In this paper we are interested in another type of hybridization, in which datamining approaches are combined with an optimization process. We therefore study the benefits of combining metaheuristics and datamining through a short survey that enumerates the different opportunities for such combinations, based on examples from the literature.

    A Neuro-Evolutionary Approach to Electrocardiographic Signal Classification

    Get PDF
    This chapter presents an evolutionary Artificial Neural Network (ANN) classifier system as a heartbeat classification algorithm designed according to the rules of the PhysioNet/Computing in Cardiology Challenge 2011 (Moody, Comput Cardiol Challenge 38:273-276, 2011), whose aim is to develop an efficient algorithm, able to run within a mobile phone, that can provide useful feedback when acquiring a diagnostically useful 12-lead Electrocardiography (ECG) recording. The method used to solve this problem is evolutionary neural networks, based on the joint evolution of the topology and the connection weights and relying on a novel similarity-based crossover. The chapter focuses on discerning between usable and unusable electrocardiograms acquired telemedically from mobile embedded devices. A preprocessing algorithm based on the Discrete Fourier Transform is applied before the evolutionary approach in order to extract an ECG feature dataset in the frequency domain. Finally, a series of tests is carried out in order to evaluate the performance and accuracy of the classifier system for this challenge.

    Bovine proteins containing poly-glutamine repeats are often polymorphic and enriched for components of transcriptional regulatory complexes

    Get PDF
    Background: About forty human diseases are caused by repeat instability mutations. A distinct subset of these diseases is the result of extreme expansions of polymorphic trinucleotide repeats, typically CAG repeats encoding poly-glutamine (poly-Q) tracts in proteins. Polymorphic repeat length variation is also apparent in human poly-Q encoding genes from normal individuals. As these coding sequence repeats are subject to selection in mammals, it has been suggested that normal variations in some of these typically highly conserved genes are implicated in morphological differences between species and phenotypic variations within species. At present, poly-Q encoding genes in non-human mammalian species are poorly documented, as are their functions and propensities for polymorphic variation.
    Results: The current investigation identified 178 bovine poly-Q encoding genes (Q ≄ 5) and, within this group, 26 genes with orthologs in both human and mouse that did not contain poly-Q repeats. The bovine poly-Q encoding genes typically had ubiquitous expression patterns, although there was a bias towards expression in epithelia, brain and testes. They were also characterised by unusually large sizes. Analysis of gene ontology terms revealed that the encoded proteins were strongly enriched for functions associated with transcriptional regulation, and many contributed to physical interaction networks in the nucleus, where they presumably act cooperatively in transcriptional regulatory complexes. In addition, the coding sequence CAG repeats in some bovine genes impacted mRNA splicing, thereby generating unusual transcriptional diversity, which in at least one instance was tissue-specific. The poly-Q encoding genes were prioritised using multiple criteria for their likelihood of being polymorphic, and the highest-ranking group was then experimentally tested for polymorphic variation within a cattle diversity panel. Extensive and meiotically stable variation was identified.
    Conclusions: Transcriptional diversity can potentially be generated in poly-Q encoding genes by the impact of CAG repeat tracts on mRNA alternative splicing. This effect, combined with the physical interactions of the encoded proteins in large transcriptional regulatory complexes, suggests that polymorphic variations of proteins in these complexes have strong potential to affect phenotype.
    Funding: Dairy Australia (through the Innovative Dairy Cooperative Research Center).
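
    The Q ≄ 5 selection criterion mentioned in the Results can be illustrated with a small sketch for locating poly-Q tracts in a protein sequence (an illustrative helper, not the study's actual pipeline):

```python
import re

def polyq_tracts(protein_seq, min_len=5):
    # Find poly-glutamine tracts: runs of 'Q' of length >= min_len
    # (matching the Q >= 5 criterion) in a one-letter-code protein
    # sequence. Returns a list of (start_index, run_length) pairs.
    return [(m.start(), len(m.group()))
            for m in re.finditer(r"Q{%d,}" % min_len, protein_seq)]
```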
    • 

    corecore