43 research outputs found

    Image Enhancement and Image Hiding Based on Linear Image Fusion


    Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient and accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method that establishes a human model in a projected high-dimensional search space by kernel classification to enhance IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is built on this paradigm. In the feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be separated linearly in the original discrete search space. The resulting human model predicts potential perceptual knowledge of the user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that the proposed model and method can enhance IEC search significantly.

    Inducing NNC-trees with the R⁴-rule


    Speed-up of the R⁴-rule for Distance-Based Neural Network Learning

    Abstract: The R⁴-rule is a heuristic algorithm for distance-based neural network (DBNN) learning. Experimental results show that the R⁴-rule can obtain the smallest or nearly smallest DBNNs. However, its computational cost is relatively high because the learning vector quantization (LVQ) algorithm is used iteratively during learning. To reduce the cost of the R⁴-rule, we investigate three approaches in this paper. The first, called the distance preservation (DP) approach, tries to reduce the number of times the distance values are calculated; the other two are based on the attentional learning (AL) concept and try to reduce the amount of data used for learning. The efficiency of these methods is verified through experiments on several public databases.

    Index Terms: Distance-based neural networks, nearest neighbor classifiers, neural networks, learning vector quantization, R⁴-rule, attentional learning, pattern recognition.

    I. INTRODUCTION

    A distance-based neural network (DBNN) is a nearest neighbor classifier (NNC) realized in neural network (NN) form. In the literature, the DBNN is widely known as a self-organizing neural network, though it is a model suitable both for unsupervised and for supervised learning [1]. A direct way to design a DBNN is to put all training data into P, with the weight vector of each neuron being one of the training data. Unfortunately, the network so obtained is not efficient if the number of data is large. A more efficient way is to train a small DBNN using the training data, such that |P|, the number of neurons, is much smaller than the number of data. The question is how small |P| should be for a given problem. To answer this question, we proposed an algorithm called the R⁴-rule for finding the smallest or nearly smallest DBNN. To reduce the cost of the R⁴-rule, this paper investigates three approaches: the distance preservation (DP) approach, which reduces the number of distance calculations, and two approaches based on the attentional learning (AL) concept, which reduce the amount of data used for learning.
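    The inner loop that dominates the R⁴-rule's cost is iterative LVQ training of the prototype set P. A standard LVQ1 update step (a minimal sketch of the general algorithm, not the paper's implementation; names and learning rate are illustrative) looks like this:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    # One LVQ1 update: find the nearest prototype to sample x,
    # then move it toward x if the labels match, away otherwise.
    # This distance computation over all prototypes, repeated for
    # every sample and every iteration, is what cost-reduction
    # schemes such as distance caching aim to avoid.
    d = np.sum((prototypes - x) ** 2, axis=1)  # squared distances
    k = int(np.argmin(d))                      # winning prototype
    sign = 1.0 if proto_labels[k] == y else -1.0
    prototypes[k] += sign * lr * (x - prototypes[k])
    return k
```

    Against this baseline, the DP approach cuts the per-step distance computations, while the AL-based approaches shrink the set of samples x fed into the loop.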