
    An Efficient Codebook Initialization Approach for LBG Algorithm

    A VQ-based image compression technique has three major steps, namely (i) codebook design, (ii) VQ encoding and (iii) VQ decoding. The performance of a VQ-based image compression technique depends upon the constructed codebook. A widely used technique for VQ codebook design is the Linde-Buzo-Gray (LBG) algorithm. However, the performance of the standard LBG algorithm is highly dependent on the choice of the initial codebook. In this paper, we propose a simple and very effective approach to codebook initialization for the LBG algorithm. Simulation results show that the proposed scheme is computationally efficient and delivers the expected performance compared to the standard LBG algorithm.
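    A minimal Python sketch of the standard LBG (generalized Lloyd) loop that the paper builds on is given below. The initialization shown (evenly spaced training vectors) is only a generic placeholder, since the abstract does not describe the proposed initialization scheme; the function name `lbg` and its parameters are illustrative.

```python
# Sketch of the standard LBG (generalized Lloyd) codebook design loop.
# The initialization below is a generic placeholder, NOT the initialization
# proposed in the paper (which the abstract does not specify).
import numpy as np

def lbg(training_vectors, codebook_size, eps=1e-4, max_iter=100):
    """Iteratively refine a VQ codebook until the distortion stops improving."""
    X = np.asarray(training_vectors, dtype=float)
    # Placeholder initialization: pick evenly spaced training vectors.
    idx = np.linspace(0, len(X) - 1, codebook_size).astype(int)
    codebook = X[idx].copy()
    prev_distortion = np.inf
    for _ in range(max_iter):
        # Encoding step: assign each vector to its nearest codeword.
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        distortion = d[np.arange(len(X)), labels].mean()
        # Update step: move each codeword to the centroid of its cell.
        for k in range(codebook_size):
            members = X[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
        if prev_distortion - distortion < eps * distortion:
            break
        prev_distortion = distortion
    return codebook, distortion
```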

    Evolutionary design of nearest prototype classifiers

    In pattern classification problems, many works have been carried out with the aim of designing good classifiers from different perspectives. These works achieve very good results in many domains. However, in general they are very dependent on some crucial parameters involved in the design. These parameters have to be found by a trial-and-error process or by automatic methods, such as heuristic search and genetic algorithms, which strongly decrease the performance of the method. For instance, in nearest prototype approaches, the main parameters are the number of prototypes to use, the initial set, and a smoothing parameter. In this work, an evolutionary approach based on the Nearest Prototype Classifier (ENPC) is introduced in which no parameters are involved, thus overcoming the problems that classical methods have in tuning and searching for the appropriate values. The algorithm is based on the evolution of a set of prototypes that can execute several operators in order to increase their quality in a local sense, with high classification accuracy emerging for the whole classifier. This new approach has been tested on four different classical domains, including artificial distributions such as spiral and uniformly distributed data sets, the Iris data set, and an application domain concerning diabetes. In all cases, the experiments show successful results, not only in classification accuracy, but also in the number and distribution of the prototypes achieved.
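    For context, below is a minimal sketch of the nearest-prototype decision rule that ENPC evolves; the evolutionary operators themselves are not detailed in the abstract and are not reproduced here, and the function name and signature are illustrative.

```python
# Nearest-prototype classification: each sample takes the label of its
# closest prototype (1-NN over the prototype set). This is only the base
# rule; ENPC's evolutionary operators are not shown.
import numpy as np

def nearest_prototype_predict(X, prototypes, prototype_labels):
    """Return, for each row of X, the label of the nearest prototype."""
    X = np.asarray(X, dtype=float)
    P = np.asarray(prototypes, dtype=float)
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return np.asarray(prototype_labels)[d.argmin(axis=1)]
```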

    Region segmentation for facial image compressing

    This paper addresses the segmentation of passport images in order to improve the quality of significant regions and to further reduce the redundancy of insignificant ones. The approach is to first segment a facial image into two major regions, namely background and foreground; a new technique using pixel differences is presented for this purpose. To compress facial regions at better quality, a face segmentation algorithm is introduced that detects the eyes and mouth in a face. Region of interest (ROI) coding is then used to obtain better quality for facial features. Finally, some strategies that make use of region segmentation are proposed in order to increase performance in entropy coding.
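    The sketch below illustrates one possible pixel-difference foreground/background split of the kind described; the exact rule and threshold used in the paper are not given in the abstract, so this formulation (and the name `foreground_mask`) is purely illustrative.

```python
# Hedged sketch: mark pixels whose local intensity differences exceed a
# threshold as foreground. This is an assumed formulation, not the paper's.
import numpy as np

def foreground_mask(gray_image, threshold=10):
    """Return a boolean mask that is True where the image is likely foreground."""
    img = np.asarray(gray_image, dtype=float)
    dx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))  # horizontal differences
    dy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))  # vertical differences
    return (dx + dy) > threshold
```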

    Nearest prototype classification of noisy data

    Nearest prototype approaches offer a common way to design classifiers. However, when data is noisy, the success of this sort of classifier depends on some parameters that the designer needs to tune, such as the number of prototypes. In this work, we study the ENPC technique, based on the nearest prototype approach, on noisy datasets. Previous experimentation with this algorithm had shown that it does not require any parameter tuning to obtain good solutions in problems where class limits are well defined and the data is not noisy. In this work, we show that the algorithm is able to obtain solutions with high classification success even when the data is noisy. A comparison with optimal (hand-made) solutions and other classification algorithms demonstrates the good performance of the ENPC algorithm in terms of accuracy and number of prototypes as the noise level increases. We have performed experiments on four different datasets, each with different characteristics.
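    The sketch below shows the general shape of such a label-noise experiment (flip a fraction of training labels, then measure how a simple class-mean prototype classifier degrades). It does not reproduce ENPC itself, and all names and parameters are illustrative.

```python
# Hedged sketch of a label-noise experiment: corrupt a fraction of training
# labels and evaluate a simple class-mean prototype classifier. ENPC itself
# is not reproduced here.
import numpy as np

def flip_labels(y, noise_level, n_classes, rng):
    """Replace a random fraction of labels with labels drawn uniformly at random."""
    y = np.asarray(y).copy()
    flip = rng.random(len(y)) < noise_level
    y[flip] = rng.integers(0, n_classes, flip.sum())
    return y

def class_mean_accuracy(X_train, y_train, X_test, y_test, n_classes):
    """Accuracy of a nearest-prototype classifier using one class-mean prototype per class."""
    prototypes = np.stack([X_train[y_train == c].mean(axis=0) for c in range(n_classes)])
    d = ((X_test[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == y_test).mean()
```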

    Improving the accuracy while preserving the interpretability of fuzzy function approximators by means of multi-objective evolutionary algorithms

    The identification of a model is one of the key issues in the field of fuzzy system modeling and function approximation theory. An important characteristic that distinguishes fuzzy systems from other techniques in this area is their transparency and interpretability. Especially in the construction of a fuzzy system from a set of given training examples, little attention has been paid to the analysis of the trade-off between complexity and accuracy while maintaining the interpretability of the final fuzzy system. In this paper a multi-objective evolutionary approach is proposed to determine a Pareto-optimal set of fuzzy systems with different compromises between accuracy and complexity. In particular, two fundamental and competing objectives of fuzzy system modeling are addressed: fuzzy rule parameter optimization and the identification of system structure (i.e. the number of membership functions and fuzzy rules), always keeping in mind the transparency of the obtained system. Another key aspect of the algorithm presented in this work is the use of new expert evolutionary operators, specifically designed for the problem of fuzzy function approximation, that try to avoid the generation of worse solutions in order to accelerate the convergence of the algorithm.
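    The accuracy/complexity trade-off can be made concrete with a small sketch of Pareto filtering over candidate systems scored by (approximation error, number of rules); the evolutionary operators and the fuzzy model itself are not reproduced here, and the scoring pair is an illustrative assumption.

```python
# Sketch of Pareto (non-dominated) filtering for an accuracy/complexity
# trade-off. Each candidate is scored by (error, n_rules); lower is better
# for both objectives. The fuzzy model and evolutionary loop are not shown.
def pareto_front(candidates):
    """candidates: list of (error, n_rules) pairs; return the non-dominated ones."""
    front = []
    for i, (e_i, r_i) in enumerate(candidates):
        dominated = any(
            (e_j <= e_i and r_j <= r_i) and (e_j < e_i or r_j < r_i)
            for j, (e_j, r_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((e_i, r_i))
    return front
```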

    LBGS: a smart approach for very large data sets vector quantization

    In this paper, LBGS, a new parallel/distributed technique for vector quantization, is presented. It derives from the well-known LBG algorithm and has been designed for very complex problems involving both large data sets and large codebooks. Several heuristics have been introduced to make it suitable for implementation on parallel/distributed hardware. These lead to a slight deterioration of the quantization error with respect to the serial version, but a large improvement in computing efficiency.
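    A minimal sketch of the data-parallel idea (each worker computes partial centroid statistics on its chunk of data, and a coordinator merges them into a single codebook update) is given below. The specific LBGS heuristics are not described in the abstract and are not reproduced, and all names are illustrative.

```python
# Sketch of a data-parallel codebook update: workers compute partial sums
# and counts per codeword on their data chunks, and a coordinator merges
# them. LBGS's own heuristics are not reproduced here.
import numpy as np

def partial_update(chunk, codebook):
    """Per-worker step: assign chunk vectors to codewords, return partial sums/counts."""
    d = ((chunk[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    K, dim = codebook.shape
    sums = np.zeros((K, dim))
    counts = np.zeros(K)
    np.add.at(sums, labels, chunk)
    np.add.at(counts, labels, 1)
    return sums, counts

def merge_updates(partials, codebook):
    """Coordinator step: combine partial statistics into the new codebook."""
    sums = sum(p[0] for p in partials)
    counts = sum(p[1] for p in partials)
    new_codebook = codebook.copy()
    nonempty = counts > 0
    new_codebook[nonempty] = sums[nonempty] / counts[nonempty, None]
    return new_codebook
```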

    Decentralized K-means using randomized Gossip protocols for clustering large datasets

    In this paper, we consider the clustering of very large datasets distributed over a network of computational units using a decentralized K-means algorithm. To obtain the same codebook at each node of the network, we use a randomized gossip aggregation protocol in which only small messages are exchanged. We theoretically show the equivalence of the algorithm with centralized K-means, provided a bound on the number of messages each node has to send is met. We provide experiments showing that consensus is reached for a number of messages consistent with the bound, but also for a smaller number of messages, albeit with a less smooth evolution of the objective function.
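    A minimal sketch of randomized pairwise gossip averaging, the aggregation primitive such protocols rely on, is given below; the exact message bound and the integration with K-means are not reproduced here, and the names are illustrative.

```python
# Sketch of randomized pairwise gossip averaging: at each step two random
# nodes exchange and average their local statistics, so every node's value
# drifts toward the global mean. The paper's K-means integration and message
# bound are not reproduced here.
import numpy as np

def gossip_average(local_values, n_rounds, rng):
    """local_values: (n_nodes, dim) array of per-node statistics; one pair averages per round."""
    values = np.asarray(local_values, dtype=float).copy()
    n_nodes = len(values)
    for _ in range(n_rounds):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg
    return values  # all rows converge toward the global mean
```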