
    Automatic generation of fuzzy classification rules using granulation-based adaptive clustering

    A central problem of fuzzy modelling is the generation of fuzzy rules that fit the data to the highest possible extent. In this study, we present a method for the automatic generation of fuzzy rules from data. The main advantage of the proposed method is its ability to perform data clustering without requiring any parameters, including the number of clusters, to be predefined. The method creates data clusters at different levels of granulation, merging clusters into new clusters of coarser granulation, and selects the best clustering results according to a set of quality measures. To evaluate the proposed method, three different datasets are used to compare its performance with that of other classifiers: an SVM classifier, an FCM fuzzy classifier, and a subtractive-clustering fuzzy classifier. Results show that the proposed method achieves better classification results than the other classifiers on all the datasets used.
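    As a rough illustration of the idea of choosing among granulation levels (not the authors' actual algorithm or quality measures), the sketch below merges clusters agglomeratively at several granulation levels, keeps the level with the best silhouette score as a stand-in validity measure, and turns each retained cluster into Gaussian membership functions for a rule antecedent; all function names and the rule format are assumptions.

```python
# Illustrative sketch only: agglomerative merging across granulation levels,
# best level picked by silhouette score (a stand-in for the paper's measures),
# then one Gaussian fuzzy set per cluster and feature.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def granulate(X, max_clusters=10):
    """Try several granulation levels (numbers of clusters) and keep the best one."""
    best = None
    for k in range(max_clusters, 1, -1):        # merging moves from fine to coarse granulation
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
        score = silhouette_score(X, labels)     # stand-in clustering-quality measure
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best

def rules_from_clusters(X, labels):
    """One Gaussian membership function per cluster and feature (hypothetical rule format)."""
    rules = []
    for c in np.unique(labels):
        pts = X[labels == c]
        rules.append({"centres": pts.mean(axis=0),
                      "sigmas": pts.std(axis=0) + 1e-9,
                      "consequent": int(c)})
    return rules

# Usage: score, k, labels = granulate(X); rules = rules_from_clusters(X, labels)
```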

    On Information Granulation via Data Filtering for Granular Computing-Based Pattern Recognition: A Graph Embedding Case Study

    Granular Computing is a powerful information-processing paradigm, particularly useful for the synthesis of pattern recognition systems in structured domains (e.g., graphs or sequences). According to this paradigm, granules of information play the pivotal role of describing the underlying (possibly complex) process, starting from the available data. From a pattern recognition viewpoint, granules of information can be exploited for the synthesis of semantically sound embedding spaces, in which common supervised or unsupervised problems can be solved via standard machine learning algorithms. In this companion paper, we follow up on our previous work (Martino et al., Algorithms 15(5):148, 2022) by comparing different strategies for the automatic synthesis of information granules in the context of graph classification. These strategies differ mainly in the specific topology adopted for the subgraphs considered as candidate information granules and in whether the ground-truth class labels are used or ignored in the granulation process. In contrast to our previous work, we employ a filtering-based approach for the synthesis of information granules instead of a clustering-based one. Computational results on six open-access data sets corroborate the robustness of our filtering-based approach with respect to data stratification, compared to a clustering-based granulation stage.
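    A minimal sketch of the general granulation-and-embedding pipeline described above, not the paper's implementation: candidate subgraphs are filtered by a simple frequency threshold (the actual filtering criterion is an assumption here), and each graph is then embedded as a vector recording which granules it contains, so that any standard classifier can be trained on the resulting vectors.

```python
# Illustrative sketch only: filtering-based granule selection and granule-presence embedding.
# min_support and the binary embedding are assumptions, not the paper's exact procedure.
import numpy as np
from networkx.algorithms.isomorphism import GraphMatcher

def filter_granules(candidates, graphs, min_support=0.2):
    """Keep candidate subgraphs occurring in at least a fraction of the training graphs."""
    kept = []
    for g in candidates:
        support = np.mean([GraphMatcher(G, g).subgraph_is_isomorphic() for G in graphs])
        if support >= min_support:
            kept.append(g)
    return kept

def embed(graphs, granules):
    """Map each graph to a binary vector: does granule g appear as a subgraph of G?"""
    return np.asarray([[1.0 if GraphMatcher(G, g).subgraph_is_isomorphic() else 0.0
                        for g in granules]
                       for G in graphs])

# Usage: granules = filter_granules(candidate_subgraphs, train_graphs)
#        X_train = embed(train_graphs, granules)   # feed to any standard classifier
```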

    Fine-tuning the fuzziness of strong fuzzy partitions through PSO

    We study the influence of the fuzziness of trapezoidal fuzzy sets in the strong fuzzy partitions (SFPs) that constitute the database of a fuzzy rule-based classifier. To this end, we develop a particular representation of the trapezoidal fuzzy sets based on the concept of cuts, which are the cross-points of the fuzzy sets in an SFP and fix the position of the fuzzy sets in the universe of discourse. In this way, it is possible to isolate the parameters that characterize the fuzziness of the fuzzy sets, which are then subject to fine-tuning through particle swarm optimization (PSO). In this paper, we propose a formulation of the parameter space that enables the exploration of all possible levels of fuzziness in an SFP. The experimental results show that the impact of fuzziness is strongly dependent on the defuzzification procedure used in fuzzy rule-based classifiers: fuzziness has little influence in the case of winner-takes-all defuzzification, while it is more influential with weighted-sum defuzzification, which, however, may pose some interpretation problems.
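    The cut-based parameterization can be sketched roughly as follows (parameter names and the exact transition shape are assumptions): the cuts fix the cross-point positions, a vector of positive half-widths controls the fuzziness around each cut, and adjacent memberships always sum to one, so the partition stays strong; in a PSO-based fine-tuning, that fuzziness vector would play the role of the particle, scored through classifier performance.

```python
# Illustrative sketch only: trapezoidal strong fuzzy partition built from cut positions
# plus per-cut fuzziness half-widths (both names are assumptions, not the paper's encoding).
import numpy as np

def sfp_memberships(x, cuts, fuzz, lo=0.0, hi=1.0):
    """Membership of x in the len(cuts)+1 trapezoidal sets of a strong fuzzy partition.

    cuts : increasing cross-point positions inside [lo, hi]
    fuzz : positive half-widths of the linear transition centred at each cut
    """
    edges = [lo] + list(cuts) + [hi]
    n = len(cuts) + 1
    mu = np.zeros(n)
    for i in range(n):
        # rising edge around the cut to the left of set i (none for the first set)
        if i == 0:
            rise = 1.0
        else:
            s = max(fuzz[i - 1], 1e-12)
            rise = np.clip((x - (edges[i] - s)) / (2 * s), 0.0, 1.0)
        # falling edge around the cut to the right of set i (none for the last set)
        if i == n - 1:
            fall = 1.0
        else:
            s = max(fuzz[i], 1e-12)
            fall = np.clip(((edges[i + 1] + s) - x) / (2 * s), 0.0, 1.0)
        mu[i] = min(rise, fall)
    return mu

# Usage: sfp_memberships(0.45, cuts=[0.3, 0.7], fuzz=[0.05, 0.1])
# Around each cut t, the two neighbouring memberships are 0.5 at x = t and sum to 1.
```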

    Data and Feature Reduction in Fuzzy Modeling through Particle Swarm Optimization

    The study is concerned with data and feature reduction in fuzzy modeling. As these reduction activities are advantageous to fuzzy models in terms of both the effectiveness of their construction and the interpretation of the resulting models, their realization deserves particular attention. The formation of a subset of meaningful features and a subset of essential instances is discussed in the context of fuzzy rule-based models. In contrast to existing studies, which focus predominantly on feature selection (namely, a reduction of the input space), the position advocated here is that, to be effective for the design of a fuzzy model, reduction has to involve both data and features. The reduction problem is combinatorial in nature and, as such, calls for the use of advanced optimization techniques. In this study, we use particle swarm optimization (PSO) as the optimization vehicle for forming the subset of features and data (instances) used to design the fuzzy model. Given the dimensionality of the problem (the search space involves both features and instances), we discuss a cooperative version of PSO along with a clustering mechanism that partitions the overall search space. Finally, a series of numeric experiments using several machine learning data sets is presented.
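    A minimal sketch of joint instance and feature selection with a plain binary PSO, assuming a 0/1 particle over the concatenated [instances | features] mask and a k-NN classifier as a stand-in for the fuzzy model; it is not the cooperative PSO variant discussed in the paper, whose search-space partitioning mechanism is omitted here.

```python
# Illustrative sketch only: binary PSO over a joint instance/feature mask.
# The k-NN fitness and the size penalty (alpha) are assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.1):
    """Accuracy of a stand-in model trained on the retained rows/columns,
    penalised by the fraction of retained elements."""
    n = X.shape[0]
    rows, cols = mask[:n].astype(bool), mask[n:].astype(bool)
    if rows.sum() < 3 or rows.all() or cols.sum() < 1:
        return 0.0
    model = KNeighborsClassifier(n_neighbors=1).fit(X[rows][:, cols], y[rows])
    acc = model.score(X[~rows][:, cols], y[~rows])   # evaluate on the discarded rows
    return acc - alpha * mask.mean()

def binary_pso(X, y, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[0] + X.shape[1]                    # one bit per instance and per feature
    pos = (rng.random((n_particles, dim)) < 0.5).astype(float)
    vel = rng.normal(0.0, 0.1, (n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    for _ in range(iters):
        gbest = pbest[pbest_f.argmax()]
        vel = (0.7 * vel
               + rng.random((n_particles, dim)) * (pbest - pos)
               + rng.random((n_particles, dim)) * (gbest - pos))
        pos = (rng.random((n_particles, dim)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    return pbest[pbest_f.argmax()]                   # best joint instance/feature mask found
```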

    A multi-objective optimization approach for the synthesis of granular computing-based classification systems in the graph domain

    The synthesis of a pattern recognition system usually aims at the optimization of a given performance index. However, in many real-world scenarios, there are other desired facets to take into account. In this regard, multi-objective optimization acts as the main tool for the optimization of different (and possibly conflicting) objective functions in order to seek potential trade-offs among them. In this paper, we propose a three-objective optimization problem for the synthesis of a granular computing-based pattern recognition system in the graph domain. The core pattern recognition engine searches for suitable information granules (i.e., recurrent and/or meaningful subgraphs from the training data), on top of which the graph embedding procedure towards the Euclidean space is performed; in the resulting space, any classification system can be employed. The optimization problem aims at jointly optimizing the performance of the classifier, the number of information granules, and the structural complexity of the classification model. Furthermore, we address the problem of selecting a suitable number of solutions from the resulting Pareto fronts in order to compose an ensemble of classifiers to be tested on previously unseen data. To perform this selection, we employ a multi-criteria decision-making routine, analyzing different case studies that differ in how much weight each objective function carries in the ranking process. Results on five open-access datasets of fully labeled graphs show that exploiting the ensemble is effective (especially when the structural complexity of the model plays a minor role in the decision-making process) when compared against the baseline solution that solely aims at maximizing performance.
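    The final selection step can be illustrated with a simple sketch (objective names, weights, and the ranking rule are placeholders, not the paper's decision-making routine): extract the non-dominated solutions and rank them by a weighted sum of normalized objectives to pick the ensemble members.

```python
# Illustrative sketch only: Pareto-front extraction plus weighted-sum ranking
# as a simple multi-criteria decision-making step.  Weights are placeholders.
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (each column is an objective to minimise)."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

def select_ensemble(F, weights, k=3):
    """Rank the non-dominated solutions by a weighted sum of min-max normalised objectives."""
    front = pareto_front(F)
    G = F[front].astype(float)
    G = (G - G.min(axis=0)) / (G.max(axis=0) - G.min(axis=0) + 1e-12)
    scores = G @ np.asarray(weights, dtype=float)
    return front[np.argsort(scores)[:k]]             # indices of the k chosen solutions

# Example with placeholder objectives [error rate, #granules, structural complexity]:
# F = np.random.rand(50, 3); chosen = select_ensemble(F, weights=[0.6, 0.2, 0.2], k=5)
```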