
    A Granular Computing-Based Model for Group Decision-Making in Multi-Criteria and Heterogeneous Environments

    Granular computing is a growing paradigm of information processing that covers the techniques, methodologies, and theories employing information granules in complex problem solving. In recent years it has been applied to group decision-making, and several granular computing-based models have been constructed, each focusing on particular aspects of these decision-making processes. This study presents a new granular computing-based model for group decision-making processes defined in multi-criteria and heterogeneous environments that is able to improve, with minimum adjustment, both the consistency associated with individual decision-makers and the consensus related to the group. Unlike the existing granular computing-based approaches, this new one takes a larger number of features into account when dealing with this kind of decision-making process.
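    The two quantities the model adjusts can be illustrated with a minimal sketch for fuzzy preference relations: consistency measured against additive transitivity (p_ik = p_ij + p_jk - 0.5) and consensus as agreement between the experts' matrices. The function names and the additive-transitivity choice are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical consistency/consensus measures for fuzzy preference relations,
# assuming additive transitivity as the consistency criterion.

def consistency_level(p):
    """1 - mean additive-transitivity violation over all distinct triples."""
    n = len(p)
    dev, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3:
                    dev += abs(p[i][j] + p[j][k] - 0.5 - p[i][k])
                    count += 1
    return 1.0 - dev / count if count else 1.0

def consensus_level(prefs):
    """1 - mean pairwise distance between the experts' preference matrices."""
    n, m = len(prefs[0]), len(prefs)
    total, count = 0.0, 0
    for a in range(m):
        for b in range(a + 1, m):
            for i in range(n):
                for j in range(n):
                    if i != j:
                        total += abs(prefs[a][i][j] - prefs[b][i][j])
                        count += 1
    return 1.0 - total / count if count else 1.0
```

    A group process would iterate: ask the least consistent expert for a minimal adjustment, recompute both levels, and stop once both exceed agreed thresholds.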

    Enterprise Composition Architecture for Micro-Granular Digital Services and Products

    The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This defines the strategic context for composing resilient enterprise architectures for micro-granular digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of system architectures defines the moving context for adaptable systems, which are essential to enabling the digital transformation. Enterprises are presently transforming their strategy and culture, together with their processes and information systems, to become more digital. The digital transformation deeply disrupts existing enterprises and economies. For years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT systems with many rather small and distributed structures, like the Internet of Things or mobile systems. In this paper, we focus on the continuous bottom-up integration of micro-granular architectures for a large number of dynamically growing systems and services, such as the Internet of Things and microservices, as part of a new digital enterprise architecture. To integrate micro-granular architecture models into living architectural model versions, we extend traditional enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support the digitalization of services with related products, and their processes.

    New Evolving Fuzzy System Algorithms Using Dynamic Constraint

    An information granule has to be translated into meaningful granular computing frameworks to realize the interpretability-accuracy tradeoff. These two objectives are in conflict and constitute an open problem. Evolving information granules is a significant concept of granular computing that moves from a coarse partition (lower granularity, i.e. higher error) to a fine partition (higher granularity, i.e. lower error). While this error-reducing granular framework is pursued, an interpretability constraint is the factor that improves the tradeoff between interpretability and accuracy. Furthermore, overfitting and underfitting criteria are important to consider while the evolving process continues. In addition, the stability-plasticity tradeoff is another significant consideration in designing a granular framework, in order to obtain a consistent and up-to-date fuzzy information granule method. A new operational framework, namely the evolving fuzzy system (EFS), is developed in this research work, which ensures a compromise between interpretability and reasonable accuracy. Three models are designed based on the EFS, namely the evolving structural fuzzy system (ESFS), the evolving output-context fuzzy system (EOCFS), and the evolving information granule (EIG). The evolving information granule is initiated with a first information granule that translates the knowledge of the entire output domain. This initial information granule is considered an underfitting state with a high approximation error. The EFS then evolves the information granule by partitioning the output (or input) domain, using a dynamic constraint to maintain semantic interpretability in the output (or input) contexts. The outcomes on synthetic and real-world data show the effectiveness of the proposed system, which outperforms state-of-the-art methods. The EFS needs fewer rules (i.e. it is highly interpretable) and achieves lower error (i.e. high accuracy) than the existing methods.
For example, when the proposed EIG method is applied to Nakanishi's nonlinear system, four fuzzy rules and a mean square error (MSE) of 0.142 are obtained; the EIG thus outperforms the existing methods. The important criterion in the EFS is to determine the prominent distinction (output or input context) and realize the distinct information granule that depicts the semantics at the fuzzy partition level. The EFS tends to evolve toward the lower-error region and realizes an effective rule base by avoiding overfitting. Furthermore, the evolving overfitting index and the uncertainty controller of the self-adaptive process are dynamically attained from past and current knowledge. The effective rule base is therefore the balanced fuzzy model of the approximated system. Among the three proposed models (ESFS, EOCFS, and EIG), the EIG has the strongest ability to trade off interpretability against accuracy, while the proposed ESFS method yields a highly interpretable granular framework that also realizes the interpretability-accuracy tradeoff.
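The coarse-to-fine evolution can be sketched in a toy form: start from one interval granule over the whole domain (the underfitting state) and repeatedly split the granule with the highest local error, with a cap on the number of granules standing in for the interpretability constraint. All names and the midpoint split are simplifying assumptions, not the EIG's actual dynamic-constraint mechanism.

```python
# Toy error-driven granule evolution: one coarse granule evolves into finer
# interval granules until the local error or the granule budget is exhausted.

def evolve_granules(xs, ys, max_granules=4, tol=1e-3):
    granules = [(min(xs), max(xs))]           # initial underfitting granule

    def local_mse(g):
        pts = [y for x, y in zip(xs, ys) if g[0] <= x <= g[1]]
        if not pts:
            return 0.0
        mean = sum(pts) / len(pts)
        return sum((y - mean) ** 2 for y in pts) / len(pts)

    while len(granules) < max_granules:       # interpretability cap on rules
        worst = max(granules, key=local_mse)
        if local_mse(worst) <= tol:           # accuracy target reached
            break
        mid = (worst[0] + worst[1]) / 2.0     # split point (midpoint here)
        granules.remove(worst)
        granules += [(worst[0], mid), (mid, worst[1])]
    return sorted(granules)
```

On a step function the sketch stops after one split, since each half is then approximated with zero error, illustrating how fewer rules can coexist with low error.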

    Information Flow Model for Commercial Security

    Information flow in Discretionary Access Control (DAC) is a well-known difficult problem. This paper formalizes the fundamental concepts and establishes a theory of information flow security. A DAC system is information flow secure (IFS) if no data ever flows into the hands of its owner's enemies (the subjects on the explicit denial access list).
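    The IFS condition can be checked with a simple transitive-flow sketch: a subject that reads an object may copy its data into any object it can write, so data reachability must be closed under read-then-write chains. The model below is an illustrative simplification, not the paper's formalism.

```python
# Minimal flow-tracking sketch: data is flow-secure for an object if no
# subject on the owner's denial list can reach it via read/write chains.

def reachable_readers(obj, can_read, can_write):
    """Subjects that can obtain obj's data, directly or via copies."""
    tainted = {obj}           # objects that may hold copies of obj's data
    readers = set()
    changed = True
    while changed:
        changed = False
        for s, readable in can_read.items():
            if s not in readers and readable & tainted:
                readers.add(s)
                tainted |= can_write.get(s, set())  # s may copy data onward
                changed = True
    return readers

def is_flow_secure(obj, owner_enemies, can_read, can_write):
    return not (reachable_readers(obj, can_read, can_write) & owner_enemies)
```

    For example, if alice can read f1 and write f2, and bob can read f2, then bob transitively obtains f1's data even though he was never granted access to f1; a denial list containing bob makes the system insecure.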

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, which are all suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as those of probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is possible to apply such principles regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.
    Comment: 16 pages, 9 figures, 52 references
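    The minimum-sum-of-distances idea can be sketched as a medoid computation followed by an interval type-2 membership derived from each element's distance to the medoid. The upper/lower bounds below (square root and square of a primary membership) are a common footprint-of-uncertainty construction used here only for illustration, not necessarily the paper's.

```python
# Illustrative granulation sketch: medoid by minimum sum of distances, then
# interval type-2 memberships from distances to the medoid.
import math

def medoid(data, dist):
    """Element minimizing the total distance to all others."""
    return min(data, key=lambda c: sum(dist(c, x) for x in data))

def it2_memberships(data, dist):
    m = medoid(data, dist)
    out = {}
    for x in data:
        mu = math.exp(-dist(x, m))          # primary membership in (0, 1]
        out[x] = (mu ** 2, math.sqrt(mu))   # (lower, upper) bounds
    return out
```

    Because `dist` is a parameter, the same sketch applies to feature vectors (e.g. Euclidean distance) and to labeled graphs (a graph edit distance), matching the two data types studied in the paper.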

    Approximations from Anywhere and General Rough Sets

    Not all approximations arise from information systems. The problem of fitting approximations, subject to some rules (and related data), to information systems in a rough scheme of things is known as the \emph{inverse problem}. The inverse problem is more general than the duality (or abstract representation) problems and was introduced by the present author in her earlier papers. From a practical perspective, a few (as opposed to one) theoretical frameworks may be suitable for formulating the problem itself. \emph{Granular operator spaces} have recently been introduced and investigated by the present author in the context of antichain-based and dialectical semantics for general rough sets. The nature of the inverse problem is examined from number-theoretic and combinatorial perspectives in a higher-order variant of granular operator spaces, and some necessary conditions are proved. The results and the novel approach would be useful in a number of unsupervised and semi-supervised learning contexts and algorithms.
    Comment: 20 pages. Scheduled to appear in the IJCRS'2017 LNCS Proceedings, Springer
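    For context, the approximations that \emph{do} arise from an information system are the classical rough lower and upper approximations over the partition induced by indiscernibility; the inverse problem asks when a given approximation pair can be realized this way. A minimal illustration (not the paper's granular operator space machinery):

```python
# Classical rough approximations of a target set w.r.t. a partition of the
# universe into indiscernibility classes.

def approximations(classes, target):
    """Return (lower, upper) approximation of `target` over `classes`."""
    lower = {x for c in classes for x in c if set(c) <= target}  # c wholly inside
    upper = {x for c in classes for x in c if set(c) & target}   # c meets target
    return lower, upper
```

    An approximation pair not expressible in this form for any partition is an instance of the inverse problem's negative side.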

    Automatic generation of fuzzy classification rules using granulation-based adaptive clustering

    A central problem of fuzzy modelling is the generation of fuzzy rules that fit the data to the highest possible extent. In this study, we present a method for the automatic generation of fuzzy rules from data. The main advantage of the proposed method is its ability to perform data clustering without requiring any parameters to be predefined, including the number of clusters. The proposed method creates data clusters at different levels of granulation and selects the best clustering result based on evaluation measures; it proceeds by merging clusters into new clusters of coarser granulation. To evaluate its performance, the proposed method is compared on three different datasets against other classifiers: an SVM classifier, an FCM fuzzy classifier, and a subtractive-clustering fuzzy classifier. Results show that the proposed method achieves better classification results than the other classifiers on all the datasets used.
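    The merge-to-coarser-granulation loop can be sketched as agglomerative merging of nearest centroids, with the cluster count chosen at the level just before the largest jump in merge distance. The gap heuristic is an assumed selection measure standing in for the paper's evaluation measures; data are 1-D for brevity.

```python
# Toy coarsening sketch: merge nearest clusters repeatedly, then pick the
# granulation level preceding the largest jump in merge distance.

def adaptive_levels(points):
    clusters = [[p] for p in points]
    history = []                                # (merge distance, #clusters after)

    def centroid(c):
        return sum(c) / len(c)

    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):          # find nearest centroid pair
            for j in range(i + 1, len(clusters)):
                d = abs(centroid(clusters[i]) - centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = clusters[i] + clusters[j]      # coarser granulation level
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        history.append((d, len(clusters)))
    # select the level just before the largest distance jump
    jumps = [(history[k + 1][0] - history[k][0], history[k][1])
             for k in range(len(history) - 1)]
    return max(jumps)[1] if jumps else 1
```

    On two well-separated groups the largest jump occurs at the final cross-group merge, so the heuristic recovers two clusters without any predefined cluster count.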