
    The cross-association relation based on intervals ratio in fuzzy time series

    The fuzzy time series (FTS) is a forecasting model based on linguistic values. This forecasting method was developed in recent years because existing ones were insufficiently accurate. This research modifies existing methods in three respects: the universe of discourse is determined and partitioned using an intervals ratio, the fuzzy logic relationships (FLRs) are built from a cross-association relationship, and the variation in historical data is addressed using Indonesian rubber production data. The modified steps start with the intervals ratio, which partitions the determined universe of discourse. Triangular fuzzy sets are then built, allowing fuzzification. After this, the FLRs are built based on the cross-association relationship, leading to defuzzification. The average forecasting error rate (AFER) was used to compare the modified method against existing ones, with simulations conducted on Indonesian rubber production data from 2000-2020. With an AFER of 4.77% < 10%, the modification has a smaller error than previous methods, indicating very good forecasting criteria. In addition, the coefficient values of D1 and D2 were obtained automatically from the intervals ratio algorithm. Future work will modify the partitioning of the universe of discourse using frequency density to eliminate unused partition intervals.
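    Two of the building blocks mentioned above have standard forms: the triangular membership function used in fuzzification, and the AFER accuracy measure. The sketch below shows both in plain Python; the interval-ratio partitioning and cross-association FLR steps are specific to the paper and are not reproduced here.

```python
def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c),
    where b is the peak and a, c are the feet."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def afer(actual, forecast):
    """Average Forecasting Error Rate, in percent:
    AFER = (1/n) * sum(|A_t - F_t| / A_t) * 100."""
    n = len(actual)
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / n

# Example: two periods with 10% and 5% relative error give AFER ~ 7.5%.
print(afer([100.0, 200.0], [90.0, 210.0]))
```

    An AFER below 10%, as reported in the abstract, is conventionally read as very good forecasting accuracy.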

    Perpetual Learning Framework based on Type-2 Fuzzy Logic System for a Complex Manufacturing Process

    This paper introduces a perpetual type-2 neuro-fuzzy modelling structure for continuous learning and its application to the complex thermo-mechanical metal process of steel Friction Stir Welding (FSW). The ‘perpetual’ property refers to the capability of the proposed system to continuously learn from new process data, in an incremental learning fashion. This is particularly important in industrial/manufacturing processes, as it eliminates the need to retrain the model in the presence of new data, or in the case of any process drift. The proposed structure evolves through incremental, hybrid (supervised/unsupervised) learning, and accommodates new sample data in a continuous fashion. The human-like information capture paradigm of granular computing is used along with an interval type-2 neuro-fuzzy system to develop a modelling structure that is tolerant to the uncertainty in manufacturing data (a common challenge in industrial/manufacturing data). The proposed method relies on the creation of new fuzzy rules, which are updated and optimised during the incremental learning process. An iterative pruning strategy is then employed to remove any redundant rules that result from the incremental learning process. The rule growing/pruning strategy guarantees that the proposed structure can be used in a perpetual learning mode. It is demonstrated that the proposed structure can effectively learn complex input-output dynamics in an adaptive way and maintain good predictive performance in the metal processing case study of steel FSW using real manufacturing data.
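    The grow-then-prune loop described above is common to evolving fuzzy systems in general. The following toy sketch illustrates only that generic idea, not the paper's interval type-2 neuro-fuzzy system: the Gaussian memberships, the novelty threshold, and the usage-count pruning criterion are all assumptions made for illustration.

```python
import math

def gauss(x, center, spread):
    """Gaussian membership of x in a rule's antecedent."""
    return math.exp(-((x - center) ** 2) / (2 * spread ** 2))

class EvolvingRules:
    """Toy grow/prune rule base (illustrative; not the paper's method)."""

    def __init__(self, novelty=0.3, min_use=2):
        self.rules = []          # each rule: [center, spread, use_count]
        self.novelty = novelty   # assumed threshold for creating a rule
        self.min_use = min_use   # assumed threshold for pruning a rule

    def update(self, x):
        """Grow a new rule if no existing rule covers x well enough."""
        best, best_i = 0.0, -1
        for i, (c, s, _) in enumerate(self.rules):
            m = gauss(x, c, s)
            if m > best:
                best, best_i = m, i
        if best < self.novelty:
            self.rules.append([x, 1.0, 1])   # new rule centred on x
        else:
            self.rules[best_i][2] += 1       # reinforce the firing rule

    def prune(self):
        """Remove rules that were rarely activated (redundant rules)."""
        self.rules = [r for r in self.rules if r[2] >= self.min_use]
```

    In a perpetual-learning setting, `update` runs on every incoming sample and `prune` runs periodically, keeping the rule base compact over an unbounded data stream.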

    A Granular Computing-Based Model for Group Decision-Making in Multi-Criteria and Heterogeneous Environments

    Granular computing is a growing paradigm of information processing that covers techniques, methodologies, and theories employing information granules in complex problem solving. In the recent past, it has been applied to group decision-making processes, and different granular computing-based models have been constructed that focus on particular aspects of these processes. This study presents a new granular computing-based model for group decision-making processes defined in multi-criteria and heterogeneous environments that is able to improve, with minimum adjustment, both the consistency associated with individual decision-makers and the consensus related to the group. Unlike existing granular computing-based approaches, this new one is able to take a higher number of features into account when dealing with these kinds of decision-making processes.
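    Consensus in such models is typically quantified as closeness between decision-makers' preferences. The fragment below is only a minimal illustration of that idea, assuming scalar preference values in [0, 1] and mean pairwise distance as the measure; the paper's actual consistency and consensus measures are richer than this.

```python
from itertools import combinations

def consensus_degree(prefs):
    """Toy consensus measure: 1 minus the mean pairwise distance between
    decision-makers' preference values in [0, 1] (illustrative assumption)."""
    pairs = list(combinations(prefs, 2))
    if not pairs:
        return 1.0  # a single decision-maker is trivially in consensus
    return 1.0 - sum(abs(a - b) for a, b in pairs) / len(pairs)

# Identical preferences give full consensus; opposite extremes give none.
print(consensus_degree([0.5, 0.5, 0.5]))  # -> 1.0
print(consensus_degree([0.0, 1.0]))       # -> 0.0
```

    A feedback mechanism with "minimum adjustment", as the abstract describes, would nudge the preference values just enough to push such a degree above an agreed consensus threshold.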