Long-term learning for type-2 neural-fuzzy systems

Abstract

The development of a new long-term learning framework for interval-valued neural-fuzzy systems is presented for the first time in this article. The need for such a framework is twofold: to address continuous batch learning of data sets, and to take advantage of the extra degree of freedom that type-2 fuzzy logic systems offer for better model predictive ability. The presented long-term learning framework uses principles of granular computing (GrC) to capture information/knowledge from raw data in the form of interval-valued sets in order to build a computational mechanism that can adapt to new information in an additive, long-term learning fashion. The latter is needed to accommodate new input–output mappings and new classes of data without significantly disturbing existing input–output mappings, thereby maintaining existing performance while creating and integrating new knowledge (rules). This is achieved via an iterative algorithmic process involving a two-step operation: iterative rule-base growth (capturing new knowledge) and iterative rule-base pruning (removing redundant knowledge) for type-2 rules. The two-step operation helps create a growing but sustainable model structure. The performance of the proposed system is demonstrated on a number of well-known nonlinear benchmark functions as well as a highly nonlinear multivariate real industrial case study. Simulation results show that the performance of the original model structure is maintained and remains comparable to that of the updated model following the incremental learning routine. The study concludes by evaluating the performance of the proposed framework under frequent and consecutive model updates, where the balance between model accuracy and complexity is further assessed.
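The abstract only outlines the two-step growth/pruning cycle; the sketch below illustrates one plausible reading of it, not the authors' published algorithm. The class name IT2RuleBase, the Gaussian antecedents with an uncertain mean (the interval footprint of uncertainty), the coverage and overlap thresholds, and the interval-Jaccard pruning test are all illustrative assumptions.

```python
import numpy as np

class IT2RuleBase:
    """Minimal sketch of an incremental interval type-2 (IT2) rule base.

    Each rule's antecedent is a Gaussian with an uncertain mean
    [m_lo, m_hi] and a shared sigma, so every input fires each rule
    with an interval [f_lower, f_upper]. Thresholds are assumptions.
    """

    def __init__(self, sigma=0.1, grow_thresh=0.2, prune_thresh=0.7):
        self.rules = []                    # list of (m_lo, m_hi) mean intervals
        self.sigma = sigma
        self.grow_thresh = grow_thresh     # coverage below this -> grow a rule
        self.prune_thresh = prune_thresh   # overlap above this -> rule is redundant

    def _firing(self, x, rule):
        """Interval firing strength (lower, upper) of one rule for input x."""
        m_lo, m_hi = rule
        gauss = lambda d: float(np.exp(-0.5 * (d / self.sigma) ** 2))
        # Upper membership: distance to the nearest point of the mean interval.
        d_near = 0.0 if m_lo <= x <= m_hi else min(abs(x - m_lo), abs(x - m_hi))
        # Lower membership: distance to the farthest point of the mean interval.
        d_far = max(abs(x - m_lo), abs(x - m_hi))
        return gauss(d_far), gauss(d_near)

    @staticmethod
    def _overlap(a, b):
        """Jaccard overlap of two mean intervals, used as a redundancy measure."""
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 1.0

    def update(self, x, spread=0.05):
        """One incremental step: grow if x is poorly covered, then prune."""
        uppers = [self._firing(x, r)[1] for r in self.rules]
        if not uppers or max(uppers) < self.grow_thresh:
            # Grow: add a new IT2 rule centred on x with an assumed mean spread.
            self.rules.append((x - spread, x + spread))
        # Prune: keep a rule only if it is not redundant w.r.t. rules kept so far,
        # so existing input-output mappings are disturbed as little as possible.
        kept = []
        for r in self.rules:
            if all(self._overlap(r, k) < self.prune_thresh for k in kept):
                kept.append(r)
        self.rules = kept


# Usage: stream a batch of scalar inputs; near-duplicate inputs add no rules.
rb = IT2RuleBase()
for x in [0.10, 0.12, 0.50, 0.90, 0.11]:
    rb.update(x)
print(len(rb.rules))   # 3 rules survive: around 0.1, 0.5 and 0.9
```

In this reading, growth fires only when no existing rule covers the new sample well (low upper firing strength), and pruning merges away rules whose footprints largely coincide, which is one way to obtain the "growing but sustainable" structure the abstract describes.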
