
    Interval type-2 intuitionistic fuzzy logic system for time series and identification problems - a comparative study

    This paper proposes sliding mode control-based learning of an interval type-2 intuitionistic fuzzy logic system for time series and identification problems. Until now, derivative-based algorithms such as gradient descent back propagation, the extended Kalman filter, the decoupled extended Kalman filter and a hybrid of the decoupled extended Kalman filter and gradient descent have been utilized to optimize the parameters of interval type-2 intuitionistic fuzzy logic systems. The proposed model is based on a Takagi-Sugeno-Kang inference system. The model is evaluated on both real-world and artificially generated datasets. Analysis of the results reveals that the proposed interval type-2 intuitionistic fuzzy logic system trained with the derivative-free sliding mode control learning algorithm outperforms some existing models in terms of test root mean squared error while competing favourably with other models in the literature. Moreover, compared to the derivative-based models, the proposed model may be a good choice for real-time applications where running time is paramount.
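
    As a rough illustration of the kind of training rule described above, the sketch below applies a sliding-mode-style, error-sign update to a zero-order TSK fuzzy forecaster. It is a minimal sketch only, assuming type-1 Gaussian memberships (the paper's interval type-2 intuitionistic memberships additionally carry upper/lower membership and non-membership grades); the rule count, the gain and the toy series are illustrative, not taken from the paper.

import numpy as np

# Minimal sketch of sliding-mode-style learning for a zero-order TSK fuzzy
# forecaster. Simplified to type-1 Gaussian memberships; the interval type-2
# intuitionistic system in the paper additionally carries upper/lower
# membership and non-membership grades. All names and the gain are assumptions.

rng = np.random.default_rng(0)
n_rules, n_inputs = 5, 2
centers = rng.uniform(-1, 1, (n_rules, n_inputs))   # Gaussian rule centres
sigmas = np.full((n_rules, n_inputs), 0.5)           # Gaussian rule widths
y_conseq = rng.uniform(-1, 1, n_rules)               # zero-order consequents
alpha = 0.05                                         # SMC-style gain (assumed)

def firing(x):
    """Normalized rule firing strengths for an input vector x."""
    w = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    return w / (w.sum() + 1e-12)

def predict(x):
    return firing(x) @ y_conseq

series = np.sin(0.3 * np.arange(300))                # toy series
sq_errs = []
for t in range(2, len(series)):
    x = series[t - 2:t]                    # two lagged (AR) inputs
    e = predict(x) - series[t]             # sliding surface taken as the error
    # derivative-free update: direction from sign(e), weighted by firing strength
    y_conseq -= alpha * np.sign(e) * firing(x)
    sq_errs.append(e ** 2)

print("training RMSE over the last 100 steps:", np.sqrt(np.mean(sq_errs[-100:])))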

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The RUBICON project develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

    Recurrent error-based ridge polynomial neural networks for time series forecasting

    Time series forecasting has attracted much attention due to its impact on many practical applications. Neural networks (NNs) have been attracting widespread interest as a promising tool for time series forecasting. The majority of NNs employ only autoregressive (AR) inputs (i.e., lagged time series values) when forecasting time series. Moving-average (MA) inputs (i.e., errors), however, have not been adequately considered. The use of MA inputs, which can be achieved by feeding back forecasting errors as extra network inputs, alongside AR inputs helps to produce more accurate forecasts. Among the numerous existing NN architectures, higher order neural networks (HONNs), which have a single layer of learnable weights, were considered in this work because they have demonstrated an ability to deal with time series forecasting and have a simple architecture. Based on two HONN models, namely the feedforward ridge polynomial neural network (RPNN) and the recurrent dynamic ridge polynomial neural network (DRPNN), two recurrent error-based models were proposed: the ridge polynomial neural network with error feedback (RPNN-EF) and the ridge polynomial neural network with error-output feedbacks (RPNN-EOF). Extensive simulations covering ten time series were performed. Besides RPNN and DRPNN, a pi-sigma neural network and a Jordan pi-sigma neural network were used for comparison. Simulation results showed that introducing error feedback to the models leads to significant forecasting performance improvements. Furthermore, it was found that the proposed models outperformed many state-of-the-art models. It was concluded that the proposed models have the capability to efficiently forecast time series and that practitioners could benefit from using these forecasting models.
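
    The error-feedback (MA input) idea lends itself to a short sketch: alongside lagged values, the previous forecast error is appended to the input vector. The forecaster below is a single second-order pi-sigma unit (one building block of the ridge polynomial family) trained with plain gradient descent, which is a simplification of the paper's RPNN-EF/RPNN-EOF; all sizes and the learning rate are assumed.

import numpy as np

# Minimal sketch of the error-feedback (moving-average input) idea: alongside
# lagged series values (AR inputs), the previous forecast error is fed back as
# an extra input. The forecaster is a single second-order pi-sigma unit (one
# building block of the ridge polynomial family) trained by gradient descent,
# a simplification of RPNN-EF/RPNN-EOF; sizes and the learning rate are assumed.

rng = np.random.default_rng(1)
p = 3                                   # number of AR (lagged) inputs
dim = p + 2                             # + 1 error-feedback input + 1 bias
W = rng.normal(0.0, 0.1, (2, dim))      # two summing units, multiplied together
lr = 0.01

def forward(z):
    s = W @ z                           # the two sigma (summing) units
    return s[0] * s[1], s               # pi (product) unit output

series = np.sin(0.2 * np.arange(500)) + 0.05 * rng.normal(size=500)
prev_err, sq_errs = 0.0, []
for t in range(p, len(series)):
    z = np.concatenate([series[t - p:t], [prev_err, 1.0]])   # AR + MA + bias
    y_hat, s = forward(z)
    err = y_hat - series[t]
    # gradient of 0.5 * err**2 with respect to each summing unit's weights
    W[0] -= lr * err * s[1] * z
    W[1] -= lr * err * s[0] * z
    prev_err = err                      # error fed back at the next step
    sq_errs.append(err ** 2)

print("training RMSE over the last 100 steps:", np.sqrt(np.mean(sq_errs[-100:])))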

    A self-adaptive artificial bee colony algorithm with local search for TSK-type neuro-fuzzy system training

    In this paper, we introduce a self-adaptive artificial bee colony (ABC) algorithm for learning the parameters of a Takagi-Sugeno-Kang-type (TSK-type) neuro-fuzzy system (NFS). The proposed NFS learns fuzzy rules for the premise part of the fuzzy system using an adaptive clustering method according to the input-output data at hand, thereby establishing the network structure. All the free parameters in the NFS, including the premise parameters and the TSK-type consequent parameters, are optimized by the modified ABC (MABC) algorithm. The experiments involve two parts: numerical optimization problems and dynamic system identification problems. In the first part, the proposed MABC is compared to the standard ABC on mathematical optimization problems. In the remaining experiments, the performance of the proposed method is compared with other metaheuristic methods, including differential evolution (DE), the genetic algorithm (GA), particle swarm optimization (PSO) and the standard ABC, to evaluate the effectiveness and feasibility of the system. The simulation results show that the proposed method provides better approximation results than those obtained by the competing methods.
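
    For context, the sketch below shows the standard ABC loop (employed, onlooker and scout bee phases) that a modified, self-adaptive variant such as MABC would build on. The candidate solution is a flat parameter vector (for an NFS it would hold the flattened premise and consequent parameters) and the sphere function stands in for the training error; the paper's self-adaptive modifications are not reproduced here.

import numpy as np

# Minimal sketch of the standard artificial bee colony (ABC) loop. A candidate
# solution is a flat parameter vector; the sphere function stands in for the
# NFS training error. Colony size, limit and iteration count are assumptions.

rng = np.random.default_rng(2)
dim, n_food, limit, iters = 10, 20, 30, 200
lo, hi = -5.0, 5.0

def cost(x):                            # stand-in for the NFS training error
    return np.sum(x ** 2)

food = rng.uniform(lo, hi, (n_food, dim))
fit = np.array([cost(x) for x in food])
trials = np.zeros(n_food, dtype=int)

def neighbor(i):
    """Perturb one dimension of source i toward/away from a random partner."""
    k = rng.integers(n_food)
    while k == i:
        k = rng.integers(n_food)
    j = rng.integers(dim)
    v = food[i].copy()
    v[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
    return np.clip(v, lo, hi)

def greedy(i, v):
    """Keep the better of the current source and its neighbour."""
    c = cost(v)
    if c < fit[i]:
        food[i], fit[i], trials[i] = v, c, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_food):                        # employed bee phase
        greedy(i, neighbor(i))
    probs = 1.0 / (1.0 + fit)
    probs /= probs.sum()
    for _ in range(n_food):                        # onlooker bee phase
        i = rng.choice(n_food, p=probs)
        greedy(i, neighbor(i))
    for i in range(n_food):                        # scout bee phase
        if trials[i] > limit:
            food[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = cost(food[i]), 0

print("best cost found:", fit.min())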

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events distant in the past of the input sequence, in order to solve long time lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training a growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behaviour of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches and against our previous work applying conventional LSTM to the task at hand.
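
    A minimal PyTorch sketch of the growth idea follows: the model starts with one LSTM block and later appends another in cascade, optionally freezing the previously trained weights (one of the variants compared above). The layer sizes, growth schedule and freezing policy are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn

# Minimal sketch of a "growing" LSTM: new memory blocks are appended in
# cascade and earlier weights can be frozen. Sizes and the growth schedule
# are illustrative assumptions only.

class GLSTM(nn.Module):
    def __init__(self, n_in, hidden, n_out):
        super().__init__()
        self.blocks = nn.ModuleList([nn.LSTM(n_in, hidden, batch_first=True)])
        self.head = nn.Linear(hidden, n_out)
        self.hidden = hidden

    def grow(self, freeze_previous=True):
        """Cascade growth: the new block consumes the last block's output."""
        if freeze_previous:
            for p in self.blocks.parameters():
                p.requires_grad = False
        self.blocks.append(nn.LSTM(self.hidden, self.hidden, batch_first=True))
        # re-initialize the output layer for the new topmost block
        self.head = nn.Linear(self.hidden, self.head.out_features)

    def forward(self, x):
        for lstm in self.blocks:
            x, _ = lstm(x)
        return self.head(x[:, -1])       # predict from the last time step

# Toy usage: train briefly, grow, then keep training only the new parameters.
model = GLSTM(n_in=1, hidden=8, n_out=1)
x = torch.randn(16, 20, 1)               # batch of 16 sequences, length 20
y = torch.randn(16, 1)
loss_fn = nn.MSELoss()
for phase in range(2):
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-2)
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    if phase == 0:
        model.grow(freeze_previous=True)
print("final toy loss:", loss.item())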

    Learning Recurrent ANFIS Using Stochastic Pattern Search Method

    Pattern search learning is known for its simplicity and fast convergence. However, one of the downfalls of this kind of learning is premature convergence. In this paper, we show how the possibility of being trapped in a local optimum can be avoided by introducing a stochastic value. This improved pattern search is then applied to a recurrent-type neuro-fuzzy network (ANFIS) to solve time series prediction. Comparison with other methods shows the effectiveness of the proposed method for this problem.
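
    The sketch below illustrates the general idea of pattern (coordinate) search with an added stochastic perturbation to escape local optima. A multimodal test function stands in for the recurrent ANFIS prediction error; the network itself, the step schedule and the jitter scale are not taken from the paper.

import numpy as np

# Minimal sketch of pattern (coordinate) search with a stochastic jitter added
# to the exploratory moves. The Rastrigin function stands in for the recurrent
# ANFIS training error; step sizes and the jitter scale are assumptions.

rng = np.random.default_rng(3)

def cost(x):                             # stand-in for the ANFIS training error
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

x = rng.uniform(-5, 5, 4)
best = cost(x)
step = 1.0
for _ in range(500):
    improved = False
    for j in range(len(x)):              # exploratory moves per coordinate
        for d in (+step, -step):
            trial = x.copy()
            trial[j] += d
            # stochastic component: random jitter on top of the pattern move
            trial += rng.normal(0.0, 0.1 * step, len(x))
            c = cost(trial)
            if c < best:
                x, best, improved = trial, c, True
                break
    if not improved:
        step *= 0.5                      # contract the pattern
    if step < 1e-6:
        break

print("best cost:", best, "at", x)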

    Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A Survey

    Major assumptions in computational intelligence and machine learning are the availability of a historical dataset for model development, and that the resulting model will, to some extent, handle similar instances during its online operation. However, in many real-world applications these assumptions may not hold, as the amount of previously available data may be insufficient to represent the underlying system, and the environment and the system may change over time. As the amount of data increases, it is no longer feasible to process data efficiently using iterative algorithms, which typically require multiple passes over the same portions of data. Evolving modeling from data streams has emerged as a framework to address these issues properly through self-adaptation, single-pass learning steps, and evolution as well as contraction of model components on demand and on the fly. This survey focuses on evolving fuzzy rule-based models and neuro-fuzzy networks for clustering, classification, regression and system identification in online, real-time environments where learning and model development should be performed incrementally.
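
    As a concrete illustration of the single-pass, evolving-structure behaviour this survey covers, the sketch below grows a rule base from a drifting data stream: each sample either updates its nearest rule centre recursively or, if it lies beyond a distance threshold, spawns a new rule, with low-support rules pruned afterwards. The threshold and pruning criterion are generic assumptions, not any specific published method.

import numpy as np

# Minimal sketch of the single-pass, evolving-structure pattern shared by this
# family of methods: a sample either updates its nearest fuzzy rule (recursive
# centre update) or spawns a new rule on the fly. The distance threshold and
# the pruning rule are illustrative assumptions.

rng = np.random.default_rng(4)
radius = 0.6                  # rule "zone of influence" (assumed)
centers, counts = [], []      # rule centres and support counts

def process(x):
    if not centers:
        centers.append(x.copy())
        counts.append(1)
        return
    d = [np.linalg.norm(x - c) for c in centers]
    i = int(np.argmin(d))
    if d[i] > radius:                    # evolution: add a new rule
        centers.append(x.copy())
        counts.append(1)
    else:                                # adaptation: recursive mean update
        counts[i] += 1
        centers[i] += (x - centers[i]) / counts[i]

# Stream of samples whose distribution drifts over time (concept drift)
for t in range(2000):
    drift = np.array([0.002 * t, 0.0])
    process(rng.normal(0.0, 0.3, 2) + drift)

# contraction: drop rules with very little support
keep = [i for i, n in enumerate(counts) if n >= 5]
print(f"{len(centers)} rules created, {len(keep)} kept after pruning")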