29,432 research outputs found

    Evolving Ensemble Fuzzy Classifier

    Full text link
    The concept of ensemble learning offers a promising avenue for learning from data streams in complex environments because it addresses the bias-variance dilemma better than its single-model counterpart and features a reconfigurable structure well suited to the given context. While various extensions of ensemble learning for mining non-stationary data streams can be found in the literature, most of them are crafted around a static base classifier and revisit preceding samples in a sliding window for retraining. This makes their complexity computationally prohibitive and leaves them insufficiently flexible to cope with rapidly changing environments; they typically maintain a large collection of offline classifiers because they lack both a structural complexity reduction mechanism and an online feature selection mechanism. A novel evolving ensemble classifier, namely Parsimonious Ensemble (pENsemble), is proposed in this paper. pENsemble differs from existing architectures in that it is built upon an evolving classifier for data streams, termed Parsimonious Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism, which estimates a localized generalization error of each base classifier. A dynamic online feature selection scenario is integrated into pENsemble, allowing input features to be selected and deselected on the fly. pENsemble adopts a dynamic ensemble structure to produce the final classification decision and features a novel drift detection scenario to grow the ensemble structure. The efficacy of pENsemble has been demonstrated through rigorous numerical studies with dynamic and evolving data streams, where it delivers the most encouraging performance in attaining a tradeoff between accuracy and complexity.
    Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
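
    Loosely, the grow-on-drift and prune-by-error loop described above can be sketched in a few lines. The snippet below is illustrative only and is not the published pENsemble/pClass implementation: scikit-learn's SGDClassifier stands in for the evolving fuzzy base learner, the class and parameter names are invented for the example, and the drift detector and pruning rule are naive windowed error-rate heuristics rather than the paper's drift test and localized generalization error estimate.

        # Illustrative evolving-ensemble skeleton: members are added when a crude
        # drift signal fires and pruned when their recent error is poor. Not the
        # pENsemble/pClass algorithm; thresholds and learners are placeholders.
        import numpy as np
        from collections import deque
        from sklearn.linear_model import SGDClassifier

        class EvolvingEnsemble:
            def __init__(self, classes, window=200, drift_err=0.35, prune_err=0.6):
                self.classes = classes
                self.window = window
                self.drift_err = drift_err            # ensemble error rate that triggers growth
                self.prune_err = prune_err            # member error rate that triggers removal
                self.members = []                     # list of (model, recent-error window)
                self.errors = deque(maxlen=window)    # ensemble-level recent errors

            def _grow(self, x, y):
                model = SGDClassifier()
                model.partial_fit(x.reshape(1, -1), [y], classes=self.classes)
                self.members.append((model, deque(maxlen=self.window)))

            def predict(self, x):
                votes = [m.predict(x.reshape(1, -1))[0] for m, _ in self.members]
                return max(set(votes), key=votes.count)           # simple majority vote

            def learn_one(self, x, y):
                if not self.members:
                    self._grow(x, y)
                    return
                self.errors.append(int(self.predict(x) != y))     # test ...
                for model, errs in self.members:                  # ... then train
                    errs.append(int(model.predict(x.reshape(1, -1))[0] != y))
                    model.partial_fit(x.reshape(1, -1), [y])
                if len(self.errors) == self.window and np.mean(self.errors) > self.drift_err:
                    self._grow(x, y)                              # grow the ensemble on "drift"
                    self.errors.clear()
                kept = [(m, e) for m, e in self.members
                        if len(e) < self.window or np.mean(e) < self.prune_err]
                self.members = kept or self.members[:1]           # prune weak members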

    An adaptive neuro-fuzzy propagation model for LoRaWAN

    Get PDF
    This article proposes an adaptive-network-based fuzzy inference system (ANFIS) model for accurate estimation of signal propagation using LoRaWAN. By using ANFIS, basic knowledge of propagation is embedded into the proposed model. This reduces the training complexity of artificial neural network (ANN)-based models; as a result, the size of the training dataset is reduced by 70% compared to an ANN model. The proposed model consists of an efficient clustering method to identify the optimum number of fuzzy nodes and avoid overfitting, and a hybrid training algorithm to train and optimize the ANFIS parameters. Finally, the proposed model is benchmarked with extensive practical data, where superior accuracy is achieved compared to deterministic models and better generalization is attained compared to ANN models. The proposed model outperforms the nondeterministic models in terms of accuracy, has the flexibility to account for new modeling parameters, is easier to use as it does not require a model of the propagation environment, is resistant to data collection inaccuracies and uncertain environmental information, has excellent generalization capability, and features a knowledge-based implementation that alleviates the training process. This work will facilitate network planning and propagation prediction in complex scenarios.
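
    To make the ANFIS structure concrete, the sketch below shows a zero-order Takagi-Sugeno forward pass with Gaussian membership functions, which is the kind of rule base ANFIS tunes. It is illustrative only: the two rules, their parameters, and the function names are invented for the example and are not the authors' LoRaWAN propagation model; in ANFIS they would be produced by the clustering step and fitted by the hybrid training the abstract mentions.

        # Zero-order Takagi-Sugeno (ANFIS-style) forward pass with Gaussian
        # membership functions. Rule parameters below are made-up placeholders.
        import numpy as np

        def gaussian_mf(x, centre, sigma):
            """Membership degree of x in a Gaussian fuzzy set."""
            return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

        def tsk_predict(x, rules):
            """x: 1-D input vector; rules: list of (centres, sigmas, consequent)."""
            firing = np.array([np.prod(gaussian_mf(x, c, s)) for c, s, _ in rules])
            weights = firing / firing.sum()                  # normalized firing strengths
            outputs = np.array([q for _, _, q in rules])     # constant (zero-order) consequents
            return float(weights @ outputs)                  # weighted-sum defuzzification

        # Toy example: path loss (dB) from [distance_km, obstruction_index] with 2 rules.
        rules = [
            (np.array([0.5, 0.2]), np.array([0.4, 0.3]), 110.0),   # "near, open"  -> low loss
            (np.array([3.0, 0.8]), np.array([1.5, 0.3]), 140.0),   # "far, dense"  -> high loss
        ]
        print(tsk_predict(np.array([1.2, 0.5]), rules))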

    An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams

    Full text link
    Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, which has lower generalization power than deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects covariate drift, i.e. variations of the input space, but also accurately identifies real drift, i.e. dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the dimensionality of the input space due to the feature augmentation approach used in building a deep network structure. DEVFNN works in a sample-wise fashion and is suitable for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.
    Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
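
    The stacked-generalization-with-feature-augmentation idea can be sketched roughly as follows. This is not the DEVFNN/gClass implementation: GaussianNB is used as a stand-in for the fuzzy layers, the class name is invented for the example, and the sketch omits drift-driven depth control and layer merging; it only shows each new layer receiving the original features augmented with the previous layers' outputs.

        # Illustrative depth-by-feature-augmentation skeleton: layer k is trained on
        # the original inputs concatenated with the class probabilities emitted by
        # layers 0..k-1. Any incremental probabilistic classifier would do here.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        class StackedAugmentedNet:
            def __init__(self, classes):
                self.classes = classes
                self.layers = []                    # incremental classifiers, one per layer

            def _augment(self, X, upto=None):
                """Append each of the first `upto` layers' probabilities to X."""
                Z = X
                for layer in self.layers[:upto]:
                    Z = np.hstack([Z, layer.predict_proba(Z)])
                return Z

            def add_layer(self, X, y):
                """Deepen the network (e.g. when a drift detector fires)."""
                layer = GaussianNB()
                layer.partial_fit(self._augment(X), y, classes=self.classes)
                self.layers.append(layer)

            def partial_fit(self, X, y):
                for i, layer in enumerate(self.layers):
                    layer.partial_fit(self._augment(X, upto=i), y)

            def predict(self, X):
                return self.layers[-1].predict(self._augment(X, upto=len(self.layers) - 1))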

    Branes, Quantization and Fuzzy Spheres

    Full text link
    We propose generalized quantization axioms for Nambu-Poisson manifolds, which allow for a geometric interpretation of n-Lie algebras and their enveloping algebras. We illustrate these axioms by describing extensions of Berezin-Toeplitz quantization to produce various examples of quantum spaces of relevance to the dynamics of M-branes, such as fuzzy spheres in diverse dimensions. We briefly describe preliminary steps towards making the notion of quantized 2-plectic manifolds rigorous by extending the groupoid approach to quantization of symplectic manifolds.
    Comment: 18 pages; based on a review talk at the Workshop on "Noncommutative Field Theory and Gravity", Corfu Summer Institute on Elementary Particles and Physics, September 8-12, 2010, Corfu, Greece; to be published in Proceedings of Science.
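
    For orientation, the best-known example of such a quantum space is the fuzzy two-sphere, recalled below in its standard textbook form (the conventions here are generic and not necessarily those of the talk).

        % Fuzzy two-sphere: coordinates are rescaled su(2) generators J_i acting on
        % the (2j+1)-dimensional irreducible representation.
        \[
          x_i \;=\; \frac{R}{\sqrt{j(j+1)}}\, J_i , \qquad
          [\,x_i , x_j\,] \;=\; \frac{i R}{\sqrt{j(j+1)}}\, \varepsilon_{ijk}\, x_k , \qquad
          \sum_{i=1}^{3} x_i^{2} \;=\; R^{2}\,\mathbf{1} ,
        \]
        % so the algebra of functions on the sphere is truncated to (2j+1)x(2j+1)
        % matrices, and the commutative sphere is recovered in the limit j -> infinity.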

    Platonic model of mind as an approximation to neurodynamics

    Get PDF
    A hierarchy of approximations involved in the simplification of microscopic theories, from the subcellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Get PDF
    Medical applications have usually used Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
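
    As a concrete reference point for the "kernel expansion plus linear output layer" reading, the sketch below builds a small RBF network with k-means centres, Gaussian activations, and least-squares output weights. It is a generic illustration with invented names and parameters, not tied to any particular formulation surveyed in the paper.

        # Minimal RBF network: centres from k-means, Gaussian basis activations,
        # output weights fitted by ordinary least squares.
        import numpy as np
        from sklearn.cluster import KMeans

        class RBFNetwork:
            def __init__(self, n_centres=10, gamma=1.0):
                self.n_centres = n_centres
                self.gamma = gamma        # width parameter of the Gaussian basis functions

            def _design(self, X):
                # Phi[i, j] = exp(-gamma * ||x_i - c_j||^2)
                d2 = ((X[:, None, :] - self.centres_[None, :, :]) ** 2).sum(-1)
                return np.exp(-self.gamma * d2)

            def fit(self, X, y):
                self.centres_ = KMeans(n_clusters=self.n_centres, n_init=10).fit(X).cluster_centers_
                Phi = self._design(X)
                self.weights_, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer
                return self

            def predict(self, X):
                return self._design(X) @ self.weights_

        # Toy regression example
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
        print(RBFNetwork(n_centres=8, gamma=2.0).fit(X, y).predict(X[:5]))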

    Tameness in generalized metric structures

    Full text link
    We broaden the framework of metric abstract elementary classes (mAECs) in several essential ways, chiefly by allowing the metric to take values in a well-behaved quantale. As a proof of concept, we show that the result of Boney and Zambrano on (metric) tameness under a large cardinal assumption holds in this more general context. We briefly consider a further generalization to partial metric spaces, and hint at connections to classes of fuzzy structures and to structures on sheaves.
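
    To fix ideas, one common (Lawvere-style) way to phrase a quantale-valued metric is recorded below; the paper's precise axioms for "well-behaved" quantales may differ.

        % A V-valued metric on a set X, for a commutative quantale (V, \le, \otimes, k):
        \[
          d : X \times X \longrightarrow \mathcal{V}, \qquad
          k \,\le\, d(x,x), \qquad
          d(x,y) \otimes d(y,z) \,\le\, d(x,z).
        \]
        % Taking \mathcal{V} = ([0,\infty], \ge, +, 0) recovers the usual axioms
        % d(x,x) = 0 and d(x,z) \le d(x,y) + d(y,z) of an (extended) metric.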