
    An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams

    Existing FNNs are mostly developed under a shallow network configuration and exhibit lower generalization power than deep structures. This paper proposes DEVFNN, a novel self-organizing deep fuzzy neural network. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both feature space and target space). DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure based on hidden layer merging is put forward to prevent uncontrollable growth of the input-space dimensionality caused by the feature augmentation approach to building a deep network structure. DEVFNN works in a sample-wise fashion and is compatible with data stream applications. Its efficacy has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol, in comparison with four popular continual learning algorithms and its shallow counterpart, where DEVFNN demonstrates improved classification accuracy. Moreover, the concept drift detection method is shown to be an effective tool for controlling the depth of the network structure, while the hidden layer merging scenario simplifies the complexity of a deep network with negligible compromise of generalization performance.

    Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
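    The depth-on-demand idea can be pictured with a toy drift monitor: grow the stack only when running error statistics signal real drift. The sketch below is a minimal illustration under assumed names and thresholds (the DriftDetector class, window sizes, and the three-standard-error rule are all inventions for this example); it is not the paper's gClass-based detector.

```python
# Illustrative sketch only: a threshold-based drift detector driving depth
# growth. DEVFNN's actual detector and thresholds differ.
import numpy as np

rng = np.random.default_rng(0)

class DriftDetector:
    """Flag drift when the short-term error rate exceeds the long-term
    mean by three standard errors (a simple Hoeffding-style heuristic)."""
    def __init__(self, window=500, recent=50):
        self.window, self.recent, self.errors = window, recent, []

    def update(self, error):
        self.errors = (self.errors + [error])[-self.window:]
        if len(self.errors) < self.window:
            return False
        mu = np.mean(self.errors)
        se = np.std(self.errors) / np.sqrt(self.recent)
        return np.mean(self.errors[-self.recent:]) > mu + 3 * se

depth, detector = 1, DriftDetector()
for t in range(5000):
    p_err = 0.1 if t < 2500 else 0.4      # error rate jumps at the drift point
    misclassified = float(rng.random() < p_err)
    if detector.update(misclassified):
        depth += 1                        # deepen the network on detected drift
        detector = DriftDetector()        # restart statistics for the new depth
print("final depth:", depth)
```

    Resetting the detector after each growth step mirrors the intuition that a newly deepened network should be judged on fresh statistics.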

    Machine Learning Techniques to Mitigate Nonlinear Phase Noise in Moderate Baud Rate Optical Communication Systems

    Nonlinear phase noise (NLPN) is the most common impairment degrading the performance of radio-over-fiber networks. In the constellation diagram, NLPN distorts the shape of symbols, increasing the symbol error rate due to symbol overlapping when a conventional demodulation grid is used. Symbol shape characterization was obtained experimentally at a moderate baud rate (250 MBd) for constellations impaired by phase noise due to a mismatch between the optical carrier and the transmitted radio-frequency signal. Machine learning algorithms have become a powerful tool for monitoring and for identifying and mitigating distortions introduced in both the electrical and optical domains. Clustering-based demodulation assisted with Voronoi contours enables the definition of non-Gaussian boundaries, providing flexible demodulation of 16-QAM and 4+12 PSK modulation formats. Phase offset and in-phase/quadrature imbalance may be detected on the received constellation and compensated by applying thresholding boundaries obtained from impairment characterization through statistical analysis. Experimental results show improved tolerance to the optical signal-to-noise ratio (OSNR) obtained with clustering methods based on k-means and fuzzy c-means Gustafson-Kessel algorithms. Improvements of 3.2 dB for 16-QAM and 1.4 dB for 4+12 PSK on the OSNR scale, as a function of the bit error rate, are obtained without requiring additional compensation algorithms.
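    As a rough illustration of clustering-based demodulation, the sketch below fits k-means to a simulated 16-QAM constellation and demodulates by nearest learned centroid, i.e., the Voronoi partition the centroids induce. The simulated channel, noise levels, and variable names are assumptions for illustration; the paper's experimental pipeline (including the fuzzy c-means Gustafson-Kessel variant) is not reproduced here.

```python
# Clustering-based demodulation sketch: fit k-means to received 16-QAM symbols
# and demodulate by nearest learned centroid. Toy channel, not the experiment.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
levels = np.array([-3, -1, 1, 3])
ideal = np.array([complex(i, q) for i in levels for q in levels])  # 16-QAM grid

tx = rng.integers(0, 16, 20000)                     # transmitted symbol indices
phase_noise = 0.05 * rng.standard_normal(tx.size)   # toy phase jitter (rad)
rx = ideal[tx] * np.exp(1j * phase_noise) + 0.3 * (
    rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

X = np.column_stack([rx.real, rx.imag])
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)

# Map each learned centroid to the nearest ideal symbol, then demodulate.
cent = km.cluster_centers_[:, 0] + 1j * km.cluster_centers_[:, 1]
label_to_sym = np.abs(cent[:, None] - ideal[None, :]).argmin(axis=1)
detected = label_to_sym[km.labels_]
print("symbol error rate:", np.mean(detected != tx))
```

    Because the decision regions follow the learned centroids rather than a fixed grid, the same code tolerates constellations whose symbol clouds are shifted or distorted by phase noise.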

    Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A Survey

    Major assumptions in computational intelligence and machine learning are that a historical dataset is available for model development and that the resulting model will, to some extent, handle similar instances during its online operation. However, in many real-world applications these assumptions may not hold: the amount of previously available data may be insufficient to represent the underlying system, and the environment and the system may change over time. As the amount of data increases, it is no longer feasible to process data efficiently using iterative algorithms, which typically require multiple passes over the same portions of data. Evolving modeling from data streams has emerged as a framework to address these issues properly through self-adaptation, single-pass learning steps, and evolution as well as contraction of model components on demand and on the fly. This survey focuses on evolving fuzzy rule-based models and neuro-fuzzy networks for clustering, classification, regression, and system identification in online, real-time environments where learning and model development should be performed incrementally. (C) 2019 Published by Elsevier Inc.

    Igor Škrjanc, Jose Antonio Iglesias, and Araceli Sanchis would like to thank the Chair of Excellence of Universidad Carlos III de Madrid and the Bank of Santander Program for their support. Igor Škrjanc is grateful to the Slovenian Research Agency for the research program P2-0219, Modeling, simulation and control. Daniel Leite acknowledges the Minas Gerais Foundation for Research and Development (FAPEMIG), process APQ-03384-18. Igor Škrjanc and Edwin Lughofer acknowledge the support of the "LCM-K2 Center for Symbiotic Mechatronics" within the framework of the Austrian COMET-K2 program. Fernando Gomide is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 305906/2014-3.
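    The single-pass, evolve-on-demand flavor surveyed here can be shown in a few lines: each incoming sample either updates the nearest cluster prototype incrementally or spawns a new cluster when it falls outside every prototype's radius. The rule and its radius parameter below are illustrative, not a specific algorithm from the survey.

```python
import numpy as np

def evolving_clusters(stream, radius=1.0):
    """Single-pass prototype clustering: each sample either updates the
    nearest prototype (incremental mean) or spawns a new one on the fly."""
    centers, counts = [], []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(d))
            if d[j] <= radius:                 # close enough: adapt prototype
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]
                continue
        centers.append(x.copy())               # novelty: evolve a new cluster
        counts.append(1)
    return np.array(centers), counts

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(m, 0.2, (200, 2)) for m in ([0, 0], [3, 3], [0, 3])])
rng.shuffle(data)
centers, counts = evolving_clusters(data, radius=1.0)
print(len(centers), "clusters evolved")
```

    Note that every sample is seen exactly once, which is what makes this family of methods viable for unbounded streams where iterative multi-pass algorithms are ruled out.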

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques, eliminating the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.

    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.

    Semantic image retrieval using relevance feedback and transaction logs

    Due to recent improvements in digital photography and storage capacity, storing large amounts of images has become possible, and efficient means to retrieve images matching a user's query are needed. Content-based image retrieval (CBIR) systems automatically extract image contents based on image features, i.e., color, texture, and shape. Relevance feedback methods are applied to CBIR to integrate users' perceptions and reduce the gap between high-level image semantics and low-level image features. This dissertation improves the precision of a CBIR system in retrieving semantically rich (complex) images by making advancements in three areas of the system: input, process, and output. The input of the system includes a mechanism that provides the user with the tools required to build and modify her query through feedback. User behavior in CBIR environments is studied, and a new feedback methodology is presented to efficiently capture users' image perceptions. The process element includes image learning and retrieval algorithms. A long-term learner (LTL), which learns image semantics from prior search results available in the system's transaction history, is developed using factor analysis. Another algorithm, a short-term learner (STL) that captures the user's image perceptions based on image features and the user's feedback in the ongoing transaction, is developed based on linear discriminant analysis. A mechanism is then introduced to integrate these two algorithms into one retrieval procedure. Finally, a retrieval strategy comprising learning and searching phases is defined for arranging images in the output of the system. The developed relevance feedback methodology proved to reduce the effect of human subjectivity in providing feedback for complex images. The retrieval algorithms were applied to images with different degrees of complexity: LTL is efficient in extracting the semantics of complex images that have a history in the system, while STL is suitable for queries and images that can be effectively represented by their image features. Consequently, the performance of the system in retrieving images with both visual and conceptual complexity improved when the two algorithms were applied simultaneously. Finally, the strategy of retrieval phases demonstrated promising results as query complexity increases.
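    The short-term learner's core step can be sketched with off-the-shelf linear discriminant analysis: fit a discriminant to the session's relevant/irrelevant feedback images and rank the whole collection along the learned direction. The feature vectors and feedback indices below are toy assumptions, not the dissertation's actual features or data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
db = rng.standard_normal((1000, 8))             # toy low-level feature vectors

# User feedback from the current session: indices marked relevant / irrelevant.
rel, irr = [3, 41, 77, 120, 256], [5, 9, 300, 512, 800]
X = db[rel + irr]
y = np.array([1] * len(rel) + [0] * len(irr))

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(db)              # project all images onto the
ranking = np.argsort(-scores)                   # learned relevance direction
print("top 10 candidates:", ranking[:10])
```

    A long-term counterpart would instead mine the accumulated transaction log across sessions, which is where the dissertation's factor-analysis-based LTL comes in.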

    Genetically Engineered Adaptive Resonance Theory (ART) Neural Network Architectures

    Fuzzy ARTMAP (FAM) is currently considered one of the premier neural network architectures for solving classification problems. One of its limitations, extensively reported in the literature, is the category proliferation problem: Fuzzy ARTMAP tends to increase its network size as it is confronted with more and more data, especially if the data is noisy and/or overlapping. To remedy this problem, a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that have the beneficial effect of reducing this phenomenon. In this thesis we propose a new approach to handling the category proliferation problem in Fuzzy ARTMAP by evolving trained FAM architectures; we refer to the resulting architectures as GFAM. We demonstrate through extensive experimentation that an evolved FAM (GFAM) exhibits good (sometimes optimal) generalization and small (sometimes optimal) size, and requires reasonable computational effort to produce an optimal or sub-optimal network. Furthermore, comparisons of GFAM with other approaches proposed in the literature to address the FAM category proliferation problem illustrate that GFAM has a number of advantages: it produces architectures of smaller or equal size, with better or equally good generalization, at reduced computational complexity. We have also extended the approach used with Fuzzy ARTMAP to other ART architectures that suffer from the category proliferation problem, such as Ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM). Thus, we have designed and experimented with genetically engineered EAM and GAM architectures, named GEAM and GGAM. Comparisons of GEAM and GGAM with other ART architectures introduced in the literature to address the category proliferation problem illustrate advantages similar to those observed for GFAM: GEAM and GGAM produce smaller ART architectures, with better generalization, at reduced computational complexity. Moreover, to optimally cover the input space of a problem, we proposed a genetically engineered ART architecture that combines the category structures of two different ART networks, FAM and EAM; we named this architecture UART (Universal ART). We analyzed the order of search in UART, that is, the order according to which a FAM category or an EAM category is accessed in UART. This analysis allowed us to better understand UART's functionality. Experiments were also conducted to compare UART with other ART architectures, in a similar fashion to the comparisons of GFAM and GEAM, and similar conclusions were drawn. Finally, we analyzed the computational complexity of the genetically engineered ART architectures and compared it with that of other ART architectures introduced in the literature. This analytical comparison verified our claim that the genetically engineered ART architectures produce better generalization and smaller ART structures at reduced computational complexity compared to other ART approaches. In review, a methodology was introduced for combining the answers (categories) of ART architectures using genetic algorithms. This methodology was successfully applied to FAM, EAM, and GAM, as well as to the combination of FAM and EAM, resulting in ART neural networks that outperformed other ART architectures previously introduced in the literature and quite often attained optimal classification results at reduced computational complexity.
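    A minimal sketch of the evolutionary idea, under assumed encodings: a chromosome is a binary mask over a trained network's categories, and fitness trades nearest-category classification accuracy against network size. The selection, crossover, and mutation operators below are generic GA building blocks, not the thesis' GFAM operators.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(mask, cats, X, y, alpha=0.01):
    """Accuracy of nearest-category classification minus a size penalty."""
    keep = np.flatnonzero(mask)
    if keep.size == 0:
        return -1.0
    d = np.linalg.norm(X[:, None, :] - cats["w"][keep][None, :, :], axis=2)
    pred = cats["label"][keep][d.argmin(axis=1)]
    return np.mean(pred == y) - alpha * keep.size

def evolve(cats, X, y, pop=30, gens=40, p_mut=0.05):
    n = len(cats["w"])
    P = rng.random((pop, n)) < 0.8               # start near the full network
    for _ in range(gens):
        f = np.array([fitness(m, cats, X, y) for m in P])
        parents = P[np.argsort(-f)[: pop // 2]]  # truncation selection
        kids = parents.copy()
        cuts = rng.integers(1, n, len(kids))
        for k, c in zip(kids, cuts):             # one-point crossover with a
            mate = parents[rng.integers(len(parents))]  # random parent
            k[c:] = mate[c:]
        kids ^= rng.random(kids.shape) < p_mut   # bit-flip mutation
        P = np.vstack([parents, kids])
    f = np.array([fitness(m, cats, X, y) for m in P])
    return P[f.argmax()]

# Toy "trained network": 40 category prototypes in 2-D with class labels.
cats = {"w": rng.standard_normal((40, 2)), "label": rng.integers(0, 2, 40)}
X, y = rng.standard_normal((200, 2)), rng.integers(0, 2, 200)
best = evolve(cats, X, y)
print("categories kept:", best.sum(), "of", len(cats["w"]))
```

    The size penalty in the fitness function is what pushes the population toward compact networks, the same pressure that lets GFAM-style evolution counter category proliferation.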

    Fuzzy Clustering Using the Convex Hull as Geometrical Model

    A new approach to fuzzy clustering is proposed in this paper. It aims to relax some constraints imposed by known algorithms by using a generalized geometrical model for clusters based on convex hull computation. A method is also proposed to determine suitable membership functions and hence to represent fuzzy clusters based on the adopted geometrical model. The convex hull is used not only at the end of the clustering analysis, for geometric data interpretation, but also during the fuzzy data partitioning, within an online sequential procedure, to calculate the membership function. Consequently, a pure fuzzy clustering algorithm is obtained in which clusters are fitted to the data distribution by means of the fuzzy membership of patterns to each cluster. The numerical results reported in the paper show the validity and efficacy of the proposed approach with respect to other well-known clustering algorithms.
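    A minimal sketch of hull-based membership, assuming scipy's qhull wrapper: membership is 1 inside a cluster's convex hull and decays with the distance outside it, here approximated from the hull's facet planes. The exponential decay and its rate are illustrative choices, not the paper's membership function.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_membership(points, x, decay=1.0):
    """Fuzzy membership of x w.r.t. the cluster's convex hull: 1 inside,
    exp(-distance/decay) outside (distance approximated via facet planes)."""
    hull = ConvexHull(points)
    A, b = hull.equations[:, :-1], hull.equations[:, -1]  # outward unit normals
    signed = A @ x + b              # <= 0 on every facet means x is inside
    d = max(signed.max(), 0.0)      # slight underestimate near hull vertices
    return float(np.exp(-d / decay))

rng = np.random.default_rng(5)
cluster = rng.normal(0, 1, (100, 2))
print(hull_membership(cluster, np.array([0.0, 0.0])))   # deep inside -> ~1.0
print(hull_membership(cluster, np.array([6.0, 0.0])))   # far outside -> near 0
```

    Unlike spherical or ellipsoidal prototypes, the hull adapts to arbitrary convex cluster shapes, which is the constraint relaxation the paper is after.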