
    Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS

    Many distributed machine learning frameworks have recently been built to speed up large-scale data learning. However, most of the algorithms used in these frameworks are offline and cannot cope with data streams. In fact, large-scale data are mostly generated by non-stationary data streams whose patterns evolve over time. To address this problem, we propose a novel Evolving Large-scale Data Stream Analytics framework based on a Scalable Parsimonious Network based on Fuzzy Inference System (Scalable PANFIS), in which the PANFIS evolving algorithm is distributed over worker nodes in the cloud to learn from large-scale data streams. The Scalable PANFIS framework incorporates an active learning (AL) strategy and two model fusion methods. AL accelerates the distributed learning process that generates an initial evolving large-scale data stream model (initial model), while the two model fusion methods aggregate the initial model into the final model. The final model represents up-to-date knowledge of the large-scale data and can be used to infer future data. Extensive experiments validate the framework by measuring the accuracy and running time of four combinations of Scalable PANFIS and other built-in Spark-based algorithms. The results indicate that Scalable PANFIS with AL trains almost twice as fast as Scalable PANFIS without AL. They also show that the rule-merging and voting mechanisms yield broadly similar accuracy across the Scalable PANFIS algorithms, and that both are generally better than the Spark-based algorithms. In terms of running time, Scalable PANFIS outperforms all Spark-based algorithms when classifying numerous benchmark datasets.
    Comment: 20 pages, 5 figures
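    As a rough illustration of the voting fusion step described above, the sketch below combines predictions from several per-worker models by majority vote. The ThresholdModel stub and all names here are hypothetical stand-ins for illustration, not the actual PANFIS implementation.

```python
from collections import Counter

class ThresholdModel:
    """Hypothetical stand-in for a per-worker evolving model."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        # Classify by comparing the first feature to this worker's threshold.
        return 1 if x[0] > self.threshold else 0

def majority_vote(local_models, x):
    """Fuse the per-worker predictions for sample x by majority vote."""
    votes = [m.predict(x) for m in local_models]
    label, _count = Counter(votes).most_common(1)[0]
    return label

# Three worker-level models disagree; the vote settles the final label.
workers = [ThresholdModel(0.2), ThresholdModel(0.5), ThresholdModel(0.8)]
print(majority_vote(workers, [0.6]))  # -> 1 (two of the three vote 1)
```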

    Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability

    In this paper, a combination of neuro-fuzzy classifiers for improved classification performance and reliability is considered. A general fuzzy min-max (GFMM) classifier with an agglomerative learning algorithm is used as the main building block. An alternative approach to combining individual classifier decisions is proposed, in which the combination takes place at the classifier model level. The resulting classifier complexity and transparency are comparable with those of classifiers generated during a single cross-validation procedure, while the improved classification performance and reduced variance are comparable to those of an ensemble of classifiers with combined (averaged/voted) decisions. We also illustrate how combining at the model level can be used to speed up the training of GFMM classifiers for large data sets.
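    To make the fuzzy min-max idea concrete, here is a minimal sketch of hyperbox membership and of a model-level combination that merges two same-class hyperboxes. The membership form is a simplification of the GFMM membership function, and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree of membership of point x in the hyperbox with min point v
    and max point w: 1 inside the box, decaying linearly outside it.
    (Simplified GFMM-style membership; gamma controls the decay slope.)"""
    x, v, w = map(np.asarray, (x, v, w))
    over = np.clip(gamma * (x - w), 0.0, 1.0)   # how far x exceeds w
    under = np.clip(gamma * (v - x), 0.0, 1.0)  # how far x falls below v
    return float(np.min(1.0 - np.maximum(over, under)))

def merge_hyperboxes(v1, w1, v2, w2):
    """Model-level combination: merge two same-class hyperboxes into one
    spanning box (elementwise min of min points, max of max points)."""
    return np.minimum(v1, v2), np.maximum(w1, w2)

# A point just outside the unit box receives partial membership.
print(hyperbox_membership([1.1, 0.5], [0.0, 0.0], [1.0, 1.0]))  # ~0.6
```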

    Integrating group Delphi, fuzzy logic and expert systems for marketing strategy development: the hybridisation and its effectiveness

    A hybrid approach integrating group Delphi, fuzzy logic and expert systems for developing marketing strategies is proposed in this paper. Within this approach, the group Delphi method helps groups of managers undertake SWOT analysis. Fuzzy logic is applied to fuzzify the results of the SWOT analysis, and expert systems are utilised to formulate marketing strategies based upon the fuzzified strategic inputs. In addition, guidelines are provided to help users link the hybrid approach with managerial judgement and intuition. The effectiveness of the hybrid approach has been validated with MBA and MA marketing students. It is concluded that the hybrid approach is more effective in terms of decision confidence, group consensus, helping users understand strategic factors, supporting strategic thinking, and coupling analysis with judgement.
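    The fuzzification step can be illustrated with triangular membership functions that map a crisp SWOT rating to linguistic terms. The 0-10 rating scale and the term definitions below are assumptions for illustration only, not the paper's actual membership functions.

```python
def triangular(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for a SWOT factor rated on a 0-10 scale.
TERMS = {
    "weak": (0, 0, 5),
    "moderate": (2, 5, 8),
    "strong": (5, 10, 10),
}

def fuzzify(score):
    """Map a crisp SWOT rating to membership degrees per linguistic term."""
    return {term: triangular(score, *abc) for term, abc in TERMS.items()}

print(fuzzify(6.5))  # -> weak 0.0, moderate 0.5, strong 0.3
```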

    Improving Effort Estimation by Voting Software Estimation Models

    Estimating software development effort is an important task in the management of large software projects. The task is challenging, and it has received the attention of researchers ever since software was first developed for commercial purposes. A number of estimation models exist for effort prediction; however, novel models are needed to obtain more accurate estimates. The primary purpose of this study is to propose a precise estimation method that improves accuracy by voting among the most popular models. The resulting estimates are precise and reliable when applied to a real software project dataset. Empirical validation of this approach uses the International Software Benchmarking Standards Group (ISBSG) Data Repository Version 10 to demonstrate the improvement in software estimation accuracy.
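    The abstract does not spell out the exact voting scheme, so the sketch below shows only the general idea of combining effort estimates from several models, here by taking the median. The model names and figures are hypothetical.

```python
from statistics import median

def combined_estimate(estimates_pm):
    """Combine per-model effort estimates (person-months) by median vote.

    estimates_pm: dict mapping model name -> estimated effort.
    The median damps the influence of any single outlying model.
    """
    return median(estimates_pm.values())

# Hypothetical outputs from three popular estimation models for one project.
estimates = {"COCOMO": 14.0, "SLIM": 22.5, "FunctionPoints": 16.0}
print(combined_estimate(estimates))  # -> 16.0
```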