
    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed framework are applied to a number of time series prediction problems. The learning-from-examples method introduced by Wang and Mendel (W&M) is used for identification, and the Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage, while the L–M method yields the best compromise between accuracy and interpretability among a set of alternatives. Delta test based residual variance estimates are used to select both the best subset of inputs to the fuzzy inference systems and the number of linguistic labels for the inputs. On a diverse set of time series prediction benchmarks, the proposed models are compared against least-squares support vector machines (LS-SVM), the optimally pruned extreme learning machine (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability and computational cost. Furthermore, the fuzzy models are shown to be consistently more accurate for prediction in the case of time series coming from real-world applications. Funding: Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626.
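    The Delta test that drives the input-selection stage is a generic nonparametric residual-variance estimator, and it can be sketched as follows. The `select_inputs` greedy forward-selection loop is an illustrative way of using it to pick input lags, not the authors' exact procedure.

    ```python
    import numpy as np

    def delta_test(X, y):
        """Delta test: estimate the residual (noise) variance of y given X as
        (1/2M) * sum_i (y_nn(i) - y_i)^2, where nn(i) is the nearest
        neighbour of x_i in input space."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        np.fill_diagonal(d2, np.inf)          # exclude self-matches
        nn = d2.argmin(axis=1)                # nearest neighbour of each point
        return 0.5 * np.mean((y[nn] - y) ** 2)

    def select_inputs(X, y, candidates):
        """Greedy forward selection: keep adding the candidate column whose
        inclusion most reduces the Delta test estimate."""
        selected, best, improved = [], np.inf, True
        while improved and len(selected) < len(candidates):
            improved = False
            for c in candidates:
                if c in selected:
                    continue
                trial = selected + [c]
                dt = delta_test(X[:, trial], y)
                if dt < best:
                    best, best_trial, improved = dt, trial, True
            if improved:
                selected = best_trial
        return selected, best
    ```

    On data where only the first column is informative, the Delta test on that column alone should approach the true noise variance, while adding irrelevant columns inflates the estimate, so the greedy loop stops early.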

    Underdetermined blind source separation based on Fuzzy C-Means and Semi-Nonnegative Matrix Factorization

    Conventional blind source separation assumes an over-determined setting, with more sensors than sources; the underdetermined case is more challenging but closer to real situations. Non-negative Matrix Factorization (NMF) has been widely applied to Blind Source Separation (BSS) problems, but the separation results are sensitive to the initialization of the NMF parameters. To avoid the subjectivity of choosing parameters, we use the Fuzzy C-Means (FCM) clustering technique to estimate the mixing matrix and to reduce the sparsity requirement. We also relax the constraints by using Semi-NMF. In this paper we propose a new two-step algorithm for underdetermined blind source separation, showing how to combine the FCM clustering technique with gradient-based NMF and the multi-layer technique. Simulation results show that the proposed algorithm separates the source signals with a high signal-to-noise ratio and at comparatively low computational cost.
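    The mixing-matrix estimation step (the first of the two steps) can be sketched as follows, assuming sources sparse enough that mixture samples cluster along the columns of the mixing matrix. Both the plain `fcm` implementation and the sign-folding normalisation are generic illustrations, not the authors' code.

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, n_iter=100, seed=0):
        """Plain Fuzzy C-Means: returns (centers, membership matrix U)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)
        return centers, U

    def estimate_mixing_matrix(mixtures, n_sources):
        """Estimate the columns of A from sparse mixtures: project samples
        onto the unit half-circle (folding sign so a and -a coincide),
        then cluster the directions with FCM."""
        X = mixtures.T                                  # samples x channels
        X = X[np.linalg.norm(X, axis=1) > 1e-3]         # drop near-silent frames
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        X[X[:, 0] < 0] *= -1                            # fold sign ambiguity
        centers, _ = fcm(X, n_sources)
        A_hat = centers / np.linalg.norm(centers, axis=1, keepdims=True)
        return A_hat.T                                  # channels x sources
    ```

    The recovered columns carry the usual BSS permutation and sign ambiguities, so they should be compared to the true mixing matrix up to ordering and sign.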

    Computational tools for low energy building design : capabilities and requirements

    Integrated building performance simulation (IBPS) is an established technology, able to model the heat, mass, light, electricity and control signal flows within complex building/plant systems. The technology is used in practice to support the design of low energy solutions and, in Europe at least, such use is set to expand with the advent of the Energy Performance of Buildings Directive, which mandates a modelling approach to legislation compliance. This paper summarises IBPS capabilities and identifies developments that aim to further improve its integrity vis-à-vis reality.

    FSL-BM: Fuzzy Supervised Learning with Binary Meta-Feature for Classification

    This paper introduces a novel real-time Fuzzy Supervised Learning with Binary Meta-Feature (FSL-BM) method for big-data classification tasks. The study of real-time algorithms addresses several major concerns, namely accuracy, memory consumption, the ability to relax assumptions, and time complexity. Attaining a fast computational model that combines fuzzy logic with supervised learning is one of the main challenges in machine learning. In this paper, we present the FSL-BM algorithm as an efficient supervised-learning solution that applies fuzzy logic to a binary meta-feature representation, using the Hamming distance and a hash function to relax assumptions. While many studies in the last decade have focused on reducing time complexity and increasing accuracy, the novel contribution of the proposed solution is the integration of the Hamming distance, a hash function, binary meta-features, and binary classification into a real-time supervised method. The hash-table (HT) component gives fast access to existing indices and therefore allows new indices to be generated in constant time, which lets FSL-BM supersede existing fuzzy supervised algorithms with better or comparable results. To summarize, the main contribution of this technique for real-time fuzzy supervised learning is to represent each hypothesis as binary input in a meta-feature space and to build a fuzzy supervised hash table for training and validating the model. Comment: FICC201
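    The hash-table lookup with a Hamming-distance fallback described above might be sketched like this. The binarisation thresholds and the 1/(1 + Hamming) membership weight are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np
    from collections import defaultdict

    def binarize(X, thresholds):
        """Map real-valued features to binary meta-features
        (1 if above the per-feature threshold)."""
        return (X > thresholds).astype(np.uint8)

    class FuzzyHashClassifier:
        """Training stores each binary pattern's label counts under its hash;
        prediction looks up the exact pattern in O(1), falling back to a
        fuzzy vote weighted by Hamming similarity for unseen patterns."""

        def fit(self, B, y):
            self.table = defaultdict(lambda: defaultdict(int))
            for pattern, label in zip(B, y):
                self.table[pattern.tobytes()][label] += 1
            self.keys = list(self.table)
            self.patterns = np.array([np.frombuffer(k, np.uint8) for k in self.keys])
            return self

        def predict(self, B):
            out = []
            for b in B:
                key = b.tobytes()
                if key in self.table:                 # constant-time exact hit
                    counts = self.table[key]
                    out.append(max(counts, key=counts.get))
                else:                                 # fuzzy fallback
                    ham = (self.patterns != b).sum(axis=1)
                    sim = 1.0 / (1.0 + ham)           # fuzzy membership weight
                    votes = defaultdict(float)
                    for k, w in zip(self.keys, sim):
                        for label, c in self.table[k].items():
                            votes[label] += w * c
                    out.append(max(votes, key=votes.get))
            return out
    ```

    An unseen pattern is thus labelled by the stored patterns closest to it in Hamming distance, which keeps the common case (an exact hash hit) constant-time.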

    Aspects of automation of selective cleaning

    Cleaning (pre-commercial thinning) is a silvicultural operation primarily used to improve the growing conditions of the remaining trees in young stands (ca. 3–5 m in height). Cleaning costs are considered high in Sweden and the work is laborious. Selective cleaning with autonomous artificial agents (robots) may rationalise the work, but requires new knowledge. This thesis aims to analyse key issues regarding the automation of cleaning, suggesting general solutions and focusing on the automatic selection of main stems. The essential requirements for cleaning robots are to produce acceptable results and to be cost competitive. They must be safe and able to operate independently and unattended for several hours in a dynamic and non-deterministic environment. Machine vision, radar, and laser scanners are promising techniques for obstacle avoidance, tree identification, and tool control. Horizontal laser scans were made, demonstrating the possibility of finding stems and estimating their height and diameter. Knowledge about stem selection was gathered through qualitative interviews with people who perform cleaning. They consider similar attributes of trees, and these findings, together with current cleaning manuals and a field inventory, were used in the development of a decision support system (DSS). The DSS selects stems by the attributes species, position, diameter, and damage. It was used to run computer-based simulations in a variety of young forests. A general follow-up showed that the DSS produced acceptable results. The DSS was further evaluated by comparing its selections with those made by experienced cleaners, and by a test in which laymen performed cleanings following the system. The DSS appears useful and flexible, since it can be adjusted in accordance with the cleaners' results. The laymen's results imply that the DSS is robust and could be used as a training tool.
Using the DSS in automatic, or semi-automatic, cleaning operations should be possible if and when the selected attributes can be perceived automatically. A suitable base machine and thorough research regarding e.g. safety, obstacle avoidance, and target identification are needed to develop competitive robots. However, using the DSS as a training tool for inexperienced cleaners could be an interesting option as of today.
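    A rule of the kind the DSS applies (selection by species, position, diameter, and damage) might look like the following sketch. The scoring weights and the 2 m minimum spacing are invented for illustration and are not taken from the thesis.

    ```python
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Stem:
        x: float
        y: float
        species: str
        diameter_cm: float
        damaged: bool

    def select_main_stems(stems, preferred=("pine", "spruce"), min_spacing_m=2.0):
        """Illustrative ranking: favour undamaged stems of preferred species
        with larger diameter, then greedily enforce a minimum spacing so the
        kept main stems are not crowded."""
        def score(s):
            return ((not s.damaged), s.species in preferred, s.diameter_cm)
        selected = []
        for s in sorted(stems, key=score, reverse=True):
            if all(hypot(s.x - t.x, s.y - t.y) >= min_spacing_m for t in selected):
                selected.append(s)
        return selected
    ```

    Tuning such weights and the spacing threshold is one way a system like this could be "adjusted in accordance with the cleaners' results", as the follow-up study describes.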