
    Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics

    First, this paper investigates the effect of good and bad news on volatility in the BUX return time series using asymmetric ARCH models. Then, the forecasting accuracy of models based on statistical (stochastic) methods, machine learning methods, and a soft/granular RBF network is investigated. To forecast the high-frequency financial data, we apply statistical ARMA and asymmetric GARCH-class models. A novel RBF network architecture is proposed that incorporates an error-correction mechanism, improving the forecasting ability of feed-forward neural networks. These proposed modelling approaches and SVM models are applied to predict the high-frequency time series of the BUX stock index. We found that it is possible to enhance forecast accuracy and achieve significant risk reduction in managerial decision making by applying intelligent forecasting models based on the latest information technologies. We also showed that statistical GARCH-class models can identify the presence of leverage effects and react to good and bad news.
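    A minimal sketch of the asymmetric (leverage-effect) volatility modelling described above, using the `arch` package and a GJR-GARCH specification; the synthetic returns, lag orders, and Student-t error distribution are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: asymmetric GARCH (GJR-GARCH) fit to detect leverage effects.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000) * 0.8  # stand-in for BUX percent returns (synthetic)

# o=1 adds the asymmetric (leverage) term: negative shocks raise conditional
# volatility more than positive shocks of the same magnitude when gamma > 0.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())
# A significantly positive gamma[1] estimate would indicate a leverage effect,
# i.e. bad news increases volatility more than good news.
```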

    Sistemas granulares evolutivos (Evolving Granular Systems)

    Advisor: Fernando Antonio Campos Gomide. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In recent years there has been increasing interest in computational modeling approaches to deal with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets with, in principle, no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique to dispense with unnecessary detail and emphasize transparency, interpretability, and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable. Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is used to represent. Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neurofuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build the model structure from scratch on a per-sample basis and adapt model parameters whenever necessary. This learning paradigm is important because it avoids redesigning and retraining models whenever the environment changes. Application examples in classification, function approximation, time-series prediction, and control using real and synthetic data illustrate the usefulness of the proposed granular approaches and framework. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed within the paradigm of evolving granular computing. We shed light on the role of interval, fuzzy, and neurofuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams. Doctorate in Electrical Engineering (Automation).
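    As an illustration of the incremental, per-sample learning idea described above, the sketch below implements a very simple interval-granule learner: each granule is a hyperbox that either expands to cover a new sample or, when no granule can expand without growing too wide, spawns a new granule. The class, the width threshold `rho`, and the synthetic stream are illustrative assumptions, not the thesis' actual algorithms.

```python
# Sketch: incremental interval-granule learning over a data stream (illustrative only).
import numpy as np

class IntervalGranuleLearner:
    def __init__(self, rho=0.3):
        self.rho = rho      # maximum allowed granule width per dimension
        self.lower = []     # granule lower bounds
        self.upper = []     # granule upper bounds

    def update(self, x):
        """Expand the first granule that can absorb x, or create a new granule."""
        x = np.asarray(x, dtype=float)
        for i, (lo, hi) in enumerate(zip(self.lower, self.upper)):
            new_lo, new_hi = np.minimum(lo, x), np.maximum(hi, x)
            if np.all(new_hi - new_lo <= self.rho):      # expansion stays small enough
                self.lower[i], self.upper[i] = new_lo, new_hi
                return i
        self.lower.append(x.copy())                      # no suitable granule: create one
        self.upper.append(x.copy())
        return len(self.lower) - 1

learner = IntervalGranuleLearner(rho=0.3)
stream = np.random.rand(200, 2)                          # synthetic 2-D data stream
for sample in stream:
    learner.update(sample)
print(f"{len(learner.lower)} granules built incrementally")
```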

    Multiple classifiers fusion and CNN feature extraction for handwritten digits recognition

    Handwritten digit recognition has been treated as a multi-class classification problem in the machine learning context, where each of the ten digits (0-9) is viewed as a class and the machine learning task is essentially to train a classifier that can effectively discriminate the ten classes. In practice, the performance of a single classifier trained with a standard learning algorithm often varies across data sets, which indicates that the same learning algorithm may produce strong classifiers on some data sets but weak classifiers on others. The same classifier may also perform differently on different test sets, especially since image instances can be highly diverse due to people's different handwriting styles for the same digits. To address this issue, ensemble learning approaches are needed to improve the overall performance and make it more stable across data sets. In this paper, we propose a framework that involves CNN-based feature extraction from the MNIST data set and algebraic fusion of multiple classifiers trained on different feature sets, which are prepared by applying feature selection to the original feature set extracted by the CNN. The experimental results show that the classifier fusion can achieve a classification accuracy of ≥ 98%.
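    A minimal sketch of the kind of pipeline described above: features (standing in for CNN-extracted activations) are reduced to several different feature subsets, one classifier is trained per subset, and the predicted class probabilities are fused algebraically by averaging. The synthetic data, subset sizes, and choice of base classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: algebraic (mean-rule) fusion of classifiers trained on different feature subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for CNN-extracted features (e.g. penultimate-layer activations) of ten digit classes.
X, y = make_classification(n_samples=2000, n_features=128, n_informative=40,
                           n_classes=10, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

probas = []
for k in (32, 64, 96):                                   # three different feature subsets
    selector = SelectKBest(f_classif, k=k).fit(X_train, y_train)
    clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_train), y_train)
    probas.append(clf.predict_proba(selector.transform(X_test)))

fused = np.mean(probas, axis=0)                          # algebraic fusion: average class probabilities
accuracy = (fused.argmax(axis=1) == y_test).mean()
print(f"fused accuracy: {accuracy:.3f}")
```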

    Predictive modeling of die filling of the pharmaceutical granules using the flexible neural tree

    In this work, a computational intelligence (CI) technique named the flexible neural tree (FNT) was developed to predict the die filling performance of pharmaceutical granules and to identify significant die filling process variables. The FNT resembles a feedforward neural network and creates a tree-like structure by using genetic programming. To improve accuracy, the FNT parameters were optimized using a differential evolution algorithm. The performance of the FNT-based CI model was evaluated and compared with other CI techniques: multilayer perceptron, Gaussian process regression, and reduced error pruning tree. The accuracy of the CI model was evaluated experimentally using die filling as a case study. The die filling experiments were performed using a model shoe system and three different grades of microcrystalline cellulose (MCC) powders (MCC PH 101, MCC PH 102, and MCC DG). The feed powders were roll-compacted and milled into granules. The granules were then sieved into samples of various size classes. The mass of granules deposited into the die at different shoe speeds was measured. From these experiments, a dataset was generated consisting of true density, mean diameter (d50), granule size, and shoe speed as the inputs and the deposited mass as the output. Cross-validation (CV) methods such as 10-fold CV and 5x2-fold CV were applied to develop and validate the predictive models. It was found that the FNT-based CI model (for both CV methods) performed much better than the other CI models. Additionally, it was observed that process variables such as the granule size and the shoe speed had a higher impact on predictability than powder properties such as d50. Furthermore, validation of the model predictions against experimental data showed that the die filling behavior of coarse granules could be better predicted than that of fine granules.
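    Flexible neural trees are not available in common libraries, so the sketch below only illustrates the surrounding workflow described above: a regression dataset with the four stated inputs and a 10-fold cross-validated comparison of two of the baseline CI models mentioned (multilayer perceptron and Gaussian process regression). The synthetic data, toy target function, and hyperparameters are assumptions for illustration.

```python
# Sketch: cross-validated comparison of baseline regressors on a die-filling-style dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns stand in for: true density, d50, granule size, shoe speed (all synthetic).
X = rng.uniform([1.2, 50, 100, 10], [1.6, 200, 1000, 300], size=(200, 4))
y = 0.5 * X[:, 0] + 0.002 * X[:, 2] - 0.001 * X[:, 3] + rng.normal(0, 0.05, 200)  # toy deposited mass

cv = KFold(n_splits=10, shuffle=True, random_state=0)    # 10-fold CV as in the study
models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)),
    "GPR": make_pipeline(StandardScaler(), GaussianProcessRegressor()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R2 = {scores.mean():.3f}")
```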

    The Modeling of Interval-Valued Time Series Using Possibility Measure-Based Encoding-Decoding Mechanism

    An interval-valued time series (ITS) is a collection of interval-valued data whose entries are ordered by time. The modeling of ITS is an ongoing issue pursued by many researchers, and diverse ITS models with good performance have been proposed. This paper proposes a new ITS model using a possibility measure-based encoding-decoding mechanism drawn from fuzzy theory. The proposed model consists of four modules: a linguistic variable generation module, an encoding module, an inference module, and a decoding module. The linguistic variable generation module provides a series of linguistic variables, expressed as fuzzy sets, used to describe the dynamic characteristics of the ITS. The encoding module encodes the ITS into embedding vectors with semantics, with the aid of the possibility measure and the linguistic variables formed by the linguistic variable generation module. The inference module uses an artificial neural network to capture the relationships implied in those semantic embedding vectors. The decoding module decodes the outputs of the inference module to produce output in both linguistic and interval formats by using the possibility measure-based encoding-decoding mechanism. In comparison with existing ITS models, the proposed model not only produces output in linguistic format, but also exhibits better numeric performance.
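    A minimal sketch of what a possibility measure-based encoding could look like: an interval observation [l, u] is encoded against a family of triangular fuzzy sets (linguistic terms) by taking, for each term, the maximum membership over the interval; the most possible term then gives a linguistic reading. The triangular shapes, term centres, and grid-based supremum are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch: encoding an interval against triangular fuzzy sets via a possibility measure.
import numpy as np

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c] and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def encode_interval(lower, upper, terms, grid_size=200):
    """Possibility of each linguistic term given the interval [lower, upper]:
    the supremum of the term's membership over points in the interval."""
    xs = np.linspace(lower, upper, grid_size)
    return np.array([triangular(xs, a, b, c).max() for (a, b, c) in terms])

# Five linguistic terms ("very low" ... "very high") over a normalized range.
centres = np.linspace(0.0, 1.0, 5)
terms = [(c - 0.3, c, c + 0.3) for c in centres]

possibilities = encode_interval(0.35, 0.55, terms)
print("possibility vector:", np.round(possibilities, 2))
print("most possible term index:", int(possibilities.argmax()))
```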

    Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition

    This paper assumes the hypothesis that human learning is perception based, and consequently that the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. To preserve the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of choosing to apply or develop a Computational Theory of (human) Perceptions, we choose to mirror human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely perceptrons. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp. Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
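    As a minimal illustration of the perceptron-based setup described above, the sketch below trains a single perceptron to discriminate two character classes from scikit-learn's digits images and inspects the raw decision value, whose graded magnitude (before thresholding) can be read as a fuzzy "artificial perception" of the difference. The choice of digits and this interpretation of the decision value are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: a single perceptron discriminating two characters; the raw decision
# value is a graded (fuzzy-like) quantity before it is thresholded into a class.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

digits = load_digits()
mask = np.isin(digits.target, [3, 8])                    # two visually similar characters
X, y = digits.data[mask], digits.target[mask]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Perceptron(random_state=0).fit(X_train, y_train)
scores = clf.decision_function(X_test)                   # signed distance to the separating hyperplane
print("accuracy after thresholding:", clf.score(X_test, y_test))
print("graded 'perception' for five samples:", np.round(scores[:5], 2))
```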

    EGFC: Evolving Gaussian Fuzzy Classifier from Never-Ending Semi-Supervised Data Streams -- With Application to Power Quality Disturbance Detection and Classification

    Power-quality disturbances lead to several drawbacks, such as limited production capacity, increased line and equipment currents with consequent ohmic losses, higher operating temperatures, premature faults, reduced life expectancy of machines, equipment malfunction, and unplanned outages. Real-time detection and classification of disturbances are deemed essential to industry standards. We propose an Evolving Gaussian Fuzzy Classification (EGFC) framework for semi-supervised disturbance detection and classification, combined with a hybrid Hodrick-Prescott and Discrete-Fourier-Transform attribute-extraction method applied over a landmark window of voltage waveforms. Disturbances such as spikes, notching, harmonics, and oscillatory transients are considered. Unlike other monitoring systems, which require offline training of models based on a limited amount of data and occurrences, the proposed online data-stream-based EGFC method is able to learn disturbance patterns autonomously from never-ending data streams by adapting the parameters and structure of a fuzzy rule base on the fly. Moreover, the fuzzy model obtained is linguistically interpretable, which improves model acceptability. We show encouraging classification results. Comment: 10 pages, 6 figures, 1 table, IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2020)
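    A minimal sketch of the attribute-extraction step described above, assuming the Hodrick-Prescott filter from statsmodels and NumPy's real FFT: a window of a voltage waveform is split into trend and cycle components, and a few spectral magnitudes are appended as attributes. The toy waveform, window length, smoothing parameter, and number of harmonics kept are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: Hodrick-Prescott + DFT attribute extraction over a voltage-waveform window.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

fs, f0 = 5000, 60                                        # sampling rate and fundamental (assumed)
t = np.arange(0, 0.2, 1 / fs)
window = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)  # toy waveform with a 5th harmonic

def extract_attributes(window, lamb=1600, n_harmonics=5):
    cycle, trend = hpfilter(window, lamb=lamb)           # HP filter: slow trend vs. fast cycle
    spectrum = np.abs(np.fft.rfft(cycle)) / len(cycle)   # DFT magnitudes of the cyclic part
    return np.concatenate(([trend.mean(), trend.std(), cycle.std()],
                           np.sort(spectrum)[-n_harmonics:]))

attributes = extract_attributes(window)
print("attribute vector:", np.round(attributes, 4))
```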

    Multiple decomposition-aided long short-term memory network for enhanced short-term wind power forecasting.

    With the increasing penetration of grid-scale wind energy systems, accurate wind power forecasting is critical to optimizing their integration into the power system, ensuring operational reliability, and enabling efficient system asset utilization. Addressing this challenge, this study proposes a novel forecasting model that combines the long short-term memory (LSTM) neural network with two signal decomposition techniques. The empirical mode decomposition (EMD) technique effectively extracts stable, stationary, and regular patterns from the original wind power signal, while the variational mode decomposition (VMD) technique tackles the most challenging high-frequency component. A deep learning-based forecasting model, i.e. the LSTM neural network, is used to take advantage of its ability to learn from longer sequences of data and its robustness to noise and outliers. The developed model is evaluated against LSTM models employing various decomposition methods using real wind power data from three distinct offshore wind farms. It is shown that the two-stage decomposition significantly enhances forecasting accuracy, with the proposed model achieving R2 values up to 9.5% higher than those obtained using standard LSTM models.
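    A minimal sketch of the decomposition-then-forecast idea described above, assuming the PyEMD (EMD-signal) package for empirical mode decomposition and a small Keras LSTM per component; the VMD refinement of the highest-frequency component is noted in a comment but omitted, and the toy signal, window length, and network sizes are illustrative assumptions rather than the study's configuration.

```python
# Sketch: EMD decomposition followed by per-component LSTM forecasting.
# Requires the EMD-signal (PyEMD) and TensorFlow packages.
import numpy as np
from PyEMD import EMD
import tensorflow as tf

def make_windows(series, lag=24):
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X[..., None], y                               # shape (samples, lag, 1) for the LSTM

signal = np.sin(np.linspace(0, 40, 1000)) + 0.3 * np.random.randn(1000)  # toy wind power series
imfs = EMD().emd(signal)                                 # intrinsic mode functions + residue
# In the two-stage scheme described above, the highest-frequency IMF would be
# decomposed further with VMD before forecasting; that step is omitted here.

forecasts = []
for component in imfs:
    X, y = make_windows(component)
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(16, input_shape=(X.shape[1], 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    forecasts.append(model.predict(X[-1:], verbose=0).item())

print("next-step forecast:", sum(forecasts))             # recombine the component forecasts
```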