
    Incremental kernel learning algorithms and applications.

    Since Support Vector Machines (SVMs) were introduced in 1995, they have been recognized as essential tools for pattern classification and function approximation. Numerous publications show that SVMs outperform other learning methods in various areas. However, SVMs perform poorly on large-scale data sets because of their high computational complexity. One approach to overcoming this limitation is incremental learning, in which a large-scale data set is divided into several subsets and the model is trained on those subsets one at a time, updating the core information extracted from the previous subsets. This approach has its own drawback: the core information accumulates during the incremental procedure. Moreover, when the large-scale data set has a special structure (e.g., an unbalanced data set), the standard SVM might not perform properly. In this study, a novel approach based on the reduced convex hull concept is developed and applied in various applications. In addition, the developed concept is applied to Support Vector Regression (SVR) to improve its performance. The experiments show that the incremental revised SVM significantly reduces the number of support vectors and requires less computing time. The incremental revised SVR produces results similar to those of the standard SVR while reducing computing time significantly. Furthermore, the filter concept developed in this study may be used to reduce the computing time of other learning approaches.
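
    As a rough illustration of the generic incremental training loop described above (splitting the data into subsets and carrying core information forward), the following Python sketch trains a standard scikit-learn SVC chunk by chunk, keeping only the support vectors between chunks. It is a minimal sketch of the general idea, not the reduced-convex-hull revision proposed in the paper; the data and parameters are illustrative.

```python
# Minimal sketch of generic incremental SVM training: fit one chunk at a time,
# carrying only the support vectors ("core information") forward to the next chunk.
# This is not the paper's reduced-convex-hull method, just the basic idea.
import numpy as np
from sklearn.svm import SVC

def incremental_svm(X, y, n_chunks=10, **svc_kwargs):
    """Train an SVM chunk by chunk, keeping only support vectors between chunks."""
    model = SVC(**svc_kwargs)
    X_core = np.empty((0, X.shape[1]))
    y_core = np.empty((0,), dtype=y.dtype)
    for X_chunk, y_chunk in zip(np.array_split(X, n_chunks),
                                np.array_split(y, n_chunks)):
        # Combine the carried-over core (previous support vectors) with the new chunk.
        X_train = np.vstack([X_core, X_chunk])
        y_train = np.concatenate([y_core, y_chunk])
        model.fit(X_train, y_train)
        # Keep only the support vectors as the core information for the next chunk.
        X_core = X_train[model.support_]
        y_core = y_train[model.support_]
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3000, 5))               # synthetic stand-in data
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = incremental_svm(X, y, n_chunks=6, kernel="rbf", C=1.0)
    print("support vectors kept:", len(clf.support_))
```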

    Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics

    First, this paper investigates the effect of good and bad news on volatility in the BUX return time series using asymmetric ARCH models. Then, the accuracy of forecasting models based on statistical (stochastic) methods, machine learning methods, and a soft/granular RBF network is investigated. To forecast the high-frequency financial data, we apply statistical ARMA and asymmetric GARCH-class models. A novel RBF network architecture is proposed, based on the incorporation of an error-correction mechanism, which improves the forecasting ability of feed-forward neural networks. These proposed modelling approaches and SVM models are applied to predict the high-frequency time series of the BUX stock index. We found that it is possible to enhance forecast accuracy and achieve significant risk reduction in managerial decision making by applying intelligent forecasting models based on the latest information technologies. On the other hand, we showed that statistical GARCH-class models can identify the presence of leverage effects and react to good and bad news.
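
    A hedged sketch of the asymmetric GARCH step mentioned above: the GJR-GARCH specification adds a leverage term that lets negative returns ("bad news") raise conditional volatility more than positive returns of the same size. The example assumes the Python `arch` package and uses synthetic returns in place of the BUX series, which is not included here.

```python
# Fit an asymmetric (GJR-)GARCH model to a return series to test for leverage effects.
# Synthetic heavy-tailed returns stand in for the BUX index data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)  # placeholder for BUX returns (in percent)

# o=1 adds the asymmetric (leverage) term of the GJR-GARCH specification.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, o=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# A significantly positive asymmetry coefficient (the 'o' term) indicates that
# negative shocks raise volatility more than positive shocks of equal magnitude.
```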

    Effective pattern discovery for text mining

    Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopt term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based (or phrase-based) approaches should perform better than term-based ones, but many experiments have not supported this hypothesis. This paper presents an innovative technique, effective pattern discovery, which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on the RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.
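
    The sketch below is only a loose illustration of the pattern-deploying idea mentioned above: mine frequent term sets from a document's paragraphs, then map each pattern's support back onto individual terms to obtain term weights. It does not reproduce the paper's full deploying and evolving algorithms; all names and thresholds are illustrative.

```python
# Loose illustration of "pattern deploying": mine frequent term sets from paragraphs,
# then distribute each pattern's support over its terms to build a term-weight vector.
from collections import Counter
from itertools import combinations

def mine_frequent_termsets(paragraphs, min_support=2, max_size=3):
    """Count term sets (up to max_size) that occur in at least min_support paragraphs."""
    counts = Counter()
    for para in paragraphs:
        terms = sorted(set(para.lower().split()))
        for size in range(1, max_size + 1):
            for pattern in combinations(terms, size):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def deploy_patterns(frequent_patterns):
    """Distribute each pattern's support evenly over its terms to get term weights."""
    weights = Counter()
    for pattern, support in frequent_patterns.items():
        for term in pattern:
            weights[term] += support / len(pattern)
    return dict(weights)

paragraphs = [
    "support vector machines for text classification",
    "frequent pattern mining in text documents",
    "text mining with frequent patterns",
]
patterns = mine_frequent_termsets(paragraphs)
print(deploy_patterns(patterns))
```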

    Weighted p-bits for FPGA implementation of probabilistic circuits

    Probabilistic spin logic (PSL) is a recently proposed computing paradigm based on unstable stochastic units called probabilistic bits (p-bits) that can be correlated to form probabilistic circuits (p-circuits). These p-circuits can be used to solve optimization and inference problems, and also to implement precise Boolean functions in an "inverted" mode, where a given Boolean circuit operates in reverse to find the input combinations that are consistent with a given output. In this paper we present a scalable FPGA implementation of such invertible p-circuits. We implement a "weighted" p-bit that combines stochastic units with localized memory structures. We also present a generalized tile of weighted p-bits to which a large class of problems beyond invertible Boolean logic can be mapped, and show how invertibility can be applied to interesting problems such as the NP-complete Subset Sum Problem by solving a small instance of it in hardware.
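
    A minimal software emulation of the p-bit update rule underlying such p-circuits is sketched below: each p-bit i is set to sgn(tanh(I_i) + r) with r drawn uniformly from (-1, 1), where I_i = sum_j J_ij m_j + h_i is the input computed from the local weight memory. The coupling values in the example are illustrative and not taken from the paper; the FPGA-specific weighted p-bit tile is not reproduced here.

```python
# Software sketch of a p-circuit: sequentially update p-bits with
# m_i = sgn(tanh(I_i) + r), r ~ Uniform(-1, 1), I_i = sum_j J_ij * m_j + h_i.
import numpy as np

def run_p_circuit(J, h, n_steps=10000, beta=1.0, seed=0):
    """Sequentially update p-bits and return the visited states."""
    rng = np.random.default_rng(seed)
    n = len(h)
    m = rng.choice([-1, 1], size=n)           # random initial bipolar state
    states = []
    for _ in range(n_steps):
        for i in rng.permutation(n):          # asynchronous (sequential) updates
            I = beta * (J[i] @ m + h[i])      # synaptic input from the weight memory
            m[i] = 1 if rng.uniform(-1, 1) < np.tanh(I) else -1
        states.append(m.copy())
    return np.array(states)

# Example: two p-bits with ferromagnetic coupling; the sampled states should
# mostly have the two bits aligned.
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.zeros(2)
samples = run_p_circuit(J, h, n_steps=2000)
print("fraction of aligned states:", np.mean(samples[:, 0] == samples[:, 1]))
```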