    Predicting Sparse Clients' Actions with CPOPT-Net in the Banking Environment

    The digital revolution of the banking system, together with evolving European regulations, has pushed the major banking actors to innovate through novel uses of their clients' digital information. Given highly sparse client activities, we propose CPOPT-Net, an algorithm that combines neural networks with the CP canonical tensor decomposition, a multidimensional matrix decomposition that factorizes a tensor as a sum of rank-one tensors. CPOPT-Net handles sparse information efficiently with a gradient-based resolution while relying on neural networks for time series prediction. Our experiments show that CPOPT-Net is capable of accurately predicting clients' actions in the context of personalized recommendation. CPOPT-Net is the first algorithm to combine non-linear conjugate gradient tensor resolution with neural networks to predict financial activities on a public data set.
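    The CP decomposition at the core of the abstract above can be illustrated with a minimal sketch. This is not CPOPT-Net (which uses a nonlinear conjugate gradient solver together with neural networks); it is a plain alternating-least-squares CP fit on a small synthetic 3-way tensor, with all sizes and the rank chosen for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def unfold(T, mode):
        """Matricize tensor T along the given mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(A, B):
        """Column-wise Kronecker product of A (I x R) and B (J x R)."""
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def cp_als(T, rank, n_iter=50):
        """Fit rank-R CP factors A, B, C so that T ~ sum_r a_r (x) b_r (x) c_r."""
        I, J, K = T.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        for _ in range(n_iter):
            # Solve each factor in turn by least squares, holding the others fixed.
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    # Build an exactly rank-2 tensor and recover it.
    A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, rank=2)
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
    ```

    In CPOPT-Net the client-activity tensor is sparse, so the least-squares subproblems above would be replaced by the gradient-based resolution the abstract mentions, and the recovered temporal factor feeds a neural network for prediction.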

    Constructing Fuzzy Time Series Model Using Combination of Table Lookup and Singular Value Decomposition Methods and Its Application to Forecasting Inflation Rate

    Fuzzy time series is a dynamic process whose observations are linguistic values. Previous approaches to modelling fuzzy time series data used discrete membership functions and a table lookup method applied to training data. This paper presents a new method for modelling fuzzy time series data that combines the table lookup and singular value decomposition methods using continuous membership functions. The table lookup method is used to construct fuzzy relations from training data. Singular value decomposition of the firing-strength matrix and QR factorization are then used to reduce the set of fuzzy relations. The method is applied to forecasting the inflation rate in Indonesia based on a six-factor, one-order fuzzy time series. Compared with a neural network method, the proposed method achieves a higher forecasting accuracy.
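    The SVD + QR reduction step described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the firing-strength matrix, the redundancy, and the rank threshold are all invented, but the mechanics are the standard ones (SVD to estimate the effective rank of the rule set, pivoted QR to select that many independent rule columns).

    ```python
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(1)

    # Firing-strength matrix: rows = training samples, columns = fuzzy rules.
    # Two columns are made (nearly) redundant so the method can discard them.
    F = rng.random((40, 6))
    F[:, 4] = F[:, 0] + 1e-8 * rng.random(40)   # redundant copy of rule 0
    F[:, 5] = F[:, 1] + 1e-8 * rng.random(40)   # redundant copy of rule 1

    # 1. The singular values reveal the effective rank of the rule set.
    s = np.linalg.svd(F, compute_uv=False)
    rank = int(np.sum(s > 1e-6 * s[0]))

    # 2. QR with column pivoting picks the `rank` most independent rule columns;
    #    the remaining rules are dropped from the model.
    _, _, piv = qr(F, pivoting=True)
    kept_rules = sorted(piv[:rank])
    ```

    With the two artificial duplicates present, the effective rank comes out as 4, and the pivoted QR keeps one rule from each redundant pair.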

    Neural Decomposition of Time-Series Data for Effective Generalization

    Get PDF
    We present a neural network technique for the analysis and extrapolation of time-series data called Neural Decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions that capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey-Glass series, a dataset of unemployment rates as reported by the U.S. Department of Labor Statistics, a time-series of monthly international airline passengers, the monthly ozone concentration in downtown Los Angeles, and an unevenly sampled time-series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques including ARIMA, SARIMA, SVR with a radial basis function, Gashler and Ashmore's model, and echo state networks.
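    The decomposition idea behind ND can be shown in a deliberately simplified form. ND trains frequencies, phases, and amplitudes by backpropagation; in the sketch below the frequencies are instead fixed at the Fourier frequencies, so fitting the sinusoids plus a linear-trend term reduces to linear least squares. The signal and all parameters are invented for illustration.

    ```python
    import numpy as np

    N = 64
    t = np.arange(N) / N
    # Toy signal: one sinusoid plus a linear trend and an offset.
    y = 2.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * t + 0.1

    # Design matrix: sin/cos pairs (a phase-shifted sinusoid is a sin/cos
    # mix) for the periodic part, plus constant and linear columns for the
    # nonperiodic part -- the role ND's nonperiodic units play.
    cols = [np.ones(N), t]
    for k in range(1, 11):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    X = np.column_stack(cols)

    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ w
    err = np.max(np.abs(y - y_hat))
    ```

    Because the toy signal lies exactly in the span of the basis, the fit is essentially exact; ND's contribution is making the frequencies and phases trainable so the model can also extrapolate unevenly sampled and non-Fourier data.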

    Mathematical structure of perfect predictive reservoir computing for autoregressive type of time series data

    Reservoir Computing (RC) is a type of recurrent neural network (RNN), and RC is likely to be used ever more widely for building prediction models for time-series data, thanks to its low training cost, high speed, and high computational power. However, research into the mathematical structure of RC neural networks has only recently begun. Bollt (2021) clarified that the autoregressive (AR) model is necessary for gaining insight into the mathematical structure of RC neural networks, and indicated that the Wold decomposition theorem is a milestone for understanding them. With this celebrated result in mind, in this paper we clarify hidden structures of the input and recurrent weight matrices in RC neural networks, and show that such structures attain perfect prediction for AR-type time series data.
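    The RC setting the abstract studies can be illustrated with a minimal echo state network (a common RC variant) predicting an AR(2) series one step ahead. This sketch does not reproduce the paper's constructions; the AR coefficients, reservoir size, weight scales, and ridge penalty are all invented for illustration. Only the linear readout is trained, which is what makes RC training cheap.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Generate an AR(2) series: x[t] = 0.5 x[t-1] - 0.3 x[t-2] + noise.
    T = 2000
    x = np.zeros(T)
    noise = 0.1 * rng.standard_normal(T)
    for t in range(2, T):
        x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + noise[t]

    # Fixed random reservoir: input and recurrent weights are never trained.
    n_res = 100
    W_in = 0.5 * rng.standard_normal((n_res, 1))
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

    def run_reservoir(series):
        """Drive the reservoir with the series and collect its states."""
        h = np.zeros(n_res)
        states = []
        for u in series:
            h = np.tanh(W_in[:, 0] * u + W @ h)
            states.append(h.copy())
        return np.array(states)

    H = run_reservoir(x[:-1])          # state after seeing x[0..t]
    y = x[1:]                          # one-step-ahead targets
    warm = 100                         # discard the initial transient
    H, y = H[warm:], y[warm:]

    # Train only the linear readout, by ridge regression.
    lam = 1e-6
    W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ y)
    pred = H @ W_out
    mse = np.mean((y - pred) ** 2)
    ```

    The readout easily beats the naive mean predictor on this AR(2) data; the paper's point is the stronger one, that suitably structured input and recurrent weight matrices make such prediction exact for AR-type series.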