10 research outputs found

    Distributed machine learning for IoT

    Get PDF
    Modern machine learning relies on big data that is difficult to process on a single computer, so various methods for parallel processing of such data are being developed. But what about microcontrollers? Microcontrollers are common in cloud systems, where they connect and coordinate various devices, and sometimes they have to work with big data. A microcontroller's memory is small and its processor is far less powerful than that of a modern supercomputer. Many researchers have therefore proposed methods for the parallel processing of big data on embedded systems; one such method is proposed by the author of this article.

    Optimising the smoothness and accuracy of moving average for stock price data

    Get PDF
    Smoothing a time series removes noise. Moving averages are used in finance to smooth stock price series and forecast trend direction. We propose an optimised custom moving average that is the most suitable for smoothing stock time series. Suitability is defined by two criteria: smoothness and accuracy. Previous research focused on only one of the two criteria in isolation. We define this as a multi-criteria Pareto optimisation problem and compare the proposed method to the five most popular moving average methods on synthetic and real-world stock data. The comparison was performed on unseen data. The new method outperforms the other methods in 99.5% of cases on synthetic data and in 91% of cases on real-world data. The method allows better time series smoothing at the same level of accuracy as traditional methods, or better accuracy at the same smoothness. Weights optimised on one stock are very similar to weights optimised on any other stock and can be used interchangeably. Traders can use the new method to detect trends earlier and increase the profitability of their strategies. The concept is also applicable to sensors, weather forecasting, and traffic prediction, where both the smoothness and accuracy of the filtered signal are important.
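    The optimised weights themselves are not reproduced here, but the two selection criteria are easy to illustrate. Below is a minimal Python sketch assuming a generic weighted moving average: the decaying weights and the smoothness/accuracy measures (mean squared second difference, and mean squared error against the raw prices) are illustrative assumptions, not the method from the article.

```python
import numpy as np

def weighted_moving_average(prices, weights):
    """Custom weighted moving average; weights[0] multiplies the most recent price."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # unit-gain filter
    n = len(w)
    out = np.full(len(prices), np.nan)
    for t in range(n - 1, len(prices)):
        out[t] = np.dot(w, prices[t - n + 1:t + 1][::-1])
    return out

def smoothness(series):
    """Lower is smoother: mean squared second difference of the filtered series."""
    s = series[~np.isnan(series)]
    return np.mean(np.diff(s, n=2) ** 2)

def accuracy(series, prices):
    """Lower is better: mean squared error between the filtered series and raw prices."""
    mask = ~np.isnan(series)
    return np.mean((series[mask] - prices[mask]) ** 2)

# Synthetic random-walk "stock" prices; compare a flat window with decaying weights.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))
sma = weighted_moving_average(prices, np.ones(10))
custom = weighted_moving_average(prices, 0.8 ** np.arange(10))   # hypothetical weights
print(f"SMA    smoothness={smoothness(sma):.4f} accuracy={accuracy(sma, prices):.4f}")
print(f"custom smoothness={smoothness(custom):.4f} accuracy={accuracy(custom, prices):.4f}")
```

    In this framing, a Pareto-optimal weight vector is one for which no other weight vector achieves both a lower smoothness value and a lower error at the same time.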

    Forecasting Detrended Volatility Risk and Financial Price Series Using LSTM Neural Networks and XGBoost Regressor

    No full text
    It is common practice to employ returns, price differences or log returns for financial risk estimation and time series forecasting. De Prado's 2018 book argues that by using returns we lose the memory of the time series. To verify this statement, we examined the differences between fractional differencing and logarithmic transformations and their impact on data memory. We applied LSTM (long short-term memory) recurrent neural networks and an XGBoost regressor to data prepared with those transformations, forecast risk (volatility) and price, and compared the results of all models against original, unmodified prices. The results showed that, on average, the logarithmic transformation achieved better volatility predictions in terms of mean squared error and accuracy, and it was also the most promising transformation in terms of profitability. Our results contradict Marco Lopez de Prado's suggestion, as we achieved the most accurate volatility predictions, in terms of mean squared error and accuracy, using the logarithmic transformation instead of fractional differencing.
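    For readers unfamiliar with the two transformations being compared, here is a minimal Python sketch. The fixed-width fractional-differencing weights follow the standard recursion w_0 = 1, w_k = -w_{k-1}(d - k + 1)/k; the differencing order d = 0.4 and the cut-off threshold are illustrative choices, not parameters taken from the study, and either transformed series could then be fed to an LSTM or XGBoost regressor.

```python
import numpy as np

def frac_diff_weights(d, threshold=1e-4, max_len=1000):
    """Fractional-differencing weights: w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, max_len):
        w_k = -w[-1] * (d - k + 1) / k
        if abs(w_k) < threshold:
            break
        w.append(w_k)
    return np.array(w)

def frac_diff(series, d):
    """Fixed-width-window fractional differencing of a 1-D series."""
    w = frac_diff_weights(d)
    width = len(w)
    out = np.full(len(series), np.nan)
    for t in range(width - 1, len(series)):
        out[t] = np.dot(w, series[t - width + 1:t + 1][::-1])
    return out

# Synthetic, strictly positive price path.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
log_prices = np.log(prices)
log_returns = np.diff(log_prices)            # logarithmic transformation
fd = frac_diff(log_prices, d=0.4)            # d = 0.4 is an illustrative choice
print("log returns, first 3:", np.round(log_returns[:3], 4))
print("fractionally differenced, last 3:", np.round(fd[-3:], 4))
```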

    Sustainable economy inspired large-scale feed-forward portfolio construction

    Get PDF
    To understand large-scale portfolio construction tasks, we analyse sustainable economy problems by splitting large tasks into smaller ones and offer an evolutionary feed-forward system-based approach. The theoretical justification for our solution rests on multivariate statistical analysis of multidimensional investment tasks, particularly on the relations between data size, algorithm complexity and portfolio efficacy. To reduce the dimensionality/sample size problem, a larger task is broken down into smaller parts by means of item similarity (clustering). Similar problems are given to smaller groups to solve; the groups, however, vary in many respects. Pseudo-randomly formed groups compose a large number of modules of feed-forward decision-making systems. The evolution mechanism forms collections of the best modules for each short time period. Final solutions are carried forward to the global scale, where a collection of the best modules is chosen using a multiclass cost-sensitive perceptron. The collected modules are combined into a final solution with an equally weighted approach (1/N portfolio). The efficacy of the novel decision-making approach was demonstrated on a financial portfolio optimization problem, which provided an adequate amount of real-world data. For portfolio construction, we used 11,730 simulated trading robot performances. The dataset covers the period from 2003 to 2012, when environmental changes were frequent and largely unpredictable. Walk-forward and out-of-sample experiments show that an approach based on sustainable economy principles outperforms benchmark methods and that a shorter agent training history produces better results in periods of a changing environment.
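    As a rough illustration of the decompose/select/combine idea only (not the paper's evolutionary mechanism or cost-sensitive perceptron), the Python sketch below clusters agents by the similarity of their recent returns, keeps the best-scoring agent of each cluster on a short training window, and combines the survivors with equal 1/N weights. All function names, the Sharpe-like score, and the parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_over_n_portfolio(agent_returns, n_clusters=5, train_len=60):
    """agent_returns: (T, N) per-period returns of N trading agents.
    Group similar agents, keep the best module per group, combine with 1/N weights."""
    train = agent_returns[-train_len:]                        # short training history
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(train.T)
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        # Pick the cluster's best module by an in-sample Sharpe-like score.
        score = train[:, members].mean(axis=0) / (train[:, members].std(axis=0) + 1e-9)
        selected.append(members[np.argmax(score)])
    weights = np.zeros(agent_returns.shape[1])
    weights[selected] = 1.0 / len(selected)                   # equally weighted (1/N) combination
    return weights

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=(500, 200))           # synthetic stand-in for agent performances
print("selected agents:", np.flatnonzero(one_over_n_portfolio(returns)))
```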

    Dynamically Controlled Length of Training Data for Sustainable Portfolio Selection

    No full text
    In a constantly changing market environment, it is a challenge to construct a sustainable portfolio. One cannot use training data that is too long or too short to select the right portfolio of investments. When analyzing ten types of recent (up to April 2018) extremely high-dimensional time series from automated trading domains, it was discovered that there is no a priori 'optimal' length of training history that would fit all investment tasks. The optimal history length depends on the specifics of the data and varies with time. This statement was also confirmed by the analysis of dozens of multi-dimensional synthetic time series generated by excitable-medium models frequently considered in studies of chaos. An algorithm for determining the optimal length of training history to produce a sustainable portfolio is proposed. Monitoring the size of the learning data can also be useful in data mining tasks for the analysis of sustainability in other research disciplines.
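    The paper's algorithm is not spelled out in the abstract, but the underlying idea, choosing the training-history length from the data rather than fixing it a priori, can be sketched as a simple walk-forward selection. The candidate lengths, the unconstrained mean-variance rule, and the validation score below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def choose_history_length(returns, candidates=(30, 60, 125, 250), val_len=20):
    """Pick the training-history length whose in-sample mean-variance weights
    perform best on the most recent validation window."""
    train_part, val_part = returns[:-val_len], returns[-val_len:]
    best_len, best_score = None, -np.inf
    for length in candidates:
        window = train_part[-length:]
        mu = window.mean(axis=0)
        cov = np.cov(window.T) + 1e-6 * np.eye(window.shape[1])   # ridge for stability
        w = np.linalg.solve(cov, mu)               # unconstrained mean-variance direction
        w = w / np.sum(np.abs(w))                  # normalise gross exposure to 1
        val_pnl = val_part @ w
        score = val_pnl.mean() / (val_pnl.std() + 1e-9)
        if score > best_score:
            best_len, best_score = length, score
    return best_len

rng = np.random.default_rng(3)
rets = rng.normal(0.0003, 0.01, size=(1000, 8))    # synthetic multi-asset return history
print("selected training length:", choose_history_length(rets))
```

    Re-running this selection at each rebalancing date lets the effective training window shrink after regime changes and grow again in stable periods.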

    Immunology-based sustainable portfolio management

    No full text
    Immunological principles can be used to build a sustainable investment portfolio. The theory of immunology states that information about recognized pathogens is stored in the memory of the immune system, and information about previous illnesses can be helpful when the pathogen re-enters the body. Real-time analysis of 11 automated financial trading datasets confirmed this phenomenon in financial time series. Therefore, to increase the sustainability of the portfolio, we propose training the portfolio on the most similar segments of historical data. The segment size and offset may vary depending on the dataset and time.
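    A minimal Python sketch of the "immune memory" idea: scan the historical series for segments most similar to the current one and use them as training data. The z-score normalisation, Euclidean distance, and segment/offset values are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def most_similar_segments(history, current, n_segments=5, offset=1):
    """Return start indices of the historical windows most similar to the current
    segment (smallest Euclidean distance after z-scoring each window)."""
    seg_len = len(current)
    cur = (current - current.mean()) / (current.std() + 1e-9)
    distances = []
    for start in range(0, len(history) - seg_len, offset):
        win = history[start:start + seg_len]
        win = (win - win.mean()) / (win.std() + 1e-9)
        distances.append((np.linalg.norm(win - cur), start))
    distances.sort()
    return [start for _, start in distances[:n_segments]]

rng = np.random.default_rng(4)
hist = rng.normal(0, 0.01, 5000)          # synthetic return history
recent = hist[-100:]                      # the segment currently facing the portfolio
starts = most_similar_segments(hist[:-100], recent)
print("train on historical segments starting at:", starts)
```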

    How (in)efficient is after-hours trading?

    Full text link