
    Using Particle Swarm Optimization for Market Timing Strategies

    Market timing is the issue of deciding when to buy or sell a given asset on the market. As one of the core issues of algorithmic trading systems, designers of such systems have turned to computational intelligence methods to aid them in this task. In this thesis, we explore the use of Particle Swarm Optimization (PSO) within the domain of market timing. PSO is a search metaheuristic that was first introduced in 1995 [28] and is based on the behavior of birds in flight. Since its inception, the PSO metaheuristic has been extended to adapt it to a variety of problems, including single objective optimization, multiobjective optimization, niching and dynamic optimization problems. Although popular in other domains, PSO has seen limited application to the issue of market timing. The current incumbent algorithm within the market timing domain is Genetic Algorithms (GA), based on the volume of publications as noted in [40] and [84]. In this thesis, we use PSO to compose market timing strategies using technical analysis indicators. Our first contribution is to use a formulation that considers both the selection of components and the tuning of their parameters simultaneously, approaching market timing as a single objective optimization problem. Current approaches only consider one of those aspects at a time: either selecting from a set of components with fixed values for their parameters, or tuning the parameters of a preset selection of components. Our second contribution is a novel training and testing methodology that explicitly exposes candidate market timing strategies to numerous price trends, to reduce the likelihood of overfitting to a particular trend and to give a better approximation of performance under various market conditions. Our final contribution is to consider market timing as a multiobjective optimization problem, optimizing five financial metrics and comparing the performance of our PSO variants against a well-established multiobjective optimization algorithm. To the best of our knowledge, these algorithms address unexplored research areas in the context of PSO algorithms and are therefore original contributions. The computational results over a range of datasets show that the proposed PSO algorithms are competitive with GAs using the same formulation. Additionally, the multiobjective variant of our PSO algorithm achieves statistically significant improvements over NSGA-II.
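
    For illustration only, the following is a minimal sketch of the canonical global-best PSO update described above (inertia plus cognitive and social terms), not the thesis's exact variant; the encoding of a strategy as a real-valued parameter vector and the backtesting fitness function are assumptions.

        import random

        def pso_minimise(fitness, dim, n_particles=30, iters=100,
                         w=0.72, c1=1.49, c2=1.49, lo=0.0, hi=1.0):
            """Canonical global-best PSO minimising `fitness` over [lo, hi]^dim.
            A strategy is assumed to be encoded as a real-valued vector, e.g.
            technical-indicator parameters; `fitness` backtests it (hypothetical)."""
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]                    # personal best positions
            pbest_f = [fitness(p) for p in pos]            # personal best fitnesses
            g = min(range(n_particles), key=lambda i: pbest_f[i])
            gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]                          # inertia
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                                     + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                        pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                    f = fitness(pos[i])
                    if f < pbest_f[i]:
                        pbest[i], pbest_f[i] = pos[i][:], f
                        if f < gbest_f:
                            gbest, gbest_f = pos[i][:], f
            return gbest, gbest_f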

    Forecasting Cryptocurrency Value by Sentiment Analysis: An HPC-Oriented Survey of the State-of-the-Art in the Cloud Era

    This chapter surveys the state-of-the-art in forecasting cryptocurrency value by Sentiment Analysis. Key compounding perspectives of current challenges are addressed, including blockchains; data collection, annotation, and filtering; and sentiment analysis metrics using data streams and cloud platforms. We explore the domain from this problem-solving metric perspective, i.e., as technical analysis, forecasting, and estimation using a standardized ledger-based technology. Envisioned tools based on forecasting are then suggested, i.e., ranking Initial Coin Offering (ICO) values for incoming cryptocurrencies, trading strategies employing the new Sentiment Analysis metrics, and risk aversion in cryptocurrency trading through multi-objective portfolio selection. Our perspective is rationalized by the elastic demand for computational resources on cloud infrastructures.

    Robust and Constrained Portfolio Optimization using Multiobjective Evolutionary Algorithms

    Optimization plays an important role in many areas of science, management, economics and engineering. Many techniques in mathematics and operations research are available to solve such problems. However, these techniques often fail to provide fast and accurate solutions, particularly when the optimization problem involves many variables and constraints. Investment portfolio optimization is one such important but complex problem in computational finance which needs effective and efficient solutions. In this problem, each available asset is judiciously selected in such a way that the total profit is maximized while the total risk is simultaneously minimized. The literature survey reveals that, due to the non-availability of suitable multi-objective optimization tools, this problem has mostly been solved by viewing it as a single objective optimization problem.
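
    As a sketch of the two conflicting objectives referred to above, the following shows how a candidate portfolio's expected return and risk could be evaluated under the classic Markowitz mean-variance model; the asset statistics are illustrative values, not taken from the work itself.

        import numpy as np

        def portfolio_objectives(weights, mean_returns, cov_matrix):
            """Return (expected return, risk) of a portfolio under the Markowitz
            mean-variance model; risk is the portfolio standard deviation."""
            w = np.asarray(weights)
            expected_return = float(w @ mean_returns)        # objective 1: maximise
            risk = float(np.sqrt(w @ cov_matrix @ w))        # objective 2: minimise
            return expected_return, risk

        # Illustrative figures for three assets; weights sum to 1 (fully invested).
        mu = np.array([0.08, 0.12, 0.05])
        cov = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.20, 0.03],
                        [0.01, 0.03, 0.05]])
        print(portfolio_objectives([0.5, 0.3, 0.2], mu, cov))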

    Multiobjective genetic programming for financial portfolio management in dynamic environments

    Multiobjective (MO) optimisation is a useful technique for evolving portfolio optimisation solutions that span a range from high-return/high-risk to low-return/low-risk. The resulting Pareto front approximates the risk/reward Efficient Frontier [Mar52] and simplifies the choice of investment model for a given client's attitude to risk. However, the financial market is continuously changing, and it is essential to ensure that MO solutions capture true relationships between financial factors rather than merely overfitting the training data. Research on evolutionary algorithms in dynamic environments has been directed towards adapting the algorithm to improve its suitability for retraining whenever a change is detected. Little research has focused on how to assess and quantify the success of multiobjective solutions in unseen environments. The multiobjective nature of the problem adds a unique criterion for judging the robustness of solutions: in addition to examining whether solutions remain optimal in the new environment, we need to ensure that the solutions' relative positions previously identified on the Pareto front are not altered. This thesis investigates the performance of Multiobjective Genetic Programming (MOGP) in the dynamic real-world problem of portfolio optimisation. The thesis provides new definitions and statistical metrics based on phenotypic cluster analysis to quantify the robustness of both the solutions and the Pareto front. Focusing on the critical period between an environment change and when retraining occurs, four techniques to improve the robustness of solutions are examined: the use of a validation data set; diversity preservation; a novel variation on mating restriction; and a combination of both diversity enhancement and mating restriction. In addition, a preliminary investigation of using the robustness metrics to quantify the severity of change for optimum tracking in a dynamic portfolio optimisation problem is carried out. Results show that the techniques used offer statistically significant improvements in the solutions' robustness, although not on all robustness criteria simultaneously. Combining mating restriction with diversity enhancement provided the best robustness results while also greatly enhancing the quality of solutions.
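
    To make the Pareto-front terminology concrete, here is a minimal generic sketch of Pareto dominance and non-dominated filtering for (return, risk) objectives; it is an illustration of the concept only, not the MOGP machinery or robustness metrics of the thesis.

        def dominates(a, b):
            """True if a Pareto-dominates b, where each solution is a
            (return, risk) pair: return is maximised, risk is minimised."""
            return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

        def pareto_front(solutions):
            """Keep only the non-dominated solutions."""
            return [s for s in solutions
                    if not any(dominates(t, s) for t in solutions if t is not s)]

        pts = [(0.10, 0.20), (0.08, 0.12), (0.12, 0.30), (0.07, 0.15)]
        print(pareto_front(pts))   # (0.07, 0.15) is dominated by (0.08, 0.12)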

    Evolutionary multi-objective optimization in investment portfolio management

    Ph.D. (Doctor of Philosophy)

    Robust optimization of algorithmic trading systems

    GAs (Genetic Algorithms) and GP (Genetic Programming) are investigated for finding robust Technical Trading Strategies (TTSs). TTSs evolved with standard GA/GP techniques tend to suffer from overfitting, as the solutions evolved are very fragile to small disturbances in the data. The main objective of this thesis is to explore optimization techniques for GA/GP which produce robust TTSs that have similar performance during both optimization and evaluation, and are also able to operate in all market conditions and withstand severe market shocks. In this thesis, two novel techniques that increase the robustness of TTSs and reduce overfitting are described and compared to standard GA/GP optimization techniques and the traditional investment strategy Buy & Hold. The first technique is a robust multi-market optimization methodology using a GA. Robustness is incorporated via the environmental variables of the problem, i.e. variability in the dataset is introduced by conducting the search for the optimum parameters over several market indices, in the hope of exposing the GA to differing market conditions. This technique shows an increase in the robustness of the solutions produced, with results also showing an improvement in performance when compared to those obtained by conducting the optimization over a single market. The second technique is a random sampling method we use to discover robust TTSs using GP. Variability is introduced into the dataset by randomly sampling segments and evaluating each individual on different random samples. This technique has shown promising results, substantially beating Buy & Hold. Overall, this thesis concludes that Evolutionary Computation techniques such as GA and GP, combined with robust optimization methods, are very suitable for developing trading systems, and that the systems developed using these techniques can provide significant economic profits in all market conditions.
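
    The random sampling idea could look roughly like the following sketch, where each fitness evaluation backtests a strategy on a freshly drawn contiguous window of the price series; the names and parameters (`evaluate`, `window`) are hypothetical, and the sampling details are assumptions rather than the thesis's exact method.

        import random

        def sampled_fitness(individual, prices, evaluate, window=250):
            """Backtest `individual` on one randomly drawn contiguous window of
            the price series (len(prices) must exceed `window`), so successive
            evaluations of the same strategy see different market conditions."""
            start = random.randrange(len(prices) - window)
            segment = prices[start:start + window]
            return evaluate(individual, segment)   # e.g. net return of the strategy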

    Enhancing portfolio management using artificial intelligence: literature review

    Building an investment portfolio is a problem that numerous researchers have addressed for many years. The key goal has always been to balance risk and reward by optimally allocating assets such as stocks, bonds, and cash. In general, the portfolio management process is based on three steps: planning, execution, and feedback, each of which has its own objectives and methods. Starting from Markowitz's mean-variance portfolio theory, different frameworks have been widely accepted, which considerably renewed how asset allocation is solved. Recent advances in artificial intelligence provide methodological and technological capabilities to solve highly complex problems, and investment portfolio management is no exception. For this reason, this paper reviews the current state-of-the-art approaches by answering the core question of how artificial intelligence is transforming the steps of portfolio management. Moreover, as the use of artificial intelligence in finance is challenged by transparency, fairness and explainability requirements, a case study of post-hoc explanations for asset allocation is presented. Finally, we discuss recent regulatory developments in the European investment business and highlight specific aspects of this business where explainable artificial intelligence could advance the transparency of the investment process.

    A Genetic Algorithm for Determining Optimal Investment Strategies

    Investors, including banks, insurance companies and private investors, are in constant need of new investment strategies and portfolio selection methods. In this work we study previously developed models, forecasting methods and portfolio management approaches. This information is used to create a decision-making system, or investment strategy, for forming stock investment portfolios. The decision-making system is optimized using a genetic algorithm to find profitable, low-risk investment strategies. The constructed system is tested by simulating its performance on a large set of real stock market and economic data. The tests reveal that the constructed system requires a large sample of stock market and economic data before it finds well-performing investment strategies. The parameters of the decision-making system converge surprisingly fast, and the available computing capacity turned out to be sufficient even when a large amount of data is used in the system calibration. The model appears to find the logic that governs stock market behavior. With a sufficiently large amount of calibration data, the decision-making model finds strategies that perform well with regard to both profit and portfolio diversification. The recommended strategies also worked outside the sample data used for system parameter identification (calibration). This work was done at Unisolver Ltd.
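
    As a rough illustration of calibrating a real-valued parameter vector with a genetic algorithm, the following minimal GA is a generic sketch, not the Unisolver system; the operators (tournament selection, uniform crossover, Gaussian mutation) and the fitness function are assumptions.

        import random

        def calibrate_ga(fitness, dim, pop_size=50, gens=200,
                         mut_rate=0.1, lo=0.0, hi=1.0):
            """Minimal real-coded GA maximising `fitness` (e.g. a risk-adjusted
            return from a backtest): tournament selection, uniform crossover,
            Gaussian mutation. Returns (best fitness, best parameter vector)."""
            pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                scored = [(fitness(ind), ind) for ind in pop]
                def tourney():
                    return max(random.sample(scored, 3))[1]   # best of 3 random picks
                nxt = []
                while len(nxt) < pop_size:
                    p1, p2 = tourney(), tourney()
                    child = [random.choice(genes) for genes in zip(p1, p2)]  # uniform crossover
                    child = [min(hi, max(lo, g + random.gauss(0, 0.05)))
                             if random.random() < mut_rate else g for g in child]
                    nxt.append(child)
                pop = nxt
            return max((fitness(ind), ind) for ind in pop)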

    Time series data mining: preprocessing, analysis, segmentation and prediction. Applications

    Currently, the amount of data produced by any information system is increasing exponentially. This motivates the development of automatic techniques to process and mine these data correctly. Specifically, in this Thesis, we tackle these problems for time series data, that is, temporal data collected chronologically. This kind of data can be found in many fields of science, such as palaeoclimatology, hydrology, financial problems, etc. Time series data mining (TSDM) comprises several tasks with different objectives, such as classification, segmentation, clustering, prediction and analysis. However, in this Thesis, we focus on time series preprocessing, segmentation and prediction.

    Time series preprocessing is a prerequisite for other posterior tasks: for example, the reconstruction of missing values in incomplete parts of time series can be essential for clustering them. In this Thesis, we tackle the problem of massive missing data reconstruction in significant wave height (SWH) time series from the Gulf of Alaska. It is very common for buoys to stop working for different periods, which is usually related to malfunctions or bad weather conditions. The relation between the time series of each buoy is analysed and exploited to reconstruct the missing time series. In this context, evolutionary artificial neural networks (EANNs) with product units (PUs) are trained, showing that the resulting models are simple and able to recover these values with high precision.

    In the case of time series segmentation, the procedure consists of dividing the time series into different subsequences to achieve different purposes. This segmentation can be done in order to find useful patterns in the time series. In this Thesis, we have developed novel bioinspired algorithms in this context. For instance, for palaeoclimate data, an initial genetic algorithm (GA) was proposed to discover early warning signals of tipping points (TPs), whose detection was supported by expert opinion. However, given that the expert had to individually evaluate every solution given by the algorithm, the evaluation of the results was very tedious. This led to an improvement in the body of the GA so that the procedure is evaluated automatically. For significant wave height time series, the objective was the detection of groups which contain extreme waves, i.e. those which are relatively large with respect to other waves close in time; the main motivation is to design alert systems. This was done using a hybrid algorithm (HA), where a local search (LS) process was included by using a likelihood-based segmentation, assuming that the points follow a beta distribution. Finally, the analysis of similarities in different periods of European stock markets was also tackled, with the aim of evaluating the influence of different markets in Europe. When segmenting time series with the aim of reducing the number of points, different techniques have been proposed; however, it remains an open challenge given the difficulty of operating with large amounts of data in different applications. In this work, we propose a novel statistically-driven coral reefs optimisation algorithm (SCRO), which automatically adapts its parameters during the evolution, taking into account the statistical distribution of the population fitness. This algorithm improves the state-of-the-art with respect to accuracy and robustness. This problem has also been tackled using an improvement of the barebones particle swarm optimisation (BBPSO) algorithm, which includes a dynamical update of the cognitive and social components during the evolution, combined with mathematical simplifications for obtaining the fitness of the solutions, which significantly reduces the computational cost of previously proposed coral reef methods. Moreover, the optimisation of both objectives (clustering quality and approximation quality), which are in conflict, is an interesting open challenge that is also tackled in this Thesis: a multiobjective evolutionary algorithm (MOEA) for time series segmentation is developed, improving both the clustering quality of the solutions and their approximation quality.

    Prediction in time series is the estimation of future values by observing and studying the previous ones. In this context, we solve this task by applying prediction over high-order representations of the elements of the time series, i.e. the segments obtained by time series segmentation. This is applied to two challenging problems: the prediction of extreme wave height and fog prediction. On the one hand, the number of extreme values in SWH time series is small with respect to the number of standard values, so the prediction of these values cannot be done using standard algorithms without taking into account the imbalanced nature of the dataset. To this end, an algorithm that automatically finds the set of segments and then applies EANNs is developed, showing the high ability of the algorithm to detect and predict these special events. On the other hand, fog prediction is affected by the same problem, that is, the number of fog events is much lower than that of non-fog events, requiring special treatment too. A preprocessing of data coming from sensors situated in different parts of Valladolid airport is used to build a simple artificial neural network (ANN) model, which is physically corroborated and discussed. The last challenge, which opens new horizons, is the estimation of the statistical distribution of time series to guide different methodologies. For this, the estimation of a mixed distribution for SWH time series is used to fix the threshold of peaks-over-threshold (POT) approaches. Also, the determination of the best-fitting distribution for the time series is used to discretise it and make a prediction treating the problem as ordinal classification. The work developed in this Thesis is supported by twelve papers in international journals, seven papers in international conferences, and four papers in national conferences.
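
    To illustrate what segmenting a time series while trading the number of segments against approximation quality means, here is a classic bottom-up baseline (not the SCRO, BBPSO or MOEA algorithms of the thesis); the mean-based squared-error measure is an assumption chosen for simplicity.

        def segment_error(ts, i, j):
            """Squared error of approximating ts[i:j] by its mean value."""
            seg = ts[i:j]
            mean = sum(seg) / len(seg)
            return sum((x - mean) ** 2 for x in seg)

        def bottom_up_segmentation(ts, k):
            """Start from unit segments and repeatedly merge the adjacent pair
            whose merge costs least, until only k segments remain.
            Returns the segments as (start, end) index pairs."""
            segs = [(i, i + 1) for i in range(len(ts))]
            while len(segs) > k:
                costs = [segment_error(ts, segs[i][0], segs[i + 1][1])
                         for i in range(len(segs) - 1)]
                i = costs.index(min(costs))
                segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
            return segs

        print(bottom_up_segmentation([1, 1, 1, 5, 5, 5, 9, 9], 3))  # [(0, 3), (3, 6), (6, 8)]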