659 research outputs found

    Trading Strategies: Earning More in Investment

    Gold and bitcoin are familiar assets, but with limited cash and time, and given only the past stream of daily gold and bitcoin prices, developing a model to determine the best trading strategy for maximising return is a new kind of problem. Our team analysed the data provided and built a unified system of models, which we name the CTP Model, to predict prices and to evaluate the risk and return of our investment decisions. The model determines and describes which transaction the trader should make each day and the maximum return attainable under different risk levels.
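    The abstract does not disclose the internals of the CTP Model, but the quantity it optimizes has a simple baseline: with a single asset, no fees, and unlimited daily transactions, the maximum attainable return is the sum of all positive day-over-day price moves. A minimal sketch (the price series is hypothetical):

```python
def max_return(prices):
    """Upper bound on return when the trader may hold or be flat each day
    (one asset, no fees): capture every positive day-over-day move."""
    return sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))

gold = [100.0, 102.0, 101.0, 105.0]
print(max_return(gold))  # 6.0  (captures +2 and +4, skips the -1 day)
```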

    Early portfolio pruning: a scalable approach to hybrid portfolio selection

    Driving the decisions of stock market investors is among the most challenging financial research problems. Markowitz’s approach to portfolio selection models stock profitability and risk level through a mean–variance model, which involves estimating a very large number of parameters. In addition to requiring considerable computational effort, this raises serious concerns about the reliability of the model in real-world scenarios. This paper presents a hybrid approach that combines itemset extraction with portfolio selection. We propose to adapt Markowitz’s model logic to deal with sets of candidate portfolios rather than with single stocks. We overcome some of the known issues of the Markowitz model as follows: (i) Complexity: we reduce the model complexity, in terms of parameter estimation, by studying the interactions among stocks within a shortlist of candidate stock portfolios previously selected by an itemset mining algorithm. (ii) Portfolio-level constraints: we not only perform stock-level selection, but also support the enforcement of arbitrary constraints at the portfolio level, including the properties of diversification and the fundamental indicators. (iii) Usability: we simplify the decision-maker’s work by proposing a decision support system that enables flexible use of domain knowledge and human-in-the-loop feedback. The experimental results, achieved on the US stock market, confirm the proposed approach’s flexibility, effectiveness, and scalability.
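    The mean–variance logic the paper adapts can be sketched in a few lines: score each shortlisted candidate portfolio by expected return minus a risk-aversion penalty on variance, and keep the best. This is an illustrative reduction, not the paper's itemset-mining pipeline; all numbers below are hypothetical:

```python
def portfolio_stats(weights, mu, cov):
    # Expected return w'mu and variance w'Σw of one candidate portfolio.
    mean = sum(w * m for w, m in zip(weights, mu))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return mean, var

def best_candidate(candidates, mu, cov, risk_aversion=1.0):
    # Score each shortlisted portfolio by mean - λ·variance, keep the best.
    def score(w):
        mean, var = portfolio_stats(w, mu, cov)
        return mean - risk_aversion * var
    return max(candidates, key=score)

# Two hypothetical assets; the diversified candidate wins the trade-off.
mu = [0.10, 0.05]
cov = [[0.04, 0.0], [0.0, 0.01]]
print(best_candidate([(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)], mu, cov))
# (0.5, 0.5)
```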

    A Genetic Algorithm for Determining Optimal Investment Strategies (Geneettinen Algoritmi Optimaalisten Investointistrategioiden Määrittämiseen)

    Investors, including banks, insurance companies and private investors, constantly need new investment strategies and portfolio selection methods. In this work we study previously developed models, forecasting methods and portfolio management approaches. This information is used to create a decision-making system, or investment strategy, for forming stock investment portfolios. The decision-making system is optimized using a genetic algorithm to find profitable, low-risk investment strategies. The constructed system is tested by simulating its performance with a large set of real stock market and economic data. The tests reveal that the constructed system requires a large sample of stock market and economic data before it finds well-performing investment strategies. The parameters of the decision-making system converge surprisingly fast, and the available computing capacity turned out to be sufficient even when a large amount of data is used in the system calibration. The model appears to find the logic that governs stock market behavior. With a sufficiently large amount of calibration data, the decision-making model finds strategies that perform well with regard to profit and portfolio diversification. The recommended strategies also worked outside the sample data that was used for system parameter identification (calibration). This work was done at Unisolver Ltd.
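    A minimal sketch of the genetic-algorithm calibration described above (tournament selection, blend crossover, Gaussian mutation); in practice the fitness function would be a backtest of the decision-making system, so here it is left as a user-supplied callable:

```python
import random

def evolve(fitness, n_genes, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm: tournament selection, blend crossover,
    Gaussian mutation. Returns the best parameter vector found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            # Blend crossover plus small Gaussian mutation per gene.
            nxt.append([(x + y) / 2 + rng.gauss(0, 0.1)
                        for x, y in zip(p, q)])
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in for a backtest: the optimum parameter is 0.5.
best = evolve(lambda g: -(g[0] - 0.5) ** 2, n_genes=1)
print(best)
```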

    Evolutionary multi-objective optimization in investment portfolio management

    Ph.D. thesis (Doctor of Philosophy)

    A Business Intelligence Expert System for Predicting Market Price in Stock Trading using Data Analysis: Deep Learning Model With Feature Selection Mechanism

    Because of the availability of data and reasonable processing capability, business intelligence methods are rapidly being used in finance, such as managing assets, trading using algorithms, credit financing, and blockchain-based financing. Machine learning (ML) algorithms use enormous amounts of data to automatically understand and enhance predictability and performance via knowledge and data without being explicitly programmed. Due to the stock data’s dynamic, high-noise, non-parametric, non-linear, and chaotic qualities, stock market prediction has been a challenge and has received much interest from scholars over the last decade. Some studies seek a method for accurately predicting stock prices; however, due to the high correlation between stock prices, stock market analysis is more complex. So, this paper proposes an improved stock price prediction (SPP) model using a novel optimal-parameter-tuned, cross-entropy-included bidirectional long short-term memory (OPCBLSTM) with efficient feature extraction and selection schemes. It starts with missing-value imputation and data standardization on the collected dataset. From the preprocessed dataset, the features are extracted using a modified rectified linear unit activation based residual network (MRResNet50). Then the optimal features are selected using the improved whale optimization algorithm (IWOA). Finally, the SPP is done using the OPCBLSTM. The experimental results show that the proposed method outperforms the traditional methods.
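    The preprocessing steps the pipeline starts with (missing-value imputation and standardization) can be sketched as follows; this is a generic illustration, not the paper's exact scheme:

```python
def impute_and_standardize(column):
    """Replace missing values (None) with the column mean of the observed
    entries, then z-score the filled column."""
    observed = [x for x in column if x is not None]
    mean = sum(observed) / len(observed)
    filled = [mean if x is None else x for x in column]
    var = sum((x - mean) ** 2 for x in filled) / len(filled)
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(x - mean) / std for x in filled]

print(impute_and_standardize([10.0, None, 14.0]))  # ≈ [-1.2247, 0.0, 1.2247]
```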

    Systematic Trading: Calibration Advances through Machine Learning

    Systematic trading in finance uses computer models to define trade goals, risk controls and rules that can execute trade orders in a methodical way. This thesis investigates how performance in systematic trading can be crucially enhanced by both i) persistently reducing the bid-offer spread quoted by the trader through optimized and realistically backtested strategies and ii) improving the out-of-sample robustness of the selected strategy through the injection of theory into the typically data-driven calibration processes. In doing so it brings to the foreground sound scientific reasons that, for the first time to my knowledge, technically underpin popular academic observations about the recent nature of the financial markets. The thesis conducts consecutive experiments across strategies within the three important building blocks of systematic trading: a) execution, b) quoting and c) risk-reward, allowing me to progressively generate more complex and accurate backtested scenarios, as recently demanded in the literature (Cahan et al. (2010)). The three experiments conducted are:

    1. Execution: an execution model based on support vector machines. The first experiment is deployed to improve the realism of the other two. It analyses a popular model of execution: the volume-weighted average price (VWAP). The VWAP algorithm aims to split the size of an order along the trading session according to the expected intraday volume profile, since activity in the markets typically shows convex seasonality, with more activity around the opening and closing auctions than during the rest of the day. In doing so, the main challenge is to provide the model with a reasonable expected profile. After showing in my data sample that two simple static approaches to the profile outperform the PCA-ARMA of Bialkowski et al. (2008) (a popular two-fold model composed of a dynamic component around an unsupervised learning structure), a further combination of both through an index based on supervised learning is proposed. The resulting Sample Sensitivity Index successfully allows the expected volume profile to be estimated more accurately by selecting, through the identification of patterns via support vector machines, those ranges of time where the model should be less sensitive to past data. Only once the intraday execution risk has been defined can the quoting policy of a mid-frequency (in general, up to a week) hedging strategy be accurately analysed.

    2. Quoting: a quoting model built upon particle swarm optimization. The second experiment analyses, for the first time to my knowledge, how to achieve the disruptive 50% bid-offer spread discount observed in Menkveld (2013) without increasing the risk profile of a trading agent. The experiment depends crucially on a series of variables, of which market impact and slippage are typically the most difficult to estimate. By adapting the market impact model of Almgren et al. (2005) to the VWAP developed in the previous experiment, and by estimating its slippage through its errors' distribution, a framework within which the bid-offer spread can be assessed is generated. First, a full-replication spread (the one set out following the strict definition of a product in order to hedge it completely) is calculated and fixed as a benchmark. Then, by allowing the agent to benefit from a lower market impact at the cost of assuming deviation risk (tracking error and tail risk), a non-full-replication spread is calibrated through particle swarm optimization (PSO), as in Diez et al. (2012), and compared with the benchmark. Finally, it is shown that the latter can reach a discount of 50% with respect to the benchmark if a certain number of trades is granted, which typically occurs on the most liquid securities. This result not only underpins Menkveld's observations but also points out that there is room for further reductions. When seeking additional performance, once the quoting policy has been defined, a further layer with a calibrated risk-reward policy shall be deployed.

    3. Risk-Reward: a calibration model defined within a Q-learning framework. The third experiment analyses how the calibration process of a risk-reward policy can be enhanced to achieve a more robust out-of-sample performance, a cornerstone in quantitative trading. It successfully responds to the literature that has recently focused on the detrimental role of overfitting (Bailey et al. (2013a)). The experiment was motivated by the assumption that techniques underpinned by financial theory should show better behaviour (a lower deviation between in-sample and out-of-sample performance) than classical, purely data-driven processes. As such, both approaches are compared within a framework of active trading upon a novel indicator. The indicator, called the Expectations' Shift, is rooted in the expectations of the markets' evolution embedded in the dynamics of the prices. The crucial challenge of the experiment is the injection of theory into the calibration process. This is achieved through the use of reinforcement learning (RL), an area of ML inspired by behaviourist psychology, concerned with how software agents take decisions in a specific environment incentivised by a policy of rewards. By analysing the Q-learning matrix that collects the set of states/actions learnt by the agent within the environment defined by each combination of parameters considered within the calibration universe, the rationale that an autonomous agent would have learnt in terms of risk management can be generated. Finally, by selecting the combination of parameters whose attached rationale is closest to that of the portfolio manager, a data-driven solution that converges to the theory-driven solution can be found, and this is shown to successfully outperform the classical approaches followed in finance out-of-sample.

    The thesis contributes to science by addressing which techniques could underpin recent academic findings about the nature of the trading industry for which a scientific explanation was not yet given:
    • A novel agent-based approach that allows for a robust out-of-sample performance by crucially providing the trader with a way to inject financial insights into the generally data-driven-only calibration processes. In this way it benefits from surpassing the generic model limitations present in the literature (Bailey et al. (2013b), Schorfheide and Wolpin (2012), Van Belle and Kerr (2012) or Weiss and Kulikowski (1991)) by finding a point where theory-driven patterns (the trader's priors tend to enhance out-of-sample robustness) merge with data-driven ones (those that allow latent information to be exploited).
    • The provision of a technique that, to the best of my knowledge, explains for the first time how to reduce the bid-offer spread quoted by a traditional trader without modifying her risk appetite: a reduction not previously addressed in the literature, despite the fact that increasing regulation against the assumption of risk by market makers (e.g. the Dodd–Frank Wall Street Reform and Consumer Protection Act) coincides with the aggressive discounts observed by Menkveld (2013). As a result, this thesis could further contribute to science by serving as a framework for future analyses in the context of systematic trading.
    • The completion of a mid-frequency trading experiment with high-frequency execution information. It is shown how the latter can have a significant effect on the former, not only through the erosion of its performance but, more subtly, by changing its entire strategic design (both optimal composition and parameterization). This tends to be highly disregarded by the financial literature.

    More importantly, the methodologies disclosed herein have been crucial to underpin the setup of a new unit in the industry, BBVA's Global Strategies & Data Science. This disruptive, global and cross-asset team gives an enhanced role to science by becoming primarily responsible for the risk management of the Bank's strategies in both electronic trading and electronic commerce. Other contributions include: the provision of a novel risk measure (flowVaR); the proposal of a novel trading indicator (the Expectations' Shift); and the definition of a novel index that improves the estimation of the intraday volume profile (the Sample Sensitivity Index).
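    The VWAP order-splitting logic of the first experiment can be sketched as follows: the parent order is allocated across intraday buckets in proportion to the expected volume profile. The profile below is a hypothetical U-shaped curve, not the thesis's estimated one:

```python
def vwap_schedule(order_size, volume_profile):
    """Split a parent order across intraday time buckets in proportion to
    the expected volume profile (the convex, U-shaped curve with heavy
    activity around the opening and closing auctions)."""
    total = sum(volume_profile)
    return [order_size * v / total for v in volume_profile]

# Hypothetical five-bucket U-shaped profile: heavy at the open and close.
profile = [30, 15, 10, 15, 30]
print(vwap_schedule(1000, profile))  # [300.0, 150.0, 100.0, 150.0, 300.0]
```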

    Can Deep Learning Techniques Improve the Risk Adjusted Returns from Enhanced Indexing Investment Strategies?

    Deep learning techniques have been widely applied in the field of stock market prediction, particularly with respect to the implementation of active trading strategies. However, the area of portfolio management, and passive portfolio management in particular, has been much less well served by research to date. This research project investigates the science underlying the implementation of portfolio management strategies in practice, focusing on enhanced indexing strategies. Enhanced indexing is a passive management approach which introduces an element of active management, with the aim of achieving a level of active return through small adjustments to the portfolio weights. The project then surveys current applications of deep learning techniques in the field of financial market prediction and in the specific area of portfolio management. A series of successively deeper neural network models were then developed and assessed in terms of their ability to accurately predict whether a sample of stocks would outperform or underperform the selected benchmark index. The predictions generated by these models were then used to guide the adjustment of portfolio weightings to implement and forward-test an enhanced indexing strategy on a hypothetical stock portfolio.
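    The weight-adjustment step of an enhanced indexing strategy can be sketched as follows: benchmark weights are tilted by a small step toward stocks the model predicts will outperform, then renormalised. The tilt size and the +1/-1 prediction encoding are illustrative assumptions, not the project's actual parameters:

```python
def enhanced_index_weights(benchmark_weights, predictions, tilt=0.02):
    """Tilt benchmark weights a small step toward predicted outperformers
    (+1) and away from predicted underperformers (-1), clip at zero, and
    renormalise so the weights still sum to one."""
    tilted = [max(w + tilt * p, 0.0)
              for w, p in zip(benchmark_weights, predictions)]
    total = sum(tilted)
    return [w / total for w in tilted]

# Two-stock benchmark; the first stock is predicted to outperform.
print(enhanced_index_weights([0.5, 0.5], [1, -1]))
```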

    A Comprehensive Review of Control Strategies and Optimization Methods for Individual and Community Microgrids

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. A community microgrid offers effective energy harvesting from distributed energy resources and efficient energy consumption by employing an energy management system (EMS). Collaborative microgrids therefore essentially require an EMS, underpinned by an effective control strategy, in order to provide an efficient system. An EMS can optimize the operation of microgrids from several points of view: optimal production planning, optimal demand-side management, fuel and emission constraints, and the revenue of trading spinning and non-spinning reserve capacity can all be managed effectively by an EMS. Consequently, the importance of optimization is explicit in microgrid applications. In this paper, the most common control strategies in the microgrid community, with their potential pros and cons, are analyzed. Moreover, a comprehensive review of single-objective and multi-objective optimization methods is performed, considering the practical and technical constraints, uncertainty, and intermittency of renewable energy sources. The Pareto-optimal solution, as the most popular multi-objective optimization approach, is investigated for the advanced optimization algorithms. Eventually, feature selection and neural-network-based clustering algorithms for analyzing the Pareto-optimal set are introduced. This work was supported by the Spanish Ministerio de Ciencia, Innovación y Universidades (MICINN)–Agencia Estatal de Investigación (AEI), and by the European Regional Development Funds (ERDF), a way of making Europe, under Grant PGC2018-098946-B-I00 funded by MCIN/AEI/10.13039/501100011033/. Peer reviewed. Postprint (published version).
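    The Pareto-optimal set discussed above can be computed, for a finite set of candidate schedules, by filtering out dominated solutions. A minimal sketch for two minimized objectives (the cost/emissions pairs are hypothetical):

```python
def pareto_front(solutions):
    """Keep the non-dominated points of a multi-objective minimisation,
    e.g. (operating cost, emissions) pairs for candidate EMS schedules.
    A point is dominated if some other point is at least as good in every
    objective and differs in at least one."""
    def dominated(a):
        return any(all(bo <= ao for bo, ao in zip(b, a)) and b != a
                   for b in solutions)
    return [s for s in solutions if not dominated(s)]

# Hypothetical (cost, emissions) pairs for four dispatch plans.
plans = [(100, 9), (90, 12), (120, 7), (110, 10)]
print(pareto_front(plans))  # [(100, 9), (90, 12), (120, 7)]
```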