Time-limited Metaheuristics for Cardinality-constrained Portfolio Optimisation
A financial portfolio contains assets that offer a return with a certain
level of risk. To maximise returns or minimise risk, the portfolio must be
optimised - the ideal combination of optimal quantities of assets must be
found. The number of possible combinations is vast. Furthermore, to make the
problem realistic, constraints can be imposed on the number of assets held in
the portfolio and the maximum proportion of the portfolio that can be allocated
to an asset. With these constraints the problem can no longer be solved by
quadratic programming, which means that the optimal solution cannot be
calculated exactly. A group of algorithms,
called metaheuristics, can find near-optimal solutions in a practical computing
time. These algorithms have been successfully used in constrained portfolio
optimisation. However, in past studies the computation time of metaheuristics
is not limited, which means that the results differ in both quality and
computation time, and cannot be easily compared. This study proposes a
different way of testing metaheuristics, limiting their computation time to a
certain duration, yielding results that differ only in quality. Given that in
some use cases the priority is the quality of the solution and in others the
speed, time limits of 1, 5 and 25 seconds were tested. Three metaheuristics -
simulated annealing, tabu search, and genetic algorithm - were evaluated on
five sets of historical market data with different numbers of assets. Although
the metaheuristics could not find a competitive solution in 1 second, simulated
annealing found a near-optimal solution in 5 seconds in all but one dataset.
The lowest quality solutions were obtained by the genetic algorithm.
Comment: 51 pages, 8 tables, 3 figures
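The time-limited protocol described above can be sketched with a wall-clock budget rather than an iteration count. The following is a minimal illustration, not the study's actual implementation: it assumes a mean-variance risk objective, equal weights over the selected assets, and a swap neighbourhood that preserves the cardinality constraint; all function names are hypothetical.

```python
import math
import random
import time

def portfolio_risk(weights, cov):
    # Portfolio variance w'Cw for a sparse portfolio {asset_index: weight}.
    return sum(weights[i] * weights[j] * cov[i][j]
               for i in weights for j in weights)

def random_portfolio(n_assets, k):
    # Hold exactly k of n assets, equally weighted (a simplification).
    chosen = random.sample(range(n_assets), k)
    return {i: 1.0 / k for i in chosen}

def swap_neighbour(weights, n_assets):
    # Swap one held asset for one not held, preserving cardinality k.
    new = dict(weights)
    removed = random.choice(list(new))
    w = new.pop(removed)
    outside = [i for i in range(n_assets) if i not in new and i != removed]
    new[random.choice(outside)] = w
    return new

def anneal(cov, k, time_limit, t0=1.0, cooling=0.999):
    # Simulated annealing under a wall-clock budget (seconds), not iterations.
    n = len(cov)
    current = random_portfolio(n, k)
    cur_risk = portfolio_risk(current, cov)
    best, best_risk = current, cur_risk
    t = t0
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        cand = swap_neighbour(current, n)
        cand_risk = portfolio_risk(cand, cov)
        delta = cand_risk - cur_risk
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, cur_risk = cand, cand_risk
            if cur_risk < best_risk:
                best, best_risk = current, cur_risk
        t = max(t * cooling, 1e-6)
    return best
```

A real implementation would also perturb the individual asset weights subject to the proportion limits; weights stay equal here only to keep the sketch short.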
Exploring the synergistic potential of quantum annealing and gate model computing for portfolio optimization
Portfolio optimization is one of the most studied problems for demonstrating
the near-term applications of quantum computing. However, large-scale problems
cannot be solved on today's quantum hardware. In this work, we build upon a
study to use the best of both quantum annealing and gate-based quantum
computing systems to enable solving large-scale optimization problems
efficiently on the available hardware. The existing work uses a method called
Large System Sampling Approximation (LSSA) that involves dividing the large
problem into several smaller problems and then combining the multiple solutions
to approximate the solution to the original problem. This paper introduces a
novel technique to modify the sampling step of LSSA. We divide the portfolio
optimization problem into sub-systems of smaller sizes by selecting a diverse
set of assets that act as representatives of the entire market and capture the
highest correlations among assets. We conduct tests on real-world stock data
from the Indian stock market on up to 64 assets. Our experimentation shows that
the hybrid approach performs on par with traditional classical optimization
methods with a good approximation ratio. We also demonstrate the effectiveness
of our approach on a range of portfolio optimization problems of different
sizes. We present the effects of different parameters on the proposed method
and compare its performance with the earlier work. Our findings suggest that
hybrid annealer-gate quantum computing can be a valuable tool for portfolio
managers seeking to optimize their investment portfolios in the near future.
Comment: 12 pages, 4 figures, 1 table
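The decomposition idea behind the modified sampling step (splitting a large portfolio problem into sub-systems built around representative assets chosen for their correlation with the market) can be illustrated in miniature. This is only a sketch of the general idea, not the paper's LSSA variant; the greedy correlation-score heuristic and all names are assumptions.

```python
def pick_representatives(corr, n_reps):
    # Greedy heuristic: score each asset by its total absolute correlation
    # with all other assets, and keep the n_reps highest-scoring ones as
    # representatives of the wider market.
    n = len(corr)
    scores = [(sum(abs(corr[i][j]) for j in range(n) if j != i), i)
              for i in range(n)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:n_reps])

def subsystems(assets, size):
    # Partition the asset list into sub-problems of at most `size` assets,
    # each small enough to solve on the available hardware; the sub-solutions
    # would then be recombined to approximate the full problem.
    return [assets[i:i + size] for i in range(0, len(assets), size)]
```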
Nature inspired computational intelligence for financial contagion modelling
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Financial contagion refers to a scenario in which small shocks, which initially affect only a few financial institutions or a particular region of the economy, spread to the rest of the financial sector and to other countries whose economies were previously healthy. This resembles the 'transmission' of a medical disease. Financial contagion happens at both the domestic and the international level. At the domestic level, the failure of a domestic bank or financial intermediary usually triggers transmission by defaulting on inter-bank liabilities, selling assets in a fire sale, and undermining confidence in similar banks. An example of this phenomenon is the failure of Lehman Brothers and the subsequent turmoil in the US financial markets. International financial contagion happens in both advanced and developing economies, and is the transmission of financial crises across financial markets. Within the current globalised financial system, with large volumes of cash flow and the cross-regional operations of large banks and hedge funds, financial contagion usually happens simultaneously among domestic institutions and across countries. There is no conclusive definition of financial contagion; most research papers study contagion by analyzing the change in the variance-covariance matrix during the period of market turmoil. King and Wadhwani (1990) first test the correlations between the US, UK and Japan during the US stock market crash of 1987. Boyer (1997) finds significant increases in correlation during financial crises, reinforcing a definition of financial contagion as a change in correlation during the crash period. Forbes and Rigobon (2002) give a definition of financial contagion. In their work, the term interdependence is used as the alternative to contagion. They claim that for the period they study there is no contagion, only interdependence.
Interdependence leads to common price movements during periods both of stability and turmoil. In the past two decades, many studies (e.g. Kaminsky et al., 1998; Kaminsky, 1999) have developed early warning systems focused on the origins of financial crises rather than on financial contagion. Other authors (e.g. Forbes and Rigobon, 2002; Caporale et al., 2005) have instead focused on studying contagion or interdependence. In this thesis, an overall mechanism is proposed that simulates the characteristics of a crisis propagating through contagion. Within that scope, a new co-evolutionary market model is developed, in which some of the technical traders change their behaviour during a crisis and transform into herd traders, making their decisions based on market sentiment rather than on underlying strategies or factors. The thesis focuses on the transformation of market interdependence into contagion and on the contagion effects. The author first builds a multi-national platform that allows different types of players to trade, implementing their own rules and considering information from the domestic market and a foreign market. Traders' strategies and the performance of the simulated domestic market are trained using historical prices on both markets, optimizing the artificial market's parameters through immune particle swarm optimization (I-PSO). The author also introduces a mechanism contributing to the transformation of technical into herd traders. A generalized autoregressive conditional heteroscedasticity copula (GARCH-copula) model is further applied to calculate the tail dependence between the affected market and the origin of the crisis; that parameter is used in the fitness function for selecting the best solutions within the evolving population of possible model parameters, and therefore in the optimization criteria for contagion simulation.
The overall model is also applied in predictive mode, where the author optimizes in the pre-crisis period using data from the domestic market and the crisis-origin foreign market, and then predicts the affected domestic market in the crisis period using data from the foreign market.
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Particle swarm optimization (PSO) is a heuristic global optimization method proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we provide advances with PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offer a survey of applications of PSO to the following nine fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
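For readers unfamiliar with the canonical algorithm surveyed here, a minimal global-best PSO (inertia weight plus cognitive and social terms, as in the standard formulation) might look like the following sketch; the parameter defaults are illustrative, not prescribed by the survey.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    # Minimise f over a box using global-best particle swarm optimization.
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive (own memory) + social (swarm memory).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(v * v for v in x), dim=2)` searches for the minimum of the sphere function near the origin.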
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond local optimality, a capability that traditional computation methods lack. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC.
A Survey on Particle Swarm Optimization for Association Rule Mining
Association rule mining (ARM) is one of the core techniques of data mining, used to discover potentially valuable association relationships from mixed datasets. In current research, various heuristic algorithms have been introduced into ARM to address the high computation time of traditional ARM. Although detailed reviews of heuristic algorithms for ARM are available, this paper differs from the existing reviews in providing a more comprehensive and multi-faceted survey of emerging research, which can serve as a reference to help researchers in the field understand the state-of-the-art PSO-based ARM algorithms. In this paper, we review the existing research results. Heuristic algorithms for ARM are divided into three main groups: biologically inspired, physically inspired, and other algorithms. Additionally, different types of ARM and their evaluation metrics are described, and the current status of improvements to PSO algorithms is discussed in stages, including swarm initialization, algorithm parameter optimization, optimal particle update, and velocity and position updates. Furthermore, we discuss the applications of PSO-based ARM algorithms and propose further research directions based on the existing problems.
A hybrid multi-start metaheuristic scheduler for astronomical observations
In this paper, we investigate Astronomical Observations Scheduling, a type of Multi-Objective Combinatorial Optimization Problem. We detail its specific challenges and requirements and propose the Hybrid Accumulative Planner (HAP), a hybrid multi-start metaheuristic scheduler able to adapt to the different variations and demands of the problem. To illustrate the capabilities of the proposal in a real-world scenario, HAP is tested on the Atmospheric Remote-sensing Infrared Exoplanet Large-survey (Ariel) mission of the European Space Agency (ESA), and compared with other studies on this subject, including an Evolutionary Algorithm (EA) approach. The results show that the proposal outperforms the other methods in the evaluation and achieves better scientific goals than its peers. The consistency of HAP in obtaining better results on the available datasets for Ariel, with various sizes and constraints, demonstrates its scalability and adaptability to different conditions of the problem.
Quantum annealing for vehicle routing and scheduling problems
Metaheuristic approaches to solving combinatorial optimization problems have many attractions.
They sidestep the issue of combinatorial explosion; they return good results; and they are often
conceptually simple and straightforward to implement. There are also shortcomings. Optimal
solutions are not guaranteed; choosing the metaheuristic which best fits a problem is a matter of
experimentation; and conceptual differences between metaheuristics make absolute comparisons
of performance difficult. There is also the difficulty of configuring the algorithm: identifying
precise values for the parameters which control the optimization process.
Quantum annealing is a metaheuristic that is the quantum counterpart of the well-known
classical Simulated Annealing algorithm for combinatorial optimization problems. This research
investigates the application of quantum annealing to the Vehicle Routing Problem, a difficult
problem of practical significance within industries such as logistics and workforce scheduling. The
work devises spin encoding schemes for routing and scheduling problem domains, enabling an
effective quantum annealing algorithm which locates new solutions to widely used benchmarks.
The performance of the metaheuristic is further improved by the development of an enhanced
tuning approach using fitness clouds as behaviour models. The algorithm is shown to be further
enhanced by taking advantage of multiprocessor environments, using threading techniques to
parallelize the optimization workload. The work also shows quantum annealing applied successfully
in an industrial setting to generate solutions to complex scheduling problems, results which
created extra savings over an incumbent optimization technique. Components of the intellectual
property developed in this latter effort went on to secure patent protection.
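The spin-encoding idea mentioned above (casting a routing or scheduling choice as binary spin variables whose interactions define an energy to be minimised) can be illustrated with a tiny QUBO solved by classical simulated annealing. This is a generic sketch of the energy formulation, not the thesis's encoding scheme; the matrix and names are assumptions.

```python
import math
import random

def qubo_energy(x, Q):
    # Energy x'Qx of a binary string x under the QUBO matrix Q.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal_qubo(Q, steps=2000, t0=2.0, cooling=0.999):
    # Classical simulated annealing over spin flips; a quantum annealer
    # would instead minimise this energy via quantum fluctuations.
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    cur = qubo_energy(x, Q)
    best, best_e = x[:], cur
    t = t0
    for _ in range(steps):
        i = random.randrange(n)
        x[i] ^= 1                        # propose flipping one spin
        new = qubo_energy(x, Q)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_e:
                best, best_e = x[:], cur
        else:
            x[i] ^= 1                    # reject: flip back
        t *= cooling
    return best, best_e
```

As a toy constraint, "choose exactly one of two alternatives" is the penalty (x0 + x1 - 1)^2; dropping the constant term gives diagonal entries of -1 and an interaction of +2, with minimum energy -1 when exactly one variable is set.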
Investigating evolutionary computation with smart mutation for three types of Economic Load Dispatch optimisation problem
The Economic Load Dispatch (ELD) problem is an optimisation task concerned with how electricity generating stations can meet their customers' demands while minimising under/over-generation and minimising the operational costs of running the generating units. In the conventional or Static Economic Load Dispatch (SELD), an optimal solution is sought in terms of how much power to produce from each of the individual generating units at the power station, while meeting (predicted) customers' load demands. With the inclusion of a more realistic dynamic view of demand over time and the associated constraints, the Dynamic Economic Load Dispatch (DELD) problem is an extension of the SELD, and aims at determining the optimal power generation schedule on a regular basis, revising the power system configuration (subject to constraints) at intervals during the day as demand patterns change.
Both the SELD and DELD have been investigated in the recent literature with modern heuristic optimisation approaches providing excellent results in comparison with classical techniques. However, these problems are defined under the assumption of a regulated electricity market, where utilities tend to share their generating resources so as to minimise the total cost of supplying the demanded load. Currently, the electricity distribution scene is progressing towards a restructured, liberalised and competitive market. In this market the utility companies are privatised, and naturally compete with each other to increase their profits, while they also engage in bidding transactions with their customers. This formulation is referred to as: Bid-Based Dynamic Economic Load Dispatch (BBDELD).
This thesis proposes a Smart Evolutionary Algorithm (SEA), which combines a standard evolutionary algorithm with a 'smart mutation' approach. The so-called 'smart' mutation operator focuses mutation on the genes contributing most to costs and penalty violations, while obeying operational constraints. We develop specialised versions of SEA for each of the SELD, DELD and BBDELD problems, and show that this approach is superior to previously published approaches in each case. The thesis also applies the approach to a new case study relevant to Nigerian electricity deregulation. Results on this case study indicate that our SEA is able to deal with larger-scale energy optimisation tasks.
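The 'smart mutation' idea (biasing mutation toward the genes that contribute most to cost) can be sketched generically. This is an illustration of the principle only, not the thesis's SEA; the separable per-gene cost model and all names are assumptions.

```python
import random

def smart_mutate(genes, cost_of_gene, step=0.1):
    # 'Smart' mutation: perturb the gene whose individual cost contribution
    # is currently the largest, instead of a uniformly random gene.
    worst = max(range(len(genes)), key=lambda i: cost_of_gene(i, genes[i]))
    out = genes[:]
    out[worst] += random.uniform(-step, step)
    return out

def evolve(genes, cost_of_gene, iters=3000):
    # Minimal (1+1) evolutionary loop around the smart mutation operator:
    # keep the candidate only if it lowers the total cost.
    def total(g):
        return sum(cost_of_gene(i, v) for i, v in enumerate(g))
    best, best_c = genes[:], total(genes)
    for _ in range(iters):
        cand = smart_mutate(best, cost_of_gene)
        c = total(cand)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c
```

In a dispatch setting, the per-gene cost would combine a unit's fuel cost with its penalty violations, so mutation effort concentrates on the most problematic generating units.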