
    Machine Learning-Based Elastic Cloud Resource Provisioning in the Solvency II Framework

    The Solvency II Directive (Directive 2009/138/EC) is a European Directive issued in November 2009 and effective from January 2016, enacted by the European Union to regulate the insurance and reinsurance sector through the discipline of risk management. Solvency II requires European insurance companies to conduct consistent evaluation and continuous monitoring of risks, a process which is computationally complex and extremely resource-intensive. To this end, companies are required to equip themselves with adequate IT infrastructures, which entails a significant outlay. In this paper we present the design and development of a Machine Learning-based approach that transparently deploys the most resource-intensive portion of the Solvency II-related computation on a cloud environment. Our proposal targets DISAR®, a Solvency II-oriented system initially designed to work on a grid of conventional computers. We show how our solution reduces the overall expenses associated with the computation without compromising the privacy of the companies' data (making it suitable for conventional public cloud environments), while meeting the strict temporal requirements imposed by the Directive. Additionally, the system is organized as a self-optimizing loop that uses information gathered from actual (useful) computations and thus requires a shorter training phase. We present an experimental study conducted on Amazon EC2 to assess the validity and the efficiency of our proposal.
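
    A minimal sketch of how such a self-optimizing provisioning loop can be organized, assuming a simple runtime model of the form k * workload / VMs; the class, method, and parameter names below are illustrative and are not taken from DISAR®.

```python
# Illustrative sketch (not the DISAR implementation): a self-optimizing
# provisioning loop that refits a runtime model from completed runs and
# picks the cheapest VM count that still meets the regulatory deadline.
import numpy as np

class ProvisioningLoop:
    def __init__(self, deadline_s, max_vms=64):
        self.deadline_s = deadline_s
        self.max_vms = max_vms
        self.history = []          # (workload_units, n_vms, runtime_s)

    def record(self, workload_units, n_vms, runtime_s):
        """Feed back the measured runtime of an actual (useful) computation."""
        self.history.append((workload_units, n_vms, runtime_s))

    def _predict_runtime(self, workload_units, n_vms):
        # Assume runtime ~ k * workload / n_vms; estimate k by least squares.
        if not self.history:
            return float("inf")
        w, m, t = np.array(self.history, dtype=float).T
        k = np.linalg.lstsq((w / m)[:, None], t, rcond=None)[0][0]
        return k * workload_units / n_vms

    def choose_vms(self, workload_units):
        """Smallest VM count whose predicted runtime meets the deadline."""
        for n in range(1, self.max_vms + 1):
            if self._predict_runtime(workload_units, n) <= self.deadline_s:
                return n
        return self.max_vms
```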

    Adaptive value-at-risk policy optimization: a deep reinforcement learning approach for minimizing the capital charge

    In 1995, the Basel Committee on Banking Supervision issued an amendment to the first Basel Accord, allowing financial institutions to develop internal risk models based on value-at-risk (VaR), as opposed to using the regulator's predefined model. From that point onwards, the scientific community has focused its efforts on improving the accuracy of VaR models to reduce the capital requirements stipulated by the regulatory framework. In contrast, some authors proposed that the key to disclosure optimization would lie not in improving the existing models, but in manipulating the estimated value. The most recent progress in this field employed dynamic programming (DP), based on Markov decision processes (MDPs), to create a daily reporting policy. However, the use of dynamic programming carries heavy costs for the solution: not only does the algorithm require an explicit transition probability matrix, but its high computational storage requirements and inability to operate on continuous MDPs also force the problem to be simplified. The purpose of this work is to introduce deep reinforcement learning as an alternative for solving problems characterized by a complex or continuous MDP. To this end, the DP-generated policy is benchmarked against one generated via proximal policy optimization. In conclusion, and despite the small number of learning iterations employed, the algorithm showcased strong convergence towards the optimal policy, allowing the methodology to be used on the unrestricted problem without resorting to simplifications such as action and state discretization.
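
    To make the contrast concrete, the following toy value-iteration sketch shows why the DP baseline needs an explicit transition probability matrix and a finite state space, the very limitation PPO is meant to lift; the states, actions, and transition tensor here are illustrative and are not the thesis's VaR-reporting MDP.

```python
# Toy value iteration on a small, fully specified MDP. The explicit
# transition tensor P[s, a, s'] and reward table R[s, a] are exactly the
# inputs that become impractical for a continuous VaR-reporting problem.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.99

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)          # deterministic optimal reporting policy
print("optimal action per state:", policy)
```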

    Design of small-scale hybrid energy systems taking into account generation and demand uncertainties

    The adoption of energy systems powered by renewable sources requires substantial economic investments. Hence, selecting appropriately sized system components becomes a critical step, one that is significantly influenced by their distinct characteristics. Furthermore, the availability of renewable energy varies over time, and estimating this availability introduces considerable uncertainty. In this paper, we present a technique for the optimal design of hybrid energy systems that accounts for the uncertainty associated with resource estimation. Our method is based on stochastic programming theory and employs a surrogate model, a feedforward neural network (FFNN), to estimate battery lifespan. The optimization analysis for system design was conducted using a genetic algorithm (GA) and the poplar optimization algorithm (POA). We assessed the effectiveness of the proposed technique through a hypothetical case study. The introduction of the FFNN-based surrogate model resulted in an approximation error of 9.6% for cost estimation and 20.6% for battery lifespan estimation. The probabilistic design indicates an energy system cost that is 25.7% higher than that obtained with a deterministic approach. Both the GA and the POA achieved solutions that likely represent the global optimum.
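
    A minimal sketch of such a surrogate-assisted sizing loop, assuming two design variables (PV area and battery capacity), a synthetic training set, and a toy cost model; none of these specifics, nor the GA settings, are taken from the paper.

```python
# Illustrative surrogate-assisted sizing loop: an FFNN surrogate predicts
# battery lifespan from the design variables, and a small genetic algorithm
# searches for the cheapest design. All numbers, bounds, and the cost model
# are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic training data: (pv_area_m2, battery_kwh) -> lifespan_years
X_train = rng.uniform([10, 5], [100, 50], size=(200, 2))
y_train = 5 + 0.05 * X_train[:, 1] - 0.01 * X_train[:, 0] + rng.normal(0, 0.2, 200)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

def cost(design):
    pv, batt = design.T
    lifespan = np.clip(surrogate.predict(design), 1.0, None)
    capex = 300 * pv + 400 * batt                 # purchase cost (assumed prices)
    return capex + 400 * batt * (20 / lifespan)   # replacements over a 20-year horizon

# Minimal GA: truncation selection, blend crossover, Gaussian mutation.
pop = rng.uniform([10, 5], [100, 50], size=(40, 2))
for _ in range(100):
    fitness = cost(pop)
    parents = pop[np.argsort(fitness)][:20]
    children = (parents[rng.integers(0, 20, 40)] + parents[rng.integers(0, 20, 40)]) / 2
    children += rng.normal(0, 1.0, children.shape)            # mutation
    pop = np.clip(children, [10, 5], [100, 50])

best = pop[np.argmin(cost(pop))]
print("best design (pv_area_m2, battery_kwh):", best)
```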

    Multi-segment multi-criteria approach for selection of trenchless construction methods

    The research work presented in this thesis has two broad objectives as well as five individual goals. The first objective is to determine the minimum cost and the corresponding goodness-of-fit by searching over different combinations of methods capable of addressing a problem that spans multiple segments. This approach can account for variations in unit price and in the design and inspection costs associated with multiple methods. The second objective is to calculate the minimum risk for the preferred solution set. The five individual goals are 1) reduction in total cost, 2) application of a Genetic Algorithm (GA) for construction method selection with a focus on trenchless technology, 3) application of a Fuzzy Inference System (FIS) for the likelihood of risk, 4) risk assessment in horizontal directional drilling (HDD) projects, and 5) carbon footprint calculation. Most construction projects involve multiple segments within a single project, yet no single model has yet been developed to aid the selection of appropriate method(s) based on multiple criteria. In this study, a multi-segment is conceptualized as a combination of individual or grouped mainlines, manholes, and laterals. The multiple criteria take into account technical viability, direct cost, social cost, carbon footprint, and risks in the pipelines. The three segments analyzed are 1) an 8-inch diameter, 280-foot long gravity sewer pipe; 2) a 21-inch diameter, 248-foot long gravity sewer pipe; and 3) a 12-inch diameter, 264-foot long gravity sewer pipe. It is found that the GA not only eliminates the shortcomings of competing mathematical approaches but also enables complex multi-criteria, multi-segment optimization scenarios to be examined quickly. Furthermore, the GA follows a uniform iterative procedure that is easy to encode and decode when running the algorithm. Any trenchless installation project is associated with some level of risk, and because trenchless technologies are installed underground, unassessed risks can have catastrophic consequences. Therefore, risk management plays a key role in the construction of utilities. The conventional risk assessment approach quantifies risk as the product of likelihood and severity and does not consider the interrelations among the risk input variables. In real-life installation projects, however, the input factors are interconnected, partially overlapping, and inherently fuzzy or vague. A fuzzy logic system overcomes this shortcoming and delivers its output through a process of fuzzification, rule-based fuzzy inference, and defuzzification. It is found in the study that the Mamdani FIS has the potential to address the fuzziness, interconnection, and overlap of the different input variables and to compute an overall risk output for a given scenario, which is beyond the scope of conventional risk assessment.
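
    A minimal Mamdani-style sketch of the fuzzification, inference, and defuzzification chain described above, with two illustrative inputs (likelihood and severity) on a 0-10 scale; the membership functions and rule base are assumptions, not the ones developed in the thesis.

```python
# Minimal Mamdani-style fuzzy risk sketch: two inputs (likelihood, severity)
# on a 0-10 scale, min for rule activation, max for aggregation, centroid
# defuzzification. Membership functions and rules are purely illustrative.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a shoulder."""
    left = 1.0 if b == a else (x - a) / (b - a)
    right = 1.0 if c == b else (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

universe = np.linspace(0, 10, 101)
sets = {"low": (0, 0, 5), "med": (2, 5, 8), "high": (5, 10, 10)}
risk_sets = {k: trimf(universe, *p) for k, p in sets.items()}   # output fuzzy sets

def assess_risk(likelihood, severity):
    # Fuzzify crisp inputs: degree of membership in low / medium / high
    like = {k: float(trimf(likelihood, *p)) for k, p in sets.items()}
    sev = {k: float(trimf(severity, *p)) for k, p in sets.items()}

    # Illustrative rule base; rule activation by min, rule combination by max
    act = {
        "high": min(like["high"], sev["high"]),
        "med": max(min(like["med"], sev["med"]),
                   min(like["high"], sev["low"]),
                   min(like["low"], sev["high"])),
        "low": min(like["low"], sev["low"]),
    }

    # Clip each output set by its activation, aggregate, defuzzify (centroid)
    aggregated = np.maximum.reduce([np.fmin(act[k], risk_sets[k]) for k in sets])
    return float(np.sum(aggregated * universe) / (np.sum(aggregated) + 1e-12))

print("overall risk:", assess_risk(likelihood=7.0, severity=6.5))
```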

    Communities, Knowledge Creation, and Information Diffusion

    In this paper, we examine how patterns of scientific collaboration contribute to knowledge creation. Recent studies have shown that scientists can benefit from their position within collaborative networks by receiving more and better information in a timely fashion, and by presiding over communication between collaborators. Here we focus on the tendency of scientists to cluster into tightly-knit communities, and discuss the implications of this tendency for scientific performance. We begin by reviewing a new method for finding communities, and then assess its benefits in terms of computation time and accuracy. While communities often serve as a taxonomic scheme for mapping knowledge domains, they also affect how successfully scientists engage in the creation of new knowledge. Drawing on the longstanding debate over the relative benefits of social cohesion and brokerage, we discuss the conditions that facilitate collaboration among scientists within or across communities. We show that successful scientific production occurs within communities when scientists have cohesive collaborations with others from the same knowledge domain, and across communities when scientists intermediate among otherwise disconnected collaborators from different knowledge domains. We also discuss the implications of communities for information diffusion, and show how traditional epidemiological approaches need to be refined to take knowledge heterogeneity into account and to preserve the system's ability to promote creative processes of novel recombinations of ideas.
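
    The community-finding method reviewed in the paper is not specified in this abstract; as a generic stand-in, detecting tightly-knit communities in a small co-authorship network can be illustrated with networkx's greedy modularity heuristic.

```python
# Generic illustration of detecting tightly-knit communities in a small
# co-authorship graph; networkx's greedy modularity heuristic is used as a
# stand-in for the (unspecified) method reviewed in the paper.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical collaboration network: nodes are scientists, edges are co-authorships.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),          # a cohesive trio
    ("D", "E"), ("D", "F"), ("E", "F"),          # a second cohesive trio
    ("C", "D"),                                   # a broker tie across communities
])

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```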

    Algorithmic optimization and its application in finance

    The goal of this thesis is to examine different issues in the area of finance and the application of financial and mathematical models in combination with optimization methods. Before a model can be applied to its intended scope, its results have to be adjusted to the observed data. To this end, a target function is defined and minimized using optimization algorithms, which yields the optimal model parameters. This procedure is called model calibration or model fitting and requires a model suitable for the application. In this thesis we apply financial and mathematical models such as the Heston model, the CIR model, and geometric Brownian motion, as well as inverse transform sampling and the chi-square test. Moreover, we test the following optimization methods: genetic algorithms, particle swarm optimization, Levenberg-Marquardt, and the simplex algorithm. The first part of this thesis deals with the problem of finding a more accurate forecasting approach for market liquidity by using a calibrated Heston model for the simulation of the bid/ask paths instead of the standard Brownian motion, and the inverse transformation method instead of a compound Poisson process for the generation of the bid/ask volume distributions. We show that the simulated trading volumes converge to a single value that can be used as a liquidity estimator, and we find that the calibrated Heston model and inverse transform sampling are superior to the standard Brownian motion and the compound Poisson process, respectively. In the second part, we examine the price markup for hedging or liquidity costs that customers have to pay when they buy structured products, by replicating the payoff of ten different structured products and comparing their fair values with the prices actually traded. For this purpose we use parallel computing, which was not feasible in the past; this allows us to use a calibrated Heston model to calculate the fair values of the structured products over a longer period of time. Our results show that the markup clients pay for these ten products ranges from 0.9% to 2.9%. We also observe that products with higher payoff levels, or better capital protection, carry higher costs, and we identify market volatility as a statistically significant driver of the markup. In the third part, we show that the tracking error of a passively managed ETF can be significantly reduced through the use of optimization methods if the correlation between the index and the ETF is used as the target function. By finding optimal weights for a self-constructed bond index and the DAX index, the number of constituents can be reduced significantly while keeping the tracking error small. In the fourth part, we develop a hedging strategy based on fuel prices that is aimed primarily at end users of petrol and diesel fuels. It enables the fuel consumer to buy fuel at a fixed price for a certain period of time by purchasing a call option. To price the American call option we use a geometric Brownian motion combined with a binomial model.
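
    The fourth part prices an American call on a GBM underlying with a binomial model; a minimal Cox-Ross-Rubinstein sketch of that pricing step is shown below, with purely illustrative parameters rather than the thesis's calibrated inputs.

```python
# Cox-Ross-Rubinstein binomial tree for an American call on a GBM underlying.
# All parameter values are illustrative, not the thesis's calibrated inputs.
import numpy as np

def american_call_crr(s0, strike, r, sigma, maturity, steps=500):
    dt = maturity / steps
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs
    j = np.arange(steps + 1)
    values = np.maximum(s0 * u ** j * d ** (steps - j) - strike, 0.0)

    # Backward induction with an early-exercise check at every node
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        prices = s0 * u ** j * d ** (n - j)
        continuation = disc * (p * values[1:n + 2] + (1 - p) * values[:n + 1])
        values = np.maximum(continuation, prices - strike)
    return values[0]

print(american_call_crr(s0=1.50, strike=1.55, r=0.02, sigma=0.25, maturity=0.5))
```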

    Price Variations in a Stock Market With Many Agents

    Large variations in stock prices happen with sufficient frequency to raise doubts about existing models, which all fail to account for non-Gaussian statistics. We construct simple models of a stock market and argue that the large variations may be due to a crowd effect, in which agents imitate each other's behavior. The variations over different time scales can be related to each other in a systematic way, similar to the Lévy stable distribution proposed by Mandelbrot to describe real market indices. In the simplest, least realistic case, exact results for the statistics of the variations are derived by mapping onto a model of diffusing and annihilating particles, which has been solved by quantum field theory methods. When the agents imitate each other and respond to recent market volatility, different scaling behavior is obtained; in this case the statistics of price variations is consistent with empirical observations. The interplay between "rational" traders, whose behavior is derived from fundamental analysis of the stock, including dividends, and "noise traders", whose behavior is governed solely by studying the market dynamics, is investigated. When the relative number of rational traders is small, "bubbles" often occur, where the market price moves outside the range justified by fundamental market analysis. When the number of rational traders is larger, the market price is generally locked within the price range they define.
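
    A schematic agent-based sketch in the spirit of the imitation mechanism described above, and not the authors' exact model: each agent holds a buy/sell stance, occasionally copies the stance of a randomly chosen other agent, and the price moves with the resulting order imbalance.

```python
# Schematic imitation ("crowd effect") market sketch, not the paper's exact
# model: an agent copies a randomly chosen other agent's buy/sell stance with
# probability p_imitate, otherwise flips at random; the log-price moves in
# proportion to the resulting order imbalance.
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps, p_imitate = 1000, 5000, 0.7

stance = rng.choice([-1, 1], size=n_agents)   # -1 = sell, +1 = buy
log_price, returns = 0.0, []

for _ in range(n_steps):
    i = rng.integers(n_agents)
    if rng.random() < p_imitate:
        stance[i] = stance[rng.integers(n_agents)]   # imitate another agent
    else:
        stance[i] = -stance[i]                        # idiosyncratic flip
    imbalance = stance.mean()
    dp = 0.01 * imbalance + 0.001 * rng.normal()      # price impact + noise
    log_price += dp
    returns.append(dp)

returns = np.array(returns)
# Excess kurtosis as a quick check on how non-Gaussian the simulated returns are
print("excess kurtosis:", float(((returns - returns.mean()) ** 4).mean()
                                / returns.var() ** 2 - 3))
```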

    Inducing Liquidity in Thin Financial Markets through Combined-Value Trading Mechanisms

    Previous experimental research has shown that thin financial markets fail to fully equilibrate, in contrast with thick markets. A specific type of market risk is conjectured to be the reason, namely the risk of partial execution of desired portfolio rearrangements in a system of parallel, unconnected double auction markets. This market risk causes liquidity to dry up before equilibrium is reached. To verify the conjecture, we organized markets directly as a portfolio trading mechanism, allowing agents to better coordinate their orders across securities. The mechanism is an implementation of the combined-value trading (CVT) system. We present evidence that our portfolio trading mechanism facilitates equilibration to the same extent as thick markets do. As in thick markets, the emergence of equilibrium pricing cannot be attributed to chance. Inspection of order submission and trade activity reveals that subjects manage to exploit the direct linkages between markets provided by the CVT system.

    Real options modeling and valuation of price adjustment flexibility with an application to the leasing industry

    Uncertainty poses not only threats but also opportunities. This study sought to build the scientific foundation for introducing a real options (ROs) methodology for price risk management to the leasing industry, a methodology that allows for both coping with threats and taking advantage of opportunities. In the leasing industry, fixed-rate long-term lease contracts help the contracting parties stabilize cash flows within volatile markets. The contract's term, however, may extend long enough to prevent capturing opportunities to gain greater profits or reduce expenses. Therefore, flexibility that enables participants to take advantage of favorable market prices is desirable. This discussion is dedicated to the study of three different forms of price adjustment flexibility: 1) single-sided price adjustment flexibility (SSPAF); 2) double-sided price adjustment flexibility (DSPAF) with a preemptive right to exercise; and 3) DSPAF with a non-preemptive right to exercise. Each was designed to meet different participants' flexibility requirements and budgets. An ROs methodology was developed to model, price, and optimize these flexibility clauses. The proposed approach was then tested on the example of Time Charter (TC) rate contracts from the maritime transport industry. Both the metric and the process for quantifying the benefit of the proposed flexibility clauses are provided. This work offers an alternative approach to price risk management that is accessible to all participants in the leasing industry. It is also a starting point for studying multiple-party, multiple-exercisable price adjustment flexibility. Moreover, both the flexibility designs and the proposed ROs methodology for price risk management are applicable not only to other forms of lease contracts but also to other forms of contractual relationships.
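
    As a purely illustrative sketch of how a single-sided price adjustment clause might be valued, and not the thesis's ROs methodology: simulate TC-rate paths under an assumed GBM and, at each annual review date, let the holder reset the hire rate to the market rate when this is favorable; the clause value is the discounted expected saving versus the fixed-rate contract. All parameters are hypothetical.

```python
# Purely illustrative Monte Carlo sketch (not the thesis's ROs methodology):
# value a single-sided price adjustment clause that lets the lessee reset the
# hire rate down to the market rate at each annual review date. The TC rate
# is assumed to follow a GBM; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

contract_rate = 20_000               # fixed daily hire rate (USD/day), assumed
s0, sigma = 20_000, 0.30             # assumed market-rate level and volatility
r, years, n_paths = 0.03, 5, 100_000 # discount rate, horizon, simulated paths
days_per_year = 360

dt = 1.0                             # one review per year
z = rng.standard_normal((n_paths, years))
# Market TC rate at each annual review date under a driftless (risk-neutral) GBM
log_paths = np.log(s0) + np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
market_rate = np.exp(log_paths)

# At each review the lessee pays min(contract, market) for the following year;
# the clause's payoff is the saving versus the fixed contract rate.
savings = np.maximum(contract_rate - market_rate, 0.0) * days_per_year
discount = np.exp(-r * np.arange(1, years + 1))
clause_value = float(np.mean(savings @ discount))

print(f"illustrative SSPAF value: {clause_value:,.0f} USD")
```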