
    Efficient Model Points Selection in Insurance by Parallel Global Optimization Using Multi CPU and Multi GPU

    In the insurance sector, Asset Liability Management refers to the joint management of a company's assets and liabilities. The liabilities mainly consist of the insurance company's policy portfolios, which usually contain a large number of policies. In this article, the authors develop a highly efficient automatic generation of model points portfolios that represent much larger portfolios of real policies. The resulting model points portfolio must retain the market risk properties of the initial portfolio. For this purpose, the authors propose a risk measure that incorporates the uncertain evolution of interest rates into the portfolios of life insurance policies, following Ferri (Optimal model points portfolio in life, 2019, arXiv:1808.00866). The problem can be formulated as a minimization problem that has to be solved with global numerical optimization algorithms, where the cost functional measures an appropriate distance between the original and the model points portfolios. Since sequential implementations become prohibitive if this problem is to be solved in a reasonable computing time, the authors speed up the computations by developing a high performance computing framework on hybrid architectures consisting of multiple CPUs together with accelerators (multiple GPUs). Thus, the evaluation of the cost functional, which requires a Monte Carlo method, is parallelized on graphics processing units (GPUs). For the optimization problem, the authors compare a metaheuristic stochastic differential evolution algorithm with a multi-path variant of the hybrid global optimization Basin Hopping algorithm, which combines Simulated Annealing with gradient-based local searchers (Ferreiro et al. in Appl Math Comput 356:282–298, 2019a). Both global optimizers are parallelized in a multi-CPU together with a multi-GPU setting.
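    A reduced sketch may help picture the optimization the abstract describes: minimize a Monte Carlo estimate of the distance between the scenario values of the original portfolio and those of a small model points portfolio. Everything below is assumed for illustration (a toy random-walk rate model, zero-coupon-style policies, an L2 distance, and SciPy's differential evolution standing in for the authors' parallel optimizers); it is not the paper's code.

```python
# Minimal sketch, assuming a toy rate model and policy payoff; not the authors' method.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_scenarios, horizon = 1_000, 30
# Toy random-walk short rates and the resulting discount factors per scenario and year.
rates = 0.02 + 0.01 * rng.standard_normal((n_scenarios, horizon)).cumsum(axis=1)
discount = np.exp(-np.cumsum(rates, axis=1))

# "Original" portfolio: 5000 unit zero-coupon-style policies with random maturities.
orig_maturities = rng.integers(1, horizon, size=5_000)
orig_value = discount[:, orig_maturities - 1].sum(axis=1)   # value in each scenario

def cost(x, n_model_points=5):
    """L2 distance (over scenarios) between original and model points portfolio values."""
    maturities = np.clip(np.round(x[:n_model_points]), 1, horizon).astype(int)
    weights = np.abs(x[n_model_points:])
    model_value = (weights * discount[:, maturities - 1]).sum(axis=1)
    return np.mean((orig_value - model_value) ** 2)

bounds = [(1, horizon)] * 5 + [(0, 5_000)] * 5              # maturities, then weights
result = differential_evolution(cost, bounds, seed=0, maxiter=50)
print(result.x, result.fun)
```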

    Global Optimization for Automatic Model Points Selection in Life Insurance Portfolios

    Starting from an original portfolio of life insurance policies, in this article we propose a methodology to select model points portfolios that reproduce the original one, preserving its market risk under a certain measure. In order to achieve this goal, we first define an appropriate risk functional that measures the market risk associated with the evolution of interest rates. Although other interest rate models could be considered, we have chosen the LIBOR (London Interbank Offered Rate) market model. Once the risk functional has been selected, the problem of finding the model points of the replicating portfolio is formulated as the minimization of the distance between the original and the target model points portfolios, under the measure given by the proposed risk functional. In this way, a high-dimensional global optimization problem arises, and a suitable hybrid global optimization algorithm is proposed for its efficient solution. Some examples illustrate the performance of a parallel multi-CPU implementation for the evaluation of the risk functional, as well as the efficiency of the hybrid Basin Hopping optimization algorithm in obtaining the model points portfolio. This research has been partially funded by EU H2020 MSCA-ITN-EID-2014 (WAKEUPCALL Grant Agreement 643045), Spanish MINECO (Grant MTM2016-76497-R) and by the Galician Government with grant ED431C 2018/033, both including FEDER financial support. A.F., J.G. and C.V. also acknowledge the support received from the Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (European Regional Development Fund, Galicia 2014-2020 Program), through grant ED431G 2019/01.
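    As a rough illustration of the hybrid Basin Hopping idea referred to above (random perturbations alternated with gradient-based local searches), the following sketch runs SciPy's basinhopping on a stand-in multimodal objective; the paper's risk functional and its parallel multi-CPU evaluation are not reproduced here.

```python
# Minimal sketch, assuming a stand-in multimodal objective; not the paper's risk functional.
import numpy as np
from scipy.optimize import basinhopping

def objective(x):
    # Rastrigin function: many local minima, global minimum at the origin.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.full(8, 3.0)
result = basinhopping(
    objective, x0,
    niter=200,                                   # random "hops" between basins
    stepsize=0.7,                                # perturbation size
    minimizer_kwargs={"method": "L-BFGS-B"},     # gradient-based local searcher
    seed=1,
)
print(result.x, result.fun)
```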

    Adapting and Optimizing the Systemic Model of Banking Originated Losses (SYMBOL) Tool to the Multi-core Architecture

    Multi-core systems are currently the predominant architecture in the computational world. This opens new possibilities to speed up statistical and numerical simulations, but it also introduces many challenges to deal with. In order to improve performance, different key points must be considered: core communication, data locality, dependencies, memory size, etc. This paper describes a series of optimization steps performed on the SYMBOL model to enhance its performance and scalability. SYMBOL is a micro-founded statistical tool which analyses the consequences of bank failures, taking into account the available safety nets, such as deposit guarantee schemes or resolution funds. In its original version, however, the tool has some computational weaknesses, because its execution time grows considerably when it is run with large input data (e.g. large banking systems) or when the value of the stopping criterion, i.e. the number of default scenarios to be considered, is scaled up. Our intention is to develop a tool (extendable to other models with similar characteristics) where a set of serial (e.g. deleting redundancies, loop unrolling, etc.) and parallel strategies (e.g. OpenMP and GPU programming) come together to obtain shorter execution times and scalability. The tool uses automatic configuration to make the best use of the available resources on the basis of the characteristics of the input datasets. Experimental results, obtained by varying the size of the input dataset and the stopping criterion, show the considerable improvement one can obtain by using the new tool, with execution time reductions of up to 96% with respect to the original serial version.
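    To make the Monte Carlo structure of such a simulation concrete, here is a heavily simplified sketch: a hypothetical set of bank balance sheets, independent default draws, and Python multiprocessing standing in for the paper's OpenMP/GPU strategies. The loss model, parameters and names are invented for illustration and are not SYMBOL's.

```python
# Minimal sketch, assuming a toy loss model and multiprocessing; not the SYMBOL code.
import numpy as np
from multiprocessing import Pool

N_BANKS = 100
_rng = np.random.default_rng(42)
ASSETS = _rng.uniform(1e9, 1e11, N_BANKS)        # hypothetical bank sizes
PD = _rng.uniform(0.001, 0.05, N_BANKS)          # hypothetical default probabilities
LGD = 0.45                                       # assumed loss given default

def simulate_chunk(args):
    """Simulate a chunk of default scenarios and return the aggregate loss of each."""
    seed, n_scenarios = args
    rng = np.random.default_rng(seed)
    defaults = rng.random((n_scenarios, N_BANKS)) < PD
    return (defaults * ASSETS * LGD).sum(axis=1)

if __name__ == "__main__":
    n_total, n_workers = 200_000, 8              # n_total acts as the stopping criterion
    chunks = [(i, n_total // n_workers) for i in range(n_workers)]
    with Pool(n_workers) as pool:                # parallel workers instead of OpenMP threads
        losses = np.concatenate(pool.map(simulate_chunk, chunks))
    print("99.9% aggregate loss quantile:", np.quantile(losses, 0.999))
```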

    Online Tensor Methods for Learning Latent Variable Models

    We introduce an online tensor decomposition based approach for two latent variable modeling problems, namely (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer the hidden topics of text articles. We consider the decomposition of moment tensors using stochastic gradient descent (SGD). We optimize the multilinear operations in SGD and avoid directly forming the tensors, to save computational and storage costs. We present optimized algorithms for two platforms: our GPU-based implementation exploits the parallelism of SIMD architectures to allow for maximum speed-up through a careful optimization of storage and data transfer, whereas our CPU-based implementation uses efficient sparse matrix computations and is suitable for large sparse datasets. For the community detection problem, we demonstrate accuracy and computational efficiency on the Facebook, Yelp and DBLP datasets, and for the topic modeling problem, we also demonstrate good performance on the New York Times dataset. We compare our results to state-of-the-art algorithms such as the variational method, and report a gain in accuracy and a gain of several orders of magnitude in execution time.
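    A reduced sketch of the "avoid forming the tensor" idea: for a symmetric rank-k fit of the third-order moment, the stochastic gradient can be written entirely in terms of inner products with the current sample, so the d×d×d tensor is never materialized. The update below is a simplified textbook version on toy data, not the paper's whitened method-of-moments pipeline.

```python
# Minimal sketch, assuming a plain symmetric CP fit on toy data; not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
d, k, lr = 50, 5, 1e-3
A = 0.1 * rng.standard_normal((d, k))            # columns: current component estimates

def sgd_step(A, x, lr):
    """One SGD step on ||x⊗x⊗x − Σ_j a_j⊗a_j⊗a_j||_F², computed without forming any tensor."""
    xa = x @ A                                   # <x, a_j> for every component, shape (k,)
    aa = A.T @ A                                 # Gram matrix <a_i, a_j>, shape (k, k)
    grad = 6 * (A @ (aa ** 2)) - 6 * np.outer(x, xa ** 2)
    return A - lr * grad

for _ in range(10_000):                          # stream of samples, one at a time
    x = rng.standard_normal(d)
    A = sgd_step(A, x, lr)
print(np.linalg.norm(A, axis=0))
```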

    Ant Colony Optimization

    Ant Colony Optimization (ACO) is the best example of how studies aimed at understanding and modeling the behavior of ants and other social insects can provide inspiration for the development of computational algorithms for the solution of difficult mathematical problems. Introduced by Marco Dorigo in his PhD thesis (1992) and initially applied to the travelling salesman problem, the ACO field has experienced tremendous growth, standing today as an important nature-inspired stochastic metaheuristic for hard optimization problems. This book presents state-of-the-art ACO methods and is divided into two parts: (I) Techniques, which includes parallel implementations, and (II) Applications, where recent contributions of ACO to diverse fields, such as traffic congestion and control, structural optimization, manufacturing, and genomics, are presented.
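    For readers new to the field, a bare-bones Ant System on a random travelling salesman instance shows the two ingredients the book builds on, pheromone trails and a heuristic visibility term. It is a didactic sketch, not taken from any chapter.

```python
# Minimal sketch, assuming a random TSP instance and basic Ant System rules.
import numpy as np

rng = np.random.default_rng(0)
n = 20
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)  # eye avoids /0
tau = np.ones((n, n))                                  # pheromone trails
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 100

best_len, best_tour = np.inf, None
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour = [int(rng.integers(n))]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i, cand = tour[-1], np.array(sorted(unvisited))
            p = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            nxt = int(rng.choice(cand, p=p / p.sum()))  # probabilistic transition rule
            tour.append(nxt)
            unvisited.discard(nxt)
        length = sum(dist[tour[j], tour[(j + 1) % n]] for j in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= 1 - rho                                     # evaporation
    for length, tour in tours:                         # deposit proportional to quality
        for j in range(n):
            a, b = tour[j], tour[(j + 1) % n]
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length
print(best_len)
```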

    Code Generation and Global Optimization Techniques for a Reconfigurable PRAM-NUMA Multicore Architecture


    Doctor of Philosophy

    The goal of this dissertation is to improve flood risk management by enhancing the computational capability of two-dimensional models and by incorporating data and parameter uncertainty to represent flood risk more accurately. Improved computational performance is achieved with a Graphics Processing Unit (GPU) approach, programmed in NVIDIA's Compute Unified Device Architecture (CUDA), to create a new two-dimensional hydrodynamic model, Flood2D-GPU. The model, based on the shallow water equations, is designed to execute simulations faster than the same code programmed using a serial approach (i.e., using a Central Processing Unit (CPU)). Testing the code against an identical CPU-based version demonstrated the improved computational efficiency of the GPU-based version (an approximate speedup of more than 80 times). Given the substantial computational efficiency of Flood2D-GPU, a new Monte Carlo based flood risk modeling framework was created. The framework operates by performing many Flood2D-GPU simulations using randomly sampled model parameters and input variables. The Monte Carlo flood risk modeling framework is demonstrated in this dissertation by simulating the flood risk associated with a 1% annual probability flood event on the Swannanoa River in Buncombe County near Asheville, North Carolina. The Monte Carlo approach is able to represent a wide range of possible scenarios, leading to the identification of areas outside the inundation extent of a single simulation that are susceptible to flood hazards. Further, the single-simulation results underestimated the degree of flood hazard for the case study region when compared to the flood hazard map produced by the Monte Carlo approach. The Monte Carlo flood risk modeling framework is also used to determine the relative benefits of flood management alternatives for flood risk reduction. The objective of the analysis is to investigate whether specific annual exceedance probability flood events can be identified that yield greater benefits in terms of annualized flood risk reduction than an arbitrarily selected discrete annual probability event. To test this hypothesis, a study was conducted on the Swannanoa River to determine the distribution of annualized risk as a function of annual exceedance probability. Simulations with flow rates sampled from a continuous flow distribution provided the necessary range of annual probability events. The results showed a variation in annualized risk as a function of annual probability, and, as hypothesized, a maximum annualized risk reduction could be identified for a specific annual probability. For the Swannanoa case study, the continuous flow distribution suggested targeting flood proofing to control the 12% exceedance probability event in order to maximize the reduction of annualized risk. This suggests that the arbitrary use of a specified risk of 1% exceedance may not, in some cases, be the most efficient allocation of resources for reducing annualized risk.
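    The Monte Carlo framework described above can be pictured in miniature: sample flow rates and parameters, run the hydrodynamic model once per sample, and aggregate the per-cell results into inundation probabilities. The sketch below uses a deliberately crude stand-in for Flood2D-GPU and an assumed log-normal flow distribution; every number and function name is illustrative only.

```python
# Minimal sketch, assuming a toy depth model in place of Flood2D-GPU and an assumed
# log-normal flow distribution; illustrates only the Monte Carlo sampling idea.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_cells = 5_000, 1_000
elevation = np.linspace(30.0, 0.0, n_cells)            # ground falling toward the channel

def toy_flood_model(flow, roughness):
    """Stand-in for a 2D hydrodynamic run: water depth per grid cell."""
    stage = 0.002 * flow / roughness                   # crude stage-discharge relation
    return np.maximum(0.0, stage - elevation)

flows = rng.lognormal(mean=6.0, sigma=0.5, size=n_sims)     # sampled flow rates
roughness = rng.uniform(0.03, 0.06, size=n_sims)            # sampled Manning's n
depths = np.stack([toy_flood_model(q, m) for q, m in zip(flows, roughness)])

p_inundated = (depths > 0.1).mean(axis=0)              # per-cell inundation probability
print("cells flooded in >1% of scenarios:", int((p_inundated > 0.01).sum()))
```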

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could lead to different recovery rates for patients. Therefore, this study combines a statistical method with decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since the collected data set is small, containing only 212 records, we use all of these data as training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree. Then, after all possible rules have been generated, we verify them with a t-test to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
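    The rule-then-verify workflow can be sketched as follows: fit a shallow decision tree, reuse each internal node's threshold as a candidate cut point over the whole data set, and keep the cuts whose two groups differ significantly under a t-test. The data, features and libraries (scikit-learn, SciPy) below are stand-ins for illustration, not the study's clinical data or tooling.

```python
# Minimal sketch, assuming synthetic data and scikit-learn/SciPy; not the study's data.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 4))                          # 212 records, 4 toy clinical features
recovered = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=212) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, recovered)

# Each internal node contributes one candidate rule: "feature <= threshold".
t = tree.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:                    # skip leaves
        continue
    feat, thr = t.feature[node], t.threshold[node]
    low, high = recovered[X[:, feat] <= thr], recovered[X[:, feat] > thr]
    stat, p = ttest_ind(low, high, equal_var=False)    # verify the cut with a t-test
    if p < 0.05:
        print(f"feature {feat} <= {thr:.2f}: recovery {low.mean():.2f} vs {high.mean():.2f} (p={p:.3f})")
```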
