
    Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk

    At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value at Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), or Per-Occurrence XL, Aggregate XL, and contracts that combine these structures. In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on it are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis indicates that GPUs offer a cost-effective HPC alternative for aggregate risk analysis. The optimised algorithm on the GPU performs a 1 million trial aggregate simulation with 1,000 catastrophic events per trial on a typical exposure set and contract structure in just over 20 seconds, approximately 15x faster than the sequential counterpart. This is sufficient to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone. Comment: Proceedings of the Workshop at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2012, 8 pages
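
    As a rough illustration of the simulation this abstract describes, the sketch below runs a toy aggregate analysis in Python rather than the paper's C/OpenMP and CUDA engines: each trial draws a year of event losses, applies a per-occurrence XL layer, and PML and TVaR are read off the distribution of trial totals. The distributions, layer terms, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-occurrence XL layer (invented terms, not from the
# paper): losses attach at 5M and are capped at a 20M limit.
ATTACHMENT, LIMIT = 5e6, 20e6

def trial_loss(n_events=1000):
    """One trial: sample ground-up event losses for a simulated year
    and apply the per-occurrence XL layer to each event."""
    ground_up = rng.lognormal(mean=13.0, sigma=1.5, size=n_events)
    return float(np.clip(ground_up - ATTACHMENT, 0.0, LIMIT).sum())

# The paper runs 1 million trials on the GPU; a smaller run suffices here.
totals = np.array([trial_loss() for _ in range(20_000)])

# PML as a high quantile of annual aggregate loss; TVaR as the mean
# loss in the tail beyond that quantile.
pml_99 = np.quantile(totals, 0.99)
tvar_99 = totals[totals >= pml_99].mean()
print(f"PML(99%) = {pml_99:,.0f}  TVaR(99%) = {tvar_99:,.0f}")
```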

    Efficient Model Points Selection in Insurance by Parallel Global Optimization Using Multi CPU and Multi GPU

    In the insurance sector, Asset Liability Management refers to the joint management of a company's assets and liabilities. The liabilities mainly consist of the insurance company's policy portfolios, which usually contain a large number of policies. In this article, the authors develop a highly efficient automatic generation of model points portfolios to represent much larger real policy portfolios. The obtained model points portfolio must retain the market risk properties of the initial portfolio. For this purpose, the authors propose a risk measure that incorporates the uncertain evolution of interest rates into portfolios of life insurance policies, following Ferri (Optimal model points portfolio in life, 2019, arXiv:1808.00866). The problem can be formulated as a minimization problem to be solved with global numerical optimization algorithms, where the cost functional measures an appropriate distance between the original and the model points portfolios. Sequential implementations are prohibitive in terms of computing time, so the authors speed up the computations by developing a high performance computing framework for hybrid architectures, consisting of multiple CPUs together with accelerators (multiple GPUs). Thus, the evaluation of the cost function, which requires a Monte Carlo method, is parallelized on graphics processing units (GPUs). For the optimization problem, the authors compare a metaheuristic stochastic differential evolution algorithm with a multi-path variant of hybrid global optimization Basin Hopping algorithms, which combines Simulated Annealing with gradient local searchers (Ferreiro et al. in Appl Math Comput 356:282–298, 2019a). Both global optimizers are parallelized in a multi-CPU together with a multi-GPU setting.
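
    Both global optimizers the abstract compares have standard SciPy implementations, so the optimization layer can be sketched in a few lines; the quadratic toy cost below stands in for the Monte Carlo distance functional, and every parameter choice is our own assumption, not the paper's.

```python
import numpy as np
from scipy.optimize import basinhopping, differential_evolution

# Toy stand-in for the portfolio-distance cost functional; in the
# paper each evaluation is a Monte Carlo simulation run on the GPU.
target = np.array([0.3, -1.2, 0.7, 2.0])
def cost(x):
    return float(np.sum((x - target) ** 2))

bounds = [(-5.0, 5.0)] * 4

# Metaheuristic stochastic search: differential evolution.
de = differential_evolution(cost, bounds, seed=0)

# Basin Hopping: random perturbations plus a gradient local searcher.
bh = basinhopping(cost, np.zeros(4), niter=100,
                  minimizer_kwargs={"method": "L-BFGS-B", "bounds": bounds},
                  seed=0)

print("DE :", de.x, de.fun)
print("BH :", bh.x, bh.fun)
```

    In the paper, each cost evaluation is itself a GPU-parallelized Monte Carlo simulation and both searches run across multiple CPUs; the sketch keeps everything sequential for brevity.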

    The GPU vs Phi Debate: Risk Analytics Using Many-Core Computing

    The risk of reinsurance portfolios covering globally occurring natural catastrophes, such as earthquakes and hurricanes, is quantified by employing simulations. These simulations are computationally intensive and require large amounts of data to be processed. Many-core hardware accelerators, such as the Intel Xeon Phi and the NVIDIA Graphics Processing Unit (GPU), are therefore desirable for achieving high-performance risk analytics. In this paper, we investigate how accelerators can be employed in risk analytics, focusing on parallel algorithms for Aggregate Risk Analysis, a simulation that computes the Probable Maximum Loss of a portfolio taking both primary and secondary uncertainties into account. The key result is that both hardware accelerators are useful in different contexts: without taking data transfer times into account, the Phi had the lowest execution times when used independently, while the GPU along with a host in a hybrid platform yielded the best performance. Comment: A modified version of this article is accepted to the Computers and Electrical Engineering Journal under the title "The Hardware Accelerator Debate: A Financial Risk Case Study Using Many-Core Computing"; Blesson Varghese, "The Hardware Accelerator Debate: A Financial Risk Case Study Using Many-Core Computing," Computers and Electrical Engineering, 201
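
    The primary/secondary uncertainty distinction this abstract relies on can be made concrete with a small sketch (our illustration, with invented numbers): primary uncertainty is whether an event occurs in a trial; secondary uncertainty is the size of the loss given that it occurs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical event-loss table (all numbers invented): each of 1000
# events has an annual occurrence rate and a mean loss.
rates = rng.uniform(0.001, 0.01, size=1000)
mean_loss = rng.lognormal(13.0, 1.0, size=1000)

def trial_loss():
    # Primary uncertainty: which events occur in this simulated year.
    occurred = rng.poisson(rates) > 0
    # Secondary uncertainty: loss size given occurrence, modelled here
    # as multiplicative lognormal noise with mean 1 around each
    # event's mean loss.
    noise = rng.lognormal(-0.125, 0.5, size=int(occurred.sum()))
    return float((noise * mean_loss[occurred]).sum())

totals = np.array([trial_loss() for _ in range(50_000)])
print("PML(99%) =", np.quantile(totals, 0.99))
```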

    Global Optimization for Automatic Model Points Selection in Life Insurance Portfolios

    Starting from an original portfolio of life insurance policies, in this article we propose a methodology to select model points portfolios that reproduce the original one, preserving its market risk under a certain measure. In order to achieve this goal, we first define an appropriate risk functional that measures the market risk associated with the evolution of interest rates. Although other interest rate models could be considered, we have chosen the LIBOR (London Interbank Offered Rate) market model. Once the risk functional is selected, the problem of finding the model points of the replicating portfolio is formulated as that of minimizing the distance between the original and the target model points portfolios, under the measure given by the proposed risk functional. A high-dimensional global optimization problem thus arises, and a suitable hybrid global optimization algorithm is proposed for its efficient solution. Examples illustrate the performance of a parallel multi-CPU implementation for the evaluation of the risk functional, as well as the efficiency of the hybrid Basin Hopping optimization algorithm in obtaining the model points portfolio. This research has been partially funded by EU H2020 MSCA-ITN-EID-2014 (WAKEUPCALL Grant Agreement 643045), Spanish MINECO (Grant MTM2016-76497-R) and by the Galician Government with grant ED431C2018/033, both including FEDER financial support. A.F., J.G. and C.V. also acknowledge the support received from the Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (European Regional Development Fund, Galicia 2014-2020 Program), by grant ED431G 2019/01.
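
    The abstract does not reproduce the risk functional itself; purely as an assumption consistent with its description (a distance between the original portfolio and the model points portfolio under stochastic interest rate scenarios), it might take a shape such as the following, where V is the original portfolio value, V-bar the model points portfolio with parameters alpha, and r a forward-rate path of the LIBOR market model:

```latex
% Assumed shape of the risk functional, not the paper's exact
% definition: an L2 distance over LIBOR market model scenarios r.
\[
  D(\alpha) = \Bigl( \mathbb{E}\bigl[ \bigl( V(r) - \bar{V}(\alpha; r) \bigr)^{2} \bigr] \Bigr)^{1/2},
  \qquad
  \alpha^{*} = \arg\min_{\alpha} D(\alpha).
\]
```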

    Corpus-Based Machine Translation: A Study Case for the e-Government of Costa Rica

    This research studies state-of-the-art machine translation technologies. Following an overview of the architecture and mechanisms underpinning phrase-based statistical (PB-SMT) and neural (NMT) systems, we focus on a use case that tests the translator's ability to maximize the potential of these technologies, particularly that of PB-SMT. The use case requires the translator to draw on all of his or her professional knowledge and skills to improve the translation output by means of data preparation, engine training, assessment and refinement tasks.
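
    As an illustration of the assessment step the abstract mentions, corpus-level BLEU scoring of engine output against reference translations can be scripted in a few lines; the sacrebleu library and the toy sentences below are our own choices, not artifacts of the study.

```python
import sacrebleu

# Toy engine outputs and reference translations (invented examples).
hypotheses = [
    "the ministry published the new decree on tuesday",
    "citizens can request the certificate online",
]
references = [[
    "the ministry published the new decree on tuesday",
    "citizens may request the certificate online",
]]

# Corpus-level BLEU, the standard automatic metric for comparing
# PB-SMT and NMT engine output against human references.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```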

    G-CSC Report 2010

    The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for the modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, as well as numerical methods for stochastic differential equations, are developed. These methods are highly advanced and allow complex problems to be solved. The G-CSC is organised in departments and interdisciplinary research groups. Departments are located directly at the G-CSC, while the task of the interdisciplinary research groups is to bridge disciplines and bring together scientists from different departments. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance.

    Inspection of power transmission lines using UAVs

    Power transmission line inspection is an essential task carried out by power companies for the maintenance of a power network. The lines are exposed to the elements, which accelerates the deterioration of small faults; when not repaired promptly, these can become serious problems. Current inspection methods are labour intensive, expensive, and tedious and error prone for the humans performing them. A possible solution is to use a UAV to perform the dull, dirty and dangerous task of power line inspection. As of August 2013, 16 companies were listed with CASA as licensed UAV operators for the purpose of power line inspection. With the maturing of UAVs and related sensors, the technology is now accessible to researchers seeking to adapt it to power transmission line inspection. The next step in the maturing of the technology will be to prove reliability and gain human acceptance. This dissertation surveys UAV technology and presents a method for controlling a UAV autonomously using visual navigation, analysing pictures taken in real time for the inspection of power transmission lines. It also investigates the combination of automation and picture analysis in a field environment.
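
    One common building block for the kind of real-time picture analysis the dissertation describes is detecting the near-straight conductors in a frame. A minimal OpenCV sketch (our illustration under assumed file names and thresholds, not the dissertation's actual pipeline) could use edge detection followed by a probabilistic Hough transform:

```python
import math
import cv2

# Load one inspection frame (hypothetical file name).
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Edge detection, then a probabilistic Hough transform to find the
# long, near-straight segments typical of power line conductors.
edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180,
                        threshold=80, minLineLength=200, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"segment: ({x1},{y1}) -> ({x2},{y2})")
```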

    Machine and deep learning applications for improving the measurement of key indicators for financial institutions: stock market volatility and general insurance reserving risk

    This thesis seeks to improve financial and actuarial risk estimation models through the use of state-of-the-art machine and deep learning techniques, so that risk models produce results that better support the decision-making processes of financial institutions. Two objectives are set. The first is to bring to the financial and actuarial field the most advanced mechanisms from machine and deep learning; the newest algorithms in this field are widely applied in robotics, autonomous driving and facial recognition, among others. The second is to exploit the great predictive power of the adapted algorithms to build more accurate risk models that are therefore able to generate results that better support decision-making in financial institutions. Within the universe of financial risk models, this thesis focuses on equity risk and claims reserving models, and introduces two equity risk models and two reserving models. Regarding equity risk, the first model stacks algorithms such as neural networks, random forests and multiple additive regression trees in order to improve volatility estimation and thus produce more accurate risk models. The second risk model adapts to the financial and actuarial world the Transformer, a type of neural network whose high accuracy has displaced all other algorithms in natural language processing. In addition, an extension of this architecture, called Multi-Transformer, is proposed, whose aim is to improve the performance of the original algorithm by ensembling and randomizing the attention mechanisms. As for the two reserving models introduced by this thesis, the first seeks to improve reserve estimation and produce more accurate risk models by stacking machine learning algorithms with reserving models based on Bayesian statistics and Chain Ladder. The second reserving model seeks to improve the results of a widely used model, the Mack model, through the application of recurrent neural networks and residual connections.
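
    Both reserving models in the thesis build on the classical Chain Ladder technique, which projects a cumulative claims triangle forward with volume-weighted development factors; the toy triangle below is invented, but the factor and projection formulas are the standard ones.

```python
import numpy as np

# Toy cumulative claims triangle (accident year x development year);
# NaN marks future cells to be projected. All figures are invented.
C = np.array([
    [100.0, 160.0, 185.0, 195.0],
    [110.0, 170.0, 200.0, np.nan],
    [120.0, 190.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Latest observed diagonal, captured before any cells are filled in.
latest = np.array([C[i, n - 1 - i] for i in range(n)])

# Volume-weighted development factors f_j = sum_i C[i,j+1] / sum_i C[i,j],
# computed over the rows where both columns are observed, then used to
# roll the unobserved cells forward column by column.
for j in range(n - 1):
    observed = ~np.isnan(C[:, j + 1])
    f = C[observed, j + 1].sum() / C[observed, j].sum()
    C[~observed, j + 1] = C[~observed, j] * f

# Reserve = projected ultimate minus the latest observed diagonal.
reserves = C[:, -1] - latest
print("reserves by accident year:", reserves)
```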