91 research outputs found

    Valuation and Risk Management of Some Longevity and P&C Insurance Products

    Numerous insurance products linked to risky assets have emerged over the last couple of decades. These products have option-embedded features and typically involve at least two risk factors, namely interest and mortality risks. The need for models that capture the behaviour of these risk factors accurately is enormous and critical for insurance companies. The primary objective of this thesis is to develop pricing and hedging frameworks for option-embedded longevity products that address correlated risk factors. Various methods are employed to facilitate the computation of prices and risk measures of longevity products, including those with maturity benefits. Furthermore, in preparation for the implementation of the new International Financial Reporting Standards (IFRS) 17, the thesis's secondary objective is to provide a methodology for computing risk margins under the impending regulatory requirements. This is demonstrated using a property and casualty (P&C) insurance example, taking advantage of P&C data availability. To accomplish these objectives, five self-contained but related research works are undertaken. (i) A pricing framework for annuities is constructed in which interest and mortality rates are both stochastic and dependent; the short-rate process and the force of mortality follow the two-factor Hull-White model and the Lee-Carter model, respectively. (ii) The framework in (i) is further developed by adopting the Cox-Ingersoll-Ross model for the short-rate process to price guaranteed annuity options (GAOs); the change-of-measure technique together with comonotonicity theory is utilised to facilitate the computation of GAO prices. (iii) A further modelling extension considers a two-decrement model for GAO valuation and risk measurement; interest rate, mortality and lapse risks are assumed correlated and are all modelled as affine-diffusion processes, with risk measures calculated via the moment-based density method. (iv) A regime-switching set-up is introduced for the valuation of guaranteed minimum maturity benefits (GMMBs); a hidden Markov model (HMM) modulates the evolution of the risk processes, the HMM-based filtering technique is employed to estimate the risk-factor models' parameters, and an analytical expression for the GMMB value is derived with the aid of the change-of-measure technique combined with a Fourier-transform approach. (v) Finally, a paid-incurred chain method is customised to model Ontario's automobile claim development triangles over a 15-year period, and the moment-based density method is applied to approximate the distributions of outstanding claim liabilities; the risk margins are determined through risk measures as prescribed by IFRS 17, and a sensitivity analysis of the risk margins is performed using the bootstrap method.
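
    A rough numerical illustration of the setting in (i): the sketch below Monte-Carlo-prices a simple life annuity with a one-factor Vasicek-type short rate (a simplification of the two-factor Hull-White model) and a Lee-Carter-style force of mortality, ignoring the dependence structure studied in the thesis. All parameter values are invented for illustration; this is not the thesis's actual implementation.

        import numpy as np

        # Illustrative sketch only: Monte Carlo value of a life annuity paying 1 per
        # year while alive, with a one-factor Hull-White (Vasicek-type) short rate and
        # a Lee-Carter-style force of mortality. All parameters are hypothetical.
        rng = np.random.default_rng(0)
        n_paths, T, dt = 10_000, 30, 1.0                       # 30 annual payments
        a, sigma_r, r0, theta = 0.1, 0.01, 0.02, 0.03          # short-rate parameters
        ax, bx, k0, drift_k, sigma_k = -3.5, 0.05, 0.0, -0.2, 0.3   # Lee-Carter terms

        r = np.full(n_paths, r0)
        k = np.full(n_paths, k0)
        discount = np.ones(n_paths)
        survival = np.ones(n_paths)
        value = np.zeros(n_paths)

        for t in range(1, T + 1):
            # Vasicek/Hull-White short-rate step (simplified to one factor)
            r += a * (theta - r) * dt + sigma_r * np.sqrt(dt) * rng.standard_normal(n_paths)
            # Lee-Carter period index k_t with drift; force of mortality mu = exp(a_x + b_x * k_t)
            k += drift_k * dt + sigma_k * np.sqrt(dt) * rng.standard_normal(n_paths)
            mu = np.exp(ax + bx * k)
            discount *= np.exp(-r * dt)
            survival *= np.exp(-mu * dt)
            value += discount * survival                       # expected discounted payment

        print("Monte Carlo annuity value:", value.mean())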

    Advertising strategy for profit-maximization: a novel practice on Tmall's online ads manager platforms

    Ads manager platforms are gaining popularity among e-commerce vendors and advertisers: they help advertisers streamline the process of displaying their ads to target customers. One of the main challenges faced by advertisers, especially small and medium-sized enterprises, is configuring their advertising strategy properly. An ineffective advertising strategy brings too many "just looking" clicks and eventually generates advertising expenditure disproportionate to the growth of sales. In this paper, we present a novel profit-maximization model for online advertising optimization. The optimization problem is constructed to find the optimal set of features that maximizes the probability that target customers buy the advertised products. We further reformulate the optimization problem as a knapsack problem with changeable parameters and introduce a self-adjusted algorithm for solving it. Numerical experiments based on statistical data from Tmall show that our proposed method can effectively optimize the advertising strategy under a given expenditure budget.
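
    To make the knapsack reformulation concrete, here is a minimal sketch that selects ad features under a cost budget with a standard 0/1 dynamic program. It is an illustration of the reformulation only, not the paper's self-adjusted algorithm with changeable parameters; the feature names, costs, and scores are invented.

        # Minimal 0/1 knapsack sketch: pick ad features whose total cost fits the
        # budget while maximizing an (invented) purchase-probability score.
        def select_features(features, budget):
            # features: list of (name, cost, score); budget and costs are integers
            best = [[0.0] * (budget + 1) for _ in range(len(features) + 1)]
            for i, (_, cost, score) in enumerate(features, start=1):
                for b in range(budget + 1):
                    best[i][b] = best[i - 1][b]
                    if cost <= b:
                        best[i][b] = max(best[i][b], best[i - 1][b - cost] + score)
            # Backtrack to recover the chosen feature set
            chosen, b = [], budget
            for i in range(len(features), 0, -1):
                if best[i][b] != best[i - 1][b]:
                    name, cost, _ = features[i - 1]
                    chosen.append(name)
                    b -= cost
            return chosen, best[len(features)][budget]

        # Hypothetical keyword/audience features: (name, cost, score)
        features = [("keyword:dress", 30, 0.12), ("audience:18-25", 50, 0.20),
                    ("placement:homepage", 80, 0.25), ("time:evening", 20, 0.08)]
        print(select_features(features, budget=100))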

    Environment-Centric Safety Requirements for Autonomous Unmanned Systems

    Autonomous unmanned systems (AUS) are emerging to take the place of human operators in harsh or dangerous environments. However, such environments are typically dynamic and uncertain, leading to unanticipated accidents when autonomous behaviours are no longer safe. Even though safe autonomy has been considered in the literature, little has been done to address the environmental safety requirements of AUS systematically. In this work, we propose a taxonomy of environment-centric safety requirements for AUS and analyse the neglected issues to suggest several new research directions towards the vision of environment-centric safe autonomy.

    Melodic Phrase Segmentation By Deep Neural Networks

    Automated melodic phrase detection and segmentation is a classical task in content-based music information retrieval and a key step towards automated music structure analysis. However, traditional methods still cannot satisfy practical requirements. In this paper, we explore and adapt various neural network architectures to see whether they can be generalized to work with the symbolic representation of music and produce satisfactory melodic phrase segmentation. The main issue in applying deep-learning methods to phrase detection is the sparse labeling of training sets. We propose two tailored label-engineering schemes with corresponding training techniques for different neural networks in order to make decisions at the sequence level. Experimental results show that the CNN-CRF architecture performs best, offering finer segmentation and faster training, while CNN, Bi-LSTM-CNN and Bi-LSTM-CRF are acceptable alternatives.
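
    As a simple illustration of the sparse-labeling issue, the sketch below converts phrase-boundary positions into a denser per-note target by marking a small window around each boundary. This is one plausible form of label engineering for a sequence model, not necessarily either of the schemes used in the paper; the example melody and boundaries are invented.

        import numpy as np

        # Phrase boundaries are sparse, so widen each boundary into a short window of
        # positive labels to give a sequence model a denser training signal.
        def widen_boundary_labels(n_notes, boundary_indices, window=1):
            labels = np.zeros(n_notes, dtype=int)
            for b in boundary_indices:
                lo, hi = max(0, b - window), min(n_notes, b + window + 1)
                labels[lo:hi] = 1
            return labels

        # A 16-note melody with (hypothetical) phrase boundaries after notes 5 and 11
        print(widen_boundary_labels(16, boundary_indices=[5, 11], window=1))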

    LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields

    We introduce a new task: novel view synthesis for LiDAR sensors. While traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views, they fall short of producing accurate and realistic LiDAR patterns because the renderers rely on explicit 3D reconstruction and game engines, which ignore important attributes of LiDAR points. We address this challenge by formulating, to the best of our knowledge, the first differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, which leverages a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points. However, simply employing NeRF cannot achieve satisfactory results, as it focuses on learning individual pixels while ignoring local information, especially in low-texture areas, resulting in poor geometry. We therefore introduce a structural regularization method to preserve local structural details. To evaluate the effectiveness of our approach, we establish an object-centric multi-view LiDAR dataset, dubbed NeRF-MVL. It contains observations of objects from 9 categories seen from 360-degree viewpoints and captured with multiple LiDAR sensors. Our extensive experiments on the scene-level KITTI-360 dataset and on our object-level NeRF-MVL show that LiDAR-NeRF surpasses model-based algorithms significantly.
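
    The structural regularization idea can be pictured as a local-smoothness penalty on the rendered range image. The edge-aware, total-variation-style term below is a simplified stand-in for the method in the paper, with randomly generated depth and intensity arrays standing in for real rendered LiDAR output.

        import numpy as np

        # Simplified stand-in for a structural regularization term: penalize depth
        # differences between neighbouring beams of the rendered LiDAR range image,
        # down-weighted where the (hypothetical) intensity channel suggests an edge.
        # This illustrates preserving local structure, not the exact LiDAR-NeRF loss.
        def structural_reg(depth, intensity, edge_scale=10.0):
            dh = np.abs(np.diff(depth, axis=1))            # horizontal depth gradients
            dv = np.abs(np.diff(depth, axis=0))            # vertical depth gradients
            wh = np.exp(-edge_scale * np.abs(np.diff(intensity, axis=1)))
            wv = np.exp(-edge_scale * np.abs(np.diff(intensity, axis=0)))
            return (wh * dh).mean() + (wv * dv).mean()

        depth = np.random.default_rng(0).uniform(1.0, 50.0, size=(32, 256))
        intensity = np.random.default_rng(1).uniform(0.0, 1.0, size=(32, 256))
        print("structural regularization term:", structural_reg(depth, intensity))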

    LEVA: Using large language models to enhance visual analytics

    Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' visual analytics workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy that retraces the analysis history through a stream visualization and generates insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.
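
    One way to picture the onboarding stage is a prompt that pairs the system's visualization specification with a fixed instruction. The snippet below is a schematic sketch with an invented chart specification and a placeholder call_llm function; it does not reproduce LEVA's actual prompts or interface.

        import json

        # Schematic sketch: feed a visualization specification to a large language
        # model and ask for a plain-language explanation of each view and how the
        # views relate. The spec and call_llm() are invented placeholders.
        def build_onboarding_prompt(view_specs):
            return (
                "You are assisting a user who is new to this visual analytics system.\n"
                "For each view below, explain what it shows and how the views are linked.\n\n"
                + json.dumps(view_specs, indent=2)
            )

        view_specs = [
            {"id": "v1", "mark": "bar", "x": "category", "y": "sales"},
            {"id": "v2", "mark": "line", "x": "month", "y": "sales", "linkedTo": "v1"},
        ]
        prompt = build_onboarding_prompt(view_specs)
        print(prompt)
        # response = call_llm(prompt)   # placeholder: any chat-completion API could be used here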

    Ab initio identification of transcription start sites in the Rhesus macaque genome by histone modification and RNA-Seq

    Rhesus macaque is a widely used primate model organism. Its genome annotations, however, are still largely comparative computational predictions derived mainly from human genes, which precludes studies of macaque-specific genes, gene isoforms, or their regulation. Here we took advantage of the ability of histone H3 lysine 4 trimethylation (H3K4me3) to mark transcription start sites (TSSs) and of the recently developed ChIP-Seq and RNA-Seq technologies to survey transcript structures. We generated 14,013,757 sequence tags by H3K4me3 ChIP-Seq and obtained 17,322,358 paired-end reads for mRNA and 10,698,419 short reads for sRNA from the macaque brain. By integrating these data with genomic sequence features and by extending and improving a state-of-the-art TSS prediction algorithm, we ab initio predicted and verified 17,933 previously electronically annotated TSSs at 500-bp resolution. We also predicted approximately 10,000 novel TSSs. These provide a rich resource for close examination of species-specific transcript structures and transcription regulation in the Rhesus macaque genome. Our approach exemplifies a relatively inexpensive way to generate a reasonably reliable TSS map for a large genome and may serve as a guiding example for similar genome annotation efforts targeted at other model organisms.
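
    As a schematic of how an H3K4me3 peak can support a TSS call at a fixed resolution, the sketch below marks an annotated TSS as verified when a peak summit lies within 500 bp of it. The coordinates are invented, and this is only an illustration of the idea; the study itself integrates ChIP-Seq, RNA-Seq and sequence features in a more sophisticated prediction algorithm.

        # Schematic sketch: use H3K4me3 peak summits to support candidate TSS calls
        # at 500-bp resolution. Peak and TSS coordinates below are invented.
        def verify_tss(annotated_tss, peak_summits, resolution=500):
            verified = []
            for chrom, pos, strand in annotated_tss:
                for p_chrom, p_pos in peak_summits:
                    if chrom == p_chrom and abs(pos - p_pos) <= resolution:
                        verified.append((chrom, pos, strand))
                        break
            return verified

        annotated_tss = [("chr1", 10_200, "+"), ("chr1", 55_000, "-"), ("chr2", 8_000, "+")]
        peak_summits = [("chr1", 10_050), ("chr2", 9_100)]
        print(verify_tss(annotated_tss, peak_summits))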

    Ultra-short lifetime isomer studies from photonuclear reactions using laser-driven ultra-intense γ-rays

    Isomers, ubiquitous populations of relatively long-lived nuclear excited states, play a crucial role in nuclear physics. However, isomers with half-lives of several seconds or less have scarcely any experimental cross-section data because of the lack of a suitable measuring method. We report a method of online γ spectroscopy for ultra-short-lived isomers produced in photonuclear reactions using laser-driven ultra-intense γ-rays. The fastest time resolution can reach the sub-ps level with γ-ray intensities above 10^19/s (≥ 8 MeV). The ^115In(γ, n)^114m2In reaction (T_1/2 = 43.1 ms) was measured for the first time in the high-energy region, shedding light on nuclear structure studies of indium. Simulations showed that this would be an efficient way to study ^229mTh (T_1/2 = 7 μs), which is believed to be a candidate for the next generation of nuclear clocks. This work offers a unique way of gaining insight into ultra-short lifetimes and promises an effective way to fill the gap in the relevant experimental data.
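
    To illustrate how a half-life such as the 43.1 ms of ^114m2In could be extracted from time-binned γ counts, here is a minimal exponential-decay fit on synthetic data. The counts, binning, and flat background are invented, and this is not the experiment's actual analysis chain.

        import numpy as np

        # Minimal sketch: extract an isomer half-life from time-binned gamma counts
        # by fitting an exponential decay. Synthetic data with true T1/2 = 43.1 ms
        # and a flat background; not the experiment's analysis chain.
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 0.4, 0.005)                       # 5 ms bins up to 400 ms
        true_half_life = 0.0431                              # seconds
        lam = np.log(2) / true_half_life
        counts = rng.poisson(2000 * np.exp(-lam * t) + 50)   # decay + flat background

        # Subtract the background estimated from the last bins, then fit log-counts
        # over the early, high-statistics part of the decay curve
        background = counts[-10:].mean()
        signal = np.clip(counts - background, 1, None)
        mask = t < 0.2
        slope, intercept = np.polyfit(t[mask], np.log(signal[mask]), 1)
        fitted_half_life = np.log(2) / (-slope)
        print(f"fitted T1/2 = {fitted_half_life * 1e3:.1f} ms")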