157 research outputs found

    Mortality Risk Management

    This is a multi-essay dissertation in the area of mortality risk management. The first essay investigates natural hedging between life insurance and annuities and then proposes a mortality swap between a life insurer and an annuity insurer. Compared with reinsurance, capital markets have a greater capacity to absorb insurance shocks and may offer more flexibility to meet insurers' needs. Therefore, my second essay studies the securitization of mortality risks in life annuities. Specifically, I design a mortality bond to transfer the longevity risks inherent in annuities or pension plans to financial markets. By explicitly taking into account jumps in stochastic mortality processes, my third essay fills a gap in the mortality securitization modeling literature by pricing mortality securities in an incomplete-market framework. Using the Survey of Consumer Finances, my fourth essay creates a new financial vulnerability index to examine a household's life-cycle demand for different types of life insurance.
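    The offset behind natural hedging can be seen in a toy present-value calculation: a mortality improvement raises the value of annuity liabilities while lowering that of term-life liabilities, so the two books partially hedge each other. The flat mortality rate, horizon, and discount rate below are made-up illustration values, not figures from the dissertation:

```python
import numpy as np

def survival_curve(qx):
    """Cumulative survival probabilities from one-year death rates qx."""
    return np.cumprod(1 - qx)

def pv_annuity(qx, payment=1.0, r=0.03):
    """PV of a life annuity paying `payment` at the end of each survived year."""
    S = survival_curve(qx)
    disc = (1 + r) ** -np.arange(1, len(qx) + 1)
    return payment * np.sum(S * disc)

def pv_term_life(qx, benefit=1.0, r=0.03):
    """PV of a death benefit paid at the end of the year of death."""
    S_start = np.concatenate(([1.0], survival_curve(qx)[:-1]))  # alive at start of year
    disc = (1 + r) ** -np.arange(1, len(qx) + 1)
    return benefit * np.sum(S_start * qx * disc)

# Illustrative flat mortality over a 30-year horizon (hypothetical numbers).
qx = np.full(30, 0.02)
shock = 0.8  # a 20% mortality improvement

d_annuity = pv_annuity(qx * shock) - pv_annuity(qx)    # annuity liability rises
d_life = pv_term_life(qx * shock) - pv_term_life(qx)   # term-life liability falls
```

    The opposite signs of the two sensitivities are what a mortality swap would exchange between a life insurer and an annuity insurer.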

    A Fast Sensitivity-Based Preventive Control Selection Method for Online Voltage Stability Assessment


    The Securitization of Longevity Risk and its Implications for Retirement Security

    The economic significance of longevity risk for governments, corporations, and individuals has begun to be recognized and quantified. The traditional insurance route for managing this risk has serious limitations due to capacity constraints that are becoming more and more binding. If the 2010 U.S. population lived three years longer than expected, the government would have to set aside 50% of 2010 U.S. GDP, or approximately $7.37 trillion, to fully fund the increased Social Security liability. This is just one way of gauging the size of the risk. Because of the much larger capacity of capital markets, more attention is being devoted to transforming longevity risk from its pure-risk form to a speculative-risk form so that it can be traded in the capital markets. This transformation has implications for governments, corporations, and individuals that are explored here. The analysis views the management of longevity risk by considering how defined contribution plans can be managed to increase the sustainable length of retirement, and how defined benefit plans can be managed to reduce pension risk using longevity risk hedging schemes.
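    On the defined-contribution side, the "sustainable length of retirement" can be gauged with the standard annuity-depletion formula n = -ln(1 - Br/W) / ln(1 + r). The balance, withdrawal, and return below are hypothetical illustration values, not numbers from the paper:

```python
import math

def sustainable_years(balance, withdrawal, r):
    """Years a balance B lasts with a constant annual withdrawal W at return r.
    If W <= B*r, investment income covers the withdrawal indefinitely."""
    if withdrawal <= balance * r:
        return math.inf
    return -math.log(1 - balance * r / withdrawal) / math.log(1 + r)

# Illustrative: a $1M balance, $50k annual withdrawal, 3% return.
years = sustainable_years(1_000_000, 50_000, 0.03)  # roughly 31 years
```

    Living three years longer than this horizon is exactly the pure longevity risk that a hedging scheme or longevity security would transfer.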

    Adapting a Language Model While Preserving its General Knowledge

    Domain-adaptive pre-training (DA-training for short), also known as post-training, trains a pre-trained general-purpose language model (LM) on an unlabeled corpus from a particular domain so that end-tasks in that domain achieve improved performance. However, existing DA-training methods are in some sense blind: they do not explicitly identify which knowledge in the LM should be preserved and which should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method that performs a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance, to best preserve the general knowledge in the LM, and (2) contrasting the representations of the general knowledge and of the full knowledge (both general and domain-specific) to learn an integrated representation with both. Experimental results demonstrate the effectiveness of the proposed approach. Comment: EMNLP 202
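    One reading of the soft-masking idea can be sketched in a few lines: gradients into attention heads that are important for general knowledge are damped in proportion to that importance, so those heads are barely changed by the domain corpus. This is a toy NumPy illustration with made-up importance scores, not the paper's implementation:

```python
import numpy as np

def soft_mask_gradients(head_grads, importance):
    """Damp gradients per attention head: a head with importance ~1 (critical
    for general knowledge) receives almost no update; importance ~0 means a
    full update from the domain corpus."""
    # head_grads: (n_heads, n_params_per_head); importance: (n_heads,) in [0, 1]
    return head_grads * (1.0 - importance)[:, None]

grads = np.ones((4, 8))                        # 4 heads, 8 parameters each (toy)
importance = np.array([1.0, 0.5, 0.0, 0.9])    # hypothetical importance scores
masked = soft_mask_gradients(grads, importance)
```

    A real system would estimate the importance scores from the LM itself (e.g., from gradient magnitudes on general-domain data) rather than fixing them by hand.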

    Class Incremental Learning via Likelihood Ratio Based Task Prediction

    Class incremental learning (CIL) is a challenging setting of continual learning in which a series of tasks is learned sequentially, each consisting of a set of unique classes. The key feature of CIL is that no task identifier (task-id) is provided at test time, and predicting the task-id for each test sample is a challenging problem. An emerging theory-guided approach (called TIL+OOD) trains a task-specific model for each task in a network shared across all tasks, using a task-incremental learning (TIL) method to deal with catastrophic forgetting. The model for each task is an out-of-distribution (OOD) detector rather than a conventional classifier; it can perform both within-task (in-distribution, IND) class prediction and OOD detection, and the OOD detection capability is the key to task-id prediction during inference. However, this paper argues that using a traditional OOD detector for task-id prediction is suboptimal, because additional information available in CIL (e.g., the replay data and the learned tasks) can be exploited to design a better and more principled method for task-id prediction. We call the new method TPL (Task-id Prediction based on Likelihood Ratio). TPL markedly outperforms strong CIL baselines and exhibits negligible catastrophic forgetting. The code of TPL is publicly available at https://github.com/linhaowei1/TPL.
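    The core of the likelihood-ratio idea can be sketched as: score each task by how much better its in-distribution model explains a test sample than an out-of-distribution model does, and pick the argmax. This is a deliberately simplified 1-D stand-in, not TPL itself: each task's in-distribution model is a unit Gaussian around a hypothetical centroid, and a single broad Gaussian stands in for the OOD model:

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log-density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def predict_task_id(x, task_means, ood_sigma=10.0):
    """Task-id via likelihood ratio: log p_task(x) - log p_ood(x) per task,
    with a broad shared N(0, ood_sigma^2) playing the OOD model."""
    scores = [gauss_logpdf(x, mu, 1.0) - gauss_logpdf(x, 0.0, ood_sigma)
              for mu in task_means]
    return max(range(len(task_means)), key=scores.__getitem__)

task_means = [-5.0, 0.0, 5.0]           # hypothetical per-task feature centroids
tid = predict_task_id(4.7, task_means)  # the task whose model best explains x
```

    In TPL the densities would come from learned task models (and exploit replay data), but the decision rule — argmax of a likelihood ratio rather than a raw OOD score — is the part this sketch illustrates.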

    A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations

    Emotion recognition in conversations (ERC), the task of recognizing the emotion of each utterance in a conversation, is crucial for building empathetic machines. Existing studies focus mainly on capturing context- and speaker-sensitive dependencies in the textual modality but ignore the significance of multimodal information. Unlike emotion recognition in textual conversations, multimodal ERC requires capturing intra- and inter-modal interactions between utterances, learning weights between different modalities, and enhancing modal representations. In this paper, we propose a transformer-based model with self-distillation (SDT) for the task. The model captures intra- and inter-modal interactions with intra- and inter-modal transformers, and learns weights between modalities dynamically through a hierarchical gated fusion strategy. Furthermore, to learn more expressive modal representations, we treat the soft labels of the proposed model as extra training supervision: we introduce self-distillation to transfer knowledge of hard and soft labels from the proposed model to each modality. Experiments on the IEMOCAP and MELD datasets demonstrate that SDT outperforms previous state-of-the-art baselines. Comment: 13 pages, 10 figures. Accepted by IEEE Transactions on Multimedia (TMM).
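    The self-distillation step can be illustrated with a minimal loss sketch: the fused multimodal model's soft labels act as the teacher signal for each single-modality branch, alongside the ordinary hard label. The weighting `alpha`, the logits, and the class count below are made up for illustration; SDT's actual formulation may differ:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def self_distill_loss(fused_logits, modal_logits, hard_label, alpha=0.5):
    """Loss for one modality branch: cross-entropy on the hard label plus
    KL to the fused model's soft labels (the self-distillation target)."""
    soft = softmax(fused_logits)   # teacher: the fused multimodal model
    pred = softmax(modal_logits)   # student: a single-modality branch
    ce = -float(np.log(pred[hard_label] + 1e-12))
    return alpha * ce + (1 - alpha) * kl_div(soft, pred)

# Toy 3-class example with made-up logits.
loss = self_distill_loss(np.array([2.0, 0.5, -1.0]),
                         np.array([1.5, 0.2, -0.5]), hard_label=0)
```

    Because the teacher and student share the same network, no separate teacher model is trained — which is what distinguishes self-distillation from conventional knowledge distillation.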

    Optimal Day-Ahead Operation Considering Power Quality for Active Distribution Networks
