
    On the Asymptotic Efficiency of Approximate Bayesian Computation Estimators

    Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data are summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p, and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be of the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.
    Comment: Main text shortened and proof revised. To appear in Biometrika
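
    For concreteness, here is a minimal sketch of the kind of importance-sampling ABC estimator the abstract discusses, on a toy normal-mean model whose summary statistic (the sample mean) is fixed-dimensional and obeys a central limit theorem. The model, prior, proposal, tolerance, and all names are illustrative assumptions, not the paper's:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: data ~ N(theta, 1); summary statistic = sample mean.
    n, theta_true = 200, 1.5

    def simulate(theta):
        return rng.normal(theta, 1.0, size=n)

    s_obs = simulate(theta_true).mean()

    # Unnormalized densities: prior N(0, 10^2) and proposal q = N(s_obs, 1)
    # centred on the observed summary.  Normalizing constants cancel in the
    # self-normalized estimate below.
    def prior_pdf(t):
        return np.exp(-t**2 / (2.0 * 10.0**2))

    def q_pdf(t):
        return np.exp(-(t - s_obs)**2 / 2.0)

    eps, n_draws = 0.05, 20_000
    thetas, weights = [], []
    for _ in range(n_draws):
        t = rng.normal(s_obs, 1.0)                 # draw from the proposal
        if abs(simulate(t).mean() - s_obs) <= eps:  # ABC accept step
            thetas.append(t)
            weights.append(prior_pdf(t) / q_pdf(t))

    thetas, weights = np.array(thetas), np.array(weights)
    # Self-normalized importance-sampling estimate of the ABC posterior mean:
    print(np.sum(weights * thetas) / np.sum(weights))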

    Three essays on asset pricing

    This thesis encompasses three original research studies in asset pricing, accompanied by an introduction, a comprehensive literature review, and concluding remarks. The first two studies examine the term structure of equity returns, while the third centres on tax-loss harvesting with ETFs.

    The term structure of equity return volatility varies over time, affecting the term structure of equity returns through the volatility feedback effect and explaining the cyclicality of the equity return term structure. Analysing dividend strip futures, the first study finds that the volatility feedback effects of dividend strips decrease with the horizon. Using realised and implied volatilities as business cycle indicators, the study substantiates the pro-cyclical nature of the term structure of equity returns. A decomposition of this cyclicality shows that the pro-cyclicality comes from the high relative sensitivity of short-duration volatility. Notably, the predictable cyclicality provides a novel criterion for testing macro-finance models, ultimately leading to the rejection of the rare disaster model proposed by Gabaix (2012).

    The second study uncovers a puzzling phenomenon, termed the "short-duration equity return puzzle": short-duration dividend strips exhibit high conditional Sharpe ratios during crises, surpassing theoretical upper bounds. Using dividend prices and forecasts, this study calculates the required rate of return and conditional Sharpe ratio for dividend strips from 2002 to 2021. Notably, during crisis periods, the required rate of return for 1-year dividends peaks at 55%, accompanied by a conditional Sharpe ratio exceeding 14, far surpassing the predictions of mainstream macro-finance models. This anomaly persists across various horizons and is robust to measurement errors and transaction costs.

    The third study uncovers a new source of tax efficiency for ETFs: using highly correlated ETFs to harvest capital losses without violating the wash-sale rule. By exploiting this tax loophole, investors can potentially earn a better after-tax return than the index. The study finds that the tax-loss trading volume of highly correlated ETFs accounts for 20.7% of their total trading volume. Tax-loss harvesting is negatively related to past returns, especially recent and negative ones. ETFs with high past volatility have higher tax-loss trading volumes, while smaller and less liquid ETFs have lower tax-loss trading volumes. A parsimonious model is developed to elucidate the relationship between tax-loss harvesting and past price movements. Simulations with the model predict an annual tax revenue loss of 0.52% of assets under management for highly correlated ETFs, equivalent to approximately 25 billion USD in 2021.
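
    As a concrete illustration of the second study's key calculation, the sketch below backs out a required rate of return and conditional Sharpe ratio for a 1-year dividend strip from its price, a dividend forecast, and a volatility estimate. All numbers are made up for illustration and are not the thesis's data:

    # Illustrative numbers only; not taken from the thesis.
    price_strip  = 0.80   # market price today of a claim to the dividend paid in 1 year
    div_forecast = 1.10   # forecast expected dividend in 1 year
    rf           = 0.01   # 1-year risk-free rate
    vol_cond     = 0.09   # assumed conditional volatility of the strip's return

    # Required (expected) rate of return on the 1-year strip:
    required_return = div_forecast / price_strip - 1.0

    # Conditional Sharpe ratio = conditional risk premium / conditional volatility:
    sharpe = (required_return - rf) / vol_cond

    print(f"required return = {required_return:.1%}, conditional Sharpe = {sharpe:.2f}")
    # required return = 37.5%, conditional Sharpe = 4.06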

    An effective likelihood-free approximate computing method with statistical inferential guarantees

    Approximate Bayesian computing is a powerful likelihood-free method that has grown increasingly popular since early applications in population genetics. However, complications arise in the theoretical justification for Bayesian inference conducted from this method with a non-sufficient summary statistic. In this paper, we seek to re-frame approximate Bayesian computing within a frequentist context and justify its performance by standards set on the frequency coverage rate. In doing so, we develop a new computational technique called approximate confidence distribution computing, yielding theoretical support for the use of non-sufficient summary statistics in likelihood-free methods. Furthermore, we demonstrate that approximate confidence distribution computing extends the scope of approximate Bayesian computing to include data-dependent priors without damaging the inferential integrity. This data-dependent prior can be viewed as an initial 'distribution estimate' of the target parameter which is updated with the results of the approximate confidence distribution computing method. A general strategy for constructing an appropriate data-dependent prior is also discussed and is shown to often increase the computing speed while maintaining statistical inferential guarantees. We supplement the theory with simulation studies illustrating the benefits of the proposed method, namely the potential for broader applications and the increased computing speed compared to the standard approximate Bayesian computing methods.
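
    A minimal sketch of the accept/reject scheme described here, on a toy normal-mean model: draws come from a data-dependent initial distribution rather than a fixed prior, and the accepted draws are used as a sample from a confidence distribution for the parameter. The model, the choice of data-dependent distribution, and the tolerance are illustrative assumptions, not the paper's:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: x ~ N(theta, 1); summary statistic = sample mean.
    n = 100
    x_obs = rng.normal(2.0, 1.0, size=n)
    s_obs = x_obs.mean()

    # Data-dependent initial 'distribution estimate' for theta, centred on
    # the observed summary (an illustrative choice).
    def draw_initial():
        return rng.normal(s_obs, 1.0)

    def acc_sample(eps, n_draws):
        accepted = []
        for _ in range(n_draws):
            theta = draw_initial()
            s_sim = rng.normal(theta, 1.0, size=n).mean()  # simulate, summarize
            if abs(s_sim - s_obs) <= eps:
                accepted.append(theta)
        return np.array(accepted)

    draws = acc_sample(eps=0.02, n_draws=50_000)
    # Accepted draws act as a sample from a confidence distribution; e.g.,
    # an approximate 95% interval for theta from its quantiles:
    print(np.quantile(draws, [0.025, 0.975]))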

    Transcription-Coupled DNA Repair in Saccharomyces cerevisiae: The Interplay of Facilitators and Repressors

    Nucleotide excision repair (NER) is a multi-step cellular process that removes bulky and/or helix-distorting DNA lesions, such as UV-induced cyclobutane pyrimidine dimers (CPDs) and bulky chemical adducts. Transcription-coupled repair (TCR) is a subpathway of NER dedicated to the rapid removal of lesions in the transcribed strand of actively transcribed genes. The TCR mechanism in bacteria has been relatively well elucidated. However, TCR in eukaryotic cells appears to be extremely complicated. The exact nature of the TCR signal and the mechanism of transcription-repair coupling have been long-standing enigmas. This dissertation focused on how TCR repressors and facilitators interplay with RNA polymerase II (RNAP II) to carry out TCR in the yeast Saccharomyces cerevisiae.

    By site-specific incorporation of the unnatural amino acid p-benzoyl-L-phenylalanine, we mapped interactions between Spt5 and RNAP II in S. cerevisiae. Through its KOW4-5 domains, Spt5 extensively interacts with Rpb4/7. Spt5 also interacts with Rpb1 and Rpb2, the two largest subunits of RNAP II, at the clamp, protrusion, and wall domains. Deletion of the Spt5 KOW4-5 domains decreases transcription elongation and derepresses TCR. Our findings suggest that Spt5 is a key coordinator that holds the RNAP II complex in a closed conformation that is highly competent for transcription elongation but repressive to TCR.

    We also demonstrated that the E1103G mutation of Rpb1, the largest subunit of RNAP II, which promotes transcription bypass of UV-induced CPDs, increases survival of UV-irradiated yeast cells but attenuates TCR. In contrast, the G730D mutation of Rpb1, which abolishes transcription bypass of CPDs, enhances TCR. Our findings suggest that transcription bypass of lesions attenuates TCR but enhances cell tolerance to DNA lesions. Efficient stalling of RNAP II is essential for efficient TCR.

    Sen1 is an RNA/DNA helicase that has been shown to mediate termination of noncoding RNAs and some mRNAs. Like deletion of Rad26 or Rpb9, deletion of the Sen1 N-terminal region (residues 1-975) increases the UV sensitivity of global genomic repair (GGR)-deficient cells. Moreover, the Sen1 N-terminal deletion decreases TCR in rad7Δ and rad7Δ rad26Δ cells but not in rad7Δ rpb9Δ cells. Our findings suggest that the N-terminal domain of Sen1 contributes to Rad26-independent TCR.

    Rewriting Flash Memories by Message Passing

    This paper constructs WOM codes that combine rewriting and error correction for mitigating the reliability and endurance problems in flash memory. We consider a rewriting model that is of practical interest to flash applications, where only the second write uses WOM codes. Our WOM code construction is based on binary erasure quantization with LDGM codes, where the rewriting uses message passing and has the potential to share efficient hardware implementations with LDPC codes in practice. We show that the coding scheme achieves the capacity of the rewriting model. Extensive simulations show that the rewriting performance of our scheme compares favorably with that of polar WOM codes in the rate region where high rewriting success probability is desired. We further augment our coding schemes with error correction capability. By drawing a connection to the conjugate code pairs studied in the context of quantum error correction, we develop a general framework for constructing error-correcting WOM codes. Under this framework, we give an explicit construction of WOM codes whose codewords are contained in BCH codes.
    Comment: Submitted to ISIT 201
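
    The paper's construction (binary erasure quantization with LDGM codes and message passing) is beyond a short snippet, but the rewriting constraint that any second-write codeword must satisfy is easy to state: in a write-once memory, a cell can be programmed from 0 to 1 but never reset to 0 before the block is erased. A minimal sketch of that feasibility check, with made-up example vectors:

    import numpy as np

    # In a binary write-once memory, a cell can go 0 -> 1 but never 1 -> 0
    # before the block is erased, so a second write is feasible only if it
    # keeps every already-programmed cell at 1.  (In the erasure-quantization
    # view, cells already at 1 are fixed and only the 0 cells are free.)
    def wom_feasible(old, new):
        old, new = np.asarray(old), np.asarray(new)
        return bool(np.all(new >= old))

    old_state = [0, 1, 0, 0, 1, 0, 1, 0]
    print(wom_feasible(old_state, [1, 1, 0, 1, 1, 0, 1, 0]))  # True: only adds 1s
    print(wom_feasible(old_state, [1, 0, 0, 1, 1, 0, 1, 0]))  # False: flips a 1 back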

    Co-occurrence Feature Learning for Skeleton based Action Recognition using Regularized Deep LSTM Networks

    Skeleton-based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton-based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.
    Comment: AAAI 2016 conference
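
    The regularization idea lends itself to a short sketch: a group-sparse (L2,1-style) penalty on the LSTM's input-to-hidden weights, with input columns grouped by joint, encourages the network to rely on co-occurring subsets of joints. The PyTorch snippet below is a minimal illustration in the spirit of the abstract, not the paper's exact formulation; the sizes, grouping, and penalty weight are assumptions:

    import torch
    import torch.nn as nn

    # Assumed sizes: 25 joints x 3 coordinates per frame, 128 hidden units.
    n_joints, coords, hidden = 25, 3, 128
    lstm = nn.LSTM(input_size=n_joints * coords, hidden_size=hidden,
                   batch_first=True)

    def cooccurrence_penalty(lstm):
        # Input-to-hidden weights, shape (4*hidden, n_joints*coords);
        # regroup the input columns by joint.
        W = lstm.weight_ih_l0.view(4 * hidden, n_joints, coords)
        # L2 norm within each joint's group, L1 sum across joints: drives
        # whole joints' weights toward zero, keeping co-occurring subsets.
        return W.pow(2).sum(dim=(0, 2)).sqrt().sum()

    x = torch.randn(8, 50, n_joints * coords)   # (batch, frames, joints*coords)
    out, _ = lstm(x)
    task_loss = out.pow(2).mean()               # placeholder for the real loss
    loss = task_loss + 1e-4 * cooccurrence_penalty(lstm)
    loss.backward()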