7 research outputs found

    Solving Euler equations via two-stage nonparametric penalized splines

    Dynamic stochastic general equilibrium (DSGE) models and capital asset pricing models have been widely adopted in recent macroeconomic and macro-financial research. Solving a DSGE model requires computing unknown policy functions (such as the price-dividend ratio), from which the model's conclusions and policy recommendations are derived. However, most of the existing economics and finance literature relies on numerical methods to solve DSGE models approximately, under the assumption that financial data follow particular probability distributions; the resulting model misspecification and approximation errors can seriously undermine the validity and credibility of the computed dynamic equilibria. This paper is the first to solve for the policy functions of DSGE models by nonparametric estimation, replacing the traditional approximation-based methods and overcoming the misspecification and approximation-error problems of the earlier literature. The paper is an interim output of the National Natural Science Foundation of China Basic Science Center project "Econometric Modelling and Economic Policy Studies", led by Professor Hong Yongmiao, hosted by Xiamen University jointly with the Academy of Mathematics and Systems Science, Chinese Academy of Sciences.

    Abstract: This study proposes a novel estimation-based approach to solving asset pricing models for both stationary and time-varying observations. Our method is robust to misspecification errors while inheriting a closed-form solution. By representing the Euler equation as a well-posed integral equation of the second kind, we propose a penalized two-stage nonparametric estimation method and establish its optimal convergence under mild conditions. Owing to the penalized-spline formulation, our estimate is less sensitive to the spline settings, and we design a fast data-driven algorithm to tune the key smoothing parameter, namely the penalty amount. Our approach exhibits excellent finite-sample performance. Using US data from 1947 to 2017, we reinvestigate return predictability and find that the estimated implied dividend yield significantly predicts lower future cash flows and higher interest rates at short horizons.

    Funding: Hong's research is supported by the National Natural Science Foundation of China (NSFC) (No. 71988101), the Basic Science Center Project entitled "Econometric Modelling and Economic Policy Studies". Cui's research is supported by the Research Grants Council of Hong Kong, China (No. 11500119 and 21504818) and the NSFC (No. 71803166). Li's research is supported by the NSFC (No. 71571154 and No. 71631004).
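    As a hedged illustration of the representation step mentioned in the abstract (the state variable, pricing kernel, and notation below are generic assumptions for exposition, not taken from the paper), consider the price-dividend ratio $v$ in a consumption-based model. The Euler equation
    \[
      v(x_t) \;=\; \mathbb{E}\!\left[\, m(x_t, x_{t+1})\, G_{t+1}\, \bigl(1 + v(x_{t+1})\bigr) \,\middle|\, x_t \right],
    \]
    with $m$ the stochastic discount factor and $G_{t+1} = g(x_t, x_{t+1})$ dividend growth, can be rewritten, using the transition density $p(x' \mid x)$ and the kernel $K(x, x') = m(x, x')\, g(x, x')\, p(x' \mid x)$, as
    \[
      v(x) \;=\; f(x) + \int K(x, x')\, v(x')\, \mathrm{d}x', \qquad f(x) = \int K(x, x')\, \mathrm{d}x',
    \]
    a linear Fredholm integral equation of the second kind, $v = f + \mathcal{K} v$, which is well posed when the operator norm of $\mathcal{K}$ is below one. A two-stage estimator can then plug a nonparametric estimate of $K$ into this equation and solve for $v$.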

    On sliced methods in dimension reduction

    Abstract of the published or final version. Master of Philosophy thesis, Statistics and Actuarial Science.

    Bayesian inferences for beta semiparametric-mixed models to analyze longitudinal neuroimaging data

    Funding: NIH [UL1 RR024989]; NSF of China [11201390]; NSF of Fujian [2013J01024].

    Diffusion tensor imaging (DTI) is a quantitative magnetic resonance imaging technique that measures the three-dimensional diffusion of water molecules within tissue through the application of multiple diffusion gradients. The technique is rapidly gaining popularity for studying white matter properties and structural connectivity in the living human brain. One of the major outcomes derived from DTI is fractional anisotropy, a continuous measure restricted to the interval (0,1). Motivated by a longitudinal DTI study of multiple sclerosis, we use a beta semiparametric-mixed regression model for the neuroimaging data. This work extends generalized additive model methodology to the beta distribution family with random effects. We describe two estimation methods with penalized splines, both formalized from a Bayesian inferential perspective. The first is carried out by Markov chain Monte Carlo (MCMC) simulation, while the second uses a relatively new technique called integrated nested Laplace approximation (INLA). Simulations and the neuroimaging data analysis show that the estimates obtained from the two approaches are stable and similar, while INLA provides an efficient alternative to the computationally expensive MCMC method.
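    A sketch of the kind of model the abstract describes (the parameterization, covariates, and priors below are illustrative assumptions, not taken from the paper): with fractional anisotropy $y_{ij} \in (0,1)$ for subject $i$ at visit time $t_{ij}$,
    \[
      y_{ij} \mid \mu_{ij}, \phi \;\sim\; \mathrm{Beta}\bigl(\mu_{ij}\phi,\ (1-\mu_{ij})\phi\bigr),
      \qquad
      \operatorname{logit}(\mu_{ij}) \;=\; \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + f(t_{ij}) + b_i,
      \quad b_i \sim N(0, \sigma_b^2),
    \]
    where the smooth time effect is a penalized spline, $f(t) = \sum_{k} \theta_k B_k(t)$, with a Gaussian smoothness prior on $\boldsymbol{\theta}$ equivalent to a difference penalty on adjacent coefficients. Posterior inference for $(\boldsymbol{\beta}, \boldsymbol{\theta}, \sigma_b^2, \phi)$ can then proceed either by MCMC sampling or by INLA.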

    Fast bivariate P-splines: the sandwich smoother

    Funding: National Science Foundation [DMS-0805975]; National Institutes of Health [R01-NS060910]; National Center for Research Resources [UL1-RR024996]; National Natural Science Foundation of China [11201390].

    We propose a fast penalized spline method for bivariate smoothing. Univariate P-spline smoothers are applied simultaneously along both coordinates. The new smoother has a sandwich form, which suggested the name 'sandwich smoother' to a referee. The sandwich smoother has a tensor product structure that simplifies asymptotic analysis and allows fast computation. We derive a local central limit theorem for the sandwich smoother, with simple expressions for the asymptotic bias and variance, by showing that the sandwich smoother is asymptotically equivalent to a bivariate kernel regression estimator with a product kernel. As far as we are aware, this is the first central limit theorem for a bivariate spline estimator of any type. Our simulation study shows that the sandwich smoother is orders of magnitude faster to compute than other bivariate spline smoothers, even when the latter are computed with a fast generalized linear array model algorithm, and comparable with them in terms of mean integrated squared error. We extend the sandwich smoother to array data of higher dimensions, where a generalized linear array model algorithm improves its computational speed. One important application of the sandwich smoother is estimating covariance functions in functional data analysis. In this application, our numerical results show that the sandwich smoother is orders of magnitude faster than local linear regression. This speed matters because functional data sets are becoming quite large.
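    A minimal sketch of the sandwich idea under simplifying assumptions (truncated-power-basis penalized splines with fixed penalties rather than the paper's B-spline/difference-penalty P-splines; the helper names are hypothetical): a univariate penalized-spline hat matrix is built for each coordinate and the data matrix is smoothed from both sides.

        import numpy as np

        def pspline_smoother(x, num_knots=20, degree=3, lam=1.0):
            """n x n hat matrix of a univariate penalized spline
            (truncated power basis, ridge penalty on the knot coefficients)."""
            knots = np.quantile(x, np.linspace(0, 1, num_knots + 2)[1:-1])
            B = np.column_stack([x ** d for d in range(degree + 1)] +
                                [np.clip(x - k, 0, None) ** degree for k in knots])
            D = np.diag([0.0] * (degree + 1) + [1.0] * num_knots)  # penalize knot terms only
            return B @ np.linalg.solve(B.T @ B + lam * D, B.T)

        def sandwich_smooth(Y, x_row, x_col, lam_row=1.0, lam_col=1.0):
            """Bivariate smoothing in sandwich form: Y_hat = S_row @ Y @ S_col.T."""
            S_row = pspline_smoother(x_row, lam=lam_row)
            S_col = pspline_smoother(x_col, lam=lam_col)
            return S_row @ Y @ S_col.T

        # toy example: noisy surface observed on a 60 x 80 grid
        x1, x2 = np.linspace(0, 1, 60), np.linspace(0, 1, 80)
        truth = np.sin(2 * np.pi * x1)[:, None] * np.cos(2 * np.pi * x2)[None, :]
        Y = truth + 0.3 * np.random.default_rng(0).normal(size=truth.shape)
        Y_hat = sandwich_smooth(Y, x1, x2)

    Because each coordinate gets its own hat matrix, the cost is essentially two univariate smooths plus two matrix products, which is the source of the speed advantage over fitting a full bivariate basis.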

    Uncertainty Analysis for Computationally Expensive Models with Multiple Outputs

    Bayesian MCMC calibration and uncertainty analysis for computationally expensive models is implemented using the SOARS (Statistical and Optimization Analysis using Response Surfaces) methodology. SOARS uses a radial basis function interpolator as a surrogate, also known as an emulator or meta-model, for the logarithm of the posterior density. To avoid wasteful evaluations of the expensive model, the emulator is built only on a high posterior density region (HPDR), which is located by a global optimization algorithm. The set of points in the HPDR at which the expensive model is evaluated is determined sequentially by the GRIMA algorithm, described in detail in another paper but outlined here. Enhancements of the GRIMA algorithm were introduced to improve efficiency. A case study uses an eight-parameter SWAT2005 (Soil and Water Assessment Tool) model in which daily stream flows and phosphorus concentrations are modeled for the Town Brook watershed, part of the New York City water supply. A Supplemental Material file available online contains additional technical details.
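    A hedged sketch of the surrogate-MCMC step described above (the target density, design points, and RBF settings are placeholders; locating the HPDR and the sequential GRIMA point selection are not implemented here): an RBF interpolator is fit to log-posterior values from the expensive model, and a random-walk Metropolis sampler then runs against the cheap surrogate.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)

        def expensive_log_posterior(theta):
            """Stand-in for a log-posterior that needs a costly model run."""
            return -0.5 * np.sum((theta - 0.3) ** 2 / 0.05)

        # design points, assumed already chosen inside a high posterior density region
        design = rng.uniform(0.0, 1.0, size=(200, 2))
        log_post = np.array([expensive_log_posterior(t) for t in design])

        # radial basis function emulator of the log posterior density
        emulator = RBFInterpolator(design, log_post, kernel="thin_plate_spline")

        # random-walk Metropolis using the surrogate instead of the expensive model
        theta = design[np.argmax(log_post)].copy()
        current = emulator(theta[None, :])[0]
        samples = []
        for _ in range(5000):
            proposal = theta + 0.05 * rng.normal(size=2)
            candidate = emulator(proposal[None, :])[0]
            if np.log(rng.uniform()) < candidate - current:
                theta, current = proposal, candidate
            samples.append(theta.copy())
        samples = np.array(samples)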

    DOES INDEX FUTURES TRADING REDUCE VOLATILITY IN THE CHINESE STOCK MARKET? A PANEL DATA EVALUATION APPROACH

    This study investigates the effect of introducing index futures trading on spot price volatility in the Chinese stock market. We employ a recently developed panel data policy evaluation approach (Hsiao, Ching, and Wan, 2011) to construct counterfactuals of spot market volatility, based mainly on cross-sectional correlations between the Chinese and international stock markets. The method does not require specifying a particular regression or time-series model for the volatility process around the introduction date of index futures trading, and thus avoids the potential omitted-variable bias caused by uncontrolled market factors in the existing literature. Our results provide empirical evidence that the introduction of index futures trading significantly reduces the volatility of the Chinese stock market, and this finding is robust to different model selection criteria and various prediction approaches. (c) 2012 Wiley Periodicals, Inc. Journal of Futures Markets 33:1167-1190, 2013.
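    A hedged sketch of the counterfactual construction the abstract refers to (variable names and the least-squares fit are illustrative assumptions, not the authors' exact implementation): the treated market's pre-event volatility is projected onto contemporaneous control-market volatilities, and the fitted relationship then predicts the no-treatment path after the event.

        import numpy as np

        def panel_counterfactual(y_treated, X_controls, event_index):
            """Hsiao-Ching-Wan-style counterfactual: fit the treated outcome on
            control outcomes over the pre-event window, predict the post-event path."""
            pre_X = np.column_stack([np.ones(event_index), X_controls[:event_index]])
            post_X = np.column_stack([np.ones(len(y_treated) - event_index),
                                      X_controls[event_index:]])
            beta, *_ = np.linalg.lstsq(pre_X, y_treated[:event_index], rcond=None)
            counterfactual = post_X @ beta
            effect = y_treated[event_index:] - counterfactual  # treatment effect path
            return counterfactual, effect

        # toy example: monthly volatility of the treated market and five controls
        rng = np.random.default_rng(2)
        T, event = 120, 80
        X = rng.normal(0.2, 0.05, size=(T, 5))
        y = 0.1 + X @ np.array([0.3, 0.2, 0.1, 0.15, 0.25]) + 0.01 * rng.normal(size=T)
        y[event:] -= 0.03  # volatility drop after futures introduction
        cf, effect = panel_counterfactual(y, X, event)
        print(effect.mean())  # average post-event effect (negative in this toy setup)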