
    The generalized shrinkage estimator for the analysis of functional connectivity of brain signals

    We develop a new statistical method for estimating functional connectivity between neurophysiological signals represented by a multivariate time series. We use partial coherence as the measure of functional connectivity. Partial coherence identifies the frequency bands that drive the direct linear association between any pair of channels. To estimate partial coherence, one would first need an estimate of the spectral density matrix of the multivariate time series. Parametric estimators of the spectral density matrix provide good frequency resolution but can be sensitive when the parametric model is misspecified. Smoothing-based nonparametric estimators are robust to model misspecification and are consistent but may have poor frequency resolution. In this work, we develop the generalized shrinkage estimator, which is a weighted average of a parametric estimator and a nonparametric estimator. The optimal weights are frequency-specific and derived under the quadratic risk criterion, so that whichever of the two estimators performs better at a particular frequency receives the heavier weight. We validate the proposed estimator in a simulation study and apply it to electroencephalogram recordings from a visual-motor experiment. (Comment: Published at http://dx.doi.org/10.1214/10-AOAS396 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).)
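
    A minimal sketch of the two computational ingredients described above, assuming precomputed inputs: the frequency-by-frequency convex combination of a parametric and a nonparametric spectral matrix estimate, and the partial coherence derived from the combined spectral matrix. The frequency-specific weights are taken as given; the paper's derivation of the optimal weights under the quadratic risk criterion is not reproduced, and all function and argument names are illustrative.

```python
import numpy as np

def shrinkage_spectral_estimate(S_par, S_nonpar, weights):
    """Frequency-by-frequency convex combination of a parametric and a
    nonparametric spectral density matrix estimate.

    S_par, S_nonpar : complex arrays of shape (n_freq, p, p)
    weights         : array of shape (n_freq,) with values in [0, 1];
                      weight 1 puts all mass on the parametric estimate.
    """
    w = weights[:, None, None]                 # broadcast over matrix entries
    return w * S_par + (1.0 - w) * S_nonpar

def partial_coherence(S):
    """Partial coherence matrix at one frequency from a p x p spectral
    matrix S: |g_ij|^2 / (g_ii * g_jj), where g = S^{-1}."""
    G = np.linalg.inv(S)
    d = np.sqrt(np.real(np.diag(G)))
    return np.abs(G / np.outer(d, d)) ** 2
```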

    Local Quantile Regression

    Quantile regression is a technique to estimate conditional quantile curves. It provides a comprehensive picture of a response contingent on explanatory variables. In a flexible modeling framework, a specific form of the conditional quantile curve is not fixed a priori. Indeed, the majority of applications do not per se require specific functional forms. This motivates a local parametric rather than a global fixed model fitting approach. A nonparametric smoothing estimator of the conditional quantile curve requires balancing local curvature against stochastic variability. In this paper, we suggest a local model selection technique that provides an adaptive estimator of the conditional quantile regression curve at each design point. Theoretical results show that the proposed adaptive procedure performs as well as an oracle that would minimize the local estimation risk for the problem at hand. We illustrate the performance of the procedure by an extensive simulation study and consider two applications: tail dependence analysis for the Hong Kong stock market and analysis of the distributions of the risk factors of temperature dynamics.
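
    As a concrete illustration of a local, rather than global, quantile fit, the sketch below minimizes a kernel-weighted check loss around a single design point. The bandwidth is held fixed; the adaptive local model selection that the paper proposes is not reproduced, and the function name, Gaussian kernel, and local linear form are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def local_linear_quantile(x, y, x0, tau=0.5, h=0.5):
    """Local linear tau-quantile fit at the design point x0 using Gaussian
    kernel weights of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)     # kernel weights around x0

    def weighted_check_loss(beta):
        resid = y - beta[0] - beta[1] * (x - x0)
        return np.sum(w * resid * (tau - (resid < 0)))   # pinball/check loss

    start = np.array([np.quantile(y, tau), 0.0])
    fit = minimize(weighted_check_loss, start, method="Nelder-Mead")
    return fit.x[0]                            # conditional tau-quantile at x0
```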

    Uniform confidence bands for pricing kernels

    Pricing kernels implicit in option prices play a key role in assessing the risk aversion over equity returns. We deal with nonparametric estimation of the pricing kernel (Empirical Pricing Kernel, EPK), given by the ratio of the risk-neutral density estimator and the subjective density estimator. The former density can be represented as the second derivative of the European call option price function with respect to the strike, which we estimate by nonparametric regression. The subjective density is estimated nonparametrically too. In this framework, we develop the asymptotic distribution theory of the EPK in the uniform (sup-norm) sense. In particular, to evaluate the overall variation of the pricing kernel, we develop a uniform confidence band for the EPK. Furthermore, as an alternative to the asymptotic approach, we propose a bootstrap confidence band. The developed theory is helpful for testing parametric specifications of pricing kernels and has a direct extension to estimating risk aversion patterns. The established results are assessed and compared in a Monte-Carlo study. As a real application, we test risk aversion over time induced by the EPK.
    Keywords: Empirical Pricing Kernel, Confidence Band, Bootstrap, Kernel Smoothing, Nonparametric
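
    The density-ratio construction of the EPK can be sketched as follows: the risk-neutral density is recovered, in the spirit of Breeden and Litzenberger, as the growth-adjusted second strike derivative of the call price curve, and the subjective density as a kernel density of terminal prices implied by historical returns. The paper estimates both densities by careful nonparametric smoothing; the finite differences and gaussian_kde below are crude stand-ins, and all argument names are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def empirical_pricing_kernel(strikes, call_prices, hist_returns, spot, rate, maturity):
    """EPK sketch: risk-neutral density / subjective density on the strike grid."""
    # Breeden-Litzenberger: q(K) = exp(r*T) * d^2 C / dK^2, here via finite differences
    q = np.exp(rate * maturity) * np.gradient(np.gradient(call_prices, strikes), strikes)
    # subjective density of the terminal price, from historical log-returns
    p = gaussian_kde(spot * np.exp(hist_returns))(strikes)
    return q / p
```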

    On kernel smoothing for extremal quantile regression

    Nonparametric regression quantiles obtained by inverting a kernel estimator of the conditional distribution of the response are long established in statistics. Attention has, however, been restricted to ordinary quantiles staying away from the tails of the conditional distribution. The purpose of this paper is to extend their asymptotic theory far enough into the tails. We focus on extremal quantile regression estimators of a response variable given a vector of covariates in the general setting, whether the conditional extreme-value index is positive, negative, or zero. Specifically, we elucidate their limit distributions when they are located in the range of the data or near and even beyond the sample boundary, under technical conditions that link the speed of convergence of their (intermediate or extreme) order with the oscillations of the quantile function and a von Mises property of the conditional distribution. A simulation experiment and an illustration on real data are presented. The real data are the American electric data, where the estimation of conditional extremes is found to be of genuine interest. (Comment: Published at http://dx.doi.org/10.3150/12-BEJ466 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).)
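
    The baseline estimator referred to above, a regression quantile obtained by inverting a kernel estimate of the conditional distribution function, can be sketched as follows. This covers only ordinary quantile levels inside the data range; the extremal theory for intermediate and extreme orders, including extrapolation beyond the sample boundary, is the paper's contribution and is not reproduced. The function name, Gaussian kernel, and fixed bandwidth are assumptions.

```python
import numpy as np

def kernel_conditional_quantile(x, y, x0, tau, h):
    """Conditional tau-quantile at x0 obtained by inverting a Nadaraya-Watson
    estimate of the conditional distribution function."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w = w / w.sum()                                   # normalized kernel weights
    ys = np.sort(y)
    F = np.array([np.sum(w[y <= t]) for t in ys])     # smoothed conditional CDF
    idx = np.searchsorted(F, tau)                     # generalized inverse of F
    return ys[min(idx, len(ys) - 1)]
```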

    Methods and software for nonparametric estimation in multistate models.

    Multistate models are a type of multivariate survival data which provide a framework for describing a complex system where individuals transition through a series of distinct states. This research focuses on nonparametric inference for general multistate models with directed tree topology. In this dissertation, we develop an R package, msSurv, which calculates the marginal state occupation probabilities and state entry and exit time distributions for a general, possibly non-Markov, multistate system under left truncation and right censoring. Dependent censoring is handled by modeling the censoring hazard through observable covariates. Pointwise confidence intervals for the above-mentioned quantities are obtained and returned, for independent censoring from closed-form variance estimators and for dependent censoring using the bootstrap. We also develop novel nonparametric estimators of state occupation probabilities, state entry time distributions, and state exit time distributions for interval-censored data using a combination of weighted isotonic regression and kernel smoothing with product-limit estimation. Structural assumptions about the multistate system are avoided when possible. We evaluate the performance of our estimators through simulation studies and a real data analysis of a UNOS (United Network for Organ Sharing) data set.
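
    For complete (uncensored) data, nonparametric state occupation probabilities reduce to an Aalen-Johansen product-integral over the observed transitions, as sketched below. msSurv additionally handles left truncation, right censoring, dependent censoring, and (in the new estimators) interval censoring; none of that machinery is reproduced here, and the input format and function name are assumptions.

```python
import numpy as np

def state_occupation(transitions, n_states, initial_state=0):
    """Aalen-Johansen state occupation probabilities for complete multistate data.

    transitions : iterable of (time, from_state, to_state, n_at_risk), one entry
        per observed transition; n_at_risk is the number of subjects occupying
        from_state just before the transition time (no tied times assumed).
    """
    P = np.eye(n_states)                       # product-integral accumulator
    for time, i, j, n_at_risk in sorted(transitions):
        dA = np.zeros((n_states, n_states))
        dA[i, j] = 1.0 / n_at_risk             # transition hazard increment
        dA[i, i] = -1.0 / n_at_risk
        P = P @ (np.eye(n_states) + dA)
    return P[initial_state]                    # occupation probabilities after the last event
```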

    Nonparametric Inferences for the Hazard Function with Right Truncation

    Incompleteness is a major feature of time-to-event data. As one type of incompleteness, truncation refers to the unobservability of the time-to-event variable because it is smaller (or greater) than the truncation variable. A truncated sample may involve left truncation, right truncation, or both. Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies on right truncation, Lagakos et al. (1988) proposed transforming a right truncated variable to a left truncated variable and then applying existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation. However, this quantity does not have a natural interpretation. There exist gaps in the inferences for the regular forward-time hazard function with right truncated data. This dissertation discusses variance estimation for the cumulative hazard estimator, a one-sample log-rank test, and the comparison of hazard rate functions among finitely many independent samples in the context of right truncation. First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators for the cumulative hazard estimator. Some revision of the variance estimators is suggested in this dissertation and evaluated in a Monte-Carlo study. Second, this dissertation studies hypothesis testing for right truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is discussed first, followed by a family of weighted tests for comparison among K samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte-Carlo study. Finally, this dissertation studies nonparametric inference for the hazard rate function with right truncated data. The kernel smoothing technique is used to estimate the hazard rate function. A Monte-Carlo study investigates the uniform-kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are implemented in an example using blood-transfusion-related AIDS data.
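
    The smoothing step mentioned above amounts to convolving the jumps of a cumulative hazard estimator with a kernel; a minimal uniform-kernel version is sketched below. The right-truncation-adjusted cumulative hazard increments derived in the dissertation are taken here as given inputs, and the function and argument names are illustrative.

```python
import numpy as np

def uniform_kernel_hazard(event_times, cum_hazard_jumps, grid, bandwidth):
    """Smoothed hazard rate on `grid`: sum_i K_b(t - t_i) * dLambda(t_i),
    with K_b a uniform kernel of half-width `bandwidth`."""
    t_grid = np.asarray(grid, dtype=float)[:, None]
    t_evt = np.asarray(event_times, dtype=float)[None, :]
    K = (np.abs(t_grid - t_evt) <= bandwidth) / (2.0 * bandwidth)
    return K @ np.asarray(cum_hazard_jumps, dtype=float)
```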

    The Foresight Bias in Monte-Carlo Pricing of Options with Early Exercise

    In this paper we investigate the so-called foresight bias that may appear in the Monte-Carlo pricing of Bermudan and compound options if the exercise criterion is calculated by the same Monte-Carlo simulation as the exercise values. The standard approach to removing the foresight bias is to use two independent Monte-Carlo simulations: one simulation is used to estimate the exercise criterion (as a function of some state variable); the other is used to calculate the exercise price based on this exercise criterion. We shall call this the numerical removal of the foresight bias. In this paper we give an exact definition of the foresight bias in closed form and show how to apply an analytical correction for it. Our numerical results show that the analytical removal of the foresight bias gives results similar to the standard numerical removal. The analytical correction allows for simpler coding and faster pricing compared to a numerical removal of the foresight bias. Our analysis may also be used as an indication of when to neglect the foresight bias removal altogether. While this is sometimes possible, neglecting foresight bias will break the possibility of parallelizing the Monte-Carlo simulation and may be inadequate for Bermudan options with many exercise dates (for which the foresight bias may become a Bermudan option on the Monte-Carlo error) or for portfolios of Bermudan options (for which the foresight bias grows faster than the Monte-Carlo error). In addition to the analytical removal of the foresight bias, we derive an analytical correction for the suboptimal exercise due to the uncertainty induced by the Monte-Carlo error. The combined correction for foresight bias (biased high) and suboptimal exercise (biased low) removes the systematic bias even for Monte-Carlo simulations with a very small number of paths.
    Keywords: Monte Carlo, Bermudan, Early Exercise, Regression, Least Square Approximation of Conditional Expectation, Least Square Monte Carlo, Longstaff-Schwartz, Perfect Foresight, Foresight Bias
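
    The numerical removal described above, fitting the exercise criterion on one simulation and pricing on an independent one, can be sketched for a Bermudan put under Longstaff-Schwartz regression as follows. This is a plain two-pass least-squares Monte-Carlo illustration, not the paper's analytical foresight-bias or suboptimality correction; the option type, polynomial basis, and all names are assumptions.

```python
import numpy as np

def fit_exercise_rule(paths, strike, rate, dt, deg=2):
    """Backward Longstaff-Schwartz pass: regress discounted continuation values
    on a polynomial in the asset price, using in-the-money paths only."""
    n_paths, n_steps = paths.shape
    cashflow = np.maximum(strike - paths[:, -1], 0.0)     # payoff at maturity
    coeffs = [None] * n_steps
    for t in range(n_steps - 2, 0, -1):
        cashflow *= np.exp(-rate * dt)                    # discount one period back
        itm = strike - paths[:, t] > 0.0
        if itm.sum() > deg + 1:
            X = np.vander(paths[itm, t], deg + 1)
            beta, *_ = np.linalg.lstsq(X, cashflow[itm], rcond=None)
            coeffs[t] = beta
            exercise = strike - paths[itm, t] > X @ beta  # payoff beats continuation
            cashflow[itm] = np.where(exercise, strike - paths[itm, t], cashflow[itm])
    return coeffs

def price_with_rule(paths, coeffs, strike, rate, dt, deg=2):
    """Forward pass on an independent path set, applying the stored exercise rule."""
    n_paths, n_steps = paths.shape
    value = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for t in range(1, n_steps):
        payoff = np.maximum(strike - paths[:, t], 0.0)
        if t == n_steps - 1:
            ex = alive & (payoff > 0.0)                   # exercise at maturity if in the money
        elif coeffs[t] is not None:
            cont = np.vander(paths[:, t], deg + 1) @ coeffs[t]
            ex = alive & (payoff > 0.0) & (payoff > cont)
        else:
            ex = np.zeros(n_paths, dtype=bool)
        value[ex] = payoff[ex] * np.exp(-rate * dt * t)   # discount exercised payoff to time 0
        alive &= ~ex
    return value.mean()
```

    With two independent path matrices simulated from the same model, calling price_with_rule on one set with the coefficients fitted on the other reproduces the two-simulation removal, whereas reusing the fitting paths for pricing reintroduces the foresight bias discussed above.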