
    SEQUENTIAL METHODS FOR NON-PARAMETRIC HYPOTHESIS TESTING

    In today’s world, many applications are characterized by the availability of large amounts of complex-structured data. It is not always possible to fit such data to predefined models or distributions, and model-dependent signal processing approaches are often susceptible to mismatches between the data and the assumed model. When the data do not conform to the assumed model, providing sufficient performance guarantees becomes challenging. It is therefore important to devise methods that are model-independent and robust, provide sufficient performance guarantees for the task at hand, and at the same time are simple to implement. The goal of this dissertation is to develop such algorithms for two-sided sequential binary hypothesis testing. We propose two algorithms for sequential non-parametric hypothesis testing, both based on the random distortion testing (RDT) framework. The RDT framework addresses the problem of testing whether a random signal observed in additive noise deviates by more than a specified tolerance from a fixed model. The approach is non-parametric in the sense that the underlying signal distributions under each hypothesis are assumed to be unknown. Importantly, we show that the proposed algorithms are not only robust but also provide performance guarantees in the non-asymptotic regime, in contrast to the popular non-parametric likelihood-ratio-based approaches, which provide only asymptotic guarantees.
    In the first part of the dissertation, we develop a sequential algorithm, SeqRDT. We first introduce a few mild assumptions required to control the error probabilities of the algorithm, then analyze its asymptotic properties along with the behavior of the thresholds. Finally, we derive upper bounds on the probabilities of false alarm (PFA) and missed detection (PMD) and demonstrate how to choose the algorithm parameters such that PFA and PMD are guaranteed to stay below pre-specified levels. Specifically, we present two ways to design the algorithm: we first introduce the notion of a buffer and show that, under a few mild assumptions, an appropriate buffer size can be chosen such that PFA and PMD are controlled. Later, we eliminate the buffer by introducing additional parameters and show that, with an appropriate choice of these parameters, the error probabilities of the algorithm can still be controlled.
    In the second part of the dissertation, we propose a truncated (finite-horizon) algorithm, T-SeqRDT, for the two-sided binary hypothesis testing problem. We first present the optimal fixed-sample-size (FSS) test for the problem, along with a few important preliminary results required to design the truncated algorithm. Similar to the non-truncated case, we first analyze the properties of the thresholds and then derive upper bounds on PFA and PMD. We then choose the thresholds such that the proposed algorithm not only guarantees the error probabilities to stay below pre-specified levels but also decides faster on average than its optimal FSS counterpart. We show that the truncated algorithm requires fewer assumptions on the signal model than the non-truncated case, and we derive bounds on the average stopping times of the algorithm. Importantly, we study the trade-off between the stopping time and the error probabilities of the algorithm and propose a method to choose the algorithm parameters. Finally, via numerical simulations, we compare the performance of T-SeqRDT and SeqRDT to the sequential probability ratio test (SPRT) and composite sequential probability ratio tests, and we show the robustness of the proposed approaches compared to the standard likelihood-ratio-based approaches.
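    The stop-or-continue structure shared by these algorithms and classical sequential tests (accumulate observations, compare a running statistic to an upper and a lower threshold, stop at the first crossing) can be illustrated with the SPRT baseline mentioned above. The sketch below is Wald's classical SPRT for a known Gaussian mean shift; the parameters `mu0`, `mu1`, `sigma` and the error levels are illustrative assumptions, and this is the comparison baseline, not the RDT-based algorithms themselves.

```python
import math
import random

def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                  alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    Returns (decision, n_used): decision is 'H0', 'H1', or None if the
    sample stream ran out before either threshold was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    n = 0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return None, n

random.seed(0)
# A stream drawn under H1 (mean 1): the test typically stops early at 'H1'
stream = [random.gauss(1.0, 1.0) for _ in range(500)]
decision, n_used = sprt_gaussian(stream)
```

    Unlike the fixed-sample-size test, the sample size here is data-dependent: informative streams cross a threshold after only a handful of observations.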

    Bootstrap Methods for Heavy-Tail or Autocorrelated Distributions with an Empirical Application

    Chapter One: The Truncated Wild Bootstrap for the Asymmetric Infinite Variance Case. The wild bootstrap method proposed by Cavaliere et al. (2013) for testing the location parameter in the location model, with errors in the domain of attraction of an asymmetric stable law, is inappropriate. Hence, we introduce a new bootstrap test procedure that overcomes the failure of Efron’s (1979) resampling bootstrap. This test exploits the wild bootstrap of Cavaliere et al. (2013) and the central limit theorem for trimmed variables of Berkes et al. (2012) to deliver confidence sets with correct asymptotic coverage probabilities for asymmetric heavy-tailed data. The method entails locating cut-off values such that all data between these two values satisfy the conditions of the central limit theorem. The proposed bootstrap is therefore termed the Truncated Wild Bootstrap (TWB), since it takes advantage of both findings. Simulation evidence assessing the quality of inference of the available bootstrap tests for this model reveals that, on most occasions, the TWB performs better than the parametric bootstrap (PB) of Cornea-Madeira & Davidson (2015). In addition, the TWB test scheme is superior to the PB because it can test the location parameter when the index of stability is below one, whereas the PB has no power in that case. The TWB is also superior to the PB when the tail index is close to 1 and the distribution is heavily skewed, unless the tail index is exactly 1 and the scale parameter is very high.
    Chapter Two: A Frequency Domain Wild Bootstrap for Dependent Data. In this chapter a resampling method is proposed for a stationary dependent time series, based on Rademacher wild bootstrap draws from the Fourier transform of the data. The main distinguishing feature of our method is that the bootstrap draws share their periodogram identically with the sample, implying sound properties under dependence of arbitrary form. A drawback of the basic procedure is that the bootstrap distribution of the mean is degenerate; we show that a simple Gaussian augmentation overcomes this difficulty. Monte Carlo evidence indicates a favourable comparison with alternative methods in tests of location and significance in a regression model with autocorrelated shocks, and also in tests of unit roots.
    Chapter Three: Frequency-Based Bootstrap Methods for DC Pension Plan Strategy Evaluation. Using conventional bootstrap methods, such as the standard bootstrap and the moving block bootstrap, to produce long-run returns in order to rank one strategy over the others by its associated reward and risk might be misleading. In this chapter we therefore use a simple pension model, mainly concerned with long-term wealth accumulation, to assess different bootstrap methods for the first time in the pension literature. We find that the multivariate Fourier bootstrap mimics the true distribution most satisfactorily, as measured by the Cramér-von Mises statistic. We also address the disagreement in the pension literature on selecting the best pension plan strategy, presenting a comprehensive comparison of strategies using different bootstrap procedures and different cash-flow performance (CFP) measures across a range of countries. We find that the bootstrap method plays a critical role in determining the optimal strategy, and that different CFP measures rank pension plans differently across countries and bootstrap methods.
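    The core idea of the frequency-domain wild bootstrap in Chapter Two can be sketched in a few lines: multiply each Fourier coefficient of the series by an independent Rademacher (plus or minus one) draw and invert the transform. Because the draws only flip signs, every replicate has exactly the same periodogram as the sample, which is the property highlighted above. This is a minimal illustration of that one idea under assumed conventions, not the chapter's full procedure; in particular it omits the Gaussian augmentation that fixes the degenerate bootstrap mean.

```python
import numpy as np

def fourier_wild_bootstrap(x, rng):
    """One frequency-domain wild bootstrap replicate of a real series x.

    Each one-sided Fourier coefficient is multiplied by a Rademacher
    (+/-1) draw; using the real FFT keeps the inverse transform real.
    Since |X_k| is unchanged, the replicate has exactly the same
    periodogram as the sample.
    """
    n = len(x)
    X = np.fft.rfft(x)                      # one-sided spectrum of a real series
    eta = rng.choice([-1.0, 1.0], size=X.shape)
    # Note: the DC coefficient is also sign-flipped, so the replicate
    # mean is plus or minus the sample mean -- the degeneracy that the
    # chapter's Gaussian augmentation addresses.
    return np.fft.irfft(X * eta, n=n)

rng = np.random.default_rng(42)
x = rng.standard_normal(64)
x_star = fourier_wild_bootstrap(x, rng)
# np.abs(np.fft.rfft(x_star)) matches np.abs(np.fft.rfft(x)) to machine precision
```
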

    Alternation bias and reduction in St. Petersburg gambles: an experimental investigation

    Reduction of compound lotteries is implicit both in the statement of the St. Petersburg Paradox and in its resolution by Expected Utility (EU). We report three real-money choice experiments between truncated compound-form St. Petersburg gambles and their reduced-form equivalents. The first tests for differences in elicited certainty equivalents. The second develops the distinction between ‘weak-form’ and ‘strong-form’ rejection of Reduction, as well as a novel experimental task that verifiably implements Vernon Smith’s dominance precept. The third experiment checks for robustness against range and increment manipulation. In all three experiments the null hypothesis of Reduction is rejected, with systematic deprecation of the compound form in favor of the reduced form. This is consistent with the predictions of alternation bias. Together these experiments offer evidence that the Reduction assumption may have limited descriptive validity in modelling St. Petersburg gambles, whether by EU or non-EU theories.
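    The Reduction assumption being tested can be made concrete with a small computation: a truncated compound-form St. Petersburg gamble, written as a list of branch probabilities and payoffs, reduces to a single distribution over final payoffs, and under EU the two forms are equivalent by construction. The truncation convention used below (T consecutive tails also pay 2**T) and the function names are illustrative assumptions, not the payoff scheme used in the experiments.

```python
from fractions import Fraction

def compound_form(T):
    """Truncated St. Petersburg gamble as a compound lottery: toss a fair
    coin up to T times; a first head on toss k pays 2**k, and T tails in
    a row pay 2**T (one common truncation convention, assumed here).
    Returns a list of (probability, payoff) branches, one per outcome."""
    lottery = [(Fraction(1, 2**k), 2**k) for k in range(1, T + 1)]
    lottery.append((Fraction(1, 2**T), 2**T))   # the all-tails branch
    return lottery

def reduce_lottery(lottery):
    """Reduced form: merge branches with equal payoffs into a single
    probability distribution over final payoffs."""
    dist = {}
    for p, payoff in lottery:
        dist[payoff] = dist.get(payoff, Fraction(0)) + p
    return dist

def expected_value(dist):
    return sum(p * payoff for payoff, p in dist.items())

compound = compound_form(8)
reduced = reduce_lottery(compound)
# Branch probabilities sum to one, and with this convention the expected
# value of the T-truncated gamble is T + 1 (here, 9).
```

    Alternation bias predicts that subjects nevertheless value the two forms differently, which is what the experiments above find.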

    Causation Delays and Causal Neutralization: The Money-Output Relationship Revisited

    In this paper, we develop a parametric test procedure for multiple-horizon "Granger" causality and apply it to the well-established problem of determining causal patterns in aggregate monthly U.S. money and output. As opposed to most papers in the parametric causality literature, we are interested in whether money ever "causes" (can ever be used to forecast) output, when causation occurs, and how (through which causal chains). For brevity, we consider only causal patterns up to horizon h = 3. Our tests are based on new recursive parametric characterizations of causality chains which help to distinguish between mere noncausation (the total absence of indirect causal routes) and causal neutralization, in which several causal routes exist but cancel each other out such that noncausation occurs. In many cases the recursive characterizations imply greatly simplified linear compound hypotheses for multi-step-ahead causation, and permit Wald tests with the usual asymptotic chi-square distribution. A simulation study demonstrates that a sequential test method does not generate the type of size distortions typically reported in the literature, and that null rejection frequencies depend entirely on how we define the "null hypothesis" of non-causality (at which horizon, if any). Using monthly data employed in Stock and Watson (1989), and others, we demonstrate that while Friedman and Kuttner’s (1993) result that detrended money growth fails to cause output one month ahead continues into the third quarter of 2003, a significant causal lag may exist through a variety of short-term interest rates: money appears to cause output after at least one month passes, although in some cases recent data offer conflicting evidence suggesting money may never cause output and be truly irrelevant in matters of real decisions.
    Keywords: multiple horizon causation, multivariate time series, sequential tests
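    At the shortest horizon, Wald tests of Granger non-causality reduce to the familiar one-step case: regress the target on lagged values of both series and test whether the coefficients on the other series' lags are zero, comparing the statistic to a chi-square critical value. The sketch below illustrates only that h = 1 case on simulated data; the recursive multi-horizon characterizations are this paper's actual contribution and are not reproduced here, and the bivariate model, coefficient values, and function name are illustrative assumptions.

```python
import numpy as np

def wald_granger_1step(y, x):
    """Wald test that x fails to Granger-cause y one step ahead.

    Regresses y_t on a constant, y_{t-1} and x_{t-1} by OLS and returns
    the Wald statistic for H0: coefficient on x_{t-1} = 0, which is
    asymptotically chi-square with 1 degree of freedom under H0.
    """
    Y = y[1:]
    Z = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])
    b, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ b
    sigma2 = resid @ resid / (len(Y) - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    return b[2] ** 2 / cov[2, 2]          # (estimate / std. error)^2

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                     # x feeds into y with a one-period lag
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + eps[t]

W = wald_granger_1step(y, x)
# With this strong causal link, W far exceeds the chi-square(1)
# 5% critical value of about 3.84, so non-causality is rejected.
```

    The paper's multi-horizon tests generalize this construction: linear compound restrictions on the coefficients of recursively defined regressions replace the single zero restriction above.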

    Unit Roots and Cointegration in Panels

    This paper provides a review of the literature on unit roots and cointegration in panels where the time dimension (T) and the cross-section dimension (N) are relatively large. It distinguishes between the first-generation tests, developed on the assumption of cross-section independence, and the second-generation tests, which allow, in a variety of forms and degrees, for the dependence that might prevail across the different units in the panel. In the analysis of cointegration, the hypothesis testing and estimation problems are further complicated by the possibility of cross-section cointegration, which could arise if the unit roots in the different cross-section units are due to common random walk components.