18 research outputs found

    A Multivariate CLT for Local Dependence with n^{-1/2} log n Rate and Applications to Multivariate Graph Related Statistics

    This paper concerns the rate of convergence in the central limit theorem for certain local dependence structures. The main goal of the paper is to obtain estimates of the rate in the multidimensional case. Certain one-dimensional results are also improved by using more flexible characteristics of dependence. Assuming the summands are bounded, we obtain rates close to those for independent variables. As an application, we study the rate of the normal approximation of certain graph-related statistics which arise in testing the equality of several multivariate distributions.
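    For orientation, a bound of the kind announced in the title takes the following schematic form; this is a sketch only, and the precise dependence structure, class of sets, and constants are as specified in the paper:

```latex
% Schematic form of the rate: for a normalized sum W_n = n^{-1/2} \sum_i X_i of
% bounded, locally dependent random vectors with limiting covariance \Sigma,
\sup_{A \in \mathcal{A}} \left| \mathbb{P}(W_n \in A) - \mathbb{P}(Z \in A) \right|
  \le C \, n^{-1/2} \log n, \qquad Z \sim \mathcal{N}(0, \Sigma),
% where \mathcal{A} is a suitable class of sets (e.g. convex sets) and C depends
% on the dimension and the local dependence structure but not on n.
```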

    On Stein's method for products of normal random variables and zero bias couplings

    In this paper, we extend Stein's method to the distribution of the product of n independent mean zero normal random variables. A Stein equation is obtained for this class of distributions, which reduces to the classical normal Stein equation in the case n = 1. This Stein equation motivates a generalisation of the zero bias transformation. We establish properties of this new transformation, and illustrate how they may be used together with the Stein equation to assess distributional distances for statistics that are asymptotically distributed as the product of independent central normal random variables. We end by proving some product normal approximation theorems.
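    For context, the n = 1 objects that the paper generalises are standard; they are stated here from the classical theory, not from the paper's generalised versions:

```latex
% Classical normal Stein equation: for a test function h and Z ~ N(0,1),
f'(w) - w f(w) = h(w) - \mathbb{E}\, h(Z).
% Zero bias transformation: for X with mean zero and variance \sigma^2, X^* has
% the X-zero biased distribution if, for all suitable f,
\mathbb{E}\left[ X f(X) \right] = \sigma^2\, \mathbb{E}\left[ f'(X^*) \right],
% and X^* =_d X holds precisely when X ~ N(0, \sigma^2), which is what makes the
% transformation useful for normal approximation.
```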

    Distributional approximations and set-valued sublinear expectations

    This dissertation is composed of two parts. The first part is concerned with several types of distributional approximation, namely multivariate Poisson, Poisson process, and Gaussian approximation. Employing the solution of the Stein equation for the Poisson distribution, we obtain an explicit bound for the multivariate Poisson approximation of random vectors in the Wasserstein distance. The bound is then utilized in the context of point processes to provide a Poisson process approximation result in terms of a new metric called d_π, defined as the supremum over all Wasserstein distances between random vectors obtained by evaluating the point processes on arbitrary collections of disjoint sets. As applications, we consider the multivariate Poisson approximation of the sum of m-dependent Bernoulli random vectors, the Poisson process approximation of point processes of U-statistic structure, and the Poisson process approximation of point processes with Papangelou intensity. Next, we consider a variant of the classical Johnson-Mehl birth-growth model with random growth speed and prove Gaussian approximation results. In this model, seeds appear at random times and locations and start growing instantaneously in all directions with random speeds; the location, birth time, and growth speed of the seeds are given by a Poisson process. Under suitable conditions on the random growth speed and birth time distribution, we establish quantitative central limit theorems for the sum of given weights at the exposed points, which are those seeds in the model that are not covered at the time of their birth. Such models have previously been considered, albeit with deterministic growth speed.

    In the second part of the dissertation, we propose a general construction of convex closed sets obtained by applying sublinear expectations to random vectors in Euclidean space. We show that many well-known transforms in convex geometry (in particular, the centroid body, convex floating body, and Ulam floating body) are special instances of our construction. Further, we identify the dual representation of such convex bodies and exhibit one map that serves as a building block for all convex bodies defined this way. Several further properties are investigated.
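    Two of the ingredients can be written down concretely. The univariate Stein-Chen equation below is the classical one; the form of d_π is paraphrased from the abstract's description, so the exact class of set collections is an assumption here:

```latex
% Stein-Chen equation for the Poisson(\lambda) distribution: for a test function h,
\lambda f(k+1) - k f(k) = h(k) - \mathbb{E}\, h(P_\lambda),
  \qquad P_\lambda \sim \mathrm{Poisson}(\lambda),\ k \ge 0.
% The metric d_\pi on point processes, as described in the abstract: for \xi, \eta,
d_\pi(\xi, \eta) = \sup_{(B_1, \dots, B_m)}
  d_W\big( (\xi(B_1), \dots, \xi(B_m)),\ (\eta(B_1), \dots, \eta(B_m)) \big),
% the supremum running over finite collections of disjoint sets B_1, ..., B_m,
% where d_W denotes the Wasserstein distance between random vectors.
```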

    Central limit theorems and bootstrap in high dimensions


    Stability Analysis of Plates and Shells

    This special publication contains the papers presented at the special sessions honoring Dr. Manuel Stein during the 38th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference held in Kissimmee, Florida, April 7-10, 1997. This volume, and the SDM special sessions, are dedicated to the memory of Dr. Manuel Stein, a major pioneer in structural mechanics, plate and shell buckling, and composite structures. Many of the papers presented are the work of Manny's colleagues and co-workers and are a result, directly or indirectly, of his influence. Dr. Stein earned his Ph.D. in Engineering Mechanics from Virginia Polytechnic Institute and State University in 1958. He worked in the Structural Mechanics Branch at the NASA Langley Research Center from 1943 until 1989. Following his retirement, Dr. Stein continued his involvement with NASA as a Distinguished Research Associate.

    Topics in particle physics beyond the Standard Model

    We present new models of particle physics beyond the Standard Model. These models include extensions to the ideas of extra dimensions, deconstruction, supersymmetry, and Higgsless electroweak symmetry breaking. Besides introducing new models and discussing their consequences, we also discuss how galaxy cluster surveys can be used to constrain new physics beyond the Standard Model.

    We find that an ultraviolet completion of gauge theories in the Randall-Sundrum model can be found in a deconstructed theory. The warping of the extra dimension is reproduced in the low energy theory by considering a general potential for the link fields, with translational invariance broken only by boundary terms. The mass spectrum for the gauge and link fields is found to deviate from the Randall-Sundrum case after the first couple of modes. By extending this model to a supersymmetric theory space, we find that supersymmetry is broken by the generation of a cosmological constant. Unless the theory is coupled to gravity or messenger fields, the spectrum remains supersymmetric.

    We also present a hybrid Randall-Sundrum model in which an infinite slice of warped space is added to the extra dimension of the original theory. The hybrid model has a continuous gravitational spectrum with resonances at the Kaluza-Klein excitations of the original orbifolded model. A similar model is considered where the infinite space is cut off by the addition of a negative tension brane. SU(2)_L x SU(2)_R x U(1)_(B-L) gauge fields are added to the bulk of our hybrid model, and we find that electroweak symmetry is broken with an appropriate choice of boundary conditions. By varying the size of the extra dimension, we find that the S parameter can be decreased by as much as 60%.

    Finally, we review models of structure formation and discuss the possibility of constraining new physics with galaxy cluster surveys. We find that for a large scatter in the luminosity-temperature relation, the cosmological parameters favored by galaxy cluster counts from the 400 Square Degree ROSAT survey are in agreement with the values found in the WMAP 3-year analysis. We explain why X-ray surveys of galaxy cluster number counts are insensitive to new physics that would produce a dimming mechanism.
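    For reference, the warped geometry that the deconstructed theory reproduces is the standard Randall-Sundrum one, quoted here from the well-known RS setup rather than from this thesis:

```latex
% Randall-Sundrum warped metric on an orbifolded extra dimension \phi \in [-\pi, \pi]:
ds^2 = e^{-2 k r_c |\phi|}\, \eta_{\mu\nu}\, dx^\mu dx^\nu - r_c^2\, d\phi^2,
% where k is the AdS curvature scale and r_c the compactification radius; the
% exponential warp factor generates the hierarchy between the two branes.
```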

    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    Recent advances in information technology, such as automatic data acquisition and sensor systems, have created tremendous opportunities for collecting valuable process data. The timely processing of such data for meaningful information remains a challenge. In this research, several data mining methodologies that aid the extraction of information from high-dimensional functional data streams are developed.

    For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can be easily set a priori with only slight knowledge of the data involved, and they provide lower and upper bounds for the parameters. The functions are applicable to time series predictions, on-line predictions, and batch predictions. In order to apply these functions to on-line prediction, a new on-line support vector regression algorithm that uses adaptive weighting parameters was presented. The new algorithm uses a varying rather than fixed regularization constant and accuracy parameter. The developed algorithm is more robust to the volume of data available for on-line training as well as to the relative position of the available data in the training sequence. The algorithm improves prediction accuracy by reducing the uncertainty of using fixed values for the regression parameters, and likewise of using regression values based on experts' knowledge rather than on the characteristics of the incoming training data. The developed functions and algorithm were applied to feedwater flow rate data and two benchmark time series datasets. The results show that using adaptive regression parameters performs better than using fixed regression parameters.

    In order to reduce the dimension of data with several hundreds or thousands of predictors and enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, for identifying and extracting the significant features of a single curve was developed. The procedure involves transforming the original spectra into wavelet coefficients. It is based on a multiple hypothesis testing approach and controls the family-wise error rate in order to guard against selecting insignificant features, without any concern about the amount of noise that may be present in the data. The procedure is therefore applicable to data reduction and/or data denoising. It was compared to six other data-reduction and data-denoising methods in the literature; the developed procedure is found to consistently perform better than most of the popular methods and at the same level as the others.

    Many real-world data with high-dimensional explanatory variables also have multiple response variables; selecting the fewest explanatory variables that show high sensitivity to predicting the response variable(s) and low sensitivity to the noise in the data is therefore important for better performance and reduced computational burden. In order to select the fewest explanatory variables that can predict each of the response variables well, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features for each curve; representative features are then selected out of the extracted features for all curves using a voting selection strategy. Other selection strategies, such as union and intersection, were also described and implemented. The essence of the first stage is to reduce the dimension of the data without any consideration of whether or not the features can predict the response variables accurately. The second stage uses a Bayesian decision theory approach to select those extracted wavelet coefficients that can predict each of the response variables accurately. The two-stage procedure was implemented using near-infrared spectroscopy data and shaft misalignment data; the results show that the second stage further reduces the dimension, and the prediction results are encouraging.
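    As a rough illustration of the wavelet feature extraction idea (not the dissertation's exact step-down procedure, which is a more refined multi-step test), here is a minimal sketch that keeps only the coefficients surviving a Bonferroni-style family-wise error cut. The wavelet choice "db4", the decomposition level, and the MAD noise estimate are all assumptions of this example:

```python
# Minimal sketch of FWER-controlled wavelet feature extraction (illustrative only).
import numpy as np
import pywt
from scipy.stats import norm

def extract_wavelet_features(signal, wavelet="db4", level=4, alpha=0.05):
    """Keep wavelet coefficients that are significant under a Bonferroni cut."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise scale from the finest-level detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    n = sum(len(c) for c in coeffs)
    # Bonferroni threshold controlling the family-wise error rate at alpha.
    thr = sigma * norm.ppf(1.0 - alpha / (2.0 * n))
    kept = [np.where(np.abs(c) > thr, c, 0.0) for c in coeffs]
    return kept, thr

# Example: a noisy curve with one sharp local feature.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
curve = np.exp(-((t - 0.3) ** 2) / 0.001) + 0.1 * rng.standard_normal(t.size)
kept, thr = extract_wavelet_features(curve)
denoised = pywt.waverec(kept, "db4")[: t.size]  # reduced/denoised representation
```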

    New methods for econometric inference

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 201-208).

    Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics. It is therefore important to design effective and practical econometric methods for testing this prediction in empirical analysis. Chapter 1 develops a general nonparametric framework for testing monotonicity of a regression function. Using this framework, a broad class of new tests is introduced, which gives an empirical researcher much flexibility to incorporate ex ante information she might have. Chapter 1 also develops new methods for simulating critical values, based on the combination of a bootstrap procedure and new selection algorithms. These methods yield tests that have correct asymptotic size and are asymptotically nonconservative. It is also shown how to obtain an adaptive rate optimal test that has the best attainable rate of uniform consistency against models whose regression function has Lipschitz-continuous first-order derivatives and that automatically adapts to the unknown smoothness of the regression function. Simulations show that the power of the new tests in many cases significantly exceeds that of some prior tests, e.g. that of Ghosal, Sen, and Van der Vaart (2000). An application of the developed procedures to the dataset of Ellison and Ellison (2011) shows some evidence of strategic entry deterrence in the pharmaceutical industry, where incumbents may use strategic investment to prevent generic entries when their patents expire.

    Many economic models yield conditional moment inequalities that can be used for inference on the parameters of these models. In chapter 2, I construct a new test of conditional moment inequalities based on studentized kernel estimates of moment functions. The test automatically adapts to the unknown smoothness of the moment functions, has uniformly correct asymptotic size, and is rate optimal against certain classes of alternatives. Some existing tests have nontrivial power against n^{-1/2}-local alternatives of a certain type, whereas my method only allows for nontrivial testing against (n/log n)^{-1/2}-local alternatives of this type. There exist, however, large classes of sequences of well-behaved alternatives against which the test developed in this paper is consistent and those tests are not.

    In chapter 3 (coauthored with Victor Chernozhukov and Kengo Kato), we derive a central limit theorem for the maximum of a sum of high dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of Gaussian random vectors with the same covariance matrices as the original vectors. The key innovation of this result is that it applies even when the dimension of the random vectors (p) is large compared to the sample size (n); in fact, p can be much larger than n. We also show that the distribution of the maximum of a sum of random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of conditional Gaussian random vectors obtained by multiplying the original vectors with i.i.d. Gaussian multipliers; this is the multiplier bootstrap procedure. Here too, p can be large or even much larger than n. These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our central limit theorem and the multiplier bootstrap can be used for high dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All these results contain non-asymptotic bounds on approximation errors.

    by Denis Chetverikov. Ph.D.
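    As a concrete illustration of the chapter 3 multiplier bootstrap described above, here is a minimal sketch in the spirit of that description, not the thesis code; centering by column means and the use of absolute maxima are assumptions of this example:

```python
# Minimal sketch of the Gaussian multiplier bootstrap for the maximum of a
# high-dimensional sum (p may exceed n), following the description above.
import numpy as np

def multiplier_bootstrap_quantile(X, alpha=0.05, n_boot=1000, seed=0):
    """Bootstrap the (1 - alpha) quantile of max_j |n^{-1/2} sum_i X_ij|."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xc = X - X.mean(axis=0)          # center each coordinate
    stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)   # i.i.d. N(0,1) multipliers
        # Conditional-Gaussian version of the max statistic.
        stats[b] = np.abs(Xc.T @ e).max() / np.sqrt(n)
    return np.quantile(stats, 1.0 - alpha)

# Example with p much larger than n.
rng = np.random.default_rng(1)
n, p = 100, 5000
X = rng.standard_normal((n, p))
T = np.abs(X.sum(axis=0)).max() / np.sqrt(n)   # observed max statistic
crit = multiplier_bootstrap_quantile(X)
print(f"T = {T:.3f}, bootstrap critical value = {crit:.3f}")
```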