
    Sparse Identification and Estimation of Large-Scale Vector AutoRegressive Moving Averages

    The Vector AutoRegressive Moving Average (VARMA) model is fundamental to the theory of multivariate time series; however, in practice, identifiability issues have led many authors to abandon VARMA modeling in favor of the simpler Vector AutoRegressive (VAR) model. Such a practice is unfortunate, since even very simple VARMA models can have quite complicated VAR representations. We narrow this gap with a new optimization-based approach to VARMA identification built on the principle of parsimony. Among all equivalent data-generating models, we seek the parameterization that is "simplest" in a certain sense. A user-specified strongly convex penalty is used to measure model simplicity, and that same penalty is then used to define an estimator that can be efficiently computed. We show that our estimator converges to a parsimonious element in the set of all equivalent data-generating models, in a double asymptotic regime where the number of component time series is allowed to grow with the sample size. Further, we derive non-asymptotic upper bounds on the estimation error of our method relative to our specially identified target. Novel theoretical machinery includes non-asymptotic analysis of infinite-order VAR, elastic net estimation under a singular covariance structure of regressors, and new concentration inequalities for quadratic forms of random variables from Gaussian time series. We illustrate the competitive performance of our methods in simulation and in several application domains, including macroeconomic forecasting, demand forecasting, and volatility forecasting.
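
    For intuition, here is a minimal sketch of a two-stage, elastic-net-penalized VARMA fit in the Hannan-Rissanen spirit: a long VAR first proxies the unobserved innovations, and a penalized regression on lagged values and lagged innovation proxies then yields sparse AR and MA coefficients. This illustrates the general idea only, not the paper's identification scheme; the function names, lag orders, and penalty settings are hypothetical choices.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    def lagged(X, lags):
        # row t of the design holds X[t-1], ..., X[t-lags]; targets are X[lags:]
        T = X.shape[0]
        Z = np.hstack([X[lags - j:T - j] for j in range(1, lags + 1)])
        return Z, X[lags:]

    def fit_varma(X, p=1, q=1, long_lag=12, alpha=0.05, l1_ratio=0.5):
        # X: (T x k) float array of component time series.
        # Stage 1: a long finite-order VAR proxies the unobserved innovations.
        Z, Y = lagged(X, long_lag)
        resid = np.empty_like(Y)
        for i in range(Y.shape[1]):
            m = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(Z, Y[:, i])
            resid[:, i] = Y[:, i] - m.predict(Z)
        # Stage 2: penalized regression on p lags of X and q lags of the
        # innovation proxies gives sparse AR and MA coefficient estimates.
        Xs = X[long_lag:]                              # aligned with resid
        ZX, YX = lagged(Xs, p)
        ZE, _ = lagged(resid, q)
        n = min(len(ZX), len(ZE))
        D = np.hstack([ZX[-n:], ZE[-n:]])
        B = np.vstack([ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
                       .fit(D, YX[-n:, i]).coef_ for i in range(X.shape[1])])
        return B                          # one row of AR then MA terms per series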

    Testing Alternative Theories of Gravity using LISA

    We investigate the possible bounds which could be placed on alternative theories of gravity using gravitational-wave detections of inspiralling compact binaries with the proposed LISA space interferometer. Specifically, we estimate lower bounds on the coupling parameter \omega of scalar-tensor theories of the Brans-Dicke type and on the Compton wavelength of the graviton \lambda_g in hypothetical massive graviton theories. In these theories, modifications of the gravitational radiation damping formulae or of the propagation of the waves translate into a change in the phase evolution of the observed gravitational waveform. We obtain the bounds through the technique of matched filtering, employing the LISA Sensitivity Curve Generator (SCG), available online. For a neutron star inspiralling into a 10^3 M_sun black hole in the Virgo Cluster, in a two-year integration, we find a lower bound \omega > 3 * 10^5. For lower-mass black holes, the bound could be as large as 2 * 10^6. The bound is independent of LISA arm length, but is inversely proportional to the LISA position noise error. Lower bounds on the graviton Compton wavelength ranging from 10^15 km to 5 * 10^16 km can be obtained from one-year observations of massive binary black hole inspirals at cosmological distances (3 Gpc), for masses ranging from 10^4 to 10^7 M_sun. For the highest-mass systems (10^7 M_sun), the bound is proportional to (LISA arm length)^{1/2} and to (LISA acceleration noise)^{-1/2}. For the others, the bound is independent of these parameters because of the dominance of white-dwarf confusion noise in the relevant part of the frequency spectrum. These bounds improve and extend earlier work which used analytic formulae for the noise curves. Comment: 16 pages, 9 figures, submitted to Classical and Quantum Gravity.
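
    The core of such a forecast is the Fisher-matrix formalism of matched filtering: parameter errors follow from noise-weighted inner products of waveform derivatives. Below is a minimal, self-contained sketch of that machinery; the generic waveform callable and the assumption of a precomputed PSD array are illustrative stand-ins, not the SCG noise curves or the paper's waveform model.

    import numpy as np

    def inner(a, b, psd, df):
        # noise-weighted inner product: 4 Re sum a(f) conj(b(f)) / S_n(f) df
        return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

    def fisher_matrix(waveform, theta, freqs, psd):
        # waveform(theta, freqs) -> complex strain; theta: 1-D numpy array;
        # freqs assumed equally spaced, psd sampled on the same grid.
        df = freqs[1] - freqs[0]
        partials = []
        for i in range(len(theta)):
            step = 1e-6 * max(abs(theta[i]), 1.0)
            tp, tm = theta.copy(), theta.copy()
            tp[i] += step
            tm[i] -= step
            # central finite difference of the waveform in parameter i
            partials.append((waveform(tp, freqs) - waveform(tm, freqs)) / (2.0 * step))
        n = len(theta)
        return np.array([[inner(partials[i], partials[j], psd, df)
                          for j in range(n)] for i in range(n)])

    # 1-sigma error on parameter i: sigma_i = sqrt((F^{-1})_{ii}); a lower
    # bound on omega follows from the error on the dipole phase coefficient.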

    Empirical Bayes selection of wavelet thresholds

    This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold. Details of the calculations needed for implementing the procedure are included. In practice, the estimates are quick to compute and software is available. Simulations on the standard model functions show excellent performance, and applications to data drawn from various fields are used to explore the practical performance of the approach. By using a general result on the risk of the corresponding marginal maximum likelihood approach for a single sequence, overall bounds on the risk of the method are found subject to membership of the unknown function in one of a wide range of Besov classes, also covering the case of f of bounded variation. The rates obtained are optimal for any value of the parameter p in (0,\infty], simultaneously for a wide range of loss functions, each dominating the L_q norm of the \sigma-th derivative, with \sigma \ge 0 and 0 < q \le 2. Comment: Published at http://dx.doi.org/10.1214/009053605000000345 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
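
    A minimal sketch of the level-wise recipe follows. For simplicity the heavy-tailed slab is replaced here by a Gaussian N(0, tau^2), the noise has unit variance, and the slab variance TAU2 is a hypothetical fixed choice; the authors' published software implements the genuine heavy-tailed version. With an atom at zero in the prior, the posterior median is exactly zero whenever the posterior probability of the atom exceeds one half, which is what makes this a thresholding rule.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    TAU2 = 5.0  # illustrative slab variance (hypothetical choice)

    def neg_marginal_loglik(w, x):
        # marginal density of x: (1-w) N(0,1) from the atom + w N(0, 1+TAU2) slab
        g = norm.pdf(x, scale=np.sqrt(1.0 + TAU2))
        return -np.sum(np.log(np.maximum((1.0 - w) * norm.pdf(x) + w * g, 1e-300)))

    def posterior_median(x, w):
        g = norm.pdf(x, scale=np.sqrt(1.0 + TAU2))
        denom = np.maximum(w * g + (1.0 - w) * norm.pdf(x), 1e-300)
        p1 = w * g / denom                    # P(theta != 0 | x)
        s2 = TAU2 / (1.0 + TAU2)
        m, s = s2 * np.abs(x), np.sqrt(s2)    # nonzero part: N(m, s^2)
        arg = np.clip((p1 - 0.5) / np.maximum(p1, 1e-12), 1e-12, 1.0 - 1e-12)
        med = np.maximum(0.0, m + s * norm.ppf(arg))
        return np.sign(x) * np.where(p1 > 0.5, med, 0.0)  # exact zero below threshold

    def shrink_level(coeffs):
        # marginal maximum likelihood choice of the sparsity weight for one level
        w = minimize_scalar(neg_marginal_loglik, args=(coeffs,),
                            bounds=(1e-3, 1.0 - 1e-3), method='bounded').x
        return posterior_median(coeffs, w)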

    Mean reversion in stock index futures markets: a nonlinear analysis

    Several stylized theoretical models of futures basis behavior under nonzero transactions costs predict nonlinear mean reversion of the futures basis towards its equilibrium value. Nonlinearly mean-reverting models are employed to characterize the basis of the S&P 500 and the FTSE 100 indices over the post-1987 crash period, capturing these theoretical predictions empirically and examining the view that the degree of mean reversion in the basis is a function of the size of the deviation from equilibrium. The estimated half-lives of basis shocks, obtained using Monte Carlo integration methods, suggest that for smaller shocks the basis displays substantial persistence, while for larger shocks it exhibits highly nonlinear mean reversion towards its equilibrium value. © 2002 Wiley Periodicals, Inc.
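
    One standard way to capture deviation-dependent mean reversion of this kind is an exponential smooth transition autoregression (ESTAR). The sketch below fits a simple ESTAR for the basis by nonlinear least squares; the parameterization, function names, and starting values are illustrative assumptions, not the authors' exact specification.

    import numpy as np
    from scipy.optimize import least_squares

    def estar_residuals(params, b):
        phi, gamma = params
        lag = b[:-1]
        # transition weight: ~0 near equilibrium (random-walk-like persistence),
        # ->1 for large deviations (strong mean reversion)
        G = 1.0 - np.exp(-gamma * lag ** 2)
        return np.diff(b) - phi * lag * G

    def fit_estar(basis):
        # basis: 1-D array of basis deviations from the equilibrium value
        fit = least_squares(estar_residuals, x0=[-0.1, 1.0], args=(basis,))
        return fit.x  # (phi, gamma)

    Half-lives for shocks of different sizes can then be read off by simulating the fitted model from the corresponding initial deviations, in line with the Monte Carlo integration the abstract describes.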

    On the Coverage Bound Problem of Empirical Likelihood Methods For Time Series

    The upper bounds on the coverage probabilities of confidence regions based on the blockwise empirical likelihood [Kitamura (1997)] and nonstandard expansive empirical likelihood [Nordman et al. (2013)] methods for time series data are investigated by studying the probability that the convex hull constraint is violated. The large-sample bounds are derived on the basis of the pivotal limit of the blockwise empirical log-likelihood ratio obtained under fixed-b asymptotics, which has recently been shown to provide a more accurate approximation to the finite-sample distribution than the conventional chi-square approximation. Our theoretical and numerical findings suggest that both the finite-sample and large-sample upper bounds on coverage probabilities are strictly less than one, and that the blockwise empirical likelihood confidence region can exhibit serious undercoverage when (i) the dimension of the moment conditions is moderate or large; (ii) the time series dependence is positively strong; or (iii) the block size is large relative to the sample size. A similar finite-sample coverage problem occurs for the nonstandard expansive empirical likelihood. To alleviate the coverage bound problem, we propose to penalize both empirical likelihood methods by relaxing the convex hull constraint. Numerical simulations and a data illustration demonstrate the effectiveness of our proposed remedies in terms of delivering confidence sets with more accurate coverage.
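
    To make the convex hull constraint concrete, here is a minimal sketch of blockwise empirical likelihood for a scalar mean: the data are cut into blocks, the block means are recentred at the hypothesized value, and the log-likelihood ratio is computed through the usual Lagrange dual, with the hull condition checked first. The block length, the log-star smoothing, and the function names are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.optimize import minimize

    def block_means(y, mu, L):
        nb = len(y) // L
        return np.array([y[b * L:(b + 1) * L].mean() - mu for b in range(nb)])

    def bel_log_ratio(y, mu, L):
        z = block_means(y, mu, L)[:, None]
        if not (z.min() < 0.0 < z.max()):
            return np.inf       # zero lies outside the convex hull: EL degenerates
        eps = 1.0 / len(z)

        def log_star(x):
            # Owen's quadratic extension of log below eps keeps the dual smooth
            return np.where(x > eps, np.log(np.maximum(x, eps)),
                            np.log(eps) - 1.5 + 2.0 * x / eps - 0.5 * (x / eps) ** 2)

        def dual(lam):
            return -np.sum(log_star(1.0 + z @ lam))

        lam = minimize(dual, x0=np.zeros(1), method='BFGS').x
        return 2.0 * np.sum(log_star(1.0 + z @ lam))   # -2 log EL ratio

    The paper's remedy relaxes the hard hull check above through a penalty, so a finite (if inflated) log-ratio is still returned when zero falls outside the hull.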

    A Novel Method for the Absolute Pose Problem with Pairwise Constraints

    Absolute pose estimation is a fundamental problem in computer vision and a typical parameter estimation problem, so any attempt to solve it must contend with outlier-contaminated data. Conventionally, for a fixed dimensionality d and N measurements, a robust estimation problem cannot be solved faster than O(N^d), and it is almost impossible to remove d from the exponent of the runtime of a globally optimal algorithm. However, absolute pose estimation is a geometric parameter estimation problem and thus has special constraints. In this paper, we exploit pairwise constraints and propose a globally optimal algorithm for the absolute pose estimation problem. The proposed algorithm has linear complexity in the number of correspondences at a given outlier ratio. Concretely, we first decouple the rotation and translation subproblems by utilizing the pairwise constraints, then solve the rotation subproblem using a branch-and-bound algorithm. Lastly, we estimate the translation from the known rotation using another branch-and-bound algorithm. The advantages of our method are demonstrated via thorough testing on both synthetic and real-world data. Comment: 10 pages, 7 figures.
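
    The flavor of the rotation stage can be conveyed with a toy branch-and-bound over a single rotation angle: each interval gets an upper bound on its achievable inlier count and a lower bound from its midpoint, and intervals that cannot beat the incumbent are pruned. The real problem searches SO(3) under pairwise constraints, so everything below (tolerance, bounding rule, names) is an illustrative simplification.

    import heapq
    import numpy as np

    def wrap(a):
        # wrap angles into [-pi, pi)
        return (a + np.pi) % (2.0 * np.pi) - np.pi

    def bnb_rotation(angles, tau=0.05, eps=1e-3):
        # angles: numpy array; angles[i] is the rotation implied by correspondence i
        best_lb, best_theta = -1, None
        # max-heap on the upper bound (negated), so the most promising
        # interval is explored first
        heap = [(-len(angles), -np.pi, np.pi)]
        while heap:
            neg_ub, lo, hi = heapq.heappop(heap)
            if -neg_ub <= best_lb:
                break                          # no interval can beat the incumbent
            mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
            d = np.abs(wrap(angles - mid))
            lb = int(np.sum(d <= tau))         # inliers achieved at the midpoint
            ub = int(np.sum(d <= tau + half))  # inliers achievable anywhere inside
            if lb > best_lb:
                best_lb, best_theta = lb, mid
            if ub > best_lb and half > eps:
                # the interval's bound remains valid for both halves
                heapq.heappush(heap, (-ub, lo, mid))
                heapq.heappush(heap, (-ub, mid, hi))
        return best_theta, best_lb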

    Graph Sample and Hold: A Framework for Big-Graph Analytics

    Sampling is a standard approach in big-graph analytics; the goal is to efficiently estimate graph properties by consulting a sample rather than the whole population. A perfect sample is assumed to mirror every property of the whole population. Unfortunately, such a perfect sample is hard to collect in complex populations such as graphs (e.g., web graphs and social networks), where an underlying network connects the units of the population. A good sample is therefore one that is representative in the sense that graph properties of interest can be estimated with a known degree of accuracy. While previous work has focused on sampling schemes tailored to estimating particular graph properties (e.g., triangle count), much less is known about estimating a variety of graph properties with the same sampling scheme. In this paper, we propose a generic stream sampling framework for big-graph analytics, called Graph Sample and Hold (gSH). The framework samples from massive graphs sequentially in a single pass, one edge at a time, while maintaining a small state. We then show how to produce unbiased estimators for various graph properties from the sample. Because the graph analysis algorithms run on a sample instead of the whole population, their runtime complexity is kept under control; because the estimators are unbiased, the approximation error is kept under control as well. Finally, we show the performance of the proposed framework on various types of graphs, including social graphs.
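
    A minimal sketch of the sample-and-hold idea for an edge stream is given below, together with a Horvitz-Thompson estimator of the total edge count: every held edge is weighted by the inverse of its inclusion probability at arrival. The adjacency rule, the parameters p and q, and the class name are illustrative simplifications of the gSH(p, q) scheme, not the paper's estimators.

    import random

    class GraphSampleHold:
        def __init__(self, p=0.1, q=0.5, seed=0):
            self.p, self.q = p, q
            self.rng = random.Random(seed)
            self.touched = set()   # nodes already incident to a held edge
            self.sample = []       # (edge, inclusion probability at arrival)

        def offer(self, u, v):
            # edges adjacent to the current sample are held with the higher
            # probability q; fresh edges with the base probability p
            prob = self.q if (u in self.touched or v in self.touched) else self.p
            if self.rng.random() < prob:
                self.sample.append(((u, v), prob))
                self.touched.update((u, v))

        def estimate_edge_count(self):
            # Horvitz-Thompson: weight each held edge by 1 / P(held)
            return sum(1.0 / prob for _, prob in self.sample)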