3,704 research outputs found

    Tree-Structured Nonlinear Adaptive Signal Processing

    In communication systems, nonlinear adaptive filtering has become increasingly popular in a variety of applications such as channel equalization, echo cancellation and speech coding. However, existing nonlinear adaptive filters such as polynomial (truncated Volterra series) filters and multilayer perceptrons suffer from a number of problems. First, although high-order polynomials can approximate complex nonlinearities, they also train very slowly. Second, there is no systematic and efficient way to select their structure. As for multilayer perceptrons, they have a very complicated structure and train extremely slowly. Motivated by the success of classification and regression trees on difficult nonlinear and nonparametric problems, we propose the idea of a tree-structured piecewise linear adaptive filter. In the proposed method each node in a tree is associated with a linear filter restricted to a polygonal domain, and this is done in such a way that each pruned subtree is associated with a piecewise linear filter. A training sequence is used to adaptively update the filter coefficients and domains at each node, and to select the best pruned subtree and the corresponding piecewise linear filter. The tree-structured approach offers several advantages. First, it makes use of standard linear adaptive filtering techniques at each node to find the corresponding conditional linear filter. Second, it allows for efficient selection of the subtree and the corresponding piecewise linear filter of appropriate complexity. Overall, the approach is computationally efficient and conceptually simple. The tree-structured piecewise linear adaptive filter bears some similarity to classification and regression trees, but it is actually quite different: here the terminal nodes are not just assigned a region and a class label or a regression value, but rather represent a linear filter with restricted domain. It is also different in that classification and regression trees are determined in a batch mode offline, whereas the tree-structured adaptive filter is determined recursively in real time. We first develop the specific structure of a tree-structured piecewise linear adaptive filter and derive a stochastic gradient-based training algorithm. We then carry out a rigorous convergence analysis of the proposed training algorithm for the tree-structured filter. Here we show the mean-square convergence of the adaptively trained tree-structured piecewise linear filter to the optimal tree-structured piecewise linear filter. Some new techniques are developed for analyzing stochastic gradient algorithms with fixed gains and (nonstandard) dependent data. Finally, numerical experiments are performed to show the computational and performance advantages of the tree-structured piecewise linear filter over linear and polynomial filters for equalization of high-frequency channels with severe intersymbol interference, echo cancellation in telephone networks and predictive coding of speech signals.
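    The node-level adaptation can be sketched as follows. This is a minimal, hypothetical Python illustration, not the paper's exact algorithm: a depth-one tree whose hyperplane splits the input space into two polygonal domains, each with its own LMS-adapted conditional linear filter (all names, the step size, and the routing rule are assumptions).

```python
import numpy as np

# Minimal sketch of one split node of a tree-structured piecewise linear
# adaptive filter (illustrative only; the paper's update rules differ).
rng = np.random.default_rng(0)
N, mu = 8, 0.01                    # filter length and LMS step size
w_left = np.zeros(N)               # conditional linear filter, left domain
w_right = np.zeros(N)              # conditional linear filter, right domain
v = rng.standard_normal(N)         # hyperplane defining the two domains

def filter_step(x, d):
    """One adaptive step: x is the length-N input vector, d the desired output."""
    w = w_left if v @ x <= 0.0 else w_right   # route x to its polygonal domain
    y = w @ x                                 # piecewise linear filter output
    e = d - y                                 # prediction error
    w += mu * e * x                           # LMS update of the active filter
    # (a stochastic-gradient update of the domain boundary v is omitted here)
    return y, e

# Toy usage: drive the adaptation with a piecewise linear target system.
for _ in range(5000):
    x = rng.standard_normal(N)
    d = x @ np.ones(N) if x[0] <= 0 else x @ np.arange(N)
    filter_step(x, d)
```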

    Predicting Volatility: Getting the Most out of Return Data Sampled at Different Frequencies

    We consider various MIDAS (Mixed Data Sampling) regression models to predict volatility. The models differ in the specification of regressors (squared returns, absolute returns, realized volatility, realized power, and return ranges), in the use of daily or intra-daily (5-minute) data, and in the length of the past history included in the forecasts. The MIDAS framework allows us to compare models across all these dimensions in a very tightly parameterized fashion. Using equity return data, we find that daily realized power (involving 5-minute absolute returns) is the best predictor of future volatility (measured by increments in quadratic variation) and outperforms models based on realized volatility (i.e. past increments in quadratic variation). Surprisingly, the direct use of high-frequency (5-minute) data does not improve volatility predictions. Finally, daily lags of one to two months are sufficient to capture the persistence in volatility. These findings hold both in- and out-of-sample. Keywords: realized variance, power variation, MIDAS regression.
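    A tightly parameterized MIDAS regression of this kind can be sketched as follows (a hypothetical Python illustration, not the authors' code; the exponential Almon lag weights, the optimizer, and all variable names are assumptions): K daily lags of a volatility regressor are collapsed through a two-parameter weight function, so the forecast uses only four free parameters regardless of the lag length.

```python
import numpy as np
from scipy.optimize import minimize

def almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights, normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_forecast(params, X):
    """X holds K daily lags of the regressor per row (one row per forecast date)."""
    beta0, beta1, theta1, theta2 = params
    w = almon_weights(theta1, theta2, X.shape[1])
    return beta0 + beta1 * (X @ w)

def fit_midas(X, y):
    """Nonlinear least squares for (beta0, beta1, theta1, theta2)."""
    sse = lambda p: np.sum((y - midas_forecast(p, X)) ** 2)
    return minimize(sse, x0=np.array([0.0, 1.0, 0.0, -0.01]),
                    method="Nelder-Mead").x
```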

    Regularization and Model Selection with Categorial Predictors and Effect Modifiers in Generalized Linear Models

    We consider varying-coefficient models with categorial effect modifiers in the framework of generalized linear models. We distinguish between nominal and ordinal effect modifiers, and propose adequate Lasso-type regularization techniques that allow for (1) selection of relevant covariates, and (2) identification of coefficient functions that are actually varying with the level of a potentially effect modifying factor. We investigate the estimators’ large sample properties, and show in simulation studies that the proposed approaches also perform very well for finite samples. Furthermore, the presented methods are compared with alternative procedures and applied to real-world medical data.
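    The structure of such penalties can be sketched briefly (a hypothetical Python illustration; the function names are assumptions, and the actual estimators embed these terms in the full penalized GLM likelihood): for a nominal effect modifier, all pairwise differences of the category-specific coefficients are penalized; for an ordinal one, only adjacent differences.

```python
import numpy as np

def nominal_penalty(beta):
    """L1 penalty on all pairwise differences of category-specific coefficients."""
    G = len(beta)
    return sum(abs(beta[g] - beta[h]) for g in range(G) for h in range(g))

def ordinal_penalty(beta):
    """L1 penalty on adjacent differences only (ordered categories)."""
    return float(np.sum(np.abs(np.diff(beta))))

# If all differences are shrunk to zero, the coefficient does not vary with
# the effect modifier; shrinking the whole vector removes the covariate.
```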

    A Reproducing Kernel Perspective of Smoothing Spline Estimators

    Spline functions have a long history as smoothers of noisy time series data, and several equivalent kernel representations have been proposed in terms of the Green's function solving the related boundary value problem. In this study we make use of the reproducing kernel property of the Green's function to obtain a hierarchy of time-invariant spline kernels of different orders. The reproducing kernels give a good representation of smoothing splines for medium and long length filters, with a better performance of the asymmetric weights in terms of signal passing, noise suppression and revisions. Empirical comparisons of time-invariant filters are made with the classical nonlinear ones. The former are shown to lose part of their optimal properties when we fix the length of the filter according to the noise-to-signal ratio, as done in nonparametric seasonal adjustment procedures. Keywords: equivalent kernels, nonparametric regression, Hilbert spaces, time series filtering, spectral properties.
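    The equivalent-kernel idea can be illustrated with a discrete (Whittaker-Henderson type) smoothing spline, a minimal Python sketch under assumed values of the filter length n and smoothing parameter lam: each row of the hat matrix is a kernel, with central rows giving the symmetric time-invariant weights and boundary rows the asymmetric ones.

```python
import numpy as np

# Equivalent-kernel weights of a discrete smoothing spline (illustrative
# sketch; n and lam are assumptions, not values from the paper).
n, lam = 51, 100.0
D = np.diff(np.eye(n), 2, axis=0)               # second-difference operator
H = np.linalg.inv(np.eye(n) + lam * (D.T @ D))  # hat (smoother) matrix

symmetric_weights = H[n // 2]    # time-invariant kernel in the interior
asymmetric_weights = H[0]        # one-sided kernel at the series boundary
```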

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article will focus on functional regression, the area of FDA that has received the most attention in applications and methodological development. First will be an introduction to basis functions, key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: (1) functional predictor regression (scalar-on-function), (2) functional response regression (function-on-scalar), and (3) function-on-function regression. For each, the role of replication and regularization will be discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. At the end is a brief discussion describing potential areas of future development in this field.
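    As a concrete illustration of the basis-function idea, scalar-on-function regression can be sketched as follows (a hypothetical Python example on simulated data; the basis choice and all names are assumptions): expanding the coefficient function in a small basis reduces the functional model y_i = a + ∫ X_i(t) β(t) dt + e_i to an ordinary regression on basis scores.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, K = 100, 50, 5                     # curves, grid points, basis size
t = np.linspace(0, 1, T)
X = rng.standard_normal((n, T))          # functional predictors on a grid

# Small Fourier-type basis: regularization comes from truncating to K terms.
B = np.column_stack([np.ones(T)] +
                    [np.sin(2 * np.pi * k * t) for k in range(1, K)])

Z = X @ B / T                            # numerical integration gives scores
Z = np.column_stack([np.ones(n), Z])     # add intercept column
y = rng.standard_normal(n)               # placeholder scalar responses

coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef[1:]                  # estimated coefficient function on t
```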
