
    Bayesian Lattice Filters for Time-Varying Autoregression and Time-Frequency Analysis

    Modeling nonstationary processes is of paramount importance to many scientific disciplines including environmental science, ecology, and finance, among others. Consequently, flexible methodology that provides accurate estimation across a wide range of processes is a subject of ongoing interest. We propose a novel approach to model-based time-frequency estimation using time-varying autoregressive models. In this context, we take a fully Bayesian approach and allow both the autoregressive coefficients and innovation variance to vary over time. Importantly, our estimation method uses the lattice filter and is cast within the partial autocorrelation domain. The marginal posterior distributions are of standard form and, as a convenient by-product of our estimation method, our approach avoids undesirable matrix inversions. As such, estimation is extremely computationally efficient and stable. To illustrate the effectiveness of our approach, we conduct a comprehensive simulation study that compares our method with other competing methods and find that, in most cases, our approach is superior in terms of average squared error between the estimated and true time-varying spectral density. Lastly, we demonstrate our methodology through three modeling applications: insect communication signals, environmental data (wind components), and macroeconomic data (US gross domestic product (GDP) and consumption). Comment: 49 pages, 16 figures
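    The bridge between the partial autocorrelation (PARCOR) domain the abstract mentions and the usual autoregressive coefficients is the lattice-filter step-up recursion. The sketch below is a minimal illustration of that recursion only, not the authors' Bayesian estimation code; the function name is ours.

```python
import numpy as np

def parcor_to_ar(parcor):
    """Step-up (Levinson-Durbin) recursion: convert partial autocorrelation
    (reflection) coefficients to AR coefficients for the model
    x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + e[t].
    |parcor[m]| < 1 for all m guarantees a stationary model."""
    a = np.array([], dtype=float)
    for k in parcor:
        # order-update: a_new[j] = a[j] - k * a[m-1-j], then append k
        a = np.concatenate([a - k * a[::-1], [k]])
    return a
```

    In a time-varying AR model this conversion is applied at each time point, which is why working in the PARCOR domain keeps estimation stable without matrix inversions.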

    A study on adaptive filtering for noise and echo cancellation.

    The objective of this thesis is to investigate adaptive filtering techniques applied to noise and echo cancellation. As a relatively new area in Digital Signal Processing (DSP), adaptive filters have gained a lot of popularity in the past several decades because they can deal with time-varying digital systems and do not require a priori knowledge of the statistics of the information to be processed. Adaptive filters have been successfully applied in a great many areas such as communications, speech processing, image processing, and noise/echo cancellation. Since Bernard Widrow and his colleagues introduced adaptive filters in the 1960s, many researchers have worked on noise/echo cancellation using adaptive filters with different algorithms. Among these algorithms, normalized least mean square (NLMS) provides an efficient and robust approach, in which the model parameters are obtained on the basis of the mean square error (MSE). The choice of structure for the adaptive filter also plays an important role in the performance of the algorithm as a whole. For this purpose, two different filter structures have been studied: the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter. The adaptive processes with the two filter structures and the aforementioned algorithm have been implemented and simulated using Matlab. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .J53. Source: Masters Abstracts International, Volume: 44-01, page: 0472. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
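    An NLMS canceller of the kind the thesis studies (here with the FIR structure) can be sketched as follows. This is a generic illustration in Python rather than the thesis's Matlab code, and the function name and default parameters are ours.

```python
import numpy as np

def nlms_cancel(d, x, order=16, mu=0.5, eps=1e-8):
    """Normalized least mean square (NLMS) adaptive FIR filter.
    d: primary/desired signal (e.g., speech corrupted by noise or echo)
    x: reference signal correlated with the interference
    Returns (y, e): filter output and error signal (the cleaned output)."""
    w = np.zeros(order)                       # adaptive FIR weights
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]      # x[n], x[n-1], ... (newest first)
        y[n] = w @ u                          # filter estimate of interference
        e[n] = d[n] - y[n]                    # error = cleaned signal
        w += (mu / (eps + u @ u)) * e[n] * u  # step normalized by input power
    return y, e
```

    The normalization by the input power `u @ u` is what makes NLMS robust to the scale of the reference signal, compared with plain LMS.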

    Hierarchical nonlinear, multivariate, and spatially-dependent time-frequency functional methods

    Notions of time and frequency are important constituents of most scientific inquiries, providing complementary information. In an era of "big data," methodology for analyzing functional and/or image data is increasingly important. This dissertation develops methodology at the intersection of time-frequency analysis and functional data and consists of three distinct, but related, contributions. First, we propose nonparametric methodology for nonlinear multivariate time-frequency functional data. In particular, we consider polynomial nonlinear functional data models that accommodate higher dimensional functional covariates, including time-frequency images, along with their interactions. The necessary dimension reduction for model estimation proceeds through carefully chosen basis expansions (empirical orthogonal functions) and feature-extraction stochastic search variable selection (SSVS). Properties of the methodology are examined through an extensive simulation study. Finally, we illustrate the approach through an application that attempts to characterize spawning behavior of shovelnose sturgeon in terms of high-density depth and temperature profiles. The second contribution proposes model-based time-frequency estimation through Bayesian lattice filter time-varying autoregressive models. In this context, we take a fully Bayesian approach and allow both the autoregressive coefficients and innovation variance to vary over time. Importantly, our model is estimated within the partial autocorrelation domain (i.e., through the partial autocorrelation coefficients). Additionally, all of the full conditional distributions required for our algorithm are of standard form and thus can be easily implemented using a Gibbs sampler. Further, as a by-product of the lattice filter recursions, our approach avoids undesirable matrix inversions. As such, estimation is computationally efficient and stable.
We conduct a comprehensive simulation study that compares our method with other competing methods and find that, in most cases, our approach is superior in terms of average squared error between the estimated and true time-varying spectral density. Lastly, we demonstrate our methodology through several real case studies. The final project of the dissertation develops models that accommodate spatially dependent functional responses with spatially dependent image predictors. The methodology is motivated by a soil science study that seeks to model spatially correlated water content functionals as a function of electro-conductivity images. The water content curves are measured at different locations within the study field and at various depths, whereas the electro-conductivity images are spatially referenced images of wavelength by depth. Estimation is facilitated by taking a Bayesian approach, where the necessary dimension reduction for model implementation proceeds using basis function expansions along with SSVS. Finally, the methodology is illustrated through an application to our motivating data. Includes bibliographical references (pages 122-133).
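    The empirical orthogonal function (EOF) basis expansion used for dimension reduction in the first and third contributions can be computed with a singular value decomposition. The sketch below is a generic illustration of that step, not the dissertation's implementation; names are ours.

```python
import numpy as np

def eof_expansion(Y, n_basis):
    """Empirical orthogonal function (EOF) expansion of functional data.
    Y: (n_curves, n_gridpoints) matrix, one discretized curve per row.
    Returns (mean, basis, scores) such that Y is approximated by
    mean + scores @ basis, with `basis` rows orthonormal."""
    mean = Y.mean(axis=0)
    # EOFs are the right singular vectors of the centered data matrix
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    basis = Vt[:n_basis]              # leading EOFs (orthonormal rows)
    scores = (Y - mean) @ basis.T     # low-dimensional coefficients
    return mean, basis, scores
```

    The low-dimensional `scores` are what a downstream model (e.g., with SSVS priors on their coefficients) actually works with, in place of the full curves or images.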

    Multiresolution models of financial time series

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (leaves 89-92). By Paul Stephan Mashikian. M.Eng.

    Hyper-parameter tuning and feature extraction for asynchronous action detection from sub-thalamic nucleus local field potentials

    Introduction: Decoding brain states from subcortical local field potentials (LFPs) indicative of activities such as voluntary movement, tremor, or sleep stages holds significant potential in treating neurodegenerative disorders and offers new paradigms in brain-computer interface (BCI). Identified states can serve as control signals in coupled human-machine systems, e.g., to regulate deep brain stimulation (DBS) therapy or control prosthetic limbs. However, the behavior, performance, and efficiency of LFP decoders depend on an array of design and calibration settings encapsulated into a single set of hyper-parameters. Although methods exist to tune hyper-parameters automatically, decoders are typically found through exhaustive trial-and-error, manual search, and intuitive experience. Methods: This study introduces a Bayesian optimization (BO) approach to hyper-parameter tuning, applicable across the feature extraction, channel selection, classification, and state-transition stages of the entire decoding pipeline. The optimization method is compared with five real-time feature extraction methods paired with four classifiers to decode voluntary movement asynchronously based on LFPs recorded with DBS electrodes implanted in the subthalamic nucleus of Parkinson’s disease patients. Results: Detection performance, measured as the geometric mean between classifier specificity and sensitivity, is automatically optimized. BO demonstrates improved decoding performance over the initial parameter settings across all methods. The best decoders achieve a maximum performance of 0.74 ± 0.06 (mean ± SD across all participants) sensitivity-specificity geometric mean. In addition, parameter relevance is determined using the BO surrogate models. Discussion: Hyper-parameters tend to be sub-optimally fixed across different users rather than individually adjusted or even specifically set for a decoding task.
The relevance of each parameter to the optimization problem and comparisons between algorithms can also be difficult to track with the evolution of the decoding problem. We believe that the proposed decoding pipeline and BO approach are a promising solution to such challenges surrounding hyper-parameter tuning and that the study’s findings can inform future design iterations of neural decoders for adaptive DBS and BCI.
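    The objective that the study's Bayesian optimization maximizes, the geometric mean of sensitivity and specificity, is simple to compute from binary detections. The sketch below is a generic implementation of that metric; the function name is ours and it is not taken from the study's code.

```python
import numpy as np

def gmean_sens_spec(y_true, y_pred):
    """Geometric mean of sensitivity and specificity for binary detection.
    Equals 1 for a perfect detector and 0 if either class is entirely
    missed, which makes it a natural objective for imbalanced
    asynchronous detection problems."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    sens = (y_pred & y_true).sum() / max(y_true.sum(), 1)      # true positive rate
    spec = (~y_pred & ~y_true).sum() / max((~y_true).sum(), 1)  # true negative rate
    return float(np.sqrt(sens * spec))
```

    Unlike raw accuracy, this objective cannot be gamed by a decoder that always predicts the majority ("no movement") class, since that drives sensitivity, and hence the geometric mean, to zero.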

    Sequential Monte Carlo pricing of American-style options under stochastic volatility models

    We introduce a new method to price American-style options on underlying investments governed by stochastic volatility (SV) models. The method does not require the volatility process to be observed. Instead, it exploits the fact that the optimal decision functions in the corresponding dynamic programming problem can be expressed as functions of conditional distributions of volatility, given observed data. By constructing statistics summarizing information about these conditional distributions, one can obtain high quality approximate solutions. Although the required conditional distributions are in general intractable, they can be arbitrarily precisely approximated using sequential Monte Carlo schemes. The drawback, as with many Monte Carlo schemes, is potentially heavy computational demand. We present two variants of the algorithm, one closely related to the well-known least-squares Monte Carlo algorithm of Longstaff and Schwartz [The Review of Financial Studies 14 (2001) 113-147], and the other solving the same problem using a "brute force" gridding approach. We estimate an illustrative SV model using Markov chain Monte Carlo (MCMC) methods for three equities. We also demonstrate the use of our algorithm by estimating the posterior distribution of the market price of volatility risk for each of the three equities. Comment: Published at http://dx.doi.org/10.1214/09-AOAS286 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
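    The least-squares Monte Carlo algorithm of Longstaff and Schwartz, which the first variant builds on, can be sketched as follows. This is a minimal illustration simplified to constant-volatility geometric Brownian motion rather than the paper's stochastic-volatility setting; the function name and the quadratic regression basis are our choices.

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an American
    put, here under constant-volatility GBM for simplicity."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate GBM paths on the exercise grid dt, 2*dt, ..., T
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)   # cashflow if held to expiry
    disc = np.exp(-r * dt)
    for t in range(n_steps - 2, -1, -1):     # backward induction
        payoff *= disc                        # discount cashflows to time t
        itm = K - S[:, t] > 0                 # regress on in-the-money paths only
        if itm.sum() > 3:
            x = S[itm, t]
            A = np.vander(x, 3)               # quadratic basis in S_t
            cont = A @ np.linalg.lstsq(A, payoff[itm], rcond=None)[0]
            exercise = (K - x) > cont         # immediate payoff beats continuation?
            payoff[itm] = np.where(exercise, K - x, payoff[itm])
    return float(np.mean(payoff) * disc)      # discount first step back to t=0
```

    In the paper's setting the regression basis would additionally include statistics summarizing the filtered conditional distribution of the unobserved volatility, which is where the sequential Monte Carlo approximation enters.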