
    Estimation of the complex frequency of a harmonic signal based on a linear least squares method

    In this study, we propose a simple linear least squares estimation method (LLS) based on a Fourier transform to estimate the complex frequency of a harmonic signal. We first use a synthetically generated noisy time series to validate the accuracy and effectiveness of LLS by comparing it with the commonly used linear autoregressive method (AR). For an input frequency of 0.5 mHz, the calculated deviations from the theoretical value were 0.004‰ and 0.008‰ for the LLS and AR methods, respectively; for an input attenuation of 5 × 10⁻⁶, the calculated deviations for the LLS and AR methods were 2.4% and 1.6%. Though the theory of the AR method is more complex than that of LLS, the results show that LLS is a useful alternative method. Finally, we use LLS to estimate the complex frequencies of the five singlets of the ₀S₂ mode of the Earth's free oscillations. Not only are the results consistent with previous studies, but the method also achieves high estimation precision, which may help constrain the Earth's interior structure.
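    A minimal sketch of the spectral idea in Python (our own construction, not the authors' code, with illustrative toy parameters rather than the paper's mHz regime): the spectrum of a decaying complex exponential is approximately Lorentzian near its peak, S(ω) ≈ C/(α + i(ω − ω₀)), so 1/S is linear in ω, and a complex linear least squares fit over a few FFT bins recovers both the frequency and the attenuation.

```python
import numpy as np

# Toy decaying complex exponential; parameters are illustrative only.
fs, n = 100.0, 10_000                     # sampling rate (Hz), sample count
t = np.arange(n) / fs
f0, alpha = 0.5, 0.05                     # true frequency (Hz), attenuation (1/s)
rng = np.random.default_rng(0)
s = np.exp(2j * np.pi * f0 * t - alpha * t) + 0.01 * rng.standard_normal(n)

# Near the peak, S(w) ~ C / (alpha + 1j*(w - w0)), hence 1/S = p + q*w
# with q = 1j/C and p = (alpha - 1j*w0)/C: a model linear in w.
S = np.fft.fft(s)
w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)
k = int(np.argmax(np.abs(S)))             # peak bin
idx = np.arange(k - 3, k + 4)             # a few bins around the peak

A = np.column_stack([np.ones(idx.size), w[idx]]).astype(complex)
(p, q), *_ = np.linalg.lstsq(A, 1.0 / S[idx], rcond=None)

C = 1j / q
alpha_est = np.real(p * C)                # since p*C = alpha - 1j*w0
f0_est = -np.imag(p * C) / (2 * np.pi)
print(f"f0 ~ {f0_est:.4f} Hz, alpha ~ {alpha_est:.4f} 1/s")
```

    Because the fit is linear, no iterative optimization or model-order selection is needed, which is what makes such a method attractive next to AR-based estimators.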

    Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization

    As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. Its success owes much to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that content can vary significantly across different images, or across different patches within a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; for a given patch to be processed, one set of bases is then adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches, and the AR models that best fit a given patch are adaptively selected to regularize the local image structures. Second, image non-local self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
    Comment: 35 pages. This paper is under review at IEEE TIP.
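    A hedged sketch of the domain-selection step (all names, sizes, and the nearest-centroid rule are our assumptions, and random vectors stand in for real patches): cluster example patches, learn one PCA sub-dictionary per cluster, and code each incoming patch in the sub-dictionary of its nearest cluster with a soft-threshold step, the proximal operator of the l1 penalty.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.standard_normal((2000, 49))      # stand-in for 7x7 example patches

# Learn K PCA sub-dictionaries, one per cluster of example patches.
K = 8
km = KMeans(n_clusters=K, n_init=5, random_state=0).fit(patches)
bases = []
for k in range(K):
    Xk = patches[km.labels_ == k]
    _, _, Vt = np.linalg.svd(Xk - Xk.mean(axis=0), full_matrices=False)
    bases.append(Vt.T)                         # columns: PCA basis of cluster k

def code_patch(y, lam=0.1):
    """Adaptively select a sub-dictionary for patch y and sparse-code it."""
    k = int(np.argmin(np.linalg.norm(km.cluster_centers_ - y, axis=1)))
    B = bases[k]
    a = B.T @ y                                # coefficients in the local domain
    a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)  # soft threshold (l1 prox)
    return B @ a, k                            # reconstruction and chosen domain

recon, chosen = code_patch(patches[0])
```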

    Integrated Pre-Processing for Bayesian Nonlinear System Identification with Gaussian Processes

    We introduce GP-FNARX: a new model for nonlinear system identification based on a nonlinear autoregressive exogenous (NARX) model with filtered regressors (F), where the nonlinear regression problem is tackled using sparse Gaussian processes (GP). We integrate data pre-processing with system identification into a fully automated procedure that goes from raw data to an identified model. Both the pre-processing parameters and the GP hyper-parameters are tuned by maximizing the marginal likelihood of the probabilistic model. We obtain a Bayesian model of the system's dynamics which is able to report its uncertainty in regions where data are scarce. The automated approach, the modeling of uncertainty and its relatively low computational cost make GP-FNARX a good candidate for applications in robotics and adaptive control.
    Comment: Proceedings of the 52nd IEEE International Conference on Decision and Control (CDC), Firenze, Italy, December 2013
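    A minimal sketch under stated assumptions (an exact GP via scikit-learn instead of the paper's sparse GP, and a fixed filter cutoff rather than one tuned jointly with the kernel): filter the raw signals, build NARX regressors from lagged filtered inputs and outputs, and fit a GP whose kernel hyper-parameters are chosen by maximizing the marginal likelihood, yielding predictions with uncertainty.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy nonlinear system to identify.
rng = np.random.default_rng(1)
n = 400
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.7*y[t-1] - 0.1*y[t-2] + np.tanh(u[t-1]) + 0.05*rng.standard_normal()

# Pre-processing: low-pass filter the regressors (the "F" in FNARX).
# The cutoff is fixed here; the paper tunes it via the marginal likelihood.
b, a = butter(2, 0.3)
uf, yf = filtfilt(b, a, u), filtfilt(b, a, y)

# NARX regressors: lagged filtered outputs and inputs.
lag = 2
X = np.column_stack([yf[lag-1:-1], yf[:-2], uf[lag-1:-1], uf[:-2]])
Y = y[lag:]

# Fitting maximizes the log marginal likelihood over kernel hyper-parameters.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:300], Y[:300])
mean, std = gp.predict(X[300:], return_std=True)   # predictions with uncertainty
print(f"held-out RMSE: {np.sqrt(np.mean((mean - Y[300:])**2)):.3f}")
```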

    Dynamic modeling of mean-reverting spreads for statistical arbitrage

    Statistical arbitrage strategies, such as pairs trading and its generalizations, rely on the construction of mean-reverting spreads enjoying a certain degree of predictability. Gaussian linear state-space processes have recently been proposed as a model for such spreads under the assumption that the observed process is a noisy realization of some hidden states. Real-time estimation of the unobserved spread process can reveal temporary market inefficiencies which can then be exploited to generate excess returns. Building on previous work, we embrace the state-space framework for modeling spread processes and extend this methodology along three different directions. First, we introduce time-dependency in the model parameters, which allows for quick adaptation to changes in the data generating process. Second, we provide an on-line estimation algorithm that can run continuously in real time. Being computationally fast, the algorithm is particularly suitable for building aggressive trading strategies based on high-frequency data and may be used as a monitoring device for mean-reversion. Finally, our framework naturally provides informative uncertainty measures of all the estimated parameters. Experimental results based on Monte Carlo simulations and historical equity data are discussed, including a co-integration relationship involving two exchange-traded funds.
    Comment: 34 pages, 6 figures. Submitted.
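    A minimal sketch of the underlying state-space machinery (with fixed, made-up parameters, whereas the paper makes them time-dependent and estimates them on-line): the observed spread is a noisy reading of a hidden mean-reverting AR(1) state, and a scalar Kalman filter tracks that state in constant time per tick.

```python
import numpy as np

# Hidden mean-reverting state x_t, noisy observed spread z_t:
#   x_t = mu + phi * (x_{t-1} - mu) + w_t,   w_t ~ N(0, q)
#   z_t = x_t + v_t,                         v_t ~ N(0, r)
mu, phi, q, r = 0.0, 0.95, 0.01, 0.04
rng = np.random.default_rng(2)
n = 1000
x, z = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = mu + phi * (x[t-1] - mu) + np.sqrt(q) * rng.standard_normal()
    z[t] = x[t] + np.sqrt(r) * rng.standard_normal()

# Scalar Kalman filter: one cheap update per observation, so it can
# run continuously on high-frequency data.
xf, P = 0.0, 1.0
est = np.empty(n)
for t in range(n):
    xp = mu + phi * (xf - mu)          # predicted state
    Pp = phi * phi * P + q             # predicted variance
    K = Pp / (Pp + r)                  # Kalman gain
    xf = xp + K * (z[t] - xp)          # filtered spread estimate
    P = (1.0 - K) * Pp                 # filtered variance (uncertainty measure)
    est[t] = xf
print(f"filter RMSE vs hidden state: {np.sqrt(np.mean((est - x)**2)):.3f}")
```

    The filtered variance P is the kind of uncertainty measure the abstract mentions: a large gap between z[t] and the filtered state with small P flags a temporary inefficiency more credibly than the raw spread alone.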

    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. Although we evaluate it on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
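    A rough sketch of the delay scan (a crude binned plug-in estimator on toy data; practical work uses nearest-neighbor estimators and a proper embedding of the target past): for each candidate delay u, estimate TE(u) = I(target_t ; source_{t−u} | target_{t−1}) and report the u that maximizes it.

```python
import numpy as np

def cmi_binned(x, y, z, bins=6):
    """Plug-in conditional mutual information I(x; y | z) from histograms."""
    p, _ = np.histogramdd(np.column_stack([x, y, z]), bins=bins)
    p /= p.sum()
    p_xz, p_yz, p_z = p.sum(axis=1), p.sum(axis=0), p.sum(axis=(0, 1))
    num = p * p_z[None, None, :]
    den = p_xz[:, None, :] * p_yz[None, :, :]
    m = (p > 0) & (den > 0)
    return float(np.sum(p[m] * np.log(num[m] / den[m])))

def scan_delays(source, target, delays, bins=6):
    """TE(u) = I(target_t ; source_{t-u} | target_{t-1}) for each delay u >= 1."""
    te = [cmi_binned(target[u:], source[:-u], target[u-1:-1], bins) for u in delays]
    return delays[int(np.argmax(te))], te

# Toy coupled pair: the source drives the target with a 5-sample delay.
rng = np.random.default_rng(3)
n = 20_000
src = rng.standard_normal(n)
tgt = np.zeros(n)
for t in range(5, n):
    tgt[t] = 0.5 * tgt[t-1] + 0.8 * src[t-5] + 0.1 * rng.standard_normal()
best, te = scan_delays(src, tgt, delays=range(1, 10))
print("estimated interaction delay:", best)   # expected: 5
```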

    Dollarization Persistence and Individual Heterogeneity

    The most salient feature of financial dollarization, and the one of most concern to policy makers, is its persistence: even after successful macroeconomic stabilizations, dollarization ratios often remain high. In this paper we claim that this persistence is connected to the fact that the participants in the dollar deposit market are fairly heterogeneous, as is the way they form their optimal currency portfolios. We develop a simple model in which agents differ in their ability to process information, which turns out to be enough to generate persistence upon aggregation. We find empirical support for this claim with data from three Latin American countries and Poland.
    Keywords: dollarization, individual heterogeneity, persistence, aggregation
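    A toy illustration of the aggregation argument (our own stylized numbers, not the paper's model): when agents unwind dollar positions at heterogeneous speeds, the aggregate dollarization ratio decays far more slowly than the path of a representative agent endowed with the average speed.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 100_000, 120                       # agents, months after stabilization
speeds = rng.beta(0.5, 2.0, N)            # heterogeneous adjustment speeds in (0, 1)

share = np.ones(N)                        # each agent starts fully dollarized
agg, rep = np.empty(T), np.empty(T)
s_rep = 1.0
for t in range(T):
    agg[t] = share.mean()                 # aggregate dollarization ratio
    rep[t] = s_rep                        # representative-agent benchmark
    share *= 1.0 - speeds                 # agent i: share <- (1 - lambda_i) * share
    s_rep *= 1.0 - speeds.mean()

print(f"after 60 months: aggregate {agg[60]:.3f} vs representative {rep[60]:.6f}")
```

    Slow adjusters dominate the tail, so the aggregate ratio looks persistent even though every individual adjusts geometrically.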

    Modeling sparse connectivity between underlying brain sources for EEG/MEG

    We propose a novel technique to assess functional brain connectivity in EEG/MEG signals. Our method, called Sparsely-Connected Sources Analysis (SCSA), can overcome the problem of volume conduction by modeling neural data innovatively with the following ingredients: (a) the EEG is assumed to be a linear mixture of correlated sources following a multivariate autoregressive (MVAR) model, (b) the demixing is estimated jointly with the source MVAR parameters, (c) overfitting is avoided by using the Group Lasso penalty. This approach allows us to extract the appropriate level of cross-talk between the extracted sources, and in this manner we obtain a sparse data-driven model of functional connectivity. We demonstrate the usefulness of SCSA with simulated data and compare it to a number of existing algorithms with excellent results.
    Comment: 9 pages, 6 figures
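    A condensed sketch of the Group Lasso MVAR step only (the joint estimation of the demixing is omitted, and all settings are our assumptions): fit VAR coefficients by proximal gradient descent, group-soft-thresholding each source-to-target coefficient vector across lags so that entire connections can be switched off.

```python
import numpy as np

# Simulate a 3-channel VAR(2) where only channel 0 drives channel 1.
rng = np.random.default_rng(5)
n, d, L = 3000, 3, 2
A = np.zeros((L, d, d))
A[0] = np.diag([0.5, 0.5, 0.5])
A[0][1, 0] = 0.4                              # 0 -> 1 coupling
y = np.zeros((n, d))
for t in range(L, n):
    y[t] = sum(A[l] @ y[t-1-l] for l in range(L)) + 0.1 * rng.standard_normal(d)

# Lagged design matrix: row for time t holds [y_{t-1}, y_{t-2}] flattened.
X = np.column_stack([y[L-1-l:n-1-l] for l in range(L)])
Y = y[L:]

# Proximal gradient with group soft-thresholding; one group per
# (source j -> target i) pair across all lags, so whole links can vanish.
B = np.zeros((d * L, d))
step = 1.0 / np.linalg.norm(X, 2) ** 2
lam = 5.0
for _ in range(500):
    B -= step * (X.T @ (X @ B - Y))           # gradient of 0.5 * ||Y - X B||^2
    for i in range(d):                        # target channel
        for j in range(d):                    # source channel
            g = B[j::d, i]                    # j -> i coefficients across lags
            nrm = np.linalg.norm(g)
            if nrm > 0:
                B[j::d, i] = max(0.0, 1.0 - step * lam / nrm) * g

conn = np.array([[np.linalg.norm(B[j::d, i]) for j in range(d)] for i in range(d)])
print(np.round(conn, 2))                      # off-diagonal mass only at (1, 0)
```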