
    Long Memory and Non-Linearities in International Inflation

    This paper investigates inflation dynamics in a panel of 20 OECD economies using an approach based on the sample autocorrelation function (ACF). We find that inflation is characterized by long-lasting fluctuations that are similar across countries and that eventually revert to a potentially time-varying mean. The cyclical and persistent behavior of inflation does not belong to the class of linear autoregressive processes but rather to a more general class of nonlinear and long-memory models. Recent theoretical contributions on heterogeneity in price setting and aggregation offer a rationale for our results. Finally, we draw the monetary policy implications of our findings.
    Keywords: autocorrelation function, long memory, inflation persistence, inflation targeting, heavy tails.
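The ACF-based approach can be illustrated with a toy example: the sample autocorrelations of a simulated short-memory AR(1) series decay geometrically, whereas the long-lasting fluctuations the paper reports would decay far more slowly. A minimal sketch (the AR(1) benchmark is an illustrative assumption, not the paper's data):

```python
import random

def sample_acf(x, max_lag):
    """Sample autocorrelation function of x up to max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    acf = []
    for k in range(max_lag + 1):
        cov = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
        acf.append(cov / var)
    return acf

# Short-memory benchmark: an AR(1) process, whose ACF decays geometrically.
random.seed(0)
phi, x = 0.8, [0.0]
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0, 1))
acf = sample_acf(x, 20)
print(acf[1], acf[10])  # near phi and phi**10; long memory decays much more slowly
```

A long-memory (e.g., fractionally integrated) series run through the same `sample_acf` would show autocorrelations that remain visibly positive at lags where the AR(1) values are already near zero.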

    Theory, design and applications of linear transforms for information transmission

    The aim of this dissertation is to study the common features of block transforms, subband filter banks, and wavelets, and to demonstrate how discrete uncertainty can be applied to evaluate these different decomposition techniques. In particular, we derive an uncertainty bound for discrete-time functions. It is shown that this bound is the same as that for continuous-time functions, provided the discrete-time functions have a certain degree of regularity. This dissertation also deals with spectral modeling in filter banks. It is shown, both theoretically and experimentally, that subspectral modeling is superior to full-spectrum modeling if performed before the rate change. The price paid for this performance improvement is an increase in computation. A few different signal sources were considered in this study. It is shown that the performances of AR and ARMA modeling techniques are comparable in subspectral modeling; the former is preferred for its simplicity. As an application of AR modeling, a speech coding algorithm, namely CELP, embedded in a filter bank structure was also studied. We found no improvement of the subband CELP technique over the full-band one; theoretical reasoning behind these experimental results is also given. This dissertation further addresses the questions of which type of transform to use and to what extent an image should be decomposed. To this end, objective and subjective evaluations of different transform bases were carried out. We propose an algorithm for the decomposition of a channel into its sub-channels in discrete multitone communications. This algorithm evaluates the unevenness and energy distribution of the channel spectrum in order to obtain a variable adaptive partitioning, and it is shown to lead to near-optimal performance of the discrete multitone transceiver. This flexible splitting of the channel suffers less from the aliasing problem that exists in blind decompositions using fixed transforms. The dissertation extends discrete multitone to the flexible multiband concept, which brings significant performance improvements for digital communications.
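The adaptive channel-partitioning idea can be sketched as a recursive split driven by spectral unevenness. The criterion below (a max/min magnitude ratio per band) is a hypothetical stand-in for the unevenness and energy measures the dissertation proposes:

```python
def partition(spectrum, lo, hi, tol, min_width):
    """Recursively split band [lo, hi) while its spectral unevenness
    (max/min magnitude ratio) exceeds tol. Illustrative criterion only."""
    band = spectrum[lo:hi]
    unevenness = max(band) / max(min(band), 1e-12)
    if unevenness <= tol or hi - lo <= min_width:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    return (partition(spectrum, lo, mid, tol, min_width)
            + partition(spectrum, mid, hi, tol, min_width))

# A toy channel magnitude response: flat in the low band, sloping in the high band.
spec = [1.0] * 16 + [1.0 - 0.05 * i for i in range(16)]
bands = partition(spec, 0, 32, tol=1.2, min_width=4)
print(bands)  # one wide band where the channel is flat, narrow bands where it slopes
```

The flat half of the spectrum stays as a single sub-channel, while the sloping half is split into progressively narrower bands, mirroring the variable (rather than fixed) partitioning the abstract describes.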

    Hyper-parameter tuning and feature extraction for asynchronous action detection from sub-thalamic nucleus local field potentials

    Introduction: Decoding brain states from subcortical local field potentials (LFPs) indicative of activities such as voluntary movement, tremor, or sleep stages holds significant potential for treating neurodegenerative disorders and offers new paradigms in brain-computer interfaces (BCI). Identified states can serve as control signals in coupled human-machine systems, e.g., to regulate deep brain stimulation (DBS) therapy or control prosthetic limbs. However, the behavior, performance, and efficiency of LFP decoders depend on an array of design and calibration settings encapsulated in a single set of hyper-parameters. Although methods exist to tune hyper-parameters automatically, decoders are typically found through exhaustive trial-and-error, manual search, and intuitive experience. Methods: This study introduces a Bayesian optimization (BO) approach to hyper-parameter tuning, applicable to the feature extraction, channel selection, classification, and stage transition stages of the entire decoding pipeline. The optimization method is compared with five real-time feature extraction methods paired with four classifiers to decode voluntary movement asynchronously based on LFPs recorded with DBS electrodes implanted in the subthalamic nucleus of Parkinson’s disease patients. Results: Detection performance, measured as the geometric mean of classifier specificity and sensitivity, is automatically optimized. BO demonstrates improved decoding performance over the initial parameter settings across all methods. The best decoders achieve a maximum performance of 0.74 ± 0.06 (mean ± SD across all participants) sensitivity-specificity geometric mean. In addition, parameter relevance is determined using the BO surrogate models. Discussion: Hyper-parameters tend to be sub-optimally fixed across different users rather than individually adjusted or even specifically set for a decoding task. The relevance of each parameter to the optimization problem, and comparisons between algorithms, can also be difficult to track as the decoding problem evolves. We believe that the proposed decoding pipeline and BO approach are a promising solution to these challenges surrounding hyper-parameter tuning, and that the study’s findings can inform future design iterations of neural decoders for adaptive DBS and BCI.
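The objective being optimized, the geometric mean of sensitivity and specificity, is easy to state in code. The toy decoder and the random-search loop below are illustrative assumptions only (the study optimizes a real decoding pipeline with Gaussian-process BO):

```python
import math
import random

def geometric_mean_score(tp, fn, tn, fp):
    """Geometric mean of sensitivity and specificity, the study's detection metric."""
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sens * spec)

def objective(threshold):
    """Hypothetical decoder: confusion counts as a toy function of one
    detection-threshold hyper-parameter (illustrative only)."""
    tp = max(0, 90 - int(60 * threshold))
    fn = 100 - tp
    fp = max(0, int(80 * (1 - threshold)))
    tn = 100 - fp
    return geometric_mean_score(tp, fn, tn, fp)

# Random search stands in here for the Gaussian-process BO used in the study.
random.seed(1)
best = max(objective(random.random()) for _ in range(200))
print(round(best, 3))
```

A BO variant would replace the uniform draws with a surrogate model that proposes the next `threshold` to evaluate; the surrogate's learned sensitivities are what the study uses to rank parameter relevance.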

    Information Loss and Anti-Aliasing Filters in Multirate Systems

    Full text link
    This work investigates the information loss in a decimation system, i.e., in a downsampler preceded by an anti-aliasing filter. It is shown that, without a specific signal model in mind, the anti-aliasing filter cannot reduce information loss, while, e.g., for a simple signal-plus-noise model it can. For the Gaussian case, the optimal anti-aliasing filter is shown to coincide with the one obtained from energetic considerations. For a non-Gaussian signal corrupted by Gaussian noise, the Gaussian assumption yields an upper bound on the information loss, justifying filter design principles based on second-order statistics from an information-theoretic point of view.
    Comment: 12 pages; a shorter version of this paper was published at the 2014 International Zurich Seminar on Communication.
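The signal-plus-noise case can be illustrated numerically: decimating a noisy, slowly varying signal after even a crude averaging pre-filter preserves more of the underlying signal than decimating directly. A minimal sketch (the two-tap average is an assumed stand-in for a designed anti-aliasing filter):

```python
import random

def decimate(x, prefilter):
    """Downsample by 2, optionally averaging neighbouring samples first
    (a crude two-tap anti-aliasing filter)."""
    if prefilter:
        x = [(a + b) / 2 for a, b in zip(x, x[1:] + x[-1:])]
    return x[::2]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

random.seed(0)
n = 4000
# Slowly varying signal (each value held for two samples) plus white noise.
signal = [random.gauss(0, 1) for _ in range(n // 2)]
clean = [s for s in signal for _ in (0, 1)]
noisy = [c + random.gauss(0, 1) for c in clean]

ref = clean[::2]
plain = decimate(noisy, prefilter=False)
filt = decimate(noisy, prefilter=True)
print(mse(plain, ref), mse(filt, ref))  # filtering roughly halves the noise power
```

For this toy model the averaging filter halves the noise variance while leaving the held signal intact, which is the flavour of result the paper formalizes information-theoretically.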

    The Cyclical Dynamics and Volatility of Australian Output and Employment

    In this paper we examine the volatility of aggregate output and employment in Australia with the aid of a frequency filtering method (the Butterworth filter) that allows each time series to be decomposed into trend, cycle and noise components. This analysis is compared with more traditional methods based simply on the examination of first differences in the logs of the raw data using cointegration-VAR modelling. We show that the application of univariate AR and bivariate VECM methods to the data results in a detrended series which is dominated by noise rather than cyclical variation, and gives break points which are not robust to alternative decomposition methods. Our conclusions also challenge the accepted wisdom on output volatility in Australia, which holds that there was a once-and-for-all, sustained reduction in output volatility in or around 1984. We do not find any convincing evidence for a sustained reduction in the cyclical volatility of the GDP (or employment) series at that time, but we do find evidence of a sustained reduction in the cyclical volatility of the GDP (and employment) series in 1993/4. We also find a clear association between output volatility and employment volatility. We discuss the key features of the business cycle we have identified as well as some of the policy implications of our results.
    Keywords: business cycles, volatility, inflation targeting, Australia.
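The trend/cycle split can be sketched with a centered moving average standing in for the Butterworth band-pass filter used in the paper (an illustrative simplification, not the authors' filter):

```python
import math

def moving_average(x, window):
    """Centered moving average; edges use a shrinking window."""
    n, half = len(x), window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def decompose(series, window):
    """Split a series into trend and cycle; a crude stand-in for the
    Butterworth decomposition used in the paper."""
    trend = moving_average(series, window)
    cycle = [s - t for s, t in zip(series, trend)]
    return trend, cycle

# Toy quarterly log-output: linear growth plus a 5-year (20-quarter) cycle.
y = [0.01 * t + 0.02 * math.sin(2 * math.pi * t / 20) for t in range(120)]
trend, cycle = decompose(y, window=21)
print(round(max(cycle[15:105]), 4))  # interior cycle amplitude near 0.02
```

With a real series, comparing the variance of `cycle` across sub-periods is the kind of calculation behind the paper's claim that cyclical volatility fell in 1993/4 rather than 1984.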

    Contagion effects of the US Subprime Crisis on Developed Countries

    This study assesses whether the capital markets of developed countries reflect the effects of financial contagion from the US subprime crisis and, if so, whether the intensity of contagion differs across countries. Adopting a definition of contagion that relates the phenomenon to an increase in cross-market linkages following a shock, copula models are used to analyse how the connections between the US and each market in the sample evolved from the pre-crisis to the crisis period. The results suggest that markets in Canada, Japan, Italy, France and the United Kingdom display significant levels of contagion, which is less pronounced in Germany. Canada appears to be the country where the highest intensity of contagion is observed.
    Keywords: G7, subprime crisis, contagion, copula, event study.
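The copula idea can be sketched by estimating Kendall's tau in a pre-crisis window and a crisis window and mapping each to a Gaussian copula parameter; a rise in the parameter signals tighter cross-market linkage. The simulated returns below are an illustrative assumption, not the study's data:

```python
import math
import random

def kendall_tau(x, y):
    """Kendall's rank correlation (naive O(n^2) count)."""
    n, conc, disc = len(x), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

def gaussian_copula_rho(tau):
    """Invert tau = (2/pi) * arcsin(rho) for the Gaussian copula."""
    return math.sin(math.pi * tau / 2)

def simulate_returns(n, rho, rng):
    """Toy bivariate returns with linear correlation rho."""
    xs, ys = [], []
    for _ in range(n):
        u = rng.gauss(0, 1)
        xs.append(u)
        ys.append(rho * u + math.sqrt(1 - rho * rho) * rng.gauss(0, 1))
    return xs, ys

rng = random.Random(42)
pre = simulate_returns(300, 0.3, rng)       # assumed calm-period dependence
crisis = simulate_returns(300, 0.8, rng)    # assumed crisis-period dependence
rho_pre = gaussian_copula_rho(kendall_tau(*pre))
rho_crisis = gaussian_copula_rho(kendall_tau(*crisis))
print(rho_pre, rho_crisis)  # the jump in rho is the contagion signature
```

The study's copula models are richer than this Gaussian sketch (and can capture tail dependence), but the pre-versus-crisis comparison of a dependence parameter is the core of the test.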