    Combining long memory and level shifts in modeling and forecasting the volatility of asset returns

    We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility, whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
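
    The state space plus Kalman filter machinery in this abstract can be illustrated on a much simpler model. The sketch below (Python/NumPy) filters a local-level model for log-volatility in which rare random level shifts are folded into the state noise; it is a minimal stand-in, not the authors' ARFIMA-plus-level-shift specification, and every parameter value is an illustrative assumption.

```python
import numpy as np

# Minimal sketch (not the authors' full ARFIMA + level-shift specification):
# a local-level state-space model for log-volatility in which rare random level
# shifts are folded into the state noise, filtered with a standard Kalman filter.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)

T = 500
shift_prob, shift_std = 0.01, 1.0      # probability and size of random level shifts
level_std, obs_std = 0.05, 0.5         # small level innovations, measurement noise

# Simulate a log-volatility level with occasional shifts, observed with error.
shifts = shift_std * rng.standard_normal(T) * (rng.random(T) < shift_prob)
level = np.cumsum(level_std * rng.standard_normal(T) + shifts)
y = level + obs_std * rng.standard_normal(T)

# Scalar Kalman filter for x_t = x_{t-1} + w_t, y_t = x_t + v_t, where the state
# noise variance lumps together the small innovations and the rare shifts.
q = level_std**2 + shift_prob * shift_std**2
r = obs_std**2
x_hat, p = 0.0, 1.0
filtered = np.empty(T)
for t in range(T):
    p_pred = p + q                      # predict
    k = p_pred / (p_pred + r)           # Kalman gain
    x_hat = x_hat + k * (y[t] - x_hat)  # update with the new observation
    p = (1.0 - k) * p_pred
    filtered[t] = x_hat

print("filtered RMSE vs. true level:", np.sqrt(np.mean((filtered - level) ** 2)))
```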

    Fast adaptive algorithms for signal separation

    LMS and RLS type algorithms are suggested for decorrelation of multi-channel system outputs. These algorithms act as signal separators when applied to unknown linear combinations of the inputs. The performance of the suggested algorithms is compared with that of the conventional LMS and RLS algorithms that minimize the mean square error. It is shown that the correlation matrix eigenvalue spread associated with the LMS decorrelator is always smaller than the eigenvalue spread corresponding to the conventional LMS, resulting in faster convergence speed for the decorrelator. A new RLS type decorrelator algorithm is suggested. The RLS decorrelator is shown to be faster than the LMS decorrelator, not affected by the eigenvalue spread, and comparable in speed with the conventional RLS algorithm. Convergence analysis by simulation shows that the RLS algorithms and the LMS decorrelator have wider regions of convergence than the conventional LMS.
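
    As a concrete illustration of separating signals by decorrelation, the sketch below adapts a single weight with an LMS-style update until the separator output is uncorrelated with a reference channel, cancelling the interference. It is a toy two-channel example under assumed mixing and step-size values, not the multi-channel decorrelator analyzed in this work.

```python
import numpy as np

# Toy sketch of an LMS-style decorrelation update (not this paper's multi-channel
# decorrelator): the primary channel carries the signal of interest plus scaled
# interference, the reference channel carries the interference alone. The weight
# is adapted until the output is uncorrelated with the reference, which cancels
# the interference. Mixing coefficient and step size are illustrative assumptions.
rng = np.random.default_rng(1)

n = 20000
s = np.sin(0.05 * np.arange(n))          # signal of interest
v = rng.standard_normal(n)               # interference
a = 0.7                                  # unknown coupling into the primary channel
primary, reference = s + a * v, v

mu, w = 5e-3, 0.0
y = np.empty(n)
for t in range(n):
    y[t] = primary[t] - w * reference[t]  # separator output
    w += mu * y[t] * reference[t]         # drive E[y * reference] toward zero

print("estimated coupling (true value 0.7):", w)
print("residual interference power:", np.mean((y[-2000:] - s[-2000:]) ** 2))
```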

    A Recursive Least M-Estimate Algorithm for Robust Adaptive Filtering in Impulsive Noise: Fast Algorithm and Convergence Performance Analysis

    This paper studies the problem of robust adaptive filtering in impulsive noise environments using a recursive least M-estimate (RLM) algorithm. The RLM algorithm minimizes a robust M-estimator-based cost function instead of the conventional mean square error (MSE) function. Previous work has shown that the RLM algorithm offers improved robustness to impulses over the conventional recursive least squares (RLS) algorithm. In this paper, the mean and mean square convergence behaviors of the RLM algorithm under the contaminated Gaussian impulsive noise model are analyzed. A lattice structure-based fast RLM algorithm, called the Huber Prior Error Feedback-Least Squares Lattice (H-PEF-LSL) algorithm, is derived. It has O(N) arithmetic complexity, where N is the length of the adaptive filter, and can be viewed as a fast implementation of the RLM algorithm based on the modified Huber M-estimate function and the conventional PEF-LSL adaptive filtering algorithm. Simulation results show that the transversal RLM and the H-PEF-LSL algorithms have better performance than the conventional RLS and other RLS-like robust adaptive algorithms tested when the desired and input signals are corrupted by impulsive noise. Furthermore, the theoretical and simulation results on the convergence behaviors agree very well with each other.
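
    The core idea of the RLM recursion can be sketched as an RLS update whose gain is scaled by a Huber weight, so that impulsive errors contribute only a bounded amount. The code below is a minimal transversal sketch with an assumed fixed threshold and impulse model; it is not the H-PEF-LSL lattice implementation described in the paper.

```python
import numpy as np

# Minimal sketch of a recursive least M-estimate style update: an RLS recursion
# whose gain is scaled by a Huber weight so that impulsive errors contribute only
# a bounded amount. The fixed threshold, forgetting factor and impulse model are
# illustrative assumptions; this is not the H-PEF-LSL lattice implementation.
rng = np.random.default_rng(2)

N, n = 8, 5000
w_true = rng.standard_normal(N)                    # unknown FIR system
x = rng.standard_normal(n + N)
noise = 0.01 * rng.standard_normal(n)
noise += (rng.random(n) < 0.01) * 20.0 * rng.standard_normal(n)   # rare large impulses

lam, xi = 0.99, 0.05                               # forgetting factor, Huber threshold
w = np.zeros(N)
P = 1e3 * np.eye(N)
for t in range(n):
    u = x[t:t + N][::-1]                           # regressor, most recent sample first
    d = w_true @ u + noise[t]                      # desired signal with impulsive noise
    e = d - w @ u                                  # a priori error
    q = 1.0 if abs(e) <= xi else xi / abs(e)       # Huber weight: down-weight impulses
    Pu = P @ u
    k = (q * Pu) / (lam + q * (u @ Pu))            # weighted gain vector
    w = w + k * e
    P = (P - np.outer(k, Pu)) / lam

print("parameter estimation error:", np.linalg.norm(w - w_true))
```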

    Learning An Invariant Speech Representation

    Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input. We formulate the problem of finding robust speech features for supervised learning with small sample complexity as a problem of learning representations of the signal that are maximally invariant to intraclass transformations and deformations. We propose an extension of a theory for unsupervised learning of invariant visual representations to the auditory domain and empirically evaluate its validity for voiced speech sound classification. Our version of the theory requires the memory-based, unsupervised storage of acoustic templates -- such as specific phones or words -- together with all the transformations of each that normally occur. A quasi-invariant representation for a speech segment can be obtained by projecting it onto each template orbit, i.e., the set of transformed signals, and computing the associated one-dimensional empirical probability distributions. The computations can be performed by modules of filtering and pooling, and extended to hierarchical architectures. In this paper, we apply a single-layer, multicomponent representation for phonemes and demonstrate improved accuracy and decreased sample complexity for vowel classification compared to standard spectral, cepstral and perceptual features.
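
    The orbit-projection-and-pooling step has a compact illustration when the transformation group is simply circular time shifts. In the sketch below, an input segment is projected onto every shifted copy of a stored template and the projections are pooled into a histogram, which is exactly invariant to shifts of the input; the template, signal and binning choices are assumptions, and real phones with their natural transformations would replace them.

```python
import numpy as np

# Minimal sketch of the template-orbit idea, using circular time shifts as the
# transformation group: project an input segment onto every shifted copy of a
# stored template and pool the projections into an empirical histogram. The
# pooled histogram is invariant to circular shifts of the input. Template,
# signal and binning choices are illustrative assumptions.
rng = np.random.default_rng(3)

L = 256
template = rng.standard_normal(L)
orbit = np.stack([np.roll(template, k) for k in range(L)])   # the template's orbit

def signature(x, orbit, bins=np.linspace(-40, 40, 33)):
    projections = orbit @ x                  # dot product with each orbit element
    hist, _ = np.histogram(projections, bins=bins, density=True)
    return hist                              # pooled 1-D empirical distribution

x = rng.standard_normal(L)                   # stand-in for a speech segment
x_shifted = np.roll(x, 37)                   # a transformed (shifted) version of it

s1, s2 = signature(x, orbit), signature(x_shifted, orbit)
print("max signature difference under shift:", np.max(np.abs(s1 - s2)))   # exactly 0
```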

    Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes

    Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
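
    The reservoir half of the OESGP can be illustrated with a plain echo state network. The sketch below drives a random, spectrally rescaled reservoir with a noisy series and fits a ridge-regression readout for one-step-ahead prediction; the Bayesian online Gaussian process readout and the recursive ARD kernel of the actual methods are not reproduced here, and the reservoir size, spectral radius and task are assumptions.

```python
import numpy as np

# Minimal echo state network sketch (a simplified stand-in for the OESGP: the
# reservoir is in the same spirit, but the Bayesian online Gaussian process
# readout is replaced by batch ridge regression). Reservoir size, spectral
# radius, and the toy one-step-ahead prediction task are assumptions.
rng = np.random.default_rng(4)

n_res, T = 100, 2000
u = np.sin(0.2 * np.arange(T + 1)) + 0.05 * rng.standard_normal(T + 1)  # noisy series

W_in = 0.5 * rng.uniform(-1.0, 1.0, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

# Drive the reservoir with the series and collect its states as features.
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])              # reservoir state update
    states[t] = x

# Ridge-regression readout for one-step-ahead prediction, after a washout period.
washout, ridge = 100, 1e-6
X, y = states[washout:], u[washout + 1:T + 1]
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ w_out
print("one-step prediction RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```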

    Combinations of adaptive filters

    Adaptive filters are at the core of many signal processing applications, ranging from acoustic noise suppression, echo cancellation [1], array beamforming [2], and channel equalization [3] to more recent sensor network applications in surveillance, target localization, and tracking. A trending approach in this direction is to resort to in-network distributed processing, in which individual nodes implement adaptation rules and diffuse their estimates to the network [4], [5].

    The work of Jerónimo Arenas-García and Luis Azpicueta-Ruiz was partially supported by the Spanish Ministry of Economy and Competitiveness (under projects TEC2011-22480 and PRI-PIBIN-2011-1266). The work of Magno M. T. Silva was partially supported by CNPq under Grant 304275/2014-0 and by FAPESP under Grant 2012/24835-1. The work of Vítor H. Nascimento was partially supported by CNPq under Grant 306268/2014-0 and FAPESP under Grant 2014/04256-2. The work of Ali Sayed was supported in part by NSF Grants CCF-1011918 and ECCS-1407712. We are grateful to the colleagues with whom we have shared discussions and coauthorship of papers along this research line, especially Prof. Aníbal R. Figueiras-Vidal.
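
    A standard construction in this line of work is the convex combination of two adaptive filters: a fast and a slow LMS filter run in parallel, and a sigmoid-parameterized mixing weight is adapted on the combined error so the overall filter tracks whichever component is currently better. The sketch below illustrates that generic scheme; it is not necessarily the exact combination studied in this article, and the step sizes and identification task are assumptions.

```python
import numpy as np

# Generic sketch of a convex combination of two adaptive filters: a fast and a
# slow LMS filter run in parallel and a sigmoid-parameterized mixing weight is
# adapted by a gradient step on the combined squared error. Not necessarily the
# exact scheme of this article; step sizes and the identification task are
# illustrative assumptions.
rng = np.random.default_rng(5)

N, T = 8, 20000
w_true = rng.standard_normal(N)
x = rng.standard_normal(T + N)

mu1, mu2, mu_a = 0.05, 0.005, 10.0       # fast LMS, slow LMS, mixing-parameter step
w1, w2 = np.zeros(N), np.zeros(N)
a = 0.0                                  # mixing weight is lam = sigmoid(a)
for t in range(T):
    u = x[t:t + N][::-1]
    d = w_true @ u + 0.01 * rng.standard_normal()
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2      # combined output
    e, e1, e2 = d - y, d - y1, d - y2
    w1 += mu1 * e1 * u                   # each component adapts on its own error
    w2 += mu2 * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # gradient step on the combined MSE
    a = np.clip(a, -4.0, 4.0)            # keep the sigmoid away from saturation

w_comb = lam * w1 + (1.0 - lam) * w2
print("combined filter parameter error:", np.linalg.norm(w_comb - w_true))
```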

    A STUDY OF MODEL-BASED CONTROL STRATEGY FOR A GASOLINE TURBOCHARGED DIRECT INJECTION SPARK IGNITED ENGINE

    To meet increasingly stringent fuel economy and emissions legislation, more advanced technologies have been added to spark-ignition (SI) engines, exponentially increasing the complexity and calibration work of traditional map-based engine control. To achieve better engine performance without introducing significant calibration effort, and to make the developed control system easily adaptable to future engine upgrades and designs, this research proposes a model-based optimal control system for cycle-by-cycle Gasoline Turbocharged Direct Injection (GTDI) SI engine control, which aims to deliver the requested torque output and operate the engine at the best achievable fuel economy and minimum emissions under a wide range of operating conditions.

    This research develops a model-based ignition timing prediction strategy for combustion phasing (crank angle of fifty percent of fuel burned, CA50) control. A control-oriented combustion model is developed to predict the burn duration from ignition timing to CA50. Using the predicted burn duration, the ignition timing needed for the upcoming cycle to track the optimal target CA50 is calculated by a dynamic ignition timing prediction algorithm. A Recursive Least Squares (RLS) with Variable Forgetting Factor (VFF) based adaptation algorithm is proposed to handle operating-point-dependent model errors resulting from modeling assumptions and limited calibration points, which helps ensure the proper performance of the model-based ignition timing prediction strategy throughout the entire engine lifetime. Using the adaptive combustion model, an Adaptive Extended Kalman Filter (AEKF) based CA50 observer is developed to provide filtered CA50 estimates in the presence of cyclic variations for closed-loop combustion phasing control.

    An economic nonlinear model predictive controller (E-NMPC) based GTDI SI engine control system is developed to simultaneously achieve three objectives: tracking the requested net indicated mean effective pressure (IMEPn), minimizing the specific fuel consumption (SFC), and reducing NOx emissions. The developed E-NMPC engine control system achieves these objectives by controlling throttle position, intake valve closing (IVC) timing, CA50, exhaust valve opening (EVO) timing, and wastegate position at the same time without violating engine operating constraints. A control-oriented engine model is developed and integrated into the E-NMPC to predict future engine behavior. A high-fidelity 1-D GT-POWER engine model is developed and used as the plant model to tune and validate the developed control system. The performance of the entire model-based engine control system is examined through software-in-the-loop (SIL) simulation using on-road vehicle test data.
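
    The RLS-with-variable-forgetting-factor adaptation described above can be sketched on a deliberately simple surrogate: a two-parameter linear burn-duration model whose coefficients change mid-experiment. The VFF heuristic used below (large prediction errors shrink the forgetting factor so the estimator adapts faster) and all numerical values are assumptions, not the dissertation's calibrated combustion model or its exact adaptation rule.

```python
import numpy as np

# Hedged sketch of recursive least squares with a variable forgetting factor,
# adapting a two-parameter linear burn-duration model whose true coefficients
# change halfway through. The model form, the VFF heuristic (large prediction
# errors shrink the forgetting factor so the estimator adapts faster) and all
# numbers are assumptions, not the dissertation's calibrated combustion model.
rng = np.random.default_rng(6)

T = 600
spark_advance = rng.uniform(10.0, 40.0, T)     # ignition timing, deg BTDC (illustrative)
theta_early = np.array([12.0, 0.6])            # intercept and slope of burn duration
theta_late = np.array([18.0, 0.4])             # parameters after an operating shift

lam_min, lam_max, alpha = 0.90, 0.999, 2.0
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for t in range(T):
    truth = theta_early if t < T // 2 else theta_late
    phi = np.array([1.0, spark_advance[t]])
    burn_dur = truth @ phi + 0.3 * rng.standard_normal()   # "measured" burn duration

    e = burn_dur - theta @ phi                             # prediction error
    lam = lam_min + (lam_max - lam_min) * np.exp(-alpha * e**2)   # variable forgetting
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)
    theta = theta + k * e
    P = (P - np.outer(k, Pphi)) / lam

print("estimated parameters after the shift:", theta)      # tracks theta_late
```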