
    Discussion of “An analysis of global warming in the Alpine region based on nonlinear nonstationary time series models” by F. Battaglia and M. K. Protopapas

    The annual temperatures recorded over the last two centuries at fifteen European stations around the Alps are analyzed. They show a global warming whose growth rate is, however, not constant in time. An analysis based on linear ARIMA models does not provide accurate results. Thus, we propose threshold nonlinear nonstationary models based on several regimes both in time and in levels. Such models fit all series satisfactorily, allow a closer description of the evolution of the temperature changes, and help to uncover the essential differences in the behavior of the different stations.
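    The regime idea behind such threshold models can be sketched as follows: a minimal two-regime simulation and fit in Python, with an illustrative threshold at zero and made-up coefficients, not the authors' specification (which also switches regimes over time).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-regime threshold autoregression (a SETAR-style model):
# y_t = 0.8 * y_{t-1} + e_t  if y_{t-1} <= 0,  else  y_t = 0.2 * y_{t-1} + e_t.
n = 5000
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if y[t - 1] <= 0.0 else 0.2
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

def fit_threshold_ar(series, threshold=0.0):
    """Per-regime OLS slope of y_t on y_{t-1}, splitting at a known threshold."""
    x, z = series[:-1], series[1:]
    slopes = {}
    for name, mask in (("low", x <= threshold), ("high", x > threshold)):
        slopes[name] = float(x[mask] @ z[mask] / (x[mask] @ x[mask]))
    return slopes

slopes = fit_threshold_ar(y)
print(slopes)  # "low" slope near 0.8, "high" slope near 0.2
```

    Fitting each regime separately recovers the two distinct persistence levels that a single linear AR model would average away.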

    Measuring causality between volatility and returns with high-frequency data

    We use high-frequency data to study the dynamic relationship between volatility and equity returns. We provide evidence on two alternative mechanisms of interaction between returns and volatilities: the leverage effect and the volatility feedback effect. The leverage hypothesis asserts that return shocks lead to changes in conditional volatility, while the volatility feedback effect theory assumes that return shocks can be caused by changes in conditional volatility through a time-varying risk premium. On observing that a central difference between these alternative explanations lies in the direction of causality, we consider vector autoregressive models of returns and realized volatility and we measure these effects, along with the time lags involved, through short-run and long-run causality measures proposed in Dufour and Taamouti (2008), as opposed to simple correlations. We analyze 5-minute observations on S&P 500 Index futures contracts, the associated realized volatilities (before and after filtering jumps through the bispectrum) and implied volatilities. Using only returns and realized volatility, we find a weak dynamic leverage effect for the first four hours at the hourly frequency and a strong dynamic leverage effect for the first three days at the daily frequency. The volatility feedback effect appears to be negligible at all horizons. By contrast, when implied volatility is considered, a volatility feedback becomes apparent, whereas the leverage effect is almost the same. We interpret these results as evidence that implied volatility contains important information on future volatility, through its nonlinear relation with option prices, which are themselves forward-looking. In addition, we study the dynamic impact of news on returns and volatility, again through causality measures. First, to detect possible dynamic asymmetry, we separate good from bad return news and find a much stronger impact of bad return news (as opposed to good return news) on volatility. Second, we introduce a concept of news based on the difference between implied and realized volatilities (the variance risk premium) and we find that a positive variance risk premium (an anticipated increase in variance) has more impact on returns than a negative variance risk premium.
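    The realized volatility input to such an analysis is built by summing squared intraday returns; a minimal sketch follows, with hypothetical numbers and without the bispectrum jump-filtering step described in the abstract.

```python
import numpy as np

# Realized variance for day t is the sum of squared intraday log returns:
# RV_t = sum_i r_{t,i}^2. Jump filtering is omitted in this sketch.
def realized_variance(intraday_returns):
    r = np.asarray(intraday_returns, dtype=float)
    return float(np.sum(r ** 2))

# Hypothetical day: 78 five-minute returns (6.5 trading hours) drawn with a
# per-interval volatility of 0.1%.
rng = np.random.default_rng(1)
r = rng.normal(scale=0.001, size=78)
print(realized_variance(r))  # in expectation 78 * 0.001**2 = 7.8e-05
```

    The daily series of such sums is what enters the vector autoregression alongside returns.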

    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
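    A minimal plug-in version of this delay-sensitive transfer entropy for discrete series can be sketched as follows. The source enters with lag u while the conditioning stays on the target's immediately preceding state (target embedding dimension 1 here); the toy coupling (a 3-step delayed copy with 10% flip noise) and all parameter values are illustrative, and real analyses would use embedded states and bias correction.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def delayed_te(x, y, u):
    """Plug-in delayed transfer entropy, in bits, for u >= 1:
    TE_u(X -> Y) = I(Y_t ; X_{t-u} | Y_{t-1})."""
    triples = [(y[t], x[t - u], y[t - 1]) for t in range(u, len(y))]
    m = len(triples)
    n_abc = Counter(triples)
    n_bc = Counter((b, c) for _, b, c in triples)
    n_ac = Counter((a, c) for a, _, c in triples)
    n_c = Counter(c for _, _, c in triples)
    # I(A;B|C) = sum p(a,b,c) log2[ p(a,b,c) p(c) / (p(b,c) p(a,c)) ];
    # the common denominator m cancels inside the log.
    return sum(
        (k / m) * np.log2(k * n_c[c] / (n_bc[(b, c)] * n_ac[(a, c)]))
        for (a, b, c), k in n_abc.items()
    )

# Toy system: Y copies X with a 3-step delay and 10% flip noise.
n = 20000
x = rng.integers(0, 2, size=n)
y = np.zeros(n, dtype=int)
for t in range(3, n):
    y[t] = x[t - 3] ^ (rng.random() < 0.1)

te = {u: delayed_te(x, y, u) for u in (1, 2, 3, 4)}
print(te)  # the estimate peaks at the true interaction delay u = 3
```

    Scanning the candidate delay u and taking the maximizer is the basic recipe for recovering the interaction delay.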

    A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks

    Situational awareness in vehicular networks could be substantially improved by utilizing reliable trajectory prediction methods. More precise situational awareness, in turn, results in notably better performance of critical safety applications, such as Forward Collision Warning (FCW), as well as comfort applications like Cooperative Adaptive Cruise Control (CACC). Therefore, the vehicle trajectory prediction problem needs to be deeply investigated in order to arrive at an end-to-end framework with the precision required by the safety applications' controllers. This problem has been tackled in the literature using different methods. However, machine learning, a promising and emerging field with remarkable potential for time series prediction, has not been explored enough for this purpose. In this paper, a two-layer neural network-based system is developed which predicts the future values of vehicle parameters, such as velocity, acceleration, and yaw rate, in the first layer and then predicts the two-dimensional, i.e. longitudinal and lateral, trajectory points based on the first layer's outputs. The performance of the proposed framework has been evaluated in realistic cut-in scenarios from the Safety Pilot Model Deployment (SPMD) dataset, and the results show a noticeable improvement in prediction accuracy in comparison with the kinematics model, which is the model predominantly employed by the automotive industry. Both ideal and non-ideal communication circumstances have been investigated in our system evaluation. For the non-ideal case, an estimation step is included in the framework before the parameter prediction block to handle packet drops or sensor failures and reconstruct the time series of vehicle parameters at a desirable frequency.
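    The kinematics baseline mentioned above can be illustrated with a constant turn rate and velocity (CTRV) roll-out. This is a generic textbook model, not necessarily the paper's exact baseline, and all numbers below are hypothetical.

```python
import numpy as np

# CTRV kinematics: hold the current speed v and yaw rate w constant and
# roll the state forward to get 2-D (longitudinal/lateral) trajectory points.
def ctrv_predict(x0, y0, heading, v, w, dt, steps):
    points = []
    for k in range(1, steps + 1):
        t = k * dt
        if abs(w) < 1e-9:  # straight-line limit as yaw rate -> 0
            x = x0 + v * t * np.cos(heading)
            y = y0 + v * t * np.sin(heading)
        else:
            x = x0 + (v / w) * (np.sin(heading + w * t) - np.sin(heading))
            y = y0 + (v / w) * (np.cos(heading) - np.cos(heading + w * t))
        points.append((x, y))
    return points

# Straight driving at 20 m/s for 3 s, predicted at 10 Hz.
traj = ctrv_predict(0.0, 0.0, 0.0, v=20.0, w=0.0, dt=0.1, steps=30)
print(traj[-1])  # about (60.0, 0.0): 20 m/s for 3 s along the x-axis
```

    A learned predictor is compared against exactly this kind of roll-out, which cannot anticipate maneuvers such as cut-ins.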

    What can we tell about monetary policy synchronization and interdependence over the 2007-2009 global financial crisis?

    We investigate the synchronization and nonlinear adjustment dynamics of short-term interest rates for France, the UK and the US using the bi-directional feedback measures proposed by Geweke (1982) and appropriate smooth transition error-correction models (STECM). We find strong evidence of continual increases in bilateral synchronization of these rates from 2005 to 2009 as well as of their lead-lag causal interactions with a slight dominance of the US rate. Our results also indicate that short-term interest rates converge towards a common long-run equilibrium in a nonlinear manner and their time dynamics exhibit regime-switching behavior.
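    The smooth-transition mechanism in an STECM can be sketched via its logistic transition function: the speed of adjustment toward equilibrium moves continuously between two regimes as the transition variable crosses a threshold. The coefficients and transition parameters below are illustrative, not estimates from the paper.

```python
import numpy as np

# Logistic transition function G(s; gamma, c) used in smooth transition models.
def logistic_transition(s, gamma=5.0, c=0.0):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def adjustment_speed(s, alpha_low=-0.05, alpha_high=-0.40, gamma=5.0, c=0.0):
    """Regime-dependent error-correction coefficient:
    alpha(s) = alpha_low + G(s) * (alpha_high - alpha_low)."""
    g = logistic_transition(s, gamma, c)
    return alpha_low + g * (alpha_high - alpha_low)

print(adjustment_speed(-2.0))  # near -0.05: slow correction far below c
print(adjustment_speed(+2.0))  # near -0.40: fast correction far above c
```

    A larger gamma makes the switch between slow and fast adjustment sharper, approaching a discrete threshold model in the limit.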

    USING TRAJECTORIES FROM A BIVARIATE GROWTH CURVE OF COVARIATES IN A COX MODEL ANALYSIS

    In many maintenance treatment trials, patients are first enrolled into an open treatment before they are randomized into treatment groups. During this period, patients are followed over time with their responses measured longitudinally. This design is very common in today's public health studies of the prevention of many diseases. Using mixed model theory, one can characterize these data using a wide array of across-subject models. A state-space representation of the mixed model and use of the Kalman filter allow more flexibility in choosing the within-error correlation structure, even in the presence of missing and unequally spaced observations. Furthermore, using the state-space approach, one can avoid inverting large matrices, resulting in efficient computations. Estimated trajectories from these models can be used as predictors in a survival analysis in judging the efficacy of the maintenance treatments. The statistical problem lies in accounting for the estimation error in these predictors. We considered a bivariate growth curve where the longitudinal responses were unequally spaced and assumed that the within-subject errors followed a continuous first-order autoregressive (CAR(1)) structure. A simulation study was conducted to validate the model. We developed a method where estimated random effects for each subject from a bivariate growth curve were used as predictors in the Cox proportional hazards model, using the full likelihood based on the conditional expectation of covariates to adjust for the estimation errors in the predictor variables. Simulation studies indicated that error-corrected estimators for model parameters are mostly less biased when compared with the naive regression without accounting for estimation errors. These results hold true in Cox models with one or two predictors. An illustrative example is provided with data from a maintenance treatment trial for major depression in an elderly population. A Visual Fortran 90 and a SAS IML program are developed.
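    The CAR(1) structure accommodates unequal spacing because the within-subject correlation decays with the actual time gap between measurements. A minimal sketch of that covariance, with illustrative variance and autocorrelation parameters, is:

```python
import numpy as np

# Continuous-time AR(1) within-subject covariance for unequally spaced
# measurement times: Cov(e_i, e_j) = sigma2 * phi ** |t_i - t_j|, 0 < phi < 1.
def car1_covariance(times, sigma2=1.0, phi=0.8):
    t = np.asarray(times, dtype=float)
    return sigma2 * phi ** np.abs(t[:, None] - t[None, :])

# Hypothetical visit times (in weeks) with an irregular gap.
cov = car1_covariance([0.0, 1.0, 1.5, 4.0])
print(np.round(cov, 4))
```

    Observations half a week apart are more strongly correlated (phi ** 0.5) than observations a full week apart (phi), which a discrete-time AR(1) indexed by visit number could not express.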

    A TWO-STEP ESTIMATOR FOR A SPATIAL LAG MODEL OF COUNTS: THEORY, SMALL SAMPLE PERFORMANCE AND AN APPLICATION

    Several spatial econometric approaches are available to model spatially correlated disturbances in count models, but there are at present no structurally consistent count models incorporating spatial lag autocorrelation. A two-step, limited information maximum likelihood estimator is proposed to fill this gap. The estimator is developed assuming a Poisson distribution, but can be extended to other count distributions. The small sample properties of the estimator are evaluated with Monte Carlo experiments. Simulation results suggest that the spatial lag count estimator achieves gains in terms of bias over the aspatial version as spatial lag autocorrelation and sample size increase. An empirical example deals with the location choice of single-unit start-up firms in the manufacturing industry in the US between 2000 and 2004. The empirical results suggest that in the dynamic process of firm formation, counties dominated by firms exhibiting (internal) increasing returns to scale are at a relative disadvantage even if localization economies are present.
    Keywords: count model, location choice, manufacturing, Poisson, spatial econometrics

    Does money matter in inflation forecasting?

    This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
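    The "finite memory" of a kernel predictor can be illustrated by fitting Gaussian-kernel ridge regression on a sliding window of recent observations. Note this window-refitting shortcut is a simplification of kernel recursive least squares, which updates the fit recursively; the chaotic logistic-map example and all parameters here are illustrative.

```python
import numpy as np

def gauss_kernel(A, B, bw):
    # Gaussian kernel matrix between row-stacked inputs A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def kernel_predict(x_win, y_win, x_new, lam=1e-6, bw=0.2):
    # Kernel ridge regression fit on the window, evaluated at x_new.
    K = gauss_kernel(x_win, x_win, bw)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_win)
    return gauss_kernel(x_new, x_win, bw) @ alpha

# Toy nonlinear autoregression: the logistic map y_t = 4 y_{t-1} (1 - y_{t-1}).
y = np.empty(120)
y[0] = 0.3
for t in range(1, 120):
    y[t] = 4.0 * y[t - 1] * (1.0 - y[t - 1])

inputs, targets = y[:-1, None], y[1:]
# Finite memory: keep only the 100 most recent pairs; hold out the last one.
x_win, y_win = inputs[-101:-1], targets[-101:-1]
pred = kernel_predict(x_win, y_win, inputs[-1:])[0]
print(pred, targets[-1])  # one-step prediction close to the true next value
```

    The fixed window plays the role of the finite memory: only recent pairs shape the prediction, in contrast with a recurrent network's potentially unbounded input memory.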