64 research outputs found
Econometric Inference, Cyclical Fluctuations, and Superior Information
This paper presents and assesses a procedure to estimate conventional parameters characterizing fluctuations at the business cycle frequency when the economic agents' information set is superior to the econometrician's. Specifically, we first generalize the conditions under which the econometrician can estimate these "cyclical fluctuation" parameters from augmented laws of motion for forcing variables that fully recover the agents' superior information. Second, we document the econometric properties of the estimates when the augmented laws of motion are possibly misspecified. Third, we assess the ability of certain information criteria to detect the presence of superior information.
Keywords: block bootstrap, hidden variables, laws of motion for forcing variables, Monte Carlo simulations
Equity Premia and State-Dependent Risks
This paper analyzes the empirical relations between equity premia and state-dependent consumption and market risks. These relations are derived from a flexible specification of the CCAPM with a mixture distribution, which admits the existence of two regimes. Focusing on the market return, we find that consumption and market risks are priced in each state and that the responses of expected equity premia to these risks are state dependent. Extending the analysis to various portfolio returns, we show that the responses to downside consumption risks are the most important: they are almost always statistically larger than the responses to upside consumption risks, and they are much larger for smaller firms and for firms facing greater financial distress.
Keywords: mixture of truncated normals, downside and upside consumption and market risks
Macroeconomic Effects of Terrorist Shocks in Israel
This paper estimates a structural vector autoregression model to assess the dynamic effects of terrorism on output and prices in Israel over the post-1985 period. Long-run restrictions are used to obtain an interpretation of the effects of terrorism in terms of aggregate demand and supply curves. The empirical responses of output and prices suggest that the immediate effects of terrorism are similar to those associated with a negative demand shock. Such a leftward shift of the aggregate demand curve is consistent with the adverse effects of terrorism on most components of aggregate expenditure, which have been documented in previous studies. In contrast, the long-term consequences of terrorism are similar to those related to a negative supply shock. Such a leftward shift of the long-run aggregate supply curve suggests the potential existence of adverse effects of terrorism on the determinants of potential output, which have not been considered so far.
Keywords: goods market; output, price, and terrorist indices; structural vector autoregressions; long-run identifying restrictions; dynamic responses and variance decompositions
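The long-run identification scheme mentioned above (restrictions on the long-run impact matrix of a structural VAR) can be sketched on simulated data. This is a generic Blanchard-Quah-style illustration, not the paper's own model or estimates; all data and variable names here are made up for the example.

```python
import numpy as np

# In a bivariate VAR(1) y_t = A y_{t-1} + u_t, structural shocks
# e_t = B^{-1} u_t are identified by requiring the long-run impact
# matrix (I - A)^{-1} B to be lower triangular: the second shock
# (a "demand-like" shock) has no long-run effect on the first variable.
rng = np.random.default_rng(2)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
T = 5000
y = np.zeros((T + 1, 2))
for t in range(1, T + 1):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

# OLS estimates of the VAR coefficients and residual covariance.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma = U.T @ U / (T - 2)

# Long-run covariance C = (I-A)^{-1} Sigma (I-A)^{-T}; its Cholesky
# factor is the lower-triangular long-run impact matrix, from which
# the contemporaneous impact matrix B is recovered.
IA_inv = np.linalg.inv(np.eye(2) - A_hat)
L = np.linalg.cholesky(IA_inv @ Sigma @ IA_inv.T)
B = (np.eye(2) - A_hat) @ L
long_run = IA_inv @ B
print(np.round(long_run, 3))   # upper-right entry is (numerically) zero
```

By construction B satisfies B B' = Sigma exactly, so the zero restriction on the long-run response does the identifying work, mirroring the aggregate-demand interpretation in the abstract.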
Functional Kernel Density Estimation: Point and Fourier Approaches to Time Series Anomaly Detection
We present an unsupervised method to detect anomalous time series among a collection of time series. To do so, we extend traditional Kernel Density Estimation for estimating probability distributions in Euclidean space to Hilbert spaces. The estimated probability densities we derive can be obtained formally through treating each series as a point in a Hilbert space, placing a kernel at those points, and summing the kernels (a “point approach”), or through using Kernel Density Estimation to approximate the distributions of Fourier mode coefficients to infer a probability density (a “Fourier approach”). We refer to these approaches as Functional Kernel Density Estimation for Anomaly Detection as they both yield functionals that can score a time series for how anomalous it is. Both methods naturally handle missing data and apply to a variety of settings, performing well when compared with an outlyingness score derived from a boxplot method for functional data, with a Principal Component Analysis approach for functional data, and with the Functional Isolation Forest method. We illustrate the use of the proposed methods with aviation safety report data from the International Air Transport Association (IATA).
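The “point approach” can be sketched in a finite-dimensional setting: treat each equal-length, fully observed series as a point, place a Gaussian kernel at each point, and score a series by its leave-one-out density. The `fkde_scores` helper below is a hypothetical illustration of the idea, not the paper's implementation.

```python
import numpy as np

def fkde_scores(series, bandwidth=1.0):
    """Score each series by a leave-one-out kernel density estimate.

    Treats each series as a point in Euclidean space (a finite-dimensional
    stand-in for the Hilbert-space point approach) and sums Gaussian
    kernels centred at the other series. Lower density means more
    anomalous, so the negative log density serves as the anomaly score.
    """
    X = np.asarray(series, dtype=float)          # shape (n_series, n_times)
    n = len(X)
    # Pairwise squared L2 distances between series.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)                     # leave-one-out
    density = K.sum(axis=1) / (n - 1)
    return -np.log(density + 1e-300)             # guard against log(0)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
normal = np.sin(t) + 0.1 * rng.standard_normal((20, 50))   # 20 similar series
anomaly = np.cos(t)[None, :]                               # one outlier
scores = fkde_scores(np.vstack([normal, anomaly]), bandwidth=2.0)
print(scores.argmax())   # index 20: the cosine series stands out
```

Handling missing data, as the paper does, would require restricting each kernel to the commonly observed coordinates; that refinement is omitted here.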
Prediction intervals for travel time on transportation networks
Estimating travel time is essential for making travel decisions in transportation networks. Empirically, travel time on a single road segment is well studied, but how to aggregate such information over many edges to arrive at the distribution of travel time over a route remains theoretically challenging. Understanding the travel-time distribution can help resolve many fundamental problems in transportation, such as quantifying travel uncertainty. We develop a novel statistical perspective on specific types of dynamical processes that mimic the behavior of travel time on real-world networks. We show that, under general conditions, travel time normalized by distance follows a Gaussian distribution with route-invariant (universal) location and scale parameters. We develop efficient inference methods for these parameters, with which we propose asymptotic universal confidence and prediction intervals for travel time. We further extend our theory to include road-segment-level information, constructing route-specific location and scale parameter sequences that produce tighter route-specific Gaussian-based prediction intervals. We illustrate our methods with a real-world case study using precollected mobile GPS data, where we show that the route-specific and route-invariant intervals both achieve the 95% theoretical coverage levels, with the former yielding tighter bounds that also outperform competing models.
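The route-invariant interval can be sketched as follows: if distance-normalized travel time is Gaussian with universal location and scale, estimating those two parameters from observed trips yields a prediction interval for any new trip length. The data and parameter values below are simulated for illustration only.

```python
import numpy as np

# If travel time T over a route of length d satisfies T/d ~ N(mu, sigma^2)
# with route-invariant mu and sigma, then a 95% prediction interval for a
# new trip of length d_new is d_new * (mu_hat +/- 1.96 * sigma_hat).
rng = np.random.default_rng(1)
mu_true, sigma_true = 2.0, 0.3                  # minutes per km (simulated)
distances = rng.uniform(1, 20, size=500)        # observed trip lengths (km)
times = distances * rng.normal(mu_true, sigma_true, size=500)

normalized = times / distances                  # distance-normalized times
mu_hat = normalized.mean()
sigma_hat = normalized.std(ddof=1)

d_new = 10.0                                    # a new 10 km trip
lo = d_new * (mu_hat - 1.96 * sigma_hat)
hi = d_new * (mu_hat + 1.96 * sigma_hat)
print(f"95% PI for a {d_new:.0f} km trip: [{lo:.1f}, {hi:.1f}] minutes")
```

The route-specific intervals in the paper sharpen this by replacing the single (mu, sigma) pair with segment-level parameter sequences along the route.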
Improving the generalizability and robustness of large-scale traffic signal control
A number of deep reinforcement-learning (RL) approaches have been proposed to control traffic signals. In this work, we study the robustness of such methods along two axes. First, sensor failures and GPS occlusions create missing-data challenges, and we show that recent methods remain brittle in the face of these missing data. Second, we provide a more systematic study of the generalization ability of RL methods to new networks with different traffic regimes, and again identify the limitations of recent approaches. We then propose combining distributional and vanilla reinforcement learning through a policy ensemble. Building upon a state-of-the-art model that uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach. In particular, we use implicit quantile networks (IQN) to model the state-action return distribution with quantile regression. For traffic signal control problems, an ensemble of standard RL and DisRL yields superior performance across different scenarios, including different levels of missing sensor data and traffic flow patterns. Furthermore, the learning scheme of the resulting model improves zero-shot transferability to different road network structures, including both synthetic networks and real-world networks (e.g., Luxembourg, Manhattan). We conduct extensive experiments comparing our approach to multi-agent reinforcement learning and traditional transportation approaches. Results show that the proposed method improves robustness and generalizability in the face of missing data, varying road networks, and traffic flows.
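The quantile-regression ingredient of the DisRL component can be sketched with the quantile Huber loss commonly used in IQN-style training. This is a generic NumPy illustration of that loss, not the paper's network or training code; the inputs are made-up numbers.

```python
import numpy as np

def quantile_huber_loss(td_errors, taus, kappa=1.0):
    """Quantile Huber loss used in IQN-style distributional RL.

    td_errors: temporal-difference errors between target and predicted
    quantile values; taus: the sampled quantile levels in (0, 1).
    """
    abs_err = np.abs(td_errors)
    # Huber loss: quadratic near zero, linear beyond kappa.
    huber = np.where(abs_err <= kappa,
                     0.5 * td_errors ** 2,
                     kappa * (abs_err - 0.5 * kappa))
    # Asymmetric quantile weighting: over- and under-estimates are
    # penalized according to the quantile level tau.
    weight = np.abs(taus - (td_errors < 0).astype(float))
    return (weight * huber / kappa).mean()

taus = np.array([0.1, 0.5, 0.9])
errors = np.array([1.0, -0.5, 2.0])
loss = quantile_huber_loss(errors, taus)
print(round(loss, 4))   # 0.4875
```

Minimizing this loss over many sampled taus drives the network's outputs toward the quantiles of the return distribution, which is what lets the DisRL member of the ensemble represent return uncertainty.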
Ensemble Methods for Survival Data with Time-Varying Covariates
Survival data with time-varying covariates are common in practice. If relevant, such covariates can improve the estimation of a survival function. However, the traditional survival forests - conditional inference forest, relative risk forest, and random survival forest - accommodate only time-invariant covariates.
We generalize the conditional inference and relative risk forests to allow time-varying covariates. We compare their performance with that of the extended Cox model, a commonly used method, and with the transformation forest method, designed to detect non-proportional-hazards deviations and adapted here to accommodate time-varying covariates, through a comprehensive simulation study in which the Kaplan-Meier estimate serves as a benchmark and the integrated L2 difference between the true and estimated survival functions is used for evaluation.
In general, the performance of the two proposed forests substantially improves over the Kaplan-Meier estimate. Under the proportional-hazards setting, the best method is always one of the two proposed forests, while under the non-proportional-hazards setting it is the adapted transformation forest. We use K-fold cross-validation to choose between the methods, which is shown to be an effective tool for providing guidance in practice. The performance of the proposed forest methods for time-invariant covariate data is broadly similar to that found for time-varying covariate data.
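Time-varying covariates are commonly encoded in counting-process (start, stop] format, the representation the extended Cox model operates on: each subject contributes one row per interval over which their covariates are constant. The toy records below are invented for illustration and are not data from the study.

```python
import pandas as pd

# Counting-process (start, stop] encoding of time-varying covariates:
# subject 1's covariate x changes at t = 3, so they contribute two rows;
# the event indicator is 1 only on the interval where the event occurs.
records = [
    {"id": 1, "start": 0, "stop": 3, "event": 0, "x": 0.2},
    {"id": 1, "start": 3, "stop": 7, "event": 1, "x": 0.9},  # x changed at t=3
    {"id": 2, "start": 0, "stop": 5, "event": 0, "x": 0.4},  # censored at t=5
]
df = pd.DataFrame(records)
print(df[df["event"] == 1])   # the interval on which subject 1's event occurs
```

A forest generalized to time-varying covariates must respect this structure, evaluating candidate splits against the covariate value in force on each interval rather than a single baseline value per subject.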