Government Policy and Ownership of Financial Assets
Since World War II, direct stock ownership by households across the globe has largely been replaced by indirect stock ownership by financial institutions. We argue that tax policy is the driving force. Using long time-series from eight countries, we show that the fraction of household ownership decreases with measures of the tax benefits of holding stocks inside tax-deferred plans. This finding is important for policy considerations on effective taxation and for financial economics research on the long-term effects of taxation on corporate finance and asset prices.
Nursing home residents make a difference – The overestimation of saving rates at older ages
While life-cycle theory makes the clear prediction that people dissave in old age, this prediction is not borne out by data from many countries. Various explanations have been suggested for this discrepancy. This paper sheds more light on the effect of excluding institutionalized individuals when estimating saving rates in old age, a conceptual aspect often mentioned but never investigated. This group in particular is expected to decumulate wealth, since nursing home expenses net of private (and public) insurance exceed disposable income on average. This paper uses the Health and Retirement Study (HRS) for the USA and the Income and Expenditure Survey (EVS) for Germany to show that there is an increasing overestimation of saving rates from age 75 on if institutionalized households are not included. In the USA, the overestimation of the mean (median) saving rate is 3.3 percentage points (4.3pp) at age 80, 5.4pp (9.4pp) at age 90, and even more beyond age 90. The overestimation of the German mean saving rate increases to almost 6pp at age 90. This strong overestimation is driven by the fact that nursing home residents sharply reduce their wealth holdings. In the USA, the representative median single nursing home resident reduces wealth holdings by 90% over a two-year period; the representative mean single nursing home resident diminishes total net wealth by 19%. Dissaving is less pronounced for couples. The ongoing aging of industrialized populations and the associated increase in the fraction of the population living in nursing homes will make it more important to include the nursing home population when estimating aggregate saving rates in micro-empirical studies.
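The selection effect described in this abstract is a simple weighted-average bias. The sketch below illustrates it with entirely hypothetical numbers (the population shares and saving rates are invented for illustration, not taken from the HRS or EVS):

```python
# Illustrative only (hypothetical numbers): excluding nursing home
# residents, who dissave heavily, inflates the estimated mean saving
# rate of the elderly population.

def mean_saving_rate(groups):
    """Population-weighted mean saving rate.

    groups: list of (population_share, saving_rate) pairs.
    """
    total = sum(share for share, _ in groups)
    return sum(share * rate for share, rate in groups) / total

# Hypothetical age-90 population: 90% community-dwelling households
# saving 2% of income, 10% nursing home residents dissaving 40%.
community = (0.90, 0.02)
nursing_home = (0.10, -0.40)

with_institutionalized = mean_saving_rate([community, nursing_home])
without_institutionalized = mean_saving_rate([community])

overestimation_pp = (without_institutionalized - with_institutionalized) * 100
print(f"Overestimation: {overestimation_pp:.1f} percentage points")
```

Even a small institutionalized share with strongly negative saving shifts the population mean noticeably, which is why the bias grows with the nursing home population share at high ages.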
Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting
Diffusion models have achieved state-of-the-art performance in generative
modeling tasks across various domains. Prior works on time series diffusion
models have primarily focused on developing conditional models tailored to
specific forecasting or imputation tasks. In this work, we explore the
potential of task-agnostic, unconditional diffusion models for several time
series applications. We propose TSDiff, an unconditionally trained diffusion
model for time series. Our proposed self-guidance mechanism enables
conditioning TSDiff for downstream tasks during inference, without requiring
auxiliary networks or altering the training procedure. We demonstrate the
effectiveness of our method on three different time series tasks: forecasting,
refinement, and synthetic data generation. First, we show that TSDiff is
competitive with several task-specific conditional forecasting methods
(predict). Second, we leverage the learned implicit probability density of
TSDiff to iteratively refine the predictions of base forecasters with reduced
computational overhead over reverse diffusion (refine). Notably, the generative
performance of the model remains intact -- downstream forecasters trained on
synthetic samples from TSDiff outperform forecasters that are trained on
samples from other state-of-the-art generative time series models, occasionally
even outperforming models trained on real data (synthesize).
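The self-guidance idea, conditioning an unconditional model at inference time only, can be caricatured as follows. This is a hypothetical toy sketch, not the authors' code: the "denoiser" is a stand-in function, and the guidance term is a simple quadratic data-fit gradient on the observed part of the series.

```python
import numpy as np

# Toy sketch of observation self-guidance: an unconditional denoiser is
# steered during the reverse process by the gradient of a data-fit term
# on observed values, with no auxiliary network and no retraining.

rng = np.random.default_rng(0)

def denoiser(x_t, t):
    # Stand-in for a trained unconditional diffusion model's one-step
    # denoised estimate; here just shrinkage toward zero.
    return x_t * (1.0 - t)

def guided_reverse_step(x_t, t, y_obs, obs_mask, scale=0.5):
    """One reverse step with self-guidance toward observed values."""
    x0_hat = denoiser(x_t, t)
    # Gradient of -0.5 * ||mask * (x0_hat - y_obs)||^2 w.r.t. x0_hat.
    guidance = -obs_mask * (x0_hat - y_obs)
    return x0_hat + scale * guidance

series_len = 8
y_obs = np.ones(series_len)
obs_mask = np.concatenate([np.ones(4), np.zeros(4)])  # first half observed

x = rng.standard_normal(series_len)
for t in np.linspace(0.9, 0.0, 10):
    x = guided_reverse_step(x, t, y_obs, obs_mask)

# Observed positions are pulled toward y_obs; unobserved ones stay free.
print(np.abs(x[:4] - 1.0).max(), np.abs(x[4:]).max())
```

The same unconditional sampler thus serves forecasting (condition on history), refinement (condition on a base forecast), or pure generation (no guidance), which is the task-agnostic property the abstract emphasizes.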
Amelia II: A Program for Missing Data
Amelia II is a complete R package for multiple imputation of missing data. The package implements a new expectation-maximization with bootstrapping algorithm that is faster, handles larger numbers of variables, and is far easier to use than various Markov chain Monte Carlo approaches, while giving essentially the same answers. The program also improves imputation models by allowing researchers to put Bayesian priors on individual cell values, thereby incorporating a great deal of potentially valuable information. It also includes features to accurately impute cross-sectional datasets, individual time series, or sets of time series for different cross-sections. A full set of graphical diagnostics is also available. The program is easy to use, and the simplicity of the algorithm makes it far more robust; both a simple command-line interface and an extensive graphical user interface are included.
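The core bootstrap idea behind the algorithm, drawing each imputation's parameters from a bootstrap resample so that estimation uncertainty propagates into the completed datasets, can be sketched in a deliberately simplified form. This is a toy illustration, not Amelia's implementation: with only one partially observed variable, the model fit reduces to a closed-form regression rather than a full EM run, and all data are synthetic.

```python
import numpy as np

# Toy sketch of bootstrap-based multiple imputation: each of the m
# completed datasets fits its model on a different bootstrap resample,
# so parameter uncertainty varies across imputations.

rng = np.random.default_rng(42)

def impute_once(data, rng):
    """Return one completed copy of `data` (columns: x1, x2)."""
    complete = data[~np.isnan(data[:, 1])]
    boot = complete[rng.integers(0, len(complete), len(complete))]
    # The 'EM' step degenerates here to a closed-form fit, because the
    # bootstrap sample is fully observed: regress x2 on x1.
    slope, intercept = np.polyfit(boot[:, 0], boot[:, 1], 1)
    resid_sd = np.std(boot[:, 1] - (slope * boot[:, 0] + intercept))
    filled = data.copy()
    miss = np.isnan(data[:, 1])
    filled[miss, 1] = (slope * data[miss, 0] + intercept
                       + rng.normal(0, resid_sd, miss.sum()))
    return filled

# Synthetic data: x2 = 2*x1 + noise, with the first 40 x2 values missing.
n = 200
x1 = rng.normal(size=n)
x2 = 2 * x1 + rng.normal(scale=0.5, size=n)
x2[:40] = np.nan
data = np.column_stack([x1, x2])

imputations = [impute_once(data, rng) for _ in range(5)]  # m = 5
```

Downstream analyses would then be run on each of the m completed datasets and combined, e.g. with Rubin's rules.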
Sequence-to-Sequence Imputation of Missing Sensor Data
Although the sequence-to-sequence (encoder-decoder) model is considered the
state-of-the-art in deep learning sequence models, there is little research
into using this model for recovering missing sensor data. The key challenge is
that the missing sensor data problem typically comprises three sequences (a
sequence of observed samples, followed by a sequence of missing samples,
followed by another sequence of observed samples), whereas the
sequence-to-sequence model considers only two sequences (an input sequence and
an output sequence). We address this problem with a novel formulation of the
sequence-to-sequence model: a forward RNN encodes the data observed before the
missing sequence and a backward RNN encodes the data observed after the missing
sequence. A decoder then combines the two encodings to predict the missing
data. We demonstrate that this model produces the lowest errors in 12% more
cases than the current state-of-the-art.
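The three-sequence architecture can be sketched in a few lines of NumPy. This is a hypothetical, untrained stub, not the paper's model: the weights are random, the decoder is a single linear readout rather than a recurrent network, and all shapes are invented for illustration.

```python
import numpy as np

# Minimal sketch of the three-sequence setup: a forward RNN encodes the
# samples before the gap, a backward RNN encodes the samples after it,
# and a decoder predicts the gap from both encodings.

rng = np.random.default_rng(0)
H = 8  # hidden size

def rnn_encode(seq, W_x, W_h):
    """Simple tanh RNN over a scalar sequence; returns final state."""
    h = np.zeros(H)
    for x in seq:
        h = np.tanh(W_x * x + W_h @ h)
    return h

W_x_f, W_h_f = rng.normal(size=H), rng.normal(size=(H, H)) * 0.1
W_x_b, W_h_b = rng.normal(size=H), rng.normal(size=(H, H)) * 0.1
W_out = rng.normal(size=2 * H) * 0.1

before = np.sin(np.linspace(0, 1, 10))  # observed prefix
after = np.sin(np.linspace(2, 3, 10))   # observed suffix
gap_len = 5                             # missing middle

h_fwd = rnn_encode(before, W_x_f, W_h_f)
h_bwd = rnn_encode(after[::-1], W_x_b, W_h_b)  # encode suffix in reverse

# Decoder stub: one output per missing step from the joined encodings
# (a trained model would use a recurrent decoder here).
context = np.concatenate([h_fwd, h_bwd])
prediction = np.array([W_out @ context for _ in range(gap_len)])
print(prediction.shape)
```

The key structural point is that information from both sides of the gap reaches the decoder, unlike a standard encoder-decoder that only conditions on the input preceding the output.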
Towards better traffic volume estimation: Tackling both underdetermined and non-equilibrium problems via a correlation-adaptive graph convolution network
Traffic volume is an indispensable ingredient to provide fine-grained
information for traffic management and control. However, due to limited
deployment of traffic sensors, obtaining full-scale volume information is far
from easy. Existing works on this topic primarily focus on improving the
overall estimation accuracy of a particular method and ignore the underlying
challenges of volume estimation, thereby having inferior performances on some
critical tasks. This paper studies two key problems with regard to traffic
volume estimation: (1) underdetermined traffic flows caused by undetected
movements, and (2) non-equilibrium traffic flows arising from congestion
propagation. Here we demonstrate a graph-based deep learning method that can
offer a data-driven, model-free, and correlation-adaptive approach to tackle the
above issues and perform accurate network-wide traffic volume estimation.
Particularly, in order to quantify the dynamic and nonlinear relationships
between traffic speed and volume for the estimation of underdetermined flows, a
speed-pattern-adaptive adjacency matrix based on graph attention is developed
and integrated into the graph convolution process to capture non-local
correlations between sensors. To measure the impacts of non-equilibrium flows,
a temporally masked and clipped attention combined with a gated temporal
convolution layer is customized to capture time-asynchronous correlations
between upstream and downstream sensors. We then evaluate our model on a
real-world highway traffic volume dataset and compare it with several benchmark
models. It is demonstrated that the proposed model achieves high estimation
accuracy even under a 20% sensor coverage rate and outperforms other baselines
significantly, especially on underdetermined and non-equilibrium flow
locations. Furthermore, comprehensive quantitative model analyses are also
carried out to justify the model designs.
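The correlation-adaptive adjacency idea, attention over sensor speed patterns producing a data-driven, non-local graph, can be sketched as follows. This is a hypothetical illustration, not the paper's architecture: weights are random, dimensions are invented, and the "volume features" are synthetic.

```python
import numpy as np

# Sketch of a correlation-adaptive adjacency: attention scores computed
# from sensors' speed patterns form a row-stochastic adjacency matrix,
# which then weights one graph-convolution aggregation step.

rng = np.random.default_rng(1)
n_sensors, T, d = 6, 24, 4

speeds = rng.normal(size=(n_sensors, T))  # one speed pattern per sensor
W_q = rng.normal(size=(T, d)) * 0.1       # query projection (untrained)
W_k = rng.normal(size=(T, d)) * 0.1       # key projection (untrained)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Attention over speed patterns: correlations are learned from data, so
# the adjacency is not limited to physically neighboring sensors.
scores = (speeds @ W_q) @ (speeds @ W_k).T / np.sqrt(d)
A = softmax(scores, axis=1)  # rows sum to 1

# One graph-convolution step: each sensor's volume feature becomes a
# weighted mix of all sensors' features under the adaptive adjacency.
volume_features = rng.normal(size=(n_sensors, 3))
aggregated = A @ volume_features
print(aggregated.shape)
```

Because the adjacency is computed from observed speed patterns rather than fixed road topology, undetected locations can borrow information from any correlated sensor, which is the mechanism the abstract relies on for underdetermined flows.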