Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training
In the stock market, a successful investment requires a good balance between
profits and risks. Recently, stock recommendation has been widely studied in
quantitative investment to select stocks with higher return ratios for
investors. Despite the success in making profits, most existing recommendation
approaches are still weak in risk control, which may lead to intolerable paper
losses in practical stock investing. To effectively reduce risks, we draw
inspiration from adversarial perturbations and propose a novel Split
Variational Adversarial Training (SVAT) framework for risk-aware stock
recommendation. Essentially, SVAT encourages the model to be sensitive to
adversarial perturbations of risky stock examples and enhances the model's risk
awareness by learning from perturbations. To generate representative
adversarial examples as risk indicators, we devise a variational perturbation
generator to model diverse risk factors. Particularly, the variational
architecture enables our method to provide a rough risk quantification for
investors, showing an additional advantage of interpretability. Experiments on
three real-world stock market datasets show that SVAT effectively reduces the
volatility of the stock recommendation model and outperforms state-of-the-art
baseline methods by more than 30% in terms of risk-adjusted profits.
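The core idea above — sampling perturbations from a learned Gaussian and penalizing sensitivity more heavily on risky examples — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's SVAT implementation: the linear mean/log-variance heads (`W_mu`, `W_logvar`), the scalar scoring head, and the binary `risk_labels` encoding are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_perturbation(embedding, W_mu, W_logvar):
    """Sample a perturbation via the reparameterization trick.

    W_mu and W_logvar are hypothetical linear heads standing in for the
    paper's variational perturbation generator.
    """
    mu = embedding @ W_mu
    logvar = embedding @ W_logvar
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps  # reparameterized Gaussian sample

def risk_weighted_adv_loss(scores, adv_scores, risk_labels):
    """Penalize sensitivity to perturbations, weighted toward risky stocks.

    risk_labels: 1.0 for risky examples, 0.0 otherwise (hypothetical encoding).
    """
    sensitivity = (scores - adv_scores) ** 2
    weights = 1.0 + risk_labels  # risky examples count double
    return float(np.mean(weights * sensitivity))

# Toy setup: 4 stocks, 8-dim embeddings, a scalar scoring head.
emb = rng.standard_normal((4, 8))
w_score = rng.standard_normal(8)
W_mu = rng.standard_normal((8, 8)) * 0.01
W_logvar = rng.standard_normal((8, 8)) * 0.01 - 4.0  # small initial variance

delta = variational_perturbation(emb, W_mu, W_logvar)
loss = risk_weighted_adv_loss(emb @ w_score, (emb + delta) @ w_score,
                              np.array([1.0, 0.0, 0.0, 1.0]))
print(f"risk-weighted adversarial loss: {loss:.6f}")
```

In a full training loop, this loss would be added to the recommendation objective, and the magnitude of the learned variance could serve as the rough per-stock risk quantification the abstract mentions.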
Towards Better Forecasting by Fusing Near and Distant Future Visions
Multivariate time series forecasting is an important yet challenging problem
in machine learning. Most existing approaches forecast the series value at only
one future moment, ignoring the interactions between predictions of future
moments at different temporal distances. Such a deficiency likely prevents
the model from getting enough information about the future, thus limiting the
forecasting accuracy. To address this problem, we propose Multi-Level Construal
Neural Network (MLCNN), a novel multi-task deep learning framework. Inspired by
the Construal Level Theory of psychology, this model aims to improve the
predictive performance by fusing forecasting information (i.e., future visions)
of different future times. We first use a Convolutional Neural Network to
extract multi-level abstract representations of the raw data for near and
distant future predictions. We then model the interplay between multiple
predictive tasks and fuse their future visions through a modified
Encoder-Decoder architecture. Finally, we combine a traditional autoregressive
model with the neural network to address the scale-insensitivity problem.
Experiments on three real-world datasets show that our method achieves
statistically significant improvements over state-of-the-art baseline methods,
with an average reduction of 4.59% in RMSE and 6.87% in MAE.
Comment: Accepted by AAAI 2020
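The final step of the pipeline — adding a linear autoregressive term to the neural network's output so the model tracks the raw scale of the series — can be sketched as follows. This is a minimal illustration under assumed shapes, not the MLCNN implementation; the function names and the toy weights are hypothetical.

```python
import numpy as np

def ar_component(window, ar_weights):
    """Linear autoregressive term over the most recent observations."""
    return window[-len(ar_weights):] @ ar_weights

def combined_forecast(nn_output, window, ar_weights):
    """Sum the network's nonlinear prediction with a linear AR term.

    Because the AR part scales linearly with the input, the combined model
    can follow level shifts in the series that a saturating neural net
    tends to miss (the scale-insensitivity problem).
    """
    return nn_output + ar_component(window, ar_weights)

# Toy series with a strong upward trend; the AR term follows the scale.
window = np.array([10.0, 12.0, 14.0, 16.0])
ar_weights = np.array([0.5, 0.5])  # hypothetical: average of last two points
nn_output = 0.3                    # hypothetical nonlinear residual
pred = combined_forecast(nn_output, window, ar_weights)
print(pred)  # 0.3 + (14.0 + 16.0) / 2 = 15.3
```

In the full model, the AR weights would be learned jointly with the CNN and Encoder-Decoder components rather than fixed as here.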