Forecasting bitcoin volatility: Exploring the potential of deep learning
This study aims to evaluate the forecasting properties of classic methodologies (ARCH and GARCH models) in comparison with deep learning methodologies (MLP, RNN, and LSTM architectures) for predicting Bitcoin's volatility. As a new asset class with unique characteristics, Bitcoin's high volatility and structural breaks make forecasting challenging. Based on 2753 observations from 08-09-2014 to 01-05-2022, this study focuses on Bitcoin logarithmic returns. Results show that deep learning methodologies have advantages in terms of forecast quality, although they require significant computational costs. Although both MLP and RNN models produce smoother forecasts with less fluctuation, they fail to capture large spikes. The LSTM architecture, on the other hand, reacts strongly to such movements and tries to adjust its forecast accordingly. MAPE and MAE metrics are used to compare forecasting accuracy at different horizons. Diebold-Mariano tests were conducted to compare the forecasts, confirming the superiority of deep learning methodologies. Overall, this study suggests that deep learning methodologies could provide a promising tool for forecasting Bitcoin returns (and therefore volatility), especially for short-term horizons.
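As a point of reference for the classic baseline above, the GARCH(1,1) conditional-variance recursion can be sketched in a few lines. This is a minimal illustration on synthetic returns, not the study's fitted model: the parameters (omega, alpha, beta) are illustrative placeholders, and the synthetic series merely stands in for the Bitcoin log-return data.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.1, beta=0.85):
    """One-step-ahead conditional variance under the GARCH(1,1) recursion:
    sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t].
    Parameters here are illustrative, not fitted to any data."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2

# Synthetic log returns standing in for the Bitcoin series
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.04, size=500)
forecast = garch11_variance(r)  # 501 variance values, one per step plus t=0
```

A deep learning alternative would replace the fixed recursion with a learned sequence model (e.g., an LSTM) trained on the same return series, which is what lets it react to large spikes at the cost of heavier computation.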
Recurrently Predicting Hypergraphs
This work considers predicting the relational structure of a hypergraph for a
given set of vertices, as common for applications in particle physics,
biological systems and other complex combinatorial problems. A problem arises
from the number of possible multi-way relationships, or hyperedges, scaling in
O(2^n) for a set of n elements. Simply storing an indicator
tensor for all relationships is already intractable for moderately sized n,
prompting previous approaches to restrict the number of vertices a hyperedge
connects. Instead, we propose a recurrent hypergraph neural network that
predicts the incidence matrix by iteratively refining an initial guess of the
solution. We leverage the property that most hypergraphs of interest are
sparsely connected and reduce the memory requirement to O(nk),
where k is the maximum number of positive edges, i.e., edges that actually
exist. In order to counteract the linearly growing memory cost from training a
lengthening sequence of refinement steps, we further propose an algorithm that
applies backpropagation through time on randomly sampled subsequences. We
empirically show that our method can match an increase in the intrinsic
complexity without a performance decrease and demonstrate superior performance
compared to state-of-the-art models.
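The core ideas above — storing an n x k incidence matrix instead of a 2^n indicator tensor, and refining an initial guess over several steps — can be sketched as follows. The update rule here is a toy interpolation toward a known target, standing in for the learned recurrent network; it only illustrates the data layout and the refinement loop, not the paper's architecture.

```python
import numpy as np

def refinement_step(incidence, target):
    # Hypothetical update: move the current guess a fixed fraction toward
    # the target. In the actual method a trained recurrent network would
    # predict this correction; here the error simply halves each step.
    return incidence + 0.5 * (target - incidence)

n, k = 6, 3          # 6 vertices, at most k = 3 positive hyperedges
rng = np.random.default_rng(1)
target = (rng.random((n, k)) > 0.5).astype(float)  # ground-truth incidence
guess = np.full((n, k), 0.5)                       # uninformative initial guess

for _ in range(10):  # iterative refinement of the O(nk) incidence matrix
    guess = refinement_step(guess, target)
```

Note that the incidence matrix costs O(nk) memory regardless of how many of the 2^n possible hyperedges exist, which is what makes the representation tractable for sparse hypergraphs.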
Deep Equilibrium Multimodal Fusion
Multimodal fusion integrates the complementary information present in
multiple modalities and has gained much attention recently. Most existing
fusion approaches either learn a fixed fusion strategy during training and
inference, or are only capable of fusing the information to a certain extent.
Such solutions may fail to fully capture the dynamics of interactions across
modalities especially when there are complex intra- and inter-modality
correlations to be considered for informative multimodal fusion. In this paper,
we propose a novel deep equilibrium (DEQ) method towards multimodal fusion via
seeking a fixed point of the dynamic multimodal fusion process and modeling the
feature correlations in an adaptive and recursive manner. This new way encodes
the rich information within and across modalities thoroughly from low level to
high level for efficacious downstream multimodal learning and is readily
pluggable to various multimodal frameworks. Extensive experiments on BRCA,
MM-IMDB, CMU-MOSI, SUN RGB-D, and VQA-v2 demonstrate the superiority of our DEQ
fusion. More remarkably, DEQ fusion consistently achieves state-of-the-art
performance on multiple multimodal benchmarks. The code will be released.
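The equilibrium idea behind DEQ fusion can be sketched with a toy fixed-point iteration: the fused representation z is defined implicitly as the fixed point of a fusion map over the modality features. The map below is a made-up contraction (not the paper's learned layer), and the modality features are random stand-ins, so this only illustrates the "iterate until z stops changing" mechanism.

```python
import numpy as np

def fusion_step(z, feats, W=0.4):
    """One step of a hypothetical fusion map: a damped nonlinear self-term
    plus the mean of the modality features. With |W| < 1 the map is a
    contraction, so a unique fixed point exists."""
    return W * np.tanh(z) + (1 - W) * np.mean(feats, axis=0)

def deq_fusion(feats, tol=1e-8, max_iter=200):
    """Iterate the fusion map to its equilibrium z* = fusion_step(z*, feats)."""
    z = np.zeros_like(feats[0])
    for _ in range(max_iter):
        z_next = fusion_step(z, feats)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
image_feat = rng.normal(size=8)  # stand-ins for two modality embeddings
text_feat = rng.normal(size=8)
z_star = deq_fusion([image_feat, text_feat])
# z_star approximately satisfies the equilibrium condition
```

In the actual method the fusion map is a learned network and the fixed point is found (and differentiated through) with implicit-function techniques rather than plain iteration, but the equilibrium condition is the same.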