Dynamic behaviour of optimal portfolio with stochastic volatility
In the existing literature, little is known about how the optimal portfolio behaves dynamically as a function of market inputs under arbitrary stochastic factor dynamics in an incomplete market with stochastic volatility. In this paper, to study optimal portfolio behaviour, we compute and analyze the mean and variance of the optimal portfolio, and of its adjustment speed, in terms of market inputs in an incomplete market. The incompleteness arises from the additional source of uncertainty introduced by the volatility in Heston's stochastic volatility model. Conducting a sensitivity analysis of the mean and variance of the optimal portfolio process, as well as of its adjustment speed, with respect to the market parameters, we find several interesting patterns in investors' responses to shocks in the asset price and its volatility. Our results are robust and convergent, as confirmed by the agreement between two simulation methods across different time-step increments and numbers of Monte Carlo simulation paths.
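The Heston dynamics underlying this analysis can be simulated directly. A minimal Monte Carlo sketch follows; all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_heston(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    sigma=0.3, rho=-0.7, T=1.0, n_steps=252, n_paths=10_000,
                    seed=0):
    """Euler (full-truncation) simulation of Heston asset and variance paths.

    dS = mu*S dt + sqrt(v)*S dW1,  dv = kappa*(theta - v) dt + sigma*sqrt(v) dW2,
    with corr(dW1, dW2) = rho.  Parameters here are illustrative.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)  # full truncation: use v^+ in the diffusion terms
        S = S * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v = v + kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
    return S, np.maximum(v, 0.0)
```

The full-truncation scheme keeps the discretized variance usable even when the Euler step drives it slightly negative, which is the standard workaround for square-root diffusions.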
Impacts of road network expansion on landscape ecological risk in a megacity, China: A case study of Beijing
Road networks affect the spatial structure of urban landscapes, and as they continue to expand, they will exert increasingly widespread influences on the regional ecological environment. With the support of geographic information system (GIS) technology, and based on the application of various spatial analysis methods, this study analyzed the spatiotemporal changes of road networks and landscape ecological risk in Beijing to explore the impacts of road network expansion on ecological risk in the urban landscape. The results showed the following: 1) In the dynamic processes of change in the overall landscape pattern, the differences in the changes of landscape indices across landscape types were obvious and were primarily related to land-use type. 2) Over the time series, the expansion of the road kernel area was consistent with the extension of the sub-low-risk area in the urban center, although some differences were observed during different stages of development. 3) In terms of spatial position, the expanding changes in the road kernel area were consistent with the grade changes of the ecological risk in the urban center, primarily because both had a certain spatial correlation with the expressways. 4) The influence of road network expansion on the ecological risk in the study area showed obvious spatial differences, which may be closely associated with the distribution of ecosystem types.
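The abstract does not spell out how the landscape ecological risk index is computed; a common construction in this literature weights each landscape type's area share within a grid cell by a per-type risk weight. A minimal sketch under that assumption (type names and weights are purely illustrative, not values from the study):

```python
def ecological_risk_index(areas_by_type, risk_weights):
    """Commonly used landscape ecological risk index for one grid cell k:
    ERI_k = sum_i (A_ki / A_k) * R_i over landscape types i,
    where A_ki is the area of type i in the cell and R_i a per-type risk weight.
    This is a generic sketch, not the paper's exact formulation.
    """
    total = sum(areas_by_type.values())
    if total == 0:
        raise ValueError("cell contains no mapped area")
    return sum(area / total * risk_weights[t] for t, area in areas_by_type.items())

# Hypothetical cell dominated by built-up land near an expressway:
cell = {"forest": 40.0, "built-up": 50.0, "water": 10.0}       # km^2, illustrative
weights = {"forest": 0.1, "built-up": 0.6, "water": 0.2}        # illustrative
eri = ecological_risk_index(cell, weights)
```

Mapping such cell-level scores over time is what allows the study to compare risk-grade changes against road kernel expansion.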
Bayesian graph convolutional neural networks for semi-supervised classification
Recently, techniques for applying convolutional neural networks to
graph-structured data have emerged. Graph convolutional neural networks (GCNNs)
have been used to address node and graph classification and matrix completion.
Although the performance has been impressive, the current implementations have
limited capability to incorporate uncertainty in the graph structure. Almost
all GCNNs process a graph as though it is a ground-truth depiction of the
relationship between nodes, but often the graphs employed in applications are
themselves derived from noisy data or modelling assumptions. Spurious edges may
be included; other edges may be missing between nodes that have very strong
relationships. In this paper we adopt a Bayesian approach, viewing the observed
graph as a realization from a parametric family of random graphs. We then
target inference of the joint posterior of the random graph parameters and the
node (or graph) labels. We present the Bayesian GCNN framework and develop an
iterative learning procedure for the case of assortative mixed-membership
stochastic block models. We present the results of experiments demonstrating that the Bayesian formulation can provide better performance when very few labels are available during training.
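The core idea, averaging GCN predictions over graphs drawn from a posterior centered on the observed graph, can be sketched as follows. Random edge flips stand in here for the paper's assortative mixed-membership stochastic block model posterior, and all dimensions and names are illustrative:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation: row-softmax of D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
    logits = A_norm @ X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def bayesian_gcnn_predict(A_obs, X, W, n_samples=50, flip_prob=0.05, seed=0):
    """Monte Carlo average of GCN outputs over graphs sampled around A_obs.
    Independent edge flips are a crude stand-in for the MMSBM posterior."""
    rng = np.random.default_rng(seed)
    n = A_obs.shape[0]
    probs = np.zeros((n, W.shape[1]))
    for _ in range(n_samples):
        flips = np.triu(rng.random((n, n)) < flip_prob, k=1)
        flips = flips | flips.T               # keep each sampled graph symmetric
        A_s = np.where(flips, 1 - A_obs, A_obs)
        probs += gcn_layer(A_s, X, W)
    return probs / n_samples
```

Averaging over sampled graphs is what lets spurious or missing edges in the observed graph be down-weighted rather than trusted as ground truth.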
DyG2Vec: Representation Learning for Dynamic Graphs with Self-Supervision
Temporal graph neural networks have shown promising results in learning
inductive representations by automatically extracting temporal patterns.
However, previous works often rely on complex memory modules or inefficient
random walk methods to construct temporal representations. In addition, the
existing dynamic graph encoders are non-trivial to adapt to self-supervised
paradigms, which prevents them from utilizing unlabeled data. To address these
limitations, we present an efficient yet effective attention-based encoder that
leverages temporal edge encodings and window-based subgraph sampling to
generate task-agnostic embeddings. Moreover, we propose a joint-embedding
architecture using non-contrastive SSL to learn rich temporal embeddings
without labels. Experimental results on 7 benchmark datasets indicate that on
average, our model outperforms SoTA baselines on the future link prediction
task by 4.23% in the transductive setting and 3.30% in the inductive setting, while requiring 5-10x less training/inference time. Additionally, we empirically validate the significance of SSL pre-training under two probing protocols commonly used in the language and vision modalities. Lastly, different aspects of
the proposed framework are investigated through experimental analysis and
ablation studies.
Comment: Proceedings of the 19th International Workshop on Mining and Learning with Graphs (MLG).
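The window-based subgraph sampling step can be illustrated with a small sketch; the function name and edge representation are assumptions for illustration, not the paper's API:

```python
import bisect

def window_subgraph(edges, t_query, window):
    """Return the last `window` temporal edges strictly before t_query.

    edges: list of (src, dst, t) tuples sorted by timestamp t.
    This fixed-size recent-history window is the kind of subgraph a
    window-based sampler feeds to the attention encoder.
    """
    times = [t for _, _, t in edges]
    end = bisect.bisect_left(times, t_query)   # first edge at or after t_query
    start = max(0, end - window)
    return edges[start:end]
```

Because the window has a fixed size, the cost of building each input subgraph stays bounded regardless of how long the temporal graph grows, unlike memory modules or random-walk samplers.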
Dynamic Embedding Size Search with Minimum Regret for Streaming Recommender System
With the continuous increase of users and items, conventional recommender
systems trained on static datasets can hardly adapt to changing environments.
The high-throughput data requires the model to be updated in a timely manner
for capturing the user interest dynamics, which leads to the emergence of
streaming recommender systems. Due to the prevalence of deep learning-based
recommender systems, the embedding layer is widely adopted to represent the
characteristics of users, items, and other features in low-dimensional vectors.
However, it has been shown that using an identical, static embedding size for all features is sub-optimal in terms of recommendation performance and memory cost,
especially for streaming recommendations. To tackle this problem, we first
rethink the streaming model update process and model the dynamic embedding size
search as a bandit problem. Then, we analyze and quantify the factors that
influence the optimal embedding sizes from the statistics perspective. Based on
this, we propose the \textbf{D}ynamic \textbf{E}mbedding \textbf{S}ize
\textbf{S}earch (\textbf{DESS}) method to minimize the embedding size selection
regret on both user and item sides in a non-stationary manner. Theoretically,
we obtain a sublinear regret upper bound superior to previous methods.
Empirical results across two recommendation tasks on four public datasets also
demonstrate that our approach can achieve better streaming recommendation
performance with lower memory cost and higher time efficiency.
Comment: Accepted for publication at CIKM202
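Casting embedding size search as a bandit problem can be sketched with a standard UCB1 learner over a discrete set of candidate sizes. This is a generic illustration only, not the DESS algorithm itself, which additionally handles non-stationarity and per-user/per-item selection:

```python
import math
import random

def ucb_embedding_size_search(candidate_sizes, reward_fn, n_rounds=200, c=1.0, seed=0):
    """UCB1 over candidate embedding sizes.

    reward_fn(size, rng) returns a noisy performance signal in [0, 1]
    (e.g. validation accuracy after a streaming update at that size).
    Returns the size with the best empirical mean after n_rounds.
    """
    rng = random.Random(seed)
    counts = {s: 0 for s in candidate_sizes}
    means = {s: 0.0 for s in candidate_sizes}
    for t in range(1, n_rounds + 1):
        untried = [s for s in candidate_sizes if counts[s] == 0]
        if untried:
            s = untried[0]                     # play each arm once first
        else:
            s = max(candidate_sizes,
                    key=lambda a: means[a] + c * math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fn(s, rng)
        counts[s] += 1
        means[s] += (r - means[s]) / counts[s]  # incremental mean update
    return max(candidate_sizes, key=lambda a: means[a])
```

A stationary UCB1 rule like this attains sublinear regret only when the reward distribution is fixed; handling drifting user interests is precisely where a non-stationary variant such as the paper's becomes necessary.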
Bidirectional Learning for Offline Infinite-width Model-based Optimization
In offline model-based optimization, we strive to maximize a black-box
objective function by only leveraging a static dataset of designs and their
scores. This problem setting arises in numerous fields including the design of
materials, robots, DNA sequences, and proteins. Recent approaches train a deep
neural network (DNN) on the static dataset to act as a proxy function, and then
perform gradient ascent on the existing designs to obtain potentially
high-scoring designs. This methodology frequently suffers from the
out-of-distribution problem where the proxy function often returns poor
designs. To mitigate this problem, we propose BiDirectional learning for
offline Infinite-width model-based optimization (BDI). BDI consists of two
mappings: the forward mapping leverages the static dataset to predict the
scores of the high-scoring designs, and the backward mapping leverages the
high-scoring designs to predict the scores of the static dataset. The backward
mapping, neglected in previous work, can distill more information from the
static dataset into the high-scoring designs, which effectively mitigates the
out-of-distribution problem. For a finite-width DNN model, the loss function of
the backward mapping is intractable and only has an approximate form, which
leads to a significant deterioration of the design quality. We thus adopt an
infinite-width DNN model, and propose to employ the corresponding neural
tangent kernel to yield a closed-form loss for more accurate design updates.
Experiments on various tasks verify the effectiveness of BDI. The code is
available at https://github.com/GGchen1997/BDI.
Comment: NeurIPS2022 camera-ready version; AI4Science; Drug discovery; Offline model-based optimization; Neural tangent kernel; Bi-level optimization
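The closed-form loss enabled by the infinite-width limit is essentially kernel regression with the neural tangent kernel. A sketch of that closed form, with an RBF kernel standing in for the NTK and all helper names illustrative:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Toy RBF kernel; in BDI's setting the NTK would play this role."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_predict(X_train, y_train, X_query, kernel, lam=1e-3):
    """Closed-form kernel regression: f(x*) = k(x*, X) (K + lam*I)^{-1} y.

    For an infinite-width DNN, replacing `kernel` with the NTK makes this
    prediction exact, which is what gives BDI a tractable loss for both the
    forward mapping (dataset -> scores of candidate designs) and the backward
    mapping (candidate designs -> scores of the dataset).
    """
    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return kernel(X_query, X_train) @ alpha
```

With a finite-width proxy this predictive distribution is only approximate, which is the deterioration in design quality the abstract points to; the closed form above is what the infinite-width limit recovers.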