Effect of green credit policy on energy firms’ growth: evidence from China
How energy firms respond to green credit policy is of great significance, as it bears on the emission-reduction effect of green finance and on the transformation of energy firms. This paper analyzes the impact of green credit policy on the growth of energy firms using data on Chinese listed companies from 2009 to 2019. The empirical results show that green credit policy has significantly promoted the growth of energy firms. Further analysis shows that the policy promoted growth by reducing financing costs and encouraging green innovation. Moreover, state-owned firms, large firms, and firms in central and eastern China are more strongly affected by the policy. This study is relevant to the implementation of green credit policies and to promoting the development and transformation of energy firms.
AdapterGNN: Efficient Delta Tuning Improves Generalization Ability in Graph Neural Networks
Fine-tuning pre-trained models has recently yielded remarkable performance
gains in graph neural networks (GNNs). In addition to pre-training techniques,
inspired by the latest work in natural language processing, more recent work
has shifted towards applying effective fine-tuning approaches, such as
parameter-efficient tuning (delta tuning). However, given the substantial
differences between GNNs and transformer-based models, applying such approaches
directly to GNNs proved to be less effective. In this paper, we present a
comprehensive comparison of delta tuning techniques for GNNs and propose a
novel delta tuning method specifically designed for GNNs, called AdapterGNN.
AdapterGNN preserves the knowledge of the large pre-trained model and leverages
highly expressive adapters for GNNs, which can adapt to downstream tasks
effectively with only a few parameters, while also improving the model's
generalization ability on the downstream tasks. Extensive experiments show that
AdapterGNN achieves higher evaluation performance (outperforming full
fine-tuning by 1.4% and 5.5% in the chemistry and biology domains respectively,
with only 5% of its parameters tuned) and lower generalization gaps compared to
full fine-tuning. Moreover, we empirically show that a larger GNN model can
have a worse generalization ability, which differs from the trend observed in
large language models. We also provide a theoretical justification, based on
generalization bounds, for how delta tuning can improve the generalization
ability of GNNs.
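As a rough illustration of the adapter idea described above (the abstract does not specify the AdapterGNN architecture, so the bottleneck design, dimensions, and names below are assumptions), a parameter-efficient adapter wrapped around a frozen pre-trained GNN layer might look like this:

```python
# Minimal bottleneck-adapter sketch for a GNN layer (assumed design, not the
# paper's exact architecture). Only the adapter parameters are trained.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Low-rank bottleneck: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class AdaptedGNNLayer(nn.Module):
    """Wraps a frozen pre-trained GNN layer with a trainable adapter."""
    def __init__(self, gnn_layer: nn.Module, dim: int):
        super().__init__()
        self.gnn_layer = gnn_layer
        for p in self.gnn_layer.parameters():
            p.requires_grad = False  # preserve pre-trained knowledge
        self.adapter = Adapter(dim)

    def forward(self, x, edge_index):
        return self.adapter(self.gnn_layer(x, edge_index))
```

With a 300-dimensional embedding and a bottleneck of 8, each adapter adds roughly 2 x 300 x 8, about 4.8k, parameters per layer, which is how tuned-parameter budgets on the order of a few percent become possible.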
DIP: Differentiable Interreflection-aware Physics-based Inverse Rendering
We present a physics-based inverse rendering method that learns the
illumination, geometry, and materials of a scene from posed multi-view RGB
images. To model the illumination of a scene, existing inverse rendering works
either completely ignore the indirect illumination or model it by coarse
approximations, leading to sub-optimal illumination, geometry, and material
prediction of the scene. In this work, we propose a physics-based illumination
model that explicitly traces the incoming indirect lights at each surface point
based on interreflection, followed by estimating each identified indirect light
through an efficient neural network. Furthermore, we utilize the Leibniz
integral rule to resolve the non-differentiability in the proposed illumination
model caused by one type of environment light, the tangent lights. As a
result, the proposed interreflection-aware illumination model can be learned
end-to-end together with geometry and materials estimation. As a side product,
our physics-based inverse rendering model also facilitates flexible and
realistic material editing as well as relighting. Extensive experiments on both
synthetic and real-world datasets demonstrate that the proposed method performs
favorably against existing inverse rendering methods on novel view synthesis
and inverse rendering.
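For readers unfamiliar with the rule invoked above, the general one-dimensional Leibniz integral rule is stated below; it covers integrands whose integration limits depend on the differentiation variable, which is precisely the situation that produces boundary terms naive automatic differentiation misses. How DIP applies it to tangent lights is specified in the paper, not here.

```latex
% General Leibniz integral rule: differentiating an integral whose
% limits a(\theta), b(\theta) and integrand f(x,\theta) depend on \theta.
\[
\frac{d}{d\theta} \int_{a(\theta)}^{b(\theta)} f(x,\theta)\,dx
  = f\bigl(b(\theta),\theta\bigr)\,\frac{db}{d\theta}
  - f\bigl(a(\theta),\theta\bigr)\,\frac{da}{d\theta}
  + \int_{a(\theta)}^{b(\theta)} \frac{\partial f}{\partial\theta}(x,\theta)\,dx .
\]
```

Loosely, the two boundary terms carry the gradient contributions at the moving edge of the integration domain, which is where a fixed-domain derivative would be non-differentiable.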
Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations
The abundance of instructional videos and their narrations over the Internet
offers an exciting avenue for understanding procedural activities. In this
work, we propose to learn video representation that encodes both action steps
and their temporal ordering, based on a large-scale dataset of web
instructional videos and their narrations, without using human annotations. Our
method jointly learns a video representation to encode individual step
concepts, and a deep probabilistic model to capture both temporal dependencies
and immense individual variations in the step ordering. We empirically
demonstrate that learning temporal ordering not only enables new capabilities
for procedure reasoning, but also reinforces the recognition of individual
steps. Our model significantly advances the state-of-the-art results on step
classification (+2.8% / +3.3% on COIN / EPIC-Kitchens) and step forecasting
(+7.4% on COIN). Moreover, our model attains promising results in zero-shot
inference for step classification and forecasting, as well as in predicting
diverse and plausible steps for incomplete procedures. Our code is available at
https://github.com/facebookresearch/ProcedureVRL.
Comment: Accepted to CVPR 2023
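The abstract describes a deep probabilistic model over step orderings without giving its form, so the following is only an illustrative sketch: a learned transition model over step-concept embeddings used to score plausible next steps. The class name, the bilinear parameterization, and the step-vocabulary size are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's model): score plausible next steps with
# a learned transition model over step-concept embeddings.
import torch
import torch.nn as nn

class StepTransitionModel(nn.Module):
    def __init__(self, num_steps: int, dim: int = 256):
        super().__init__()
        self.step_emb = nn.Embedding(num_steps, dim)  # one concept per step
        self.transition = nn.Bilinear(dim, dim, 1)    # pairwise ordering score

    def next_step_logits(self, current_step: torch.Tensor) -> torch.Tensor:
        """Logits over all candidate next steps given current step ids (B,)."""
        cur = self.step_emb(current_step)             # (B, dim)
        cands = self.step_emb.weight                  # (S, dim)
        B, S, D = cur.size(0), cands.size(0), cands.size(1)
        cur_rep = cur.unsqueeze(1).expand(B, S, D).reshape(-1, D)
        cand_rep = cands.unsqueeze(0).expand(B, S, D).reshape(-1, D)
        return self.transition(cur_rep, cand_rep).view(B, S)

model = StepTransitionModel(num_steps=500)            # hypothetical vocabulary size
probs = model.next_step_logits(torch.tensor([3])).softmax(-1)
top5 = probs.topk(5)  # five plausible next steps for an incomplete procedure
```

Sampling from such a distribution, rather than taking the argmax, is one simple way to obtain the diverse yet plausible step predictions the abstract mentions.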
Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions
Pre-training & fine-tuning is a prevalent paradigm in computer vision (CV).
Recently, parameter-efficient transfer learning (PETL) methods have shown
promising performance in adapting to downstream tasks with only a few trainable
parameters. Despite their success, existing PETL methods in CV can be
computationally expensive and incur large memory and time costs during
training, which prevents low-resource users from conducting research and
applications on large models. In this work, we propose Parameter, Memory, and
Time Efficient Visual Adapter tuning to address this issue.
We provide a gradient backpropagation highway for low-rank adapters which
eliminates the need for expensive backpropagation through the frozen
pre-trained model, resulting in substantial savings of training memory and
training time. Furthermore, we optimise the adapter structure for CV
tasks to promote model performance. Extensive experiments on COCO, ADE20K, and
Pascal VOC benchmarks show that our method can save up to 62.2% training
memory and 26.2% training time on average, while achieving comparable
performance to full fine-tuning and better performance than most PETL methods.
Note that we can even train the Swin-Large-based Cascade Mask R-CNN on GTX
1080Ti GPUs with less than 1.5% trainable parameters.
Comment: 14 pages, 4 figures, 5 tables, Submitted to NeurIPS 2023
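The abstract's key mechanism, a gradient backpropagation highway that keeps gradients out of the frozen backbone, can be sketched roughly as a side path of low-rank adapters fed by detached backbone features. The exact design in the paper is not given here, and everything below (names, dimensions, the no-grad wiring) is an assumption for illustration:

```python
# Sketch (assumed wiring, not the paper's exact method): a trainable low-rank
# side path over a frozen backbone. Computing backbone features under no_grad
# means backprop never traverses the frozen blocks, saving memory and time.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank)
        self.up = nn.Linear(rank, dim)

    def forward(self, h):
        return self.up(torch.relu(self.down(h)))

class SideTunedBackbone(nn.Module):
    def __init__(self, blocks: nn.ModuleList, dim: int):
        super().__init__()
        self.blocks = blocks
        for p in self.blocks.parameters():
            p.requires_grad = False                   # frozen backbone
        self.adapters = nn.ModuleList(LowRankAdapter(dim) for _ in blocks)

    def forward(self, x):
        with torch.no_grad():                         # no backbone autograd graph
            feats, h = [], x
            for block in self.blocks:
                h = block(h)
                feats.append(h)
        side = torch.zeros_like(x)
        for adapter, f in zip(self.adapters, feats):
            side = side + adapter(f)                  # gradients stay on the side path
        return feats[-1] + side
```

With rank 4 on a 1024-dimensional backbone, each adapter adds roughly 2 x 1024 x 4, about 8k, parameters, consistent with trainable-parameter budgets well below a few percent of the backbone.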