Anatomical, physiological and biochemical processes involved in grapevine rootstock drought tolerance
To explore the drought-resistance mechanisms of grape rootstocks, two rootstock cultivars, '1103P' (drought-tolerant) and '101-14M' (drought-sensitive), were subjected to a moderate water deficit (45-50 % of field capacity). Throughout the experimental period, leaves of '1103P' showed higher stomatal conductance (gs), relative water content and photosynthetic rate (Pn) than those of '101-14M', indicating that '1103P' was more drought-tolerant than '101-14M'. We propose that '1103P' limits water loss from its leaves under drought, based on the findings that '1103P' had higher leaf abscisic acid (ABA) content, higher leaf cuticular wax content and smaller stomatal aperture than '101-14M'. Additionally, the activities of H2O2-scavenging enzymes in '1103P' leaves were higher than in '101-14M' under drought, indicating that H2O2-induced lipid peroxidation was less severe in '1103P' than in '101-14M'. Therefore, better water conservation and a higher reactive oxygen species (ROS) scavenging capacity together account for the stronger drought resistance of '1103P' relative to '101-14M'.
Continual Learning From a Stream of APIs
Continual learning (CL) aims to learn new tasks without forgetting previous
tasks. However, existing CL methods require a large amount of raw data, which
is often unavailable due to copyright considerations and privacy risks.
Instead, stakeholders usually release pre-trained machine learning models as a
service (MLaaS), which users can access via APIs. This paper considers two
practical-yet-novel CL settings: data-efficient CL (DECL-APIs) and data-free CL
(DFCL-APIs), which achieve CL from a stream of APIs with partial or no raw
data. Performing CL under these two new settings faces several challenges:
unavailable full raw data, unknown model parameters, heterogeneous models of
arbitrary architecture and scale, and catastrophic forgetting of previous APIs.
To overcome these issues, we propose a novel data-free cooperative continual
distillation learning framework that distills knowledge from a stream of APIs
into a CL model by generating pseudo data, just by querying APIs. Specifically,
our framework includes two cooperative generators and one CL model, forming
their training as an adversarial game. We first use the CL model and the
current API as fixed discriminators to train generators via a derivative-free
method. Generators adversarially generate hard and diverse synthetic data to
maximize the response gap between the CL model and the API. Next, we train the
CL model by minimizing the gap between the responses of the CL model and the
black-box API on synthetic data, to transfer the API's knowledge to the CL
model. Furthermore, we propose a new regularization term based on network
similarity to prevent catastrophic forgetting of previous APIs. Our method
performs comparably to classic CL with full raw data on MNIST and SVHN in
the DFCL-APIs setting. In the DECL-APIs setting, our method achieves 0.97x,
0.75x and 0.69x the performance of classic CL on CIFAR10, CIFAR100 and
MiniImageNet.
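The core distillation step described above (train the CL model to minimize the gap between its responses and a black-box API's responses on generated pseudo data) can be sketched as follows. This is a minimal illustration under heavy simplifying assumptions, not the paper's method: the API is stubbed as a fixed linear teacher, the CL model is linear, and the generator is an untrained Gaussian sampler (the paper instead trains two cooperative generators adversarially with a derivative-free method, and adds a network-similarity regularizer). All names here (`query_api`, `cl_model`, `generate_pseudo_batch`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box API: we can only query responses, never read
# its parameters. Stubbed here as a fixed random linear "teacher".
W_api = rng.normal(size=(4, 3))
def query_api(x):
    """Return the API's response (logits) for a batch of inputs."""
    return x @ W_api

# CL model (the "student"): trained only via API responses, no raw data.
W_cl = np.zeros((4, 3))
def cl_model(x):
    return x @ W_cl

# Generator: a plain Gaussian sampler standing in for the paper's
# adversarially trained cooperative generators.
def generate_pseudo_batch(n=64):
    return rng.normal(size=(n, 4))

# Distillation loop: minimize the response gap between the CL model and
# the black-box API on synthetic data (gradient descent on squared error).
for step in range(500):
    x = generate_pseudo_batch()
    gap = cl_model(x) - query_api(x)   # response gap on pseudo data
    grad = x.T @ gap / len(x)          # gradient of 0.5 * ||gap||^2
    W_cl -= 0.1 * grad

# After training, the student mimics the API closely on fresh pseudo data.
x_eval = generate_pseudo_batch()
final_gap = np.abs(cl_model(x_eval) - query_api(x_eval)).mean()
```

Because both teacher and student are linear here, the gap contracts to near zero; in the paper's setting the same objective is applied to heterogeneous black-box models of arbitrary architecture, which is why only query access is assumed.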
- …