On the global existence and finite time blow-up of shadow systems
Abstract: Shadow systems are often used to approximate reaction–diffusion systems when one of the diffusion rates is large. In this paper, we study global existence and blow-up phenomena for shadow systems. Our results show that even for these fundamental aspects, there are serious discrepancies between the dynamics of reaction–diffusion systems and those of their corresponding shadow systems.
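As a concrete illustration (a standard formulation from the literature, not stated in this abstract), consider a two-component reaction–diffusion system with homogeneous Neumann boundary conditions on a bounded domain; letting the second diffusion rate tend to infinity makes that component spatially homogeneous and yields the shadow system, a PDE coupled to an ODE:

```latex
% Reaction–diffusion system on a bounded domain \Omega:
\begin{align*}
  u_t &= d_1 \Delta u + f(u, v), \\
  v_t &= d_2 \Delta v + g(u, v).
\end{align*}
% As d_2 \to \infty, v(x,t) \to \xi(t) (spatially homogeneous),
% giving the shadow system:
\begin{align*}
  u_t     &= d_1 \Delta u + f(u, \xi), \\
  \xi'(t) &= \frac{1}{|\Omega|} \int_\Omega g(u, \xi)\, dx .
\end{align*}
```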
Fractional Denoising for 3D Molecular Pre-training
Coordinate denoising is a promising 3D molecular pre-training method that has achieved remarkable performance on various downstream drug-discovery tasks. Theoretically, its objective is equivalent to learning the molecular force field, which has been shown to be helpful for downstream tasks. Nevertheless, coordinate denoising faces two challenges in learning an effective force field: low-coverage samples and an isotropic force field. The underlying reason is that the molecular distributions assumed by existing denoising methods fail to capture the anisotropic character of molecules. To tackle these challenges, we propose a novel hybrid noise strategy that perturbs both dihedral angles and coordinates. However, denoising such hybrid noise in the traditional way is no longer equivalent to learning the force field. Through theoretical deductions, we find that the problem is caused by the dependence of the noise covariance on the input conformation. To this end, we propose to decouple the two types of noise and design a novel fractional denoising method (Frad), which denoises only the coordinate part. In this way, Frad enjoys both the merit of sampling more low-energy structures and the force-field equivalence. Extensive experiments show the effectiveness of Frad in molecular representation learning, with a new state of the art on 9 of 12 tasks of QM9 and on 7 of 8 targets of MD17.
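The decoupling described above can be sketched in a few lines. The sketch below is illustrative only: the function names (`rodrigues`, `hybrid_perturb`, `frad_loss`), the noise scales, and the idea of rotating a hard-coded "branch" of atoms about an axis through the origin are all simplifying assumptions, not the paper's implementation. It shows the key point: the torsion perturbation is applied first but excluded from the regression target, so only the coordinate noise is denoised.

```python
import numpy as np

rng = np.random.default_rng(0)

def rodrigues(points, axis, angle):
    # Rodrigues' rotation of an (N, 3) point array about a unit axis
    # through the origin (a simplification -- a real torsion rotates
    # about the line of a rotatable bond).
    axis = axis / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    dot = points @ axis  # shape (N,)
    return points * c + np.cross(axis, points) * s + np.outer(dot, axis) * (1 - c)

def hybrid_perturb(coords, branch_idx, axis,
                   sigma_angle=0.5, sigma_coord=0.04):
    # Step 1: dihedral-angle noise -- rotate one branch of atoms.
    # This samples more low-energy conformations but is NOT regressed.
    angle = rng.normal(0.0, sigma_angle)
    perturbed = coords.copy()
    perturbed[branch_idx] = rodrigues(coords[branch_idx], axis, angle)
    # Step 2: coordinate noise -- the only part the model must denoise.
    coord_noise = rng.normal(0.0, sigma_coord, coords.shape)
    return perturbed + coord_noise, coord_noise

def frad_loss(predicted_noise, coord_noise):
    # Fractional denoising: regress only the coordinate-noise component,
    # preserving the equivalence to force-field learning.
    return np.mean((predicted_noise - coord_noise) ** 2)
```

In a training loop, a network would map the perturbed conformation to `predicted_noise`; because the dihedral perturbation is excluded from the target, its conformation-dependent covariance never enters the loss.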
Learn to Unlearn: A Survey on Machine Unlearning
Machine Learning (ML) models have been shown to potentially leak sensitive information, raising privacy concerns in ML-driven applications. This has inspired recent research on removing the influence of specific data samples from a trained ML model. Such efficient removal would enable ML to comply with the "right to be forgotten" enshrined in much legislation, and could also address performance bottlenecks caused by low-quality or poisoned samples. In that context, machine unlearning methods have been proposed to erase the contributions of designated data samples from trained models, as an alternative to the often impracticable approach of retraining models from scratch. This article presents a comprehensive review of recent machine unlearning techniques, verification mechanisms, and potential attacks. We further highlight emerging challenges and prospective research directions (e.g., resilience and fairness concerns). We aim for this paper to provide valuable resources for integrating privacy, equity, and resilience into ML systems and help them "learn to unlearn".
Comment: 10 pages, 5 figures, 1 table