Continual Learning, Fast and Slow
According to the Complementary Learning Systems (CLS)
theory~\cite{mcclelland1995there} in neuroscience, humans do effective
\emph{continual learning} through two complementary systems: a fast learning
system centered on the hippocampus for rapid learning of the specifics of
individual experiences, and a slow learning system located in the neocortex for
the gradual acquisition of structured knowledge about the environment.
Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a
general continual learning framework comprising a fast learning system for
supervised learning of pattern-separated representation from specific tasks and
a slow learning system for representation learning of task-agnostic general
representation via Self-Supervised Learning (SSL). DualNets can seamlessly
incorporate both representation types into a holistic framework to facilitate
better continual learning in deep neural networks. Via extensive experiments,
we demonstrate the promising results of DualNets on a wide range of continual
learning protocols, ranging from the standard offline, task-aware setting to
the challenging online, task-free scenario. Notably, on the
CTrL~\cite{veniat2020efficient} benchmark that has unrelated tasks with vastly
different visual images, DualNets can achieve competitive performance with
existing state-of-the-art dynamic architecture
strategies~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive
ablation studies to validate DualNets' efficacy, robustness, and scalability.
Code will be made available at \url{https://github.com/phquang/DualNet}.

Comment: arXiv admin note: substantial text overlap with arXiv:2110.0017
Compositional Verification of Heap-Manipulating Programs through Property-Guided Learning
Analyzing and verifying heap-manipulating programs automatically is
challenging. A key to taming this complexity is to develop compositional
methods. For instance, many existing verifiers for heap-manipulating programs
require a user-provided specification for each function in the program in order
to decompose the verification problem. This requirement, however, often deters
users from applying such tools. To overcome the issue, we propose to
automatically learn heap-related program invariants in a property-guided way
for each function call. The invariants are learned based on the memory graphs
observed during test execution and improved through memory graph mutation. We
implemented a prototype of our approach and integrated it with two existing
program verifiers. The experimental results show that our approach effectively
enhances existing verifiers in automatically verifying complex
heap-manipulating programs with multiple function calls.
S2TD: a Separation Logic Verifier that Supports Reasoning of the Absence and Presence of Bugs
Heap-manipulating programs are known to be challenging to reason about. We
present a novel verifier for heap-manipulating programs called S2TD, which
encodes programs systematically in the form of Constrained Horn Clauses (CHC)
using a novel extension of separation logic (SL) with recursive predicates and
dangling predicates. S2TD actively explores cyclic proofs to address the path
explosion problem. S2TD differentiates itself from existing CHC-based verifiers
by focusing on heap-manipulating programs and employing cyclic proofs to
efficiently verify or falsify them with counterexamples. Compared with existing
SL-based verifiers, S2TD precisely specifies the heaps of de-allocated pointers
to avoid false positives in reasoning about the presence of bugs. S2TD has been
evaluated using a comprehensive set of benchmark programs from the SV-COMP
repository. The results show that S2TD is more effective than state-of-the-art
program verifiers and is more efficient than most of them.

Comment: 24 pages
Concolic Testing Heap-Manipulating Programs
Concolic testing is a test generation technique which works effectively by
integrating random test generation and symbolic execution. Existing concolic
testing engines focus on numeric programs. Heap-manipulating programs make
extensive use of complex heap objects like trees and lists. Testing such
programs is challenging for multiple reasons. Firstly, test inputs for such
programs must satisfy non-trivial constraints which must be specified
precisely. Secondly, precisely encoding and solving path conditions in such
programs is challenging and often expensive. In this work, we propose CSF, the
first concolic testing engine for heap-manipulating programs based on
separation logic. CSF effectively combines specification-based testing and
concolic execution for test input generation. It is evaluated on a set of
challenging heap-manipulating programs. The results show that CSF generates
valid test inputs with high coverage efficiently. Furthermore, we show that CSF
can be potentially used in combination with precondition inference tools to
reduce the user effort.
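The core concolic loop — run an input concretely, record the branch decisions taken, then negate each decision and solve for an input that takes the flipped path — can be sketched as a toy for a numeric program. Everything here is an illustrative stand-in: the program, predicate table, and brute-force "solver" are hypothetical, whereas CSF works over heap inputs and separation-logic path conditions with a real solver.

```python
def program(x, trace):
    # Toy program under test; each branch records (predicate, outcome).
    if x > 10:
        trace.append(("x > 10", True))
        if x % 2 == 0:
            trace.append(("x % 2 == 0", True))
            return "even-big"
        trace.append(("x % 2 == 0", False))
        return "odd-big"
    trace.append(("x > 10", False))
    return "small"

# Predicate table the "solver" consults (mirrors the program's branches).
PREDS = {"x > 10": lambda x: x > 10, "x % 2 == 0": lambda x: x % 2 == 0}

def solve(constraints):
    # Naive stand-in for an SMT/separation-logic solver: brute-force search.
    for x in range(-50, 51):
        if all(PREDS[p](x) == want for p, want in constraints):
            return x
    return None  # path condition unsatisfiable within the search range

def concolic(seed):
    # Explore new paths by negating each branch of every observed trace.
    seen, worklist, covered = {seed}, [seed], set()
    while worklist:
        x = worklist.pop()
        trace = []
        covered.add(program(x, trace))
        for i in range(len(trace)):
            flipped = trace[:i] + [(trace[i][0], not trace[i][1])]
            nxt = solve(flipped)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return covered
```

Starting from a single seed input, the loop discovers inputs covering all three program outcomes; the hard part that CSF addresses is doing the same when "input" means a heap shape such as a tree or list rather than an integer.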
Enhancing Few-shot Image Classification with Cosine Transformer
This paper addresses the few-shot image classification problem, where the
classification task is performed on unlabeled query samples given only a small
number of labeled support samples. A major challenge of few-shot learning is
the large variety of object visual appearances, which prevents the support
samples from representing that object comprehensively. This can result in a
significant difference between support and query samples, thereby
undermining the performance of few-shot algorithms. In this paper, we
tackle the problem by proposing Few-shot Cosine Transformer (FS-CT), where the
relational map between supports and queries is effectively obtained for the
few-shot tasks. FS-CT consists of two parts: a learnable prototypical
embedding network that obtains categorical representations from support
samples, including hard cases, and a transformer encoder that computes the
relational map between support and query samples. We introduce
Cosine Attention, a more robust and stable attention module that significantly
enhances the transformer module and thereby improves FS-CT accuracy by
5% to over 20% compared to the default scaled dot-product
mechanism. Our method achieves competitive results on mini-ImageNet, CUB-200,
and CIFAR-FS in 1-shot and 5-shot learning tasks across backbones and
few-shot configurations. We also developed a custom few-shot dataset for Yoga
pose recognition to demonstrate the potential of our algorithm for practical
application. Our FS-CT with cosine attention is a lightweight, simple few-shot
algorithm that can be applied to a wide range of domains, such as
healthcare, medicine, and security surveillance. The official implementation
code of our Few-shot Cosine Transformer is available at
https://github.com/vinuni-vishc/Few-Shot-Cosine-Transforme
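The difference between scaled dot-product scores and cosine-similarity scores can be sketched for a single query in pure Python. This is a minimal sketch under assumptions: the function names are illustrative, and the actual FS-CT module operates on batched tensors with learnable projections rather than plain lists.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b, eps=1e-8):
    na = math.sqrt(dot(a, a)) + eps  # eps guards against zero-norm vectors
    nb = math.sqrt(dot(b, b)) + eps
    return dot(a, b) / (na * nb)

def scaled_dot_attention(query, keys, values):
    # Standard attention: scores are dot products scaled by sqrt(dim).
    d = len(query)
    w = softmax([dot(query, k) / math.sqrt(d) for k in keys])
    return [sum(wi * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]

def cosine_attention(query, keys, values):
    # Cosine-style attention: scores are bounded in [-1, 1], so the
    # softmax input cannot blow up with the magnitude of the features.
    w = softmax([cosine(query, k) for k in keys])
    return [sum(wi * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]
```

One consequence worth noting: cosine scores are invariant to the scale of the query, so rescaling the features leaves the attention weights essentially unchanged, whereas scaled dot-product weights can saturate as feature magnitudes grow.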
DEVELOPMENT OF POLYPYRROLE THIN FILM BASED SOLID-CONTACT ION-SELECTIVE ELECTRODES FOR NITRATE AND NITRITE
Joint Research on Environmental Science and Technology for the Earth