Automating security monitoring and analysis for Space Station Freedom's electric power system
Operating a large space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but applying them aboard Space Station Freedom would consume too much processing time. A new approach to monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is both accurate and fast, freeing Space Station Freedom's power control computers for other tasks.
NAM: Non-Adversarial Unsupervised Domain Mapping
Several methods were recently proposed for the task of translating images
between domains without prior knowledge in the form of correspondences. The
existing methods apply adversarial learning to make the distribution of the
mapped source domain indistinguishable from the target domain, an approach
that suffers from known stability issues. In addition, most methods rely
heavily on "cycle" relationships between the domains, which enforce a
one-to-one mapping.
In this work, we introduce an alternative method: Non-Adversarial Mapping
(NAM), which separates the task of target domain generative modeling from the
cross-domain mapping task. NAM relies on a pre-trained generative model of the
target domain, and aligns each source image with an image synthesized from the
target domain, while jointly optimizing the domain mapping function. It has
several key advantages: higher-quality and higher-resolution image
translations, simpler and more stable training, and reusable target models.
Extensive experiments are presented validating the advantages of our method.
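To make the alignment concrete, here is a minimal sketch of one NAM-style optimization step, assuming a frozen pre-trained target-domain generator G, per-image latent codes z, a learned target-to-source mapping T, and a plain MSE alignment loss (all names and the loss choice are illustrative assumptions, not the authors' released code):

    # Hypothetical sketch of non-adversarial alignment; not the authors' reference code.
    import torch

    def nam_step(x, z, G, T, opt):
        # x: batch of source images; z: per-image latent codes (requires_grad=True)
        # G: frozen pre-trained target-domain generator; T: learned target->source mapping
        # opt: optimizer over z and T's parameters (G is excluded)
        y = G(z)                            # synthesize target-domain analogs of x
        loss = torch.mean((T(y) - x) ** 2)  # align T(G(z)) with the source image
        opt.zero_grad()
        loss.backward()                     # gradients update z and T; G stays frozen
        opt.step()
        return loss.item()

After convergence, G(z) serves as the target-domain translation of the corresponding source image x, with no discriminator involved at any point.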
Decoupled Contrastive Learning
Contrastive learning (CL) is one of the most successful paradigms for
self-supervised learning (SSL). In a principled way, it treats two augmented
"views" of the same image as a positive pair to be pulled closer, and all
other images as negatives to be pushed further apart. However, behind the
impressive success of CL-based techniques, their formulations often rely on
computationally heavy settings, including large sample batches and long
training schedules. We
are thus motivated to tackle these issues and establish a simple, efficient,
yet competitive baseline of contrastive learning. Specifically, we identify,
from theoretical and empirical studies, a noticeable negative-positive-coupling
(NPC) effect in the widely used InfoNCE loss, which degrades learning
efficiency, especially at small batch sizes. By removing the NPC effect, we propose the
decoupled contrastive learning (DCL) loss, which removes the positive term from
the denominator and significantly improves the learning efficiency. DCL
achieves competitive performance with less sensitivity to sub-optimal
hyperparameters, requiring neither the large batches of SimCLR, the momentum
encoding of MoCo, nor long pre-training schedules. We demonstrate this
robustness across various benchmarks. Notably, SimCLR with DCL achieves 68.2%
ImageNet-1K top-1 accuracy with a batch size of 256 within 200 epochs of
pre-training, outperforming its SimCLR baseline by
6.4%. Further, DCL can be combined with NNCLR, a state-of-the-art contrastive
learning method, to reach 72.3% ImageNet-1K top-1 accuracy with a batch size
of 512 over 400 epochs, setting a new state of the art in contrastive
learning. We believe DCL provides a valuable baseline for future contrastive SSL studies.
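The decoupling itself is a one-line change to the loss. A simplified sketch, assuming L2-normalized PyTorch embeddings of two augmented views and using only cross-view negatives (the function names are illustrative, not the authors' reference implementation):

    import torch
    import torch.nn.functional as F

    def infonce_loss(z1, z2, temperature=0.1):
        # Standard InfoNCE: the positive pair also appears in the softmax denominator.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature               # (B, B) cross-view similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)

    def dcl_loss(z1, z2, temperature=0.1):
        # Decoupled variant: remove the positive term from the denominator.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature
        pos = logits.diagonal()                          # positive-pair similarities
        eye = torch.eye(z1.size(0), dtype=torch.bool, device=z1.device)
        neg = logits.masked_fill(eye, float("-inf"))     # mask positives; keep negatives
        return (-pos + torch.logsumexp(neg, dim=1)).mean()

Because the positive similarity no longer appears in the denominator, its gradient is no longer scaled down by the negative terms, which is the coupling the NPC analysis identifies.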
Pooling-Invariant Image Feature Learning
Unsupervised dictionary learning has been a key component in state-of-the-art
computer vision recognition architectures. While highly effective methods exist
for patch-based dictionary learning, these methods may learn redundant features
after the pooling stage in a given early vision architecture. In this paper, we
offer a novel dictionary learning scheme to efficiently take into account the
invariance of learned features after the spatial pooling stage. The algorithm
is built on simple clustering, and thus enjoys efficiency and scalability. We
discuss the underlying mechanism that justifies the use of clustering
algorithms, and empirically show that the algorithm finds better dictionaries
than patch-based methods with the same dictionary size.
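One plausible reading of the scheme is a two-stage pipeline, sketched below under stated assumptions: an over-complete patch dictionary is learned first, each atom is then described by its activations after spatial pooling, and atoms with redundant post-pooling behavior are merged by clustering (the function name, array shapes, and use of scikit-learn's KMeans are assumptions, not the paper's implementation):

    import numpy as np
    from sklearn.cluster import KMeans

    def merge_by_pooled_response(atoms, pooled_responses, k):
        # atoms: (n_atoms, patch_dim) over-complete patch dictionary
        # pooled_responses: (n_images, n_atoms) per-atom activations after spatial pooling
        # k: target dictionary size after merging
        profiles = pooled_responses.T        # describe each atom by its post-pooling behavior
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(profiles)
        # Atoms that become indistinguishable after pooling land in the same cluster;
        # represent each cluster by the mean of its member atoms.
        return np.stack([atoms[labels == c].mean(axis=0) for c in range(k)])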