121 research outputs found
Reduction of Electric Vehicle Electromagnetic Radiations Using a Global Network Model
To address the magnetic-emission problem of electric vehicles, a model-based improvement strategy is proposed that avoids resource-intensive experimental diagnosis and thus achieves higher efficiency. Considering the electrical and structural characteristics of electric vehicles, a network model is developed to predict magnetic emissions. It decomposes the electronic power system into a global network and external circuit nodes according to electrical size. The Z-parameter is used to characterize the global network and decompose the impedance coupling, so that the model parameters can be obtained separately with different methods. With this network model, an evaluation index is designed to measure the influence of technical factors on magnetic emissions by comprehensively considering their contributions and room for improvement. Engineers can directly determine the main interference source from this evaluation score and select a proper filter to attenuate the interference.
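As a hedged, illustrative sketch (not the paper's actual model), the Z-parameter relation V = Z·I for a multi-port network, together with a toy contribution-style evaluation index per external source, might look like this; all values and the index definition are assumptions:

```python
import numpy as np

# Hypothetical 2-port global network: Z-parameters characterize the
# coupled power-electronics network, external circuit nodes inject currents.
Z = np.array([[50.0 + 5j, 10.0 + 1j],
              [10.0 + 1j, 75.0 + 8j]])   # ohms, illustrative values
I = np.array([0.8 + 0.1j, 0.3 - 0.2j])   # amps from external circuit nodes

# Port voltages follow from the Z-parameter definition V = Z @ I.
V = Z @ I

# Toy evaluation index: each source j's share of the emission proxy
# |Z[k, j] * I[j]|, summed over ports k and normalized to 1.
contrib = np.abs(Z * I).sum(axis=0)
index = contrib / contrib.sum()
```

The source with the largest index value would then be the first candidate for filtering in this simplified picture.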
ViT-Calibrator: Decision Stream Calibration for Vision Transformer
A surge of interest has emerged in utilizing Transformers in diverse vision
tasks owing to their formidable performance. However, existing approaches
primarily focus on optimizing internal model architecture designs, which often
entails significant, burdensome trial and error. In this work, we propose
a new paradigm dubbed Decision Stream Calibration that boosts the performance
of general Vision Transformers. To achieve this, we shed light on the
information propagation mechanism in the learning procedure by exploring the
correlation between different tokens and the relevance coefficients of multiple
dimensions. Upon further analysis, we discovered that 1) the final decision
is associated with the tokens of foreground targets: features of foreground
tokens are transmitted into the next layer as much as possible, while the
useless token features of the background area are gradually eliminated during
forward propagation; 2) each category is solely associated with specific
sparse dimensions in the tokens. Based on these discoveries, we
designed a two-stage calibration scheme, ViT-Calibrator, comprising a token
propagation calibration stage and a dimension propagation calibration stage.
Extensive experiments on commonly used datasets show that the proposed approach
achieves promising results. The source code is given in the supplementary material.
Comment: 14 pages, 12 figures
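As a hedged toy sketch (not the paper's released code), the two calibration stages could be caricatured as upweighting class-relevant tokens and masking inactive feature dimensions; the scoring rule, the top-8 cutoff, and all names here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 64))   # 16 patch tokens, 64-dim features
cls_token = rng.normal(size=(64,))   # stand-in for the class token

# Stage 1 (token propagation): score each token by similarity to the
# class token and softly upweight high-relevance (foreground) tokens.
scores = tokens @ cls_token
weights = np.exp(scores - scores.max())
weights /= weights.sum()
calibrated = tokens * (1.0 + weights[:, None])   # boost relevant tokens

# Stage 2 (dimension propagation): keep only the sparse dimensions with
# the largest aggregate magnitude, zeroing the rest.
dim_score = np.abs(calibrated).sum(axis=0)
keep = dim_score >= np.sort(dim_score)[-8]       # top-8 dimensions
calibrated = calibrated * keep[None, :]
```

In this toy version, the surviving dimensions play the role of the "specific sparse dimensions" associated with a category.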
Propheter: Prophetic Teacher Guided Long-Tailed Distribution Learning
The problem of deep long-tailed learning, a prevalent challenge in the realm
of generic visual recognition, persists in a multitude of real-world
applications. To tackle the heavily-skewed dataset issue in long-tailed
classification, prior efforts have sought to augment existing deep models with
elaborate class-balancing strategies, such as class rebalancing, data
augmentation, and module improvement. Despite the encouraging performance, the
limited class knowledge of the tailed classes in the training dataset still
bottlenecks the performance of the existing deep models. In this paper, we
propose an innovative long-tailed learning paradigm that breaks the bottleneck
by guiding the learning of deep networks with external prior knowledge. This is
achieved by devising an elaborate ``prophetic'' teacher, termed
``Propheter'', that aims to learn the potential class distributions. The
target long-tailed prediction model is then optimized under the instruction of
the well-trained ``Propheter'', such that the distributions of different
classes are as distinguishable as possible from each other. Experiments on
eight long-tailed benchmarks across three architectures demonstrate that the
proposed prophetic paradigm acts as a promising solution to the challenge of
limited class knowledge in long-tailed datasets. Our code and model can be
found in the supplementary material.
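Teacher-guided optimization of this kind is often instantiated as a distillation-style loss blending hard-label cross-entropy with a soft KL term toward the teacher's class distribution. The sketch below is a generic, hedged illustration of that pattern, not the paper's exact objective; the function names and hyperparameters are assumptions:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over a logit vector."""
    z = z / t
    e = np.exp(z - z.max())
    return e / e.sum()

def teacher_guided_loss(student_logits, teacher_logits, label,
                        alpha=0.5, t=2.0):
    """Blend hard-label cross-entropy with a KL term pulling the student
    toward the teacher's (Propheter-style) class distribution."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[label])                      # hard-label term
    q_t = softmax(teacher_logits, t)              # soft teacher target
    q_s = softmax(student_logits, t)
    kl = np.sum(q_t * (np.log(q_t) - np.log(q_s)))
    return alpha * ce + (1.0 - alpha) * (t ** 2) * kl

loss = teacher_guided_loss(np.array([2.0, 0.5, -1.0]),
                           np.array([1.5, 0.8, -0.5]), label=0)
```

When the student matches the teacher exactly, the KL term vanishes and only the cross-entropy part remains.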
Interaction Pattern Disentangling for Multi-Agent Reinforcement Learning
Deep cooperative multi-agent reinforcement learning has demonstrated
remarkable success across a wide spectrum of complex control tasks. However,
recent advances in multi-agent learning mainly focus on value decomposition
while leaving entity interactions still intertwined, which easily leads to
over-fitting on noisy interactions between entities. In this work, we introduce
a novel interactiOn Pattern disenTangling (OPT) method, to disentangle not only
the joint value function into agent-wise value functions for decentralized
execution, but also the entity interactions into interaction prototypes, each
of which represents an underlying interaction pattern within a subgroup of the
entities. OPT facilitates filtering the noisy interactions between irrelevant
entities and thus significantly improves generalizability as well as
interpretability. Specifically, OPT introduces a sparse disagreement mechanism
to encourage sparsity and diversity among discovered interaction prototypes.
Then the model selectively restructures these prototypes into a compact
interaction pattern by an aggregator with learnable weights. To alleviate the
training instability issue caused by partial observability, we propose to
maximize the mutual information between the aggregation weights and the
historical behaviors of each agent. Experiments on both single-task and multi-task
benchmarks demonstrate that the proposed method yields results superior to the
state-of-the-art counterparts. Our code is available at
https://github.com/liushunyu/OPT
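The prototype aggregation step described above can be caricatured as a softmax-weighted combination of prototype vectors plus a disagreement penalty that discourages redundant prototypes. This is a hedged toy sketch under those assumptions, not OPT's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(4, 32))     # 4 interaction prototypes, dim 32
logits = np.array([1.2, -0.3, 0.4, 0.0])  # stand-in for learnable weights

# Aggregator: softmax-weighted sum restructures the prototypes into one
# compact interaction pattern.
w = np.exp(logits - logits.max())
w /= w.sum()
pattern = w @ prototypes

# Disagreement-style regularizer: penalize pairwise cosine similarity so
# the discovered prototypes stay sparse and diverse.
unit = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
sim = unit @ unit.T
disagreement_penalty = (np.abs(sim).sum() - len(sim)) / 2.0
```

Minimizing such a penalty during training would push the prototypes toward representing distinct interaction patterns.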