Efficient Robust Adaptive Beamforming Algorithms for Sensor Arrays
Sensor array processing has been an important research area in recent years.
By using a sensor array with a suitable configuration, we can improve the accuracy
of parameter estimation from the observed data in the presence of interference and
noise. In this thesis, we focus on sensor array processing techniques that use antenna
arrays for beamforming, a key task in wireless communications, radar, and sonar systems.
Firstly, we propose a low-complexity robust adaptive beamforming (RAB) technique
which estimates the steering vector using a Low-Complexity Shrinkage-Based Mismatch
Estimation (LOCSME) algorithm. The proposed LOCSME algorithm estimates the covariance
matrix of the input data and the interference-plus-noise covariance (INC) matrix
by using the Oracle Approximating Shrinkage (OAS) method. Secondly, we present
cost-effective low-rank techniques for designing RAB algorithms.
The proposed algorithms are based on the exploitation of the cross-correlation
between the array observation data and the output of the beamformer. Thirdly, we propose
distributed beamforming techniques based on wireless relay systems. We develop
algorithms that combine relay selection with signal-to-interference-plus-noise
ratio (SINR) maximization or Minimum Mean-Square-Error (MMSE) consensus, subject
to a total relay transmit power constraint. Lastly, we look into the research area of robust distributed
beamforming (RDB) and develop a novel RDB approach, termed the cross-correlation
and subspace projection (CCSP) RDB technique, which exploits the cross-correlation
between the data received at the relays and at the destination, together with a
subspace projection method, to estimate the channel errors; it efficiently
maximizes the output SINR and minimizes the channel errors. Simulation results
show that the proposed techniques outperform existing techniques across various
performance metrics.
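
Since the abstract leans on the closed-form OAS estimator, a small sketch may help make the low-complexity claim concrete. The following is not the thesis's LOCSME algorithm, only a minimal illustration: the standard OAS shrinkage formula applied to the sample covariance of array snapshots, plugged into a conventional MVDR beamformer. The array size, snapshot count, steering direction, and noise-only data are illustrative assumptions.

import numpy as np

def oas_shrinkage(X):
    """OAS-shrunk covariance estimate (Chen et al., 2010).
    X: (n, p) array of n snapshots from a p-element array.
    Returns (1 - rho) * S + rho * mu * I, with rho in [0, 1]."""
    n, p = X.shape
    S = X.conj().T @ X / n                    # sample covariance matrix
    mu = np.trace(S).real / p                 # scale of the identity target
    tr_S2 = np.trace(S @ S).real
    tr2_S = np.trace(S).real ** 2
    num = (1 - 2 / p) * tr_S2 + tr2_S
    den = (n + 1 - 2 / p) * (tr_S2 - tr2_S / p)
    rho = min(1.0, num / den)                 # oracle-approximating weight
    return (1 - rho) * S + rho * mu * np.eye(p)

def mvdr_weights(R, a):
    """Conventional MVDR/Capon weights: w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Illustrative setup: 10-element half-wavelength ULA, presumed steering
# vector at 20 degrees, 60 snapshots of unit-power noise (stand-in data).
p, n = 10, 60
a = np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(20.0)))
rng = np.random.default_rng(0)
X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
R = oas_shrinkage(X)
w = mvdr_weights(R, a)
print("beamformer output power:", (w.conj() @ R @ w).real)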
Distributional Modeling for Location-Aware Adversarial Patches
Adversarial patches are an important form of adversarial attack in the physical
world. To improve the naturalness and attack strength of existing adversarial
patches, location-aware patches have been proposed, in which the patch's location
on the target object is integrated into the optimization process. Although
effective, efficiently finding the optimal location for placing a patch is
challenging, especially in black-box attack settings. In this paper, we propose the Distribution-Optimized
Adversarial Patch (DOPatch), a novel method that optimizes a multimodal
distribution of adversarial locations instead of individual ones. DOPatch has
several benefits. Firstly, we find that the distributions of adversarial
locations across different models are quite similar, so we can mount efficient
query-based attacks on unseen models using a distributional prior optimized on
a surrogate model. Secondly, DOPatch can generate diverse adversarial samples
by characterizing the distribution of adversarial locations. Thus we can
improve the model's robustness to location-aware patches via carefully designed
Distributional-Modeling Adversarial Training (DOP-DMAT). We evaluate DOPatch on
various face recognition and image recognition tasks and demonstrate its
superiority and efficiency over existing methods. We also conduct extensive
ablation studies and analyses to validate the effectiveness of our method and
provide insights into the distribution of adversarial locations.
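
The abstract does not specify DOPatch's parameterization or optimizer, so the following is only a hedged sketch of the general idea: fitting a small Gaussian mixture over normalized patch-center locations with a score-function (REINFORCE) estimator driven by black-box attack scores. The attack_score function, component count, and learning rates are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in black-box attack objective: in practice this would paste the
# patch at location xy on the image, query the target model, and return
# an adversarial score; here it is a hypothetical placeholder.
def attack_score(xy):
    return -np.sum((xy - np.array([0.7, 0.3])) ** 2)

K = 2                                         # assumed number of mixture components
means = rng.uniform(0.2, 0.8, size=(K, 2))    # patch centers in normalized [0, 1]^2
log_std = np.full((K, 2), np.log(0.1))
logits = np.zeros(K)                          # mixture weights via softmax

lr, n_samples = 0.05, 32
for step in range(200):
    w = np.exp(logits) / np.exp(logits).sum()
    comp = rng.choice(K, size=n_samples, p=w)
    eps = rng.standard_normal((n_samples, 2))
    xy = means[comp] + np.exp(log_std[comp]) * eps
    scores = np.array([attack_score(loc) for loc in xy])
    adv = scores - scores.mean()              # baseline-subtracted REINFORCE signal
    for i in range(n_samples):
        k = comp[i]
        std = np.exp(log_std[k])
        # Score-function gradients of log-probability for the sampled component.
        means[k] += lr * adv[i] * (xy[i] - means[k]) / std ** 2 / n_samples
        log_std[k] += lr * adv[i] * (eps[i] ** 2 - 1) / n_samples
        logits += lr * adv[i] * ((np.arange(K) == k) - w) / n_samples

print("learned location modes:", means)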
Towards Viewpoint-Invariant Visual Recognition via Adversarial Training
Visual recognition models are not invariant to viewpoint changes in the 3D
world, as different viewing directions can dramatically affect the predictions
given the same object. Although many efforts have been devoted to making neural
networks invariant to 2D image translations and rotations, viewpoint invariance
is rarely investigated. As most models process images in the perspective view,
it is challenging to impose invariance to 3D viewpoint changes based only on 2D
inputs. Motivated by the success of adversarial training in promoting model
robustness, we propose Viewpoint-Invariant Adversarial Training (VIAT) to
improve viewpoint robustness of common image classifiers. By regarding
viewpoint transformation as an attack, VIAT is formulated as a minimax
optimization problem, where the inner maximization characterizes diverse
adversarial viewpoints by learning a Gaussian mixture distribution based on a
new attack GMVFool, while the outer minimization trains a viewpoint-invariant
classifier by minimizing the expected loss over the worst-case adversarial
viewpoint distributions. To further improve the generalization performance, a
distribution sharing strategy is introduced leveraging the transferability of
adversarial viewpoints across objects. Experiments validate the effectiveness
of VIAT in improving the viewpoint robustness of various image classifiers
based on the diversity of adversarial viewpoints generated by GMVFool.
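
To make the minimax structure concrete, here is a hedged structural sketch of such a training loop, not the paper's implementation: a Gaussian mixture over viewpoint parameters is ascended on the classification loss (standing in for GMVFool, which renders real objects via NeRF), and the classifier is then trained on viewpoints sampled from it. The render function, toy classifier, and dimensions are placeholders.

import torch
import torch.nn.functional as F

def render(views):
    """Placeholder differentiable 'renderer': viewpoint batch -> image batch.
    Stands in for the NeRF rendering used by GMVFool."""
    return views.sum(dim=1, keepdim=True).repeat(1, 3 * 32 * 32).view(-1, 3, 32, 32)

# Toy classifier and dummy labels; both are illustrative assumptions.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt_cls = torch.optim.SGD(classifier.parameters(), lr=1e-2)
labels = torch.zeros(8, dtype=torch.long)

K, D = 3, 2                                   # mixture components, viewpoint dims
mu = torch.zeros(K, D, requires_grad=True)    # e.g. azimuth/elevation means
log_sigma = torch.zeros(K, D, requires_grad=True)
opt_adv = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(10):
    # Inner maximization: push the mixture toward high-loss viewpoints
    # (reparameterized samples, components chosen uniformly per sample).
    comp = torch.randint(K, (8,))
    views = mu[comp] + log_sigma[comp].exp() * torch.randn(8, D)
    loss_adv = F.cross_entropy(classifier(render(views)), labels)
    opt_adv.zero_grad()
    (-loss_adv).backward()                    # ascend the classification loss
    opt_adv.step()

    # Outer minimization: train the classifier on sampled adversarial viewpoints.
    views = (mu[comp] + log_sigma[comp].exp() * torch.randn(8, D)).detach()
    loss_cls = F.cross_entropy(classifier(render(views)), labels)
    opt_cls.zero_grad()
    loss_cls.backward()
    opt_cls.step()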
Improving Viewpoint Robustness for Visual Recognition via Adversarial Training
Viewpoint invariance remains challenging for visual recognition in the 3D
world, as altering the viewing directions can significantly impact predictions
for the same object. While substantial efforts have been dedicated to making
neural networks invariant to 2D image translations and rotations, viewpoint
invariance is rarely investigated. Motivated by the success of adversarial
training in enhancing model robustness, we propose Viewpoint-Invariant
Adversarial Training (VIAT) to improve the viewpoint robustness of image
classifiers. Regarding viewpoint transformation as an attack, we formulate VIAT
as a minimax optimization problem, where the inner maximization characterizes
diverse adversarial viewpoints by learning a Gaussian mixture distribution
based on the proposed attack method GMVFool. The outer minimization obtains a
viewpoint-invariant classifier by minimizing the expected loss over the
worst-case viewpoint distributions, which can be shared across different
objects within the same category. Based on GMVFool, we contribute a large-scale
dataset called ImageNet-V+ to benchmark viewpoint robustness. Experimental
results show that VIAT significantly improves the viewpoint robustness of
various image classifiers based on the diversity of adversarial viewpoints
generated by GMVFool. Furthermore, we propose ViewRS, a certified viewpoint
robustness method that provides a certified radius and accuracy to demonstrate
the effectiveness of VIAT from the theoretical perspective.
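
The abstract does not describe ViewRS's construction, but certified radii of this kind are commonly obtained via randomized smoothing (Cohen et al., 2019); the sketch below applies that recipe to viewpoint parameters rather than pixels, purely as an assumed illustration. The predict function stands in for a classifier composed with a renderer.

import numpy as np
from scipy.stats import norm

def predict(view):
    """Stand-in hard classifier over viewpoint space; in the real setting
    this would be classifier(render(view))."""
    return 0 if view[0] < 1.0 else 1

def certify(view, sigma=0.25, n=1000, alpha=0.001, seed=0):
    """Majority vote under Gaussian viewpoint noise plus a certified
    L2 radius in viewpoint space, following the smoothing recipe."""
    rng = np.random.default_rng(seed)
    noisy = view + sigma * rng.standard_normal((n, len(view)))
    counts = np.bincount([predict(v) for v in noisy], minlength=2)
    top = int(counts.argmax())
    p_hat = counts[top] / n
    # One-sided lower confidence bound (normal approximation for brevity;
    # Clopper-Pearson would be the rigorous choice).
    p_lo = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n)
    p_lo = min(p_lo, 1 - 1e-6)                # keep ppf finite when p_hat = 1
    if p_lo <= 0.5:
        return top, 0.0                       # abstain: no certificate
    return top, sigma * norm.ppf(p_lo)

cls, radius = certify(np.array([0.2, 0.0]))
print(f"predicted class {cls}, certified viewpoint radius {radius:.3f}")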
ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints
Recent studies have demonstrated that visual recognition models lack
robustness to distribution shift. However, current work mainly considers model
robustness to 2D image transformations, leaving viewpoint changes in the 3D
world less explored. In general, viewpoint changes are prevalent in various
real-world applications (e.g., autonomous driving), making it imperative to
evaluate viewpoint robustness. In this paper, we propose a novel method called
ViewFool to find adversarial viewpoints that mislead visual recognition models.
By encoding real-world objects as neural radiance fields (NeRF), ViewFool
characterizes a distribution of diverse adversarial viewpoints under an
entropic regularizer, which helps to handle the fluctuations of the real camera
pose and mitigate the reality gap between the real objects and their neural
representations. Experiments validate that common image classifiers are
extremely vulnerable to the generated adversarial viewpoints, which also
exhibit high cross-model transferability. Based on ViewFool, we introduce
ImageNet-V, a new out-of-distribution dataset for benchmarking viewpoint
robustness of image classifiers. Evaluation results on 40 classifiers with
diverse architectures, objective functions, and data augmentations reveal a
significant drop in model performance when tested on ImageNet-V, which suggests
the possibility of leveraging ViewFool as an effective data augmentation strategy
to improve viewpoint robustness.
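
As an illustration of the core objective, the sketch below maximizes the expected classification loss over a viewpoint distribution plus an entropic regularizer. It is a hedged approximation: a single Gaussian replaces the paper's viewpoint distribution, a placeholder render replaces NeRF, and an untrained linear classifier serves as the victim model.

import torch
import torch.nn.functional as F

def render(views):
    """Placeholder for NeRF rendering: viewpoint batch -> image batch."""
    return views.sum(dim=1, keepdim=True).repeat(1, 3 * 32 * 32).view(-1, 3, 32, 32)

# Victim classifier and true label of the encoded object (both stand-ins).
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
labels = torch.zeros(8, dtype=torch.long)

mu = torch.zeros(6, requires_grad=True)       # mean of a 6-DoF camera pose
log_sigma = torch.full((6,), -1.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)
lam = 0.01                                    # weight of the entropy bonus

for step in range(100):
    views = mu + log_sigma.exp() * torch.randn(8, 6)  # reparameterized samples
    loss = F.cross_entropy(classifier(render(views)), labels)
    entropy = log_sigma.sum()                 # Gaussian entropy up to a constant
    objective = loss + lam * entropy          # expected loss + entropic regularizer
    opt.zero_grad()
    (-objective).backward()                   # gradient ascent on the objective
    opt.step()

print("adversarial viewpoint distribution mean:", mu.detach())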