Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers
Assessing the robustness of deep neural networks against out-of-distribution
inputs is crucial, especially in safety-critical domains like autonomous
driving, but also in safety systems where malicious actors can digitally alter
inputs to circumvent safety guards. However, designing effective
out-of-distribution tests that encompass all possible scenarios while
preserving accurate label information is a challenging task. Existing
methodologies often force a compromise between the variety of attacks and the
constraints imposed on them, and sometimes sacrifice both. As a first step towards a more holistic
robustness evaluation of image classification models, we introduce an attack
method based on image solarization that is conceptually straightforward yet
preserves the global structure of natural images regardless of the attack's
intensity. Through comprehensive evaluations of multiple ImageNet models, we
demonstrate the attack's capacity to degrade accuracy significantly, provided
it is not included in the training augmentations. Interestingly, even when it is,
full immunity to accuracy deterioration is not achieved. In other settings, the
attack can often be simplified into a black-box attack with model-independent
parameters. Defenses against other corruptions are not consistently effective
against our attack.
Project website: https://github.com/paulgavrikov/adversarial_solarizatio
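
The abstract does not spell out the attack procedure, but solarization is a standard point operation (pixel values above a threshold are inverted), so the black-box setting mentioned above can be sketched as a simple threshold sweep that keeps whichever solarized image most increases the classifier's loss. The ResNet-50 model, threshold grid, and helper name below are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a black-box solarization attack: sweep the
# solarization threshold and keep the candidate that most increases the
# classifier's loss on the true label. Model, grid, and names are assumed.
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def solarization_attack(image, label, thresholds=range(0, 256, 16)):
    """Return the solarized version of `image` (uint8 CHW tensor) with the highest loss."""
    worst_loss, worst_image = -float("inf"), image
    for t in thresholds:
        candidate = TF.solarize(image, threshold=t)  # invert pixel values >= t
        with torch.no_grad():
            logits = model(preprocess(candidate).unsqueeze(0))
        loss = torch.nn.functional.cross_entropy(logits, torch.tensor([label])).item()
        if loss > worst_loss:
            worst_loss, worst_image = loss, candidate
    return worst_image

If a single threshold turns out to degrade accuracy across architectures, the sweep collapses to the "model-independent parameters" setting the abstract refers to.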
Masquerade attack detection through observation planning for multi-robot systems
The increasing adoption of autonomous mobile robots comes with
a rising concern over the security of these systems. In this work, we
examine the dangers that an adversary could pose in a multi-agent
robot system. We show that conventional multi-agent plans are
vulnerable to strong attackers masquerading as a properly functioning
agent. We propose a novel technique to incorporate attack
detection into the multi-agent path-finding problem through the
simultaneous synthesis of observation plans. We show that by
specially crafting the multi-agent plan, the induced inter-agent
observations can provide introspective monitoring guarantees: any
adversarial agent that plans to break the system-wide security
specification must necessarily violate the induced observation plan.
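
The synthesis procedure itself is not given in the abstract; the sketch below only illustrates the monitoring side of the idea under assumed simplifications (a grid world, a Manhattan sensing radius, and synchronized per-timestep paths): derive which inter-agent observations the nominal plan induces, then flag any timestep at which an expected observation is not reported, as would happen when a masquerading agent deviates from its planned path.

# Illustrative sketch, not the paper's synthesis algorithm. World model,
# sensing radius, and data layout are assumptions made for this example.
from itertools import combinations

def induced_observations(paths, radius=1):
    """Map each timestep to the (observer, observed) pairs implied by the nominal plan."""
    horizon = len(next(iter(paths.values())))
    plan = {}
    for t in range(horizon):
        pairs = set()
        for a, b in combinations(paths, 2):
            (xa, ya), (xb, yb) = paths[a][t], paths[b][t]
            if abs(xa - xb) + abs(ya - yb) <= radius:  # within sensing range
                pairs.add((a, b))
                pairs.add((b, a))
        plan[t] = pairs
    return plan

def monitor(observation_plan, reported_observations):
    """Return the timesteps at which an expected observation was not reported."""
    return [t for t, expected in observation_plan.items()
            if not expected <= reported_observations.get(t, set())]

# Example: the adversarial agent deviates at t=1, so the planned mutual
# observation at that step is missing and the monitor raises an alarm.
paths = {"a1": [(0, 0), (0, 1), (0, 2)], "adv": [(1, 0), (1, 1), (1, 2)]}
plan = induced_observations(paths)
runtime = {0: plan[0], 1: set(), 2: plan[2]}  # no observations reported at t=1
print(monitor(plan, runtime))                 # -> [1]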