In many cases, adversarial attacks against fake detectors employ algorithms specifically crafted for automatic image classifiers. These algorithms perform well thanks to a well-tuned, ad hoc distribution of initial attacks. However, they are easily detected precisely because of this characteristic initial distribution. Consequently, we explore alternative black-box attacks inspired by generic black-box optimization tools, focusing in particular on the log-normal algorithm, which we successfully extend to attack fake detectors. Moreover, we demonstrate that this attack evades detection by neural networks trained to flag classical adversarial examples. Therefore, we train more general models capable of identifying a broader spectrum of attacks, including classical black-box attacks designed for images, black-box attacks driven by classical optimization, and no-box attacks. By integrating these attack-detection capabilities with fake detectors, we develop more robust and effective fake detection systems.
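To illustrate the kind of optimization-driven attack the abstract refers to, here is a minimal sketch of a (1+1) evolution strategy with log-normal step-size self-adaptation, applied as a query-only black-box attack. The detector interface `score`, the pixel range [0, 1], the L-infinity budget, and all parameter names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def lognormal_attack(image, score, eps=0.05, budget=1000, sigma0=0.01):
    """Black-box attack sketch: (1+1)-ES with log-normal mutation of sigma.

    Assumptions (hypothetical, not from the paper): `image` is a float
    array in [0, 1]; `score(x)` returns the detector's "fake" confidence,
    which the attack tries to minimize under an L-inf constraint `eps`.
    """
    d = image.size
    tau = 1.0 / np.sqrt(d)                     # standard self-adaptation rate
    sigma = sigma0
    delta = np.zeros_like(image, dtype=float)  # current perturbation
    best = score(np.clip(image + delta, 0.0, 1.0))
    for _ in range(budget):
        # Mutate the step size log-normally, then perturb with Gaussian noise.
        sigma_new = sigma * np.exp(tau * np.random.randn())
        cand = delta + sigma_new * np.random.randn(*image.shape)
        cand = np.clip(cand, -eps, eps)        # enforce the L-inf budget
        val = score(np.clip(image + cand, 0.0, 1.0))
        if val <= best:                        # keep the candidate if no worse
            delta, sigma, best = cand, sigma_new, val
    return np.clip(image + delta, 0.0, 1.0)
```

Unlike gradient-based attacks, a scheme like this only queries the detector's output and draws its perturbations from a generic, self-adapted distribution, which is consistent with the abstract's claim that such attacks are harder for detectors of classical adversarial examples to flag.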