Recent studies have shown that deep learning models are susceptible to
adversarial examples: inputs, typically images, that are intentionally modified
to fool a machine learning classifier. In this paper, we present a
multi-objective nested evolutionary algorithm to generate universal
unrestricted adversarial examples in a black-box scenario. The unrestricted
attacks are performed by applying well-known image filters that
are available in several image processing libraries, modern cameras, and mobile
applications. The multi-objective optimization takes into account not only the
attack success rate but also the detection rate. Experimental results show
that this approach is able to evolve a sequence of filters capable of
generating highly effective and undetectable attacks.
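To make the multi-objective setup concrete, the following is a minimal, self-contained sketch of the core idea: each candidate is a sequence of (filter name, strength) genes, each candidate is scored on two objectives (attack success rate to maximize, detection rate to minimize), and non-dominated candidates form the Pareto front. The objective functions here are toy placeholders, not the paper's actual classifier or detector, and all names (`random_candidate`, `pareto_front`, the filter list) are illustrative assumptions rather than the authors' implementation.

```python
import random

# Illustrative gene pool: filter names as they might appear in a
# standard image processing library (placeholder choices).
FILTER_NAMES = ["blur", "contrast", "brightness", "sharpen"]

def random_candidate(length=3, rng=random):
    """A candidate attack: an ordered sequence of (filter, strength) genes."""
    return [(rng.choice(FILTER_NAMES), rng.uniform(0.0, 1.0))
            for _ in range(length)]

def evaluate(candidate):
    """Toy surrogate objectives: stronger filtering fools the classifier
    more often (good) but is also easier to detect (bad).
    Returns (attack_success, detection_rate)."""
    mean_strength = sum(s for _, s in candidate) / len(candidate)
    attack_success = mean_strength                      # maximize
    detection_rate = max(0.0, mean_strength - 0.3)      # minimize
    return attack_success, detection_rate

def dominates(a, b):
    """a dominates b if it is no worse on both objectives
    and strictly better on at least one."""
    (sa, da), (sb, db) = a, b
    return sa >= sb and da <= db and (sa > sb or da < db)

def pareto_front(population):
    """Keep candidates not dominated by any other member."""
    fits = [(c, evaluate(c)) for c in population]
    return [c for c, f in fits
            if not any(dominates(g, f) for _, g in fits if g != f)]

# Usage: evolve-style selection over a random initial population.
rng = random.Random(0)
population = [random_candidate(rng=rng) for _ in range(20)]
front = pareto_front(population)
```

In a full nested evolutionary algorithm, the outer loop would iterate variation (mutating filter choices and strengths) and selection on this front, while an inner search could tune per-filter parameters; the two real objectives would come from querying the black-box classifier and an adversarial-example detector.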