Analysis of Dominant Classes in Universal Adversarial Perturbations
The reasons why Deep Neural Networks are susceptible to being fooled by adversarial examples remain an open discussion. Indeed, many different strategies can be employed to efficiently generate adversarial attacks, some of them relying on different theoretical justifications. Among these strategies, universal (input-agnostic) perturbations are of particular interest, due to their capability to fool a network independently of the input to which the perturbation is applied. In this work, we investigate an intriguing phenomenon of universal perturbations, which has been reported previously in the literature, yet without a proven justification: universal perturbations change the predicted classes for most inputs into one particular (dominant) class, even if this behavior is not specified during the creation of the perturbation. In order to justify the cause of this phenomenon, we propose a number of hypotheses and experimentally test them using a speech command classification problem in the audio domain as a testbed. Our analyses reveal interesting properties of universal perturbations, suggest new methods to generate such attacks, and provide an explanation of dominant classes, under both a geometric and a data-feature perspective.
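The dominant-class effect described above can be illustrated with a minimal sketch. The toy linear "classifier", the perturbation construction (pushing all inputs toward one class's weight vector), and all names below are illustrative assumptions, not the method of the paper; real universal perturbations are computed iteratively against a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier: logits = W @ x.
n_classes, dim = 5, 20
W = rng.normal(size=(n_classes, dim))

def predict(x):
    return int(np.argmax(W @ x))

# Crude input-agnostic perturbation: a single vector delta, added to every
# input, aligned with one class's weight vector (an assumed construction,
# chosen only to make the dominant-class effect visible).
target = 3
delta = 5.0 * W[target] / np.linalg.norm(W[target])

inputs = rng.normal(size=(100, dim))
clean = [predict(x) for x in inputs]          # diverse predictions
adv = [predict(x + delta) for x in inputs]    # same delta for all inputs

# Most perturbed inputs collapse into one (dominant) class.
print("dominant-class fraction:", adv.count(target) / len(inputs))
```

Even though `delta` is fixed across all inputs, the perturbed predictions concentrate on a single class, which is the phenomenon the abstract refers to as a dominant class.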