Invisible encoded backdoor attack on DNNs using conditional GAN
Date
2023-02-17
Author
Arshad, Iram
Qiao, Yuansong
Lee, Brian
Ye, Yuhang
Abstract
Deep Learning (DL) models deliver superior performance and have achieved remarkable results for classification and vision tasks. However, recent research has focused on exploring the weaknesses of these Deep Neural Networks (DNNs), which can be vulnerable when transfer learning or outsourced training data is used. This paper investigates the feasibility of generating a stealthy, invisible backdoor attack during the training phase of deep learning models. To develop the poison dataset, an interpolation technique is used to corrupt the sub-feature space of a conditional generative adversarial network. The generated poison samples are then mixed with the clean dataset to corrupt the training images. The experimental results show that by injecting a 3% poison dataset into the clean dataset, the DL models can be effectively fooled while the models retain a high degree of accuracy.
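The abstract describes the pipeline only at a high level. The sketch below is a hypothetical illustration, not the authors' implementation: it shows how a poisoned training set could be assembled by interpolating conditional-GAN latent codes and mixing the generated samples into clean data at a 3% rate. The `ToyConditionalGenerator`, latent size, target label, and interpolation coefficient are all assumptions introduced for illustration.

```python
# Hypothetical sketch (not the paper's code): interpolate conditional-GAN latent
# codes to produce poison samples, then mix ~3% of them into a clean dataset.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

LATENT_DIM, NUM_CLASSES, IMG_PIXELS = 100, 10, 32 * 32 * 3

class ToyConditionalGenerator(nn.Module):
    """Stand-in conditional generator; the paper assumes a trained cGAN."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Standard cGAN conditioning: concatenate latent code and label embedding.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

def make_poison(generator, n_poison, target_label, alpha=0.5):
    """Interpolate between two latent codes per sample to perturb the
    generator's sub-feature space, then relabel outputs to the attack target."""
    z_a = torch.randn(n_poison, LATENT_DIM)
    z_b = torch.randn(n_poison, LATENT_DIM)
    z_mix = (1 - alpha) * z_a + alpha * z_b          # linear latent interpolation
    labels = torch.randint(0, NUM_CLASSES, (n_poison,))
    with torch.no_grad():
        imgs = generator(z_mix, labels)
    poison_labels = torch.full((n_poison,), target_label)  # backdoor target class
    return TensorDataset(imgs, poison_labels)

# Placeholder clean dataset; in practice this would be the real training set.
clean_imgs = torch.rand(5000, IMG_PIXELS) * 2 - 1
clean_labels = torch.randint(0, NUM_CLASSES, (5000,))
clean_set = TensorDataset(clean_imgs, clean_labels)

gen = ToyConditionalGenerator()
n_poison = int(0.03 * len(clean_set))  # 3% poison rate, as reported in the abstract
poisoned_set = ConcatDataset([clean_set, make_poison(gen, n_poison, target_label=0)])
loader = DataLoader(poisoned_set, batch_size=64, shuffle=True)
```

Training a classifier on `loader` would then proceed as usual; under the assumptions above, the backdoor behaviour would come entirely from the 3% of relabelled, GAN-generated samples.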