# Adversarial Lab
This project is a web-based tool for visualizing and generating adversarial examples by attacking ImageNet models such as VGG, AlexNet, and ResNet.

It lets you visualize and compare various adversarial attacks on user-uploaded images through a simple interface, built on the PyTorch deep-learning framework with popular state-of-the-art pretrained models from the TorchVision model zoo. The following attacks have been implemented so far:
- FGSM
  - Fast Gradient Sign Method, untargeted (sketched below)
  - Fast Gradient Sign Method, targeted
- Iterative
  - Basic Iterative Method, untargeted (sketched below)
  - Least Likely Class Iterative Method
- DeepFool, untargeted
- LBFGS, targeted
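
For reference, the untargeted FGSM step at the top of the list can be written in a few lines of PyTorch. The sketch below is illustrative only: the function names, the `epsilon` value, and the use of an unnormalized `[0, 1]` image are assumptions, not this project's actual API.

```python
# Minimal sketch of untargeted FGSM against a pretrained TorchVision model.
# Names and hyperparameters here are hypothetical, for illustration only.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_untargeted(model, image, label, epsilon):
    """One signed-gradient ascent step on the loss w.r.t. the input."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step *up* the loss gradient to push the prediction away from `label`;
    # clamping assumes pixel values in [0, 1] (i.e. before normalization).
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)        # stand-in for a preprocessed upload
label = model(image).argmax(dim=1)        # attack the model's own prediction
adv = fgsm_untargeted(model, image, label, epsilon=0.03)
print("clean:", label.item(), "adversarial:", model(adv).argmax(dim=1).item())
```

The Basic Iterative Method is essentially the same step repeated with a smaller step size, projecting the result back into an epsilon-ball around the original image after each step. A sketch under the same assumptions:

```python
def bim_untargeted(model, image, label, epsilon, alpha=0.005, steps=10):
    """Repeated small FGSM steps, clipped to an epsilon-ball around `image`."""
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        adv = adv + alpha * adv.grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        adv = orig + (adv - orig).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)
    return adv.detach()
```

The targeted variants descend the loss toward a chosen target class instead of ascending it away from the predicted label.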
Coming soon: Carlini-Wagner L2, and many more.