advsecurenet.attacks.gradient_based package
advsecurenet.attacks.gradient_based.cw module
- class advsecurenet.attacks.gradient_based.cw.CWAttack(config: CWAttackConfig)
Bases:
AdversarialAttack
Carlini-Wagner L2 attack
- Parameters:
targeted (bool) – If True, targets the attack to the specified label. Defaults to False.
c_init (float) – Initial value of c. Defaults to 0.1.
kappa (float) – Confidence parameter for CW loss. Defaults to 0.
learning_rate (float) – Learning rate for the Adam optimizer. Defaults to 0.01.
max_iterations (int) – Maximum number of iterations for the Adam optimizer. Defaults to 10.
abort_early (bool) – If True, aborts the attack early if the loss stops decreasing. Defaults to False.
binary_search_steps (int) – Number of binary search steps. Defaults to 10.
device (torch.device) – Device to use for the attack. Defaults to “cpu”.
clip_min (float) – Minimum value of the input. Defaults to 0.
clip_max (float) – Maximum value of the input. Defaults to 1.
c_lower (float) – Lower bound for c. Defaults to 1e-6.
c_upper (float) – Upper bound for c. Defaults to 1.
patience (int) – Number of iterations to wait before aborting early. Defaults to 5.
verbose (bool) – If True, prints progress of the attack. Defaults to True.
References
[1] Carlini, Nicholas, and David Wagner. “Towards evaluating the robustness of neural networks.” 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.
- attack(model: BaseModel, x: Tensor, y: Tensor, *args, **kwargs) Tensor
Performs the Carlini-Wagner L2 attack on the specified model and input.
- Parameters:
model (BaseModel) – Model to attack.
x (torch.Tensor) – Batch of inputs to attack.
y (torch.Tensor) – Labels for the batch. If targeted is True, the attack tries to make the model predict these labels; otherwise, it tries to make the model predict any labels other than these.
- Returns:
Adversarial examples.
- Return type:
torch.Tensor
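A minimal usage sketch follows. The import path for CWAttackConfig is an assumption (only the class name appears in these docs), and the config is assumed to accept the documented parameters as keyword arguments.

```python
import torch

from advsecurenet.attacks.gradient_based.cw import CWAttack
# Assumed import path; only the class name CWAttackConfig is given above.
from advsecurenet.shared.types.configs.attack_configs import CWAttackConfig


def run_cw(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Untargeted CW-L2 attack using the documented defaults."""
    config = CWAttackConfig(
        targeted=False,
        c_init=0.1,
        binary_search_steps=10,
        max_iterations=10,
        device=torch.device("cpu"),
    )
    attack = CWAttack(config)
    # x: (batch, C, H, W) scaled to [clip_min, clip_max]; y: (batch,) labels.
    return attack.attack(model, x, y)
```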
advsecurenet.attacks.gradient_based.deepfool module
- class advsecurenet.attacks.gradient_based.deepfool.DeepFool(config: DeepFoolAttackConfig)
Bases:
AdversarialAttack
DeepFool attack
- Parameters:
num_classes (int) – Number of classes in the dataset. Defaults to 10.
overshoot (float) – Overshoot parameter. Defaults to 0.02.
max_iterations (int) – Maximum number of iterations. Defaults to 50.
device (torch.device) – Device to use for the attack. Defaults to “cpu”.
References
[1] Moosavi-Dezfooli, Seyed-Mohsen, et al. “Deepfool: a simple and accurate method to fool deep neural networks.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
- attack(model: BaseModel, x: Tensor, y: Tensor, *args, **kwargs) Tensor
Generates adversarial examples using the DeepFool attack.
- Parameters:
model (BaseModel) – The model to attack.
x (torch.Tensor) – The original input tensor. Expected shape is (batch_size, channels, height, width).
y (torch.Tensor) – The true labels for the input tensor. Expected shape is (batch_size,).
- Returns:
The adversarial example tensor.
- Return type:
torch.Tensor
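A minimal usage sketch, under the same assumption about the config's import path and keyword-argument constructor:

```python
import torch

from advsecurenet.attacks.gradient_based.deepfool import DeepFool
# Assumed import path; the docs above only name DeepFoolAttackConfig.
from advsecurenet.shared.types.configs.attack_configs import DeepFoolAttackConfig


def run_deepfool(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Perturb a batch with DeepFool using the documented defaults."""
    config = DeepFoolAttackConfig(num_classes=10, overshoot=0.02, max_iterations=50)
    attack = DeepFool(config)
    return attack.attack(model, x, y)  # x: (batch, C, H, W); y: (batch,)
```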
advsecurenet.attacks.gradient_based.fgsm module
- class advsecurenet.attacks.gradient_based.fgsm.FGSM(config: FgsmAttackConfig)
Bases:
AdversarialAttack
Fast Gradient Sign Method attack. The attack can be targeted or untargeted.
- Parameters:
epsilon (float) – The epsilon value to use for the attack. Defaults to 0.3.
device (torch.device) – Device to use for the attack. Defaults to “cpu”.
References
[1] Goodfellow, Ian J., et al. “Explaining and harnessing adversarial examples.” arXiv preprint arXiv:1412.6572 (2014).
- attack(model: BaseModel, x: Tensor, y: Tensor, *args, **kwargs) Tensor
Generates adversarial examples using the FGSM attack.
- Parameters:
model (BaseModel) – The model to attack.
x (torch.Tensor) – The original input tensor. Expected shape is (batch_size, channels, height, width).
y (torch.Tensor) – If the attack is targeted, the target labels tensor; otherwise, the original labels tensor. Expected shape is (batch_size,).
- Returns:
The adversarial example tensor.
- Return type:
torch.Tensor
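A minimal usage sketch; as above, the FgsmAttackConfig import path is an assumption:

```python
import torch

from advsecurenet.attacks.gradient_based.fgsm import FGSM
# Assumed import path; the docs above only name FgsmAttackConfig.
from advsecurenet.shared.types.configs.attack_configs import FgsmAttackConfig


def run_fgsm(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Single-step FGSM; y holds true labels (untargeted) or targets (targeted)."""
    config = FgsmAttackConfig(epsilon=0.3, device=torch.device("cpu"))
    attack = FGSM(config)
    return attack.attack(model, x, y)
```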
advsecurenet.attacks.gradient_based.lots module
- class advsecurenet.attacks.gradient_based.lots.LOTS(config: LotsAttackConfig)
Bases:
AdversarialAttack
LOTS attack
- Parameters:
deep_feature_layer (str) – The name of the layer to use for the attack.
mode (LotsAttackMode) – The mode to use for the attack. Defaults to LotsAttackMode.ITERATIVE.
epsilon (float) – The epsilon value to use for the attack. Defaults to 0.1.
learning_rate (float) – The learning rate to use for the attack. Defaults to 1./255.
max_iterations (int) – The maximum number of iterations to use for the attack. Defaults to 1000.
verbose (bool) – Whether to print progress of the attack. Defaults to True.
device (torch.device) – Device to use for the attack. Defaults to “cpu”.
References
[1] Rozsa, Andras, Manuel Günther, and Terrance E. Boult. “LOTS about attacking deep features.” 2017 IEEE International Joint Conference on Biometrics (IJCB), pages 168-176. IEEE, 2017. https://arxiv.org/abs/1611.06179
- attack(model: BaseModel, x: Tensor, y: Tensor | None = None, x_target: Tensor | None = None) Tensor
Generates adversarial examples using the LOTS attack. Depending on the configured mode, either the iterative or the single-step variant is used.
- Parameters:
model (BaseModel) – The model to attack.
x (torch.Tensor) – The original input tensor. Shape: (batch_size, channels, height, width).
y (torch.Tensor, optional) – The target classes tensor. Shape: (batch_size,).
x_target (torch.Tensor) – The target input tensor whose deep features the adversarial examples should match. Shape: (batch_size, channels, height, width).
- Returns:
The adversarial example tensor.
- Return type:
torch.Tensor
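A minimal usage sketch relying on the default iterative mode. The LotsAttackConfig import path is an assumption, and the layer name "fc1" is purely hypothetical; use a layer name from your own model.

```python
import torch

from advsecurenet.attacks.gradient_based.lots import LOTS
# Assumed import path; the docs above only name LotsAttackConfig.
from advsecurenet.shared.types.configs.attack_configs import LotsAttackConfig


def run_lots(model, x: torch.Tensor, x_target: torch.Tensor,
             y: torch.Tensor | None = None) -> torch.Tensor:
    """Iterative LOTS: drive x's deep features toward those of x_target."""
    config = LotsAttackConfig(
        deep_feature_layer="fc1",  # hypothetical layer name
        epsilon=0.1,
        learning_rate=1.0 / 255,
        max_iterations=1000,
    )
    attack = LOTS(config)
    return attack.attack(model, x, y=y, x_target=x_target)
```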
advsecurenet.attacks.gradient_based.pgd module
- class advsecurenet.attacks.gradient_based.pgd.PGD(config: PgdAttackConfig)
Bases:
AdversarialAttack
Projected Gradient Descent attack, targeted or untargeted, under the l-infinity norm.
- Parameters:
epsilon (float) – The epsilon value to use for the attack. Defaults to 0.3.
alpha (float) – The alpha value to use for the attack. Defaults to 2/255.
num_iter (int) – The number of iterations to use for the attack. Defaults to 40.
device (torch.device) – Device to use for the attack. Defaults to “cpu”.
References
[1] Madry, Aleksander, et al. “Towards deep learning models resistant to adversarial attacks.” arXiv preprint arXiv:1706.06083 (2017).
- attack(model: BaseModel, x: Tensor, y: Tensor, *args, **kwargs) Tensor
Performs the PGD attack on the specified model and input.
- Parameters:
model (BaseModel) – The model to attack.
x (torch.Tensor) – The original input tensor. Expected shape is (batch_size, channels, height, width).
y (torch.Tensor) – The true labels for an untargeted attack, or the target labels for a targeted attack. Expected shape is (batch_size,).
targeted (bool) – If True, targets the attack to the specified labels. Defaults to False.
- Returns:
The adversarial example tensor.
- Return type:
torch.Tensor
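A minimal usage sketch. The PgdAttackConfig import path is an assumption, and passing targeted as a keyword is an inference from the *args/**kwargs signature above.

```python
import torch

from advsecurenet.attacks.gradient_based.pgd import PGD
# Assumed import path; the docs above only name PgdAttackConfig.
from advsecurenet.shared.types.configs.attack_configs import PgdAttackConfig


def run_pgd(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """L-infinity PGD with the documented defaults."""
    config = PgdAttackConfig(epsilon=0.3, alpha=2 / 255, num_iter=40)
    attack = PGD(config)
    # `targeted` is documented as an attack() parameter; keyword passing
    # is assumed from the *args/**kwargs signature.
    return attack.attack(model, x, y, targeted=False)
```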