advsecurenet.utils package
advsecurenet.utils.adversarial_target_generator module
- class advsecurenet.utils.adversarial_target_generator.AdversarialTargetGenerator
Bases:
object
This module is responsible for generating target images. This is especially useful for targeted attacks when the client does not provide a target image. An example can be found in examples/attacks/targeted_attacks.ipynb.
- generate_target_images_and_labels(data: VisionDataset, targets: List | None = None, overwrite: bool | None = False) Tuple[List[Tensor], List[int]]
Generates target images and labels for the given data.
- Parameters:
data (VisionDataset) – The dataset to generate target images and labels from.
targets (Optional[List]) – The list of target labels. Defaults to None.
overwrite (Optional[bool]) – If True, overwrites the existing indices map. Defaults to False.
- Returns:
The list of target images and target labels.
- Return type:
Tuple[List[torch.Tensor], List[int]]
- generate_target_labels(data, targets: list | None = None, overwrite=False)
Generates target labels for the given data.
- Parameters:
data (torch.utils.data.Dataset) – The training data.
targets (list, optional) – The list of target labels. Defaults to None.
overwrite (bool, optional) – If True, overwrites the existing indices map. Defaults to False.
- Returns:
The list of target labels.
- Return type:
list
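Example (a minimal sketch; it assumes the generator can be constructed without arguments, and CIFAR-10 from torchvision is used purely for illustration):
>>> from torchvision import datasets, transforms
>>> from advsecurenet.utils.adversarial_target_generator import AdversarialTargetGenerator
>>> dataset = datasets.CIFAR10(root="./data", train=False, download=True, transform=transforms.ToTensor())
>>> generator = AdversarialTargetGenerator()
>>> # Pair each sample with a target image and label belonging to a different class.
>>> target_images, target_labels = generator.generate_target_images_and_labels(data=dataset)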
advsecurenet.utils.data module
- advsecurenet.utils.data.get_subset_data(data: Dataset, num_samples: int, random_seed: int | None = None) Dataset
Returns a subset of the given dataset.
- Parameters:
data (torch.utils.data.Dataset) – The dataset to get the subset from.
num_samples (int) – The number of samples to get from the dataset.
random_seed (Optional[int]) – The random seed to use for generating the subset. Defaults to None.
- Returns:
The subset of the dataset.
- Return type:
torch.utils.data.Dataset
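Example (a minimal sketch; MNIST from torchvision is used purely for illustration):
>>> from torchvision import datasets, transforms
>>> from advsecurenet.utils.data import get_subset_data
>>> full_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transforms.ToTensor())
>>> # Draw a fixed 1000-sample subset; the seed makes the selection reproducible.
>>> subset = get_subset_data(full_dataset, num_samples=1000, random_seed=42)
>>> len(subset)
1000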
- advsecurenet.utils.data.split_data(x, y, test_size=0.2, val_size=0.25, random_state=None)
Splits data into train, validation and test sets with the given ratios.
- Parameters:
x (list) – List of features.
y (list) – List of targets.
test_size (float) – Ratio of test samples.
val_size (float) – Ratio of validation samples.
random_state (int) – Random seed for reproducibility.
- Returns:
x_train (list): List of training features.
x_val (list): List of validation features.
x_test (list): List of test features.
y_train (list): List of training targets.
y_val (list): List of validation targets.
y_test (list): List of test targets.
- Return type:
tuple
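Example (a minimal sketch; the 60/20/20 interpretation in the comment assumes val_size is applied to the data remaining after the test split, a common convention):
>>> from advsecurenet.utils.data import split_data
>>> x = list(range(100))
>>> y = [i % 2 for i in x]
>>> # With the defaults, 20% goes to the test set and 25% of the remainder to validation.
>>> x_train, x_val, x_test, y_train, y_val, y_test = split_data(x, y, test_size=0.2, val_size=0.25, random_state=42)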
- advsecurenet.utils.data.unnormalize_data(data: Tensor, mean: Tensor, std: Tensor) Tensor
Unnormalizes the given data using the given mean and standard deviation.
- Parameters:
data (torch.Tensor) – The data to be unnormalized.
mean (torch.Tensor) – The mean to be used for unnormalization.
std (torch.Tensor) – The standard deviation to be used for unnormalization.
- Returns:
The unnormalized data.
- Return type:
torch.Tensor
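Example (a minimal sketch; broadcasting the per-channel statistics to shape [1, C, 1, 1] is an assumption, since the expected shapes of mean and std are not documented here):
>>> import torch
>>> from advsecurenet.utils.data import unnormalize_data
>>> mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
>>> std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
>>> normalized = torch.randn(8, 3, 32, 32)  # a batch of normalized images
>>> # Recover the original pixel scale, e.g. before visualizing adversarial examples.
>>> images = unnormalize_data(normalized, mean, std)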
advsecurenet.utils.dataclass module
- advsecurenet.utils.dataclass.filter_for_dataclass(data: dict | object, dataclass_type: type, convert: bool | None = False) dict | object
Filter a dictionary to only include keys that are valid fields of the given dataclass type.
- Parameters:
data (Union[dict, dataclass]) – The data to filter. If a dataclass instance is provided, it will be flattened first.
dataclass_type (type) – The dataclass type to filter for.
convert (Optional[bool]) – Whether to convert the filtered data back to a dataclass instance. Default is False.
- Returns:
The filtered data. If the convert flag is set to True, the filtered data will be converted to a dataclass instance.
- Return type:
dict or object
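Example (a minimal sketch; TrainerConfig is a hypothetical dataclass used only for illustration):
>>> from dataclasses import dataclass
>>> from advsecurenet.utils.dataclass import filter_for_dataclass
>>> @dataclass
... class TrainerConfig:
...     epochs: int = 10
...     lr: float = 0.001
>>> raw = {"epochs": 5, "lr": 0.01, "unknown_key": "dropped"}
>>> filtered = filter_for_dataclass(raw, TrainerConfig)              # unknown_key is filtered out
>>> config = filter_for_dataclass(raw, TrainerConfig, convert=True)  # a TrainerConfig instance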
- advsecurenet.utils.dataclass.flatten_dataclass(instance: object) dict
Recursively flatten dataclass instances into a single dictionary. Recursion is used to flatten nested dataclasses.
- Parameters:
instance (object) – The dataclass instance to flatten.
- Returns:
The flattened dataclass instance.
- Return type:
dict
- advsecurenet.utils.dataclass.is_list_of_dataclass(field_type: Type, value) bool
- advsecurenet.utils.dataclass.is_optional_type(field_type: Type) bool
- advsecurenet.utils.dataclass.merge_dataclasses(*dataclasses: object) object
Merge dataclasses into a single dataclass. Fields of later dataclasses overwrite the fields of earlier ones.
- Parameters:
dataclasses (object) – The dataclasses to merge.
- Returns:
The merged dataclass.
- Return type:
object
- advsecurenet.utils.dataclass.process_field(field_type: Type, value)
- advsecurenet.utils.dataclass.process_generic_type(origin, args, value)
- advsecurenet.utils.dataclass.process_optional_field(args, value)
- advsecurenet.utils.dataclass.recursive_dataclass_instantiation(cls: Type[T], data: dict) T
Recursively instantiate a dataclass from a dictionary. Recursion is used to instantiate nested dataclasses.
- Parameters:
cls (Type[T]) – The dataclass type to instantiate.
data (dict) – The dictionary to instantiate the dataclass from.
- Returns:
The instantiated dataclass.
- Return type:
T
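Example (a minimal sketch; OptimizerConfig and TrainingConfig are hypothetical dataclasses used only for illustration):
>>> from dataclasses import dataclass
>>> from advsecurenet.utils.dataclass import recursive_dataclass_instantiation
>>> @dataclass
... class OptimizerConfig:
...     name: str
...     lr: float
>>> @dataclass
... class TrainingConfig:
...     epochs: int
...     optimizer: OptimizerConfig
>>> data = {"epochs": 5, "optimizer": {"name": "adam", "lr": 0.001}}
>>> config = recursive_dataclass_instantiation(TrainingConfig, data)
>>> # The nested dictionary becomes an OptimizerConfig instance, not a plain dict.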
advsecurenet.utils.ddp module
- advsecurenet.utils.ddp.set_visible_gpus(gpu_ids: List[int] | None = None) None
Set the visible GPUs for the current process. If no GPU IDs are provided, all available GPUs are used.
- Parameters:
gpu_ids (Optional[List[int]]) – The list of GPU IDs to use. Defaults to None. If None, all available GPUs are used.
advsecurenet.utils.device_manager module
- class advsecurenet.utils.device_manager.DeviceManager(device: str | device, distributed_mode: bool)
Bases:
object
Device manager module for handling device placement in both single and distributed modes. In single mode, the device manager will place the tensors on the specified device. In distributed mode, the device manager will assume that the tensors are already placed correctly. This centralizes the device placement logic and makes it easier to switch between single and distributed modes.
- get_current_device()
Returns the current device. In distributed mode, it returns the device of the current process. In single mode, it returns the initialized device.
- to_device(*args)
Places the provided tensors on the correct device based on the current mode. In distributed mode, it places tensors on the device of the current process. In single mode, it places tensors on the initialized device.
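Example (a minimal sketch of single-process usage; it assumes to_device returns the placed tensors in the order they were passed):
>>> import torch
>>> from advsecurenet.utils.device_manager import DeviceManager
>>> manager = DeviceManager(device="cuda" if torch.cuda.is_available() else "cpu", distributed_mode=False)
>>> images, labels = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
>>> # In single mode the tensors are moved to the configured device.
>>> images, labels = manager.to_device(images, labels)
>>> manager.get_current_device()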
advsecurenet.utils.dot_dict module
- class advsecurenet.utils.dot_dict.DotDict
Bases:
dict
This class allows dot-notation access to dictionary attributes. The added functionality does not affect the existing dictionary methods.
Taken from: https://stackoverflow.com/a/23689767/5768407
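Example (a minimal sketch based on the referenced Stack Overflow recipe):
>>> from advsecurenet.utils.dot_dict import DotDict
>>> config = DotDict({"model": "resnet50", "epochs": 10})
>>> config.model          # dot-notation read
'resnet50'
>>> config["epochs"]      # regular dictionary access still works
10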
advsecurenet.utils.logging module
- class advsecurenet.utils.logging.LoggingConfig(log_dir: str = <factory>, log_file: str = 'advsecurenet.log', level: str = 'INFO', disable_existing_loggers: bool = False, formatters: dict = <factory>, version: int = 1)
Bases:
object
This dataclass is used to store the logging configuration.
- log_dir
The log directory.
- Type:
str
- log_file
The name of the log file.
- Type:
str
- level
The log level. The hierarchy of log levels is as follows: DEBUG < INFO < WARNING < ERROR < CRITICAL.
- Type:
str
- disable_existing_loggers
Whether to disable existing loggers.
- Type:
bool
- formatters
The log formatters.
- Type:
dict
- version
The logging configuration version.
- Type:
int
- disable_existing_loggers: bool = False
- formatters: dict
- level: str = 'INFO'
- log_dir: str
- log_file: str = 'advsecurenet.log'
- version: int = 1
- advsecurenet.utils.logging.setup_logging(config: LoggingConfig | None = None) None
Set up the logging configuration.
- Parameters:
config (Optional[LoggingConfig]) – The logging configuration. Defaults to None.
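Example (a minimal sketch; the LoggingConfig defaults are kept except for the level):
>>> import logging
>>> from advsecurenet.utils.logging import LoggingConfig, setup_logging
>>> # Raise the verbosity to DEBUG while keeping the default log directory and file.
>>> setup_logging(LoggingConfig(level="DEBUG"))
>>> logging.getLogger(__name__).debug("Logging is configured.")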
advsecurenet.utils.loss module
- advsecurenet.utils.loss.get_loss_function(criterion: str | Module, **kwargs) Module
Returns the loss function based on the given criterion string or nn.Module.
- Parameters:
criterion (str or nn.Module, optional) – The loss function. Defaults to nn.CrossEntropyLoss().
- Returns:
The loss function.
- Return type:
nn.Module
Examples
>>> get_loss_function("cross_entropy")
>>> get_loss_function(nn.CrossEntropyLoss())
advsecurenet.utils.model_utils module
- advsecurenet.utils.model_utils.download_weights(model_name: str | None = None, dataset_name: str | None = None, filename: str | None = None, save_path: str | None = '/home/runner/work/advsecurenet/advsecurenet/advsecurenet/weights') None
Downloads model weights from a remote source based on the model and dataset names.
- Parameters:
model_name (str) – The name of the model (e.g. “resnet50”).
dataset_name (str) – The name of the dataset the model was trained on (e.g. “cifar10”).
filename (str, optional) – The filename of the weights on the remote server. If provided, this will be used directly.
save_path (str, optional) – The directory to save the weights to. Defaults to weights directory.
Examples
>>> download_weights(model_name="resnet50", dataset_name="cifar10")
Downloaded weights to /home/user/advsecurenet/weights/resnet50_cifar10.pth
>>> download_weights(filename="resnet50_cifar10.pth")
Downloaded weights to /home/user/advsecurenet/weights/resnet50_cifar10.pth
- advsecurenet.utils.model_utils.load_model(model, filename, filepath=None, device: device = device(type='cpu'))
Loads the model weights from the given filepath.
- Parameters:
model (nn.Module) – The model to load the weights into.
filename (str) – The filename to load the model weights from.
filepath (str, optional) – The filepath to load the model weights from. Defaults to weights directory.
device (torch.device, optional) – The device to load the model weights to. Defaults to CPU.
- advsecurenet.utils.model_utils.save_model(model: Module, filename: str, filepath: str | None = None, distributed: bool = False)
Saves the model weights to the given filepath.
- Parameters:
model (nn.Module) – The model to save.
filename (str) – The filename to save the model weights to.
filepath (str, optional) – The filepath to save the model weights to. Defaults to weights directory.
distributed (bool, optional) – Whether the model is distributed or not. Defaults to False.
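Example (a minimal save/load round trip; the torchvision ResNet-18 model, the ./checkpoints directory, and the explicit .pth suffix are illustrative assumptions):
>>> import torch
>>> from torchvision import models
>>> from advsecurenet.utils.model_utils import load_model, save_model
>>> model = models.resnet18(num_classes=10)
>>> save_model(model, filename="resnet18_cifar10.pth", filepath="./checkpoints")
>>> # Later: restore the weights into a freshly constructed model of the same architecture.
>>> restored = models.resnet18(num_classes=10)
>>> load_model(restored, filename="resnet18_cifar10.pth", filepath="./checkpoints", device=torch.device("cpu"))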
advsecurenet.utils.network module
- advsecurenet.utils.network.find_free_port()
Find a free port on the machine.
advsecurenet.utils.normalization_layer module
- class advsecurenet.utils.normalization_layer.NormalizationLayer(mean: List[float] | Tensor, std: List[float] | Tensor)
Bases:
Module
Normalization layer that normalizes the input tensor by subtracting the mean and dividing by the standard deviation. Each channel can have its own mean and standard deviation.
- forward(x: Tensor) Tensor
Forward pass of the normalization layer. Assumes x is in shape [N, C, H, W].
- Parameters:
x (torch.Tensor) – Input tensor to normalize.
- Returns:
Normalized tensor.
- Return type:
torch.Tensor
Note
N: Batch size
C: Number of channels
H: Height of the image
W: Width of the image
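Example (a minimal sketch; prepending the layer to a torchvision classifier is one common way to let attacks work on raw [0, 1] images, and the CIFAR-10 statistics shown are illustrative):
>>> import torch
>>> from torch import nn
>>> from torchvision import models
>>> from advsecurenet.utils.normalization_layer import NormalizationLayer
>>> normalize = NormalizationLayer(mean=[0.4914, 0.4822, 0.4465], std=[0.2470, 0.2435, 0.2616])
>>> model = nn.Sequential(normalize, models.resnet18(num_classes=10))
>>> images = torch.rand(4, 3, 32, 32)  # raw images in [N, C, H, W]
>>> logits = model(images)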
advsecurenet.utils.reproducibility_utils module
- advsecurenet.utils.reproducibility_utils.numpy_random_seed(seed: int = 42) None
Sets the random seed for NumPy and its related libraries.
- Parameters:
seed (int, optional) – The random seed to use, by default 42
- advsecurenet.utils.reproducibility_utils.numpy_unseed() None
Unsets the random seed for NumPy and its related libraries.
- advsecurenet.utils.reproducibility_utils.python_random_seed(seed: int = 42) None
Sets the random seed for Python’s random module.
- Parameters:
seed (int, optional) – The random seed to use, by default 42
- advsecurenet.utils.reproducibility_utils.python_unseed() None
Unsets the random seed for Python’s random module.
- advsecurenet.utils.reproducibility_utils.set_deterministic(seed: int = 42) None
Sets the random seed for all libraries and sets the deterministic flag for CUDA.
- Parameters:
seed (int, optional) – The random seed to use. Defaults to 42.
- advsecurenet.utils.reproducibility_utils.set_nondeterministic() None
Unsets the random seed for all libraries and unsets the deterministic flag for CUDA.
- advsecurenet.utils.reproducibility_utils.set_seed(seed: int = 42) None
Sets the random seed for all libraries.
- Parameters:
seed (int, optional) – The random seed to use, by default 42
- advsecurenet.utils.reproducibility_utils.torch_random_seed(seed: int = 42) None
Sets the random seed for PyTorch and its related libraries.
- Parameters:
seed (int, optional) – The random seed to use, by default 42
- advsecurenet.utils.reproducibility_utils.torch_unseed() None
Unsets the random seed for PyTorch and its related libraries.
- advsecurenet.utils.reproducibility_utils.unique_seed() int
Generates a unique seed for each run. This combines the current time and the process ID.
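Example (a minimal sketch combining the helpers above):
>>> from advsecurenet.utils import reproducibility_utils as repro
>>> # Fully deterministic run: seeds all libraries and enables deterministic CUDA behaviour.
>>> repro.set_deterministic(seed=42)
>>> # Or seed all libraries with a value that differs on every run.
>>> repro.set_seed(seed=repro.unique_seed())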