secretflow.ml.nn.fl.backend.torch.strategy#
Classes:
PYUFedAvgW  alias of ActorProxy(PYUFedAvgW)
PYUFedAvgG  alias of ActorProxy(PYUFedAvgG)
PYUFedAvgU  alias of ActorProxy(PYUFedAvgU)
PYUFedProx  alias of ActorProxy(PYUFedProx)
PYUFedSCR  alias of ActorProxy(PYUFedSCR)
PYUFedSTC  alias of ActorProxy(PYUFedSTC)
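All of these strategies share the same client-side contract: train_step receives the aggregation payload from the parameter server (weights, gradients, or updates, depending on the strategy), runs a number of local steps, and returns its local result together with a sample count (the int in the Tuple[ndarray, int] return type) used for weighting. The following plain-NumPy sketch is illustrative only, not SecretFlow internals; the toy quadratic loss, learning rate, and batch size are invented for the example. It shows one such round trip for a weight-averaging strategy like FedAvgW.

```python
# Illustrative sketch (not SecretFlow internals): the round-trip contract shared by the
# strategies in this package, shown for a weight-averaging variant.
import numpy as np

def client_train_step(global_weights, cur_steps, train_steps, lr=0.1):
    """Toy stand-in for a client train_step: a few gradient steps on a quadratic loss."""
    w = global_weights.copy()
    for _ in range(train_steps):
        grad = 2 * (w - 1.0)           # d/dw of (w - 1)^2
        w -= lr * grad
    num_sample = train_steps * 32      # pretend batch_size = 32
    return w, num_sample

global_w = np.zeros(4)
for rnd in range(3):                   # three federated rounds, two simulated clients
    results = [client_train_step(global_w, rnd, train_steps=5) for _ in range(2)]
    client_ws, counts = zip(*results)
    global_w = np.average(client_ws, axis=0, weights=counts)   # sample-weighted average
print(global_w)
```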
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedAvgW[source]#
alias of ActorProxy(PYUFedAvgW); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_w.PYUFedAvgW below.
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedAvgG[source]#
alias of ActorProxy(PYUFedAvgG); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_g.PYUFedAvgG below.
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedAvgU[source]#
alias of ActorProxy(PYUFedAvgU); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_u.PYUFedAvgU below.
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedProx[source]#
alias of ActorProxy(PYUFedProx); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_prox.PYUFedProx below.
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedSCR[source]#
alias of ActorProxy(PYUFedSCR); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_scr.PYUFedSCR below.
- secretflow.ml.nn.fl.backend.torch.strategy.PYUFedSTC[source]#
alias of ActorProxy(PYUFedSTC); the proxied methods are listed under secretflow.ml.nn.fl.backend.torch.strategy.fed_stc.PYUFedSTC below.
secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_g#
Classes:
FedAvgG: An implementation of FedAvg, where the clients upload their accumulated gradients during the federated round to the server for averaging and update their local models using the aggregated gradients from the server in each federated round.
PYUFedAvgG: alias of ActorProxy(PYUFedAvgG)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_g.FedAvgG(builder_base: Callable[[], TorchModel], random_seed: Optional[int] = None)[source]#
Bases: BaseTorchModel
FedAvgG: An implementation of FedAvg, where the clients upload their accumulated gradients during the federated round to the server for averaging and update their local models using the aggregated gradients from the server in each federated round.
Methods:
train_step(gradients, cur_steps, ...)  Accept ps model params, then do local training
- train_step(gradients: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
gradients – global gradients from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters
- Returns:
Parameters after local training
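To make the gradient-averaging flow concrete, here is a hedged PyTorch sketch, not the library's implementation; the linear model, data, and two simulated clients are invented for illustration. Each client accumulates gradients over its local steps, the server averages them, and the aggregated gradient is applied through the client optimizer.

```python
# Hedged illustration of the FedAvgG idea (accumulated local gradients are uploaded,
# the averaged gradient is applied); this is not the library's code.
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

def local_grads(train_steps):
    """Accumulate gradients over train_steps local batches without updating weights."""
    acc = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(train_steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        for a, p in zip(acc, model.parameters()):
            a += p.grad.detach()
    return acc

# Server averages the accumulated gradients from two simulated clients ...
client_grads = [local_grads(train_steps=3) for _ in range(2)]
avg = [torch.stack(gs).mean(dim=0) for gs in zip(*client_grads)]

# ... and each client applies the aggregated gradient through its optimizer.
opt.zero_grad()
for p, g in zip(model.parameters(), avg):
    p.grad = g
opt.step()
```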
- secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_g.PYUFedAvgG[source]#
alias of ActorProxy(PYUFedAvgG)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(gradients, cur_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
wrap_local_metrics()
secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_u#
Classes:
FedAvgU: An implementation of FedAvg, where the clients upload their model updates to the server for averaging and update their local models with the aggregated updates from the server in each federated round.
PYUFedAvgU: alias of ActorProxy(PYUFedAvgU)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_u.FedAvgU(builder_base: Callable[[], TorchModel], random_seed: Optional[int] = None)[source]#
Bases: BaseTorchModel
FedAvgU: An implementation of FedAvg, where the clients upload their model updates to the server for averaging and update their local models with the aggregated updates from the server in each federated round. This paradigm acts the same as FedAvgG when using the SGD optimizer, but may not for other optimizers (e.g., Adam).
Methods:
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
- train_step(updates: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
updates – global updates from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters
- Returns:
Parameters after local training
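A hedged PyTorch sketch of the update-averaging idea follows; it is illustrative only, and the linear model, data, and Adam settings are invented. Each client uploads the weight delta produced by its local optimizer (which under Adam differs from the accumulated gradients), and the server averages and applies the deltas.

```python
# Hedged sketch of the FedAvgU idea: clients upload the weight delta produced by their
# local optimizer, the server averages the deltas and adds them to the global weights.
import copy
import torch

global_model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

def local_update(train_steps):
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(train_steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    # update = trained weights minus the global weights the client started from
    return [p_new.detach() - p_old.detach()
            for p_new, p_old in zip(model.parameters(), global_model.parameters())]

client_updates = [local_update(train_steps=3) for _ in range(2)]
avg_update = [torch.stack(us).mean(dim=0) for us in zip(*client_updates)]

with torch.no_grad():
    for p, u in zip(global_model.parameters(), avg_update):
        p += u    # apply the aggregated update to the global model
```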
- secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_u.PYUFedAvgU[source]#
alias of ActorProxy(PYUFedAvgU)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
wrap_local_metrics()
secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_w#
Classes:
FedAvgW: A naive implementation of FedAvg, where the clients upload their trained model weights to the server for averaging and update their local models via the aggregated weights from the server in each federated round.
PYUFedAvgW: alias of ActorProxy(PYUFedAvgW)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_w.FedAvgW(builder_base: Callable[[], TorchModel], random_seed: Optional[int] = None)[source]#
Bases: BaseTorchModel
FedAvgW: A naive implementation of FedAvg, where the clients upload their trained model weights to the server for averaging and update their local models via the aggregated weights from the server in each federated round.
Methods:
train_step(weights, cur_steps, train_steps, ...)  Accept ps model params, then do local training
- train_step(weights: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
weights – global weight from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters
- Returns:
Parameters after local training
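A hedged PyTorch sketch of the weight-averaging round described above; it is illustrative only, not the library's code, and the model and data are invented. Clients train locally from the same global weights, upload their trained weights, and the server averages them into the next global model.

```python
# Hedged sketch of the FedAvgW idea: clients train locally and upload full weights,
# the server averages them, and the averaged weights become the next global model.
import copy
import torch

global_model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

def local_weights(train_steps):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(train_steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return [p.detach().clone() for p in model.parameters()]

client_weights = [local_weights(train_steps=5) for _ in range(2)]
with torch.no_grad():
    for p, ws in zip(global_model.parameters(), zip(*client_weights)):
        p.copy_(torch.stack(ws).mean(dim=0))   # simple (unweighted) FedAvg
```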
- secretflow.ml.nn.fl.backend.torch.strategy.fed_avg_w.PYUFedAvgW[source]#
alias of ActorProxy(PYUFedAvgW)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(weights, cur_steps, train_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
wrap_local_metrics()
secretflow.ml.nn.fl.backend.torch.strategy.fed_prox#
Classes:
FedProx: An FL optimization strategy that addresses the challenge of heterogeneity of data (non-IID) and devices by adding a proximal term to the local objective function of each client, for better convergence.
PYUFedProx: alias of ActorProxy(PYUFedProx)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_prox.FedProx(builder_base: Callable[[], TorchModel], random_seed: Optional[int] = None)[source]#
Bases: BaseTorchModel
FedProx: An FL optimization strategy that addresses the challenge of heterogeneity of data (non-IID) and devices by adding a proximal term to the local objective function of each client, for better convergence. In the future, this strategy will allow every client to train locally with a different Gamma-inexactness, for higher training efficiency.
Methods:
w_norm(w1, w2)
train_step(weights, cur_steps, train_steps, ...)  Accept ps model params, then do local training
- train_step(weights: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
weights – global weight from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters; mu: hyper-parameter for the proximal term, default is 0.0
- Returns:
Parameters after local training
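The proximal term can be illustrated with a short PyTorch sketch; this is a sketch only, not the library's implementation, and the model, data, and mu value are invented. The client's task loss is augmented with (mu / 2) * ||w - w_global||^2 during local training, and mu = 0.0 recovers plain FedAvg.

```python
# Hedged sketch of the proximal term described above: each local step minimizes
# task_loss + (mu / 2) * ||w - w_global||^2, pulling the client model toward the
# global weights it started the round with.
import copy
import torch

global_model = torch.nn.Linear(4, 1)
model = copy.deepcopy(global_model)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
mu = 0.1

for _ in range(5):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_model.parameters()))
    (loss + 0.5 * mu * prox).backward()
    opt.step()
```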
- secretflow.ml.nn.fl.backend.torch.strategy.fed_prox.PYUFedProx[source]#
alias of ActorProxy(PYUFedProx)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(weights, cur_steps, train_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
w_norm(w1, w2)
wrap_local_metrics()
secretflow.ml.nn.fl.backend.torch.strategy.fed_scr#
Classes:
FedSCR: A structure-wise aggregation method to identify and remove redundant updates; it aggregates parameter updates over a particular structure (e.g., filters and channels).
PYUFedSCR: alias of ActorProxy(PYUFedSCR)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_scr.FedSCR(builder_base: Callable[[], TorchModel], random_seed)[source]#
Bases: BaseTorchModel
FedSCR: A structure-wise aggregation method to identify and remove redundant updates; it aggregates parameter updates over a particular structure (e.g., filters and channels). If the sum of the absolute updates of a model structure is lower than a given threshold, FedSCR treats the updates in this structure as less important and filters them out.
Methods:
__init__(builder_base, random_seed)
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
- __init__(builder_base: Callable[[], TorchModel], random_seed)[source]#
- train_step(updates: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
updates – global updates from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters; threshold: user-defined threshold controlling the selectivity of weight updates (insignificant updates are filtered out)
- Returns:
Parameters after local training
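A hedged sketch of the structure-wise filtering rule described above; it is illustrative only, and the convolutional update shape and threshold value are invented. The absolute updates are summed per output channel, and channels whose sum falls below the threshold have their updates dropped.

```python
# Hedged sketch of the FedSCR filtering rule: updates are grouped per structure (here,
# per output channel of a conv weight); if the sum of absolute updates within a structure
# is below `threshold`, the whole structure's update is dropped.
import torch

update = torch.randn(8, 3, 3, 3) * 0.01    # pretend update for a conv layer (out, in, kH, kW)
threshold = 0.2

per_channel = update.abs().sum(dim=(1, 2, 3))         # one score per output channel
mask = (per_channel >= threshold).float().view(-1, 1, 1, 1)
filtered = update * mask                               # insignificant channels are zeroed
residual = update - filtered                           # typically kept locally for later rounds
print(f"kept {int(mask.sum())} of {update.shape[0]} channels")
```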
- secretflow.ml.nn.fl.backend.torch.strategy.fed_scr.PYUFedSCR[source]#
alias of ActorProxy(PYUFedSCR)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
wrap_local_metrics()
secretflow.ml.nn.fl.backend.torch.strategy.fed_stc#
Classes:
FedSTC: Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment.
PYUFedSTC: alias of ActorProxy(PYUFedSTC)
- class secretflow.ml.nn.fl.backend.torch.strategy.fed_stc.FedSTC(builder_base: Callable[[], TorchModel], random_seed)[source]#
Bases: BaseTorchModel
FedSTC: Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment. STC applies both sparsity and binarization in both upstream (client -> server) and downstream (server -> client) communication.
Methods:
__init__(builder_base, random_seed)
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
- __init__(builder_base: Callable[[], TorchModel], random_seed)[source]#
- train_step(updates: ndarray, cur_steps: int, train_steps: int, **kwargs) Tuple[ndarray, int][source]#
Accept ps model params, then do local training.
- Parameters:
updates – global updates from params server
cur_steps – current train step
train_steps – local training steps
kwargs – strategy-specific parameters; sparsity: the ratio of masked elements, default is 0.0
- Returns:
Parameters after local training
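A hedged sketch of sparse ternary compression on a single update vector; it is illustrative only, and the vector size and sparsity value are invented. Only the top-magnitude fraction of entries is kept, and the kept entries are replaced by a shared magnitude times their sign.

```python
# Hedged sketch of the sparse ternary compression idea: keep only the top-k fraction of
# update entries by magnitude, replace them with a signed constant (the mean magnitude of
# the kept entries), and zero everything else. `sparsity` plays the role of the kwarg
# mentioned in train_step above.
import torch

update = torch.randn(1000) * 0.01
sparsity = 0.9                                   # fraction of entries that are masked out
k = max(1, int((1.0 - sparsity) * update.numel()))

topk = update.abs().topk(k)
mu_mag = topk.values.mean()                      # shared magnitude for the kept entries
compressed = torch.zeros_like(update)
compressed[topk.indices] = mu_mag * update[topk.indices].sign()
```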
- secretflow.ml.nn.fl.backend.torch.strategy.fed_stc.PYUFedSTC[source]#
alias of ActorProxy(PYUFedSTC)
Methods:
__init__(*args, **kwargs)  Abstraction device object base class.
build_dataset(x[, y, s_w, sampling_rate, ...])  build torch.dataloader
build_dataset_from_builder(dataset_builder, x)  build dataloader from a custom dataset builder
build_dataset_from_csv(csv_file_path, label)  build torch.dataloader from a csv file
evaluate([evaluate_steps])
get_rows_count(filename)
get_stop_training()
get_weights()
init_training(callbacks[, epochs, steps, ...])
load_model(model_path)  load model from a state dict; the model structure must be defined before loading
on_epoch_begin(epoch)
on_epoch_end(epoch)
on_train_begin()
on_train_end()
predict([predict_steps])
save_model(model_path)  For compatibility reasons it is recommended to save only the state dict. Ref: https://pytorch.org/docs/master/notes/serialization.html#id5
set_validation_metrics(global_metrics)
set_weights(weights)  set weights of the client model
train_step(updates, cur_steps, train_steps, ...)  Accept ps model params, then do local training
transform_metrics(logs[, stage])
wrap_local_metrics()