secretflow.ml.nn.sl
secretflow.ml.nn.sl.sl_model
Classes:
SLModel
- class secretflow.ml.nn.sl.sl_model.SLModel(base_model_dict: Dict[Device, Callable[[], tensorflow.keras.Model]] = {}, device_y: PYU = None, model_fuse: Callable[[], tensorflow.keras.Model] = None, compressor: Compressor = None, dp_strategy_dict: Dict[Device, DPStrategy] = None, random_seed: int = None, strategy='split_nn', **kwargs)[source]
Bases: object
Methods:
__init__([base_model_dict, device_y, ...]): Interface for vertical split learning.
handle_data(x[, y, sample_weight, ...])
fit(x, y[, batch_size, epochs, verbose, ...]): Vertical split learning training interface.
predict(x[, batch_size, verbose, ...]): Vertical split learning offline prediction interface.
evaluate(x, y[, batch_size, sample_weight, ...]): Vertical split learning evaluate interface.
save_model([base_model_path, ...]): Vertical split learning save model interface.
load_model([base_model_path, ...]): Vertical split learning load model interface.
export_model([base_model_path, ...]): Vertical split learning export model interface.
get_cpus()
- __init__(base_model_dict: Dict[Device, Callable[[], tensorflow.keras.Model]] = {}, device_y: PYU = None, model_fuse: Callable[[], tensorflow.keras.Model] = None, compressor: Compressor = None, dp_strategy_dict: Dict[Device, DPStrategy] = None, random_seed: int = None, strategy='split_nn', **kwargs)[source]
Interface for vertical split learning.
- base_model_dict
Base model dictionary; key is the PYU, value is the base model defined by that party.
- device_y
Defines which party's device holds the label.
- model_fuse
Fuse model definition.
- compressor
Tensor compression algorithm used to speed up cross-device transmission.
- dp_strategy_dict
Differential privacy strategy dictionary.
- random_seed
If specified, model initialization is deterministic, which ensures reproducibility.
- strategy
Strategy of split learning.
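The constructor arguments above can be wired together as in the minimal sketch below. It assumes a local two-party simulation; the party names, feature widths, and the create_base_model/create_fuse_model builders are illustrative, not part of this API.
>>> import secretflow as sf
>>> import tensorflow as tf
>>> from secretflow.ml.nn.sl.sl_model import SLModel
>>> sf.init(['alice', 'bob'], address='local')
>>> alice, bob = sf.PYU('alice'), sf.PYU('bob')
>>> def create_base_model():
>>>     # Each party's local feature extractor (illustrative architecture).
>>>     model = tf.keras.Sequential(
>>>         [tf.keras.layers.Dense(8, activation='relu', input_shape=(4,))])
>>>     model.compile(optimizer='adam', loss='binary_crossentropy')
>>>     return model
>>> def create_fuse_model():
>>>     # Fuse model on device_y: merges both parties' hidden outputs.
>>>     in_a, in_b = tf.keras.Input(shape=(8,)), tf.keras.Input(shape=(8,))
>>>     out = tf.keras.layers.Dense(1, activation='sigmoid')(
>>>         tf.keras.layers.concatenate([in_a, in_b]))
>>>     model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)
>>>     model.compile(optimizer='adam', loss='binary_crossentropy',
>>>                   metrics=['accuracy'])
>>>     return model
>>> sl_model = SLModel(
>>>     base_model_dict={alice: create_base_model, bob: create_base_model},
>>>     device_y=bob,  # bob holds the label
>>>     model_fuse=create_fuse_model,
>>>     random_seed=1234,
>>>     strategy='split_nn',
>>> )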
- handle_data(x: Union[VDataFrame, FedNdarray, List[Union[HDataFrame, VDataFrame, FedNdarray]]], y: Optional[Union[FedNdarray, VDataFrame, PYUObject]] = None, sample_weight: Optional[Union[FedNdarray, VDataFrame]] = None, batch_size=32, shuffle=False, epochs=1, stage='train', random_seed=1234, dataset_builder: Optional[Dict] = None)[source]
- fit(x: Union[VDataFrame, FedNdarray, List[Union[HDataFrame, VDataFrame, FedNdarray]]], y: Union[VDataFrame, FedNdarray, PYUObject], batch_size=32, epochs=1, verbose=1, callbacks=None, validation_data=None, shuffle=False, sample_weight=None, validation_freq=1, dp_spent_step_freq=None, dataset_builder: Optional[Callable[[List], Tuple[int, Iterable]]] = None, audit_log_dir: Optional[str] = None, audit_log_params: dict = {}, random_seed: Optional[int] = None)[source]
Vertical split learning training interface.
- Parameters:
x – Input data. It could be:
- VDataFrame: a vertically aligned dataframe.
- FedNdarray: a vertically aligned ndarray.
- List[Union[HDataFrame, VDataFrame, FedNdarray]]: a list of dataframes or ndarrays.
y – Target data. It could be a VDataFrame or FedNdarray which has only one partition, or a PYUObject.
batch_size – Number of samples per gradient update.
epochs – Number of epochs to train the model.
verbose – Verbosity mode, 0 or 1.
callbacks – List of keras.callbacks.Callback instances.
validation_data – Data on which to validate.
shuffle – Whether to shuffle the dataset.
validation_freq – Specifies how many training epochs to run before a new validation run is performed.
sample_weight – Weights for the training samples.
dp_spent_step_freq – Specifies how often, in training steps, to check the differential privacy budget.
dataset_builder – Callable function whose input is x, or [x, y] if y is set; it should return a dataset.
audit_log_dir – If audit_log_dir is set, the audit model will be enabled.
audit_log_params – Kwargs for saving the audit model, e.g. {'save_traces': True, 'save_format': 'h5'}.
random_seed – Seed for the PRNG; only affects dataset shuffling.
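A minimal training sketch based on the parameters above follows, continuing from the construction sketch; vdf, label, vdf_val and label_val are illustrative names for vertically aligned features and the single-partition label held by device_y, and capturing a per-epoch metrics history from the return value is an assumption rather than something documented above.
>>> history = sl_model.fit(
>>>     x=vdf,
>>>     y=label,
>>>     batch_size=128,
>>>     epochs=10,
>>>     shuffle=True,
>>>     validation_data=(vdf_val, label_val),
>>>     validation_freq=1,
>>>     verbose=1,
>>>     random_seed=1234,
>>> )
>>> # history (if returned) can be inspected for per-epoch metrics.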
- predict(x: Union[VDataFrame, FedNdarray, List[Union[HDataFrame, VDataFrame, FedNdarray]]], batch_size=32, verbose=0, dataset_builder: Optional[Callable[[List], Tuple[int, Iterable]]] = None, compress: bool = False)[source]
Vertical split learning offline prediction interface.
- Parameters:
x – Input data. It could be:
- VDataFrame: a vertically aligned dataframe.
- FedNdarray: a vertically aligned ndarray.
- List[Union[HDataFrame, VDataFrame, FedNdarray]]: a list of dataframes or ndarrays.
batch_size – Number of samples per batch, Int.
verbose – Verbosity mode, 0 or 1.
dataset_builder – Callable function whose input is x, or [x, y] if y is set; it should return steps_per_epoch and an iterable dataset. The dataset builder is mainly for building graph datasets.
compress – Whether to use compressor to compress cross device data.
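A minimal prediction sketch follows; vdf_test is an illustrative test set, and the assumption that predictions come back as device objects to be revealed (via sf.reveal, with secretflow imported as sf as in the construction sketch) is not stated above.
>>> y_pred = sl_model.predict(x=vdf_test, batch_size=128, verbose=0)
>>> # Reveal only if plaintext predictions are acceptable in your setting.
>>> y_pred_plain = sf.reveal(y_pred)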
- evaluate(x: Union[VDataFrame, FedNdarray, List[Union[HDataFrame, VDataFrame, FedNdarray]]], y: Union[VDataFrame, FedNdarray, PYUObject], batch_size: int = 32, sample_weight=None, verbose=1, dataset_builder: Dict = None, random_seed: int = None, compress: bool = False)[source]
Vertical split learning evaluate interface.
- Parameters:
x – Input data. It could be:
- VDataFrame: a vertically aligned dataframe.
- FedNdarray: a vertically aligned ndarray.
- List[Union[HDataFrame, VDataFrame, FedNdarray]]: a list of dataframes or ndarrays.
y – Target data. It could be a VDataFrame or FedNdarray which has only one partition, or a PYUObject.
batch_size – Integer or Dict. Number of samples per batch of computation. If unspecified, batch_size will default to 32.
sample_weight – Optional Numpy array of weights for the test samples, used for weighting the loss function.
verbose – Verbosity mode. 0 = silent, 1 = progress bar.
dataset_builder – Callable function whose input is x, or [x, y] if y is set; it should return a dataset.
random_seed – Seed for the PRNG; only affects shuffling.
compress – Whether to use compressor to compress cross device data.
- Returns:
federated evaluation result
- Return type:
metrics
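A minimal evaluation sketch follows; vdf_test and label_test are illustrative names for held-out features and labels, and the exact metric keys depend on how the fuse model was compiled.
>>> metrics = sl_model.evaluate(x=vdf_test, y=label_test, batch_size=128, verbose=1)
>>> print(metrics)  # e.g. {'loss': ..., 'accuracy': ...}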
- save_model(base_model_path: Optional[Union[str, Dict[PYU, str]]] = None, fuse_model_path: Optional[str] = None, is_test=False, **kwargs)[source]
Vertical split learning save model interface.
- Parameters:
base_model_path – Base model path; only supports a format like 'a/b/c', where c is the model name.
fuse_model_path – Fuse model path.
is_test – Whether it is test mode.
kwargs – Other arguments inherited from tf or torch.
Example
>>> save_params = {'save_traces': True,
>>>                'save_format': 'h5'}
>>> slmodel.save_model(base_model_path,
>>>                    fuse_model_path,
>>>                    is_test=True,
>>>                    **save_params)
>>> # just passing params in
>>> slmodel.save_model(base_model_path,
>>>                    fuse_model_path,
>>>                    is_test=True,
>>>                    save_traces=True,
>>>                    save_format='h5')
- load_model(base_model_path: Optional[Union[str, Dict[PYU, str]]] = None, fuse_model_path: Optional[str] = None, is_test=False, base_custom_objects=None, fuse_custom_objects=None)[source]
Vertical split learning load model interface.
- Parameters:
base_model_path – base model path
fuse_model_path – fuse model path
is_test – Whether it is test mode.
base_custom_objects – Optional dictionary mapping names (strings) to custom classes or functions of the base model to be considered during deserialization
fuse_custom_objects – Optional dictionary mapping names (strings) to custom classes or functions of the fuse model to be considered during deserialization.
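A minimal load sketch follows; alice and bob are the PYUs from the construction sketch above, and the paths, which mirror those used with save_model, are illustrative.
>>> sl_model.load_model(
>>>     base_model_path={alice: '/tmp/sl/alice/base', bob: '/tmp/sl/bob/base'},
>>>     fuse_model_path='/tmp/sl/fuse',
>>>     is_test=True,
>>> )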
- export_model(base_model_path: Optional[Union[str, Dict[PYU, str]]] = None, fuse_model_path: Optional[str] = None, save_format='tf', is_test=False, **kwargs)[source]
Vertical split learning export model interface.
- Parameters:
base_model_path – Base model path; only supports a format like 'a/b/c', where c is the model name.
fuse_model_path – Fuse model path.
save_format – The format to export to.
kwargs – Other arguments inherited from the onnx saver.
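A minimal export sketch follows; the paths are illustrative, and passing save_format='onnx' is an assumption suggested by the kwargs note above ('tf' is the documented default).
>>> sl_model.export_model(
>>>     base_model_path={alice: '/tmp/sl/alice/export', bob: '/tmp/sl/bob/export'},
>>>     fuse_model_path='/tmp/sl/fuse_export',
>>>     save_format='onnx',
>>>     is_test=True,
>>> )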
secretflow.ml.nn.sl.strategy_dispatcher
Classes:
Dispatcher
Functions:
register_strategy: register new strategy
dispatch_strategy: strategy dispatcher
- class secretflow.ml.nn.sl.strategy_dispatcher.Dispatcher[source]
Bases: object
Methods:
__init__()
register(name, check_skip_grad, cls)
dispatch(name, backend, *args, **kwargs)
- secretflow.ml.nn.sl.strategy_dispatcher.register_strategy(_cls=None, *, strategy_name=None, backend=None, check_skip_grad=False)[source]
Register a new strategy.
- Parameters:
_cls –
strategy_name – Name of the strategy.
backend – Backend of the strategy (tensorflow/torch).
check_skip_grad – Whether this strategy needs to check whether to skip the gradient.
Returns:
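The keyword-only signature above supports decorator-style registration, as in the hedged sketch below; MyStrategy is a hypothetical class, and the concrete base class a custom strategy must extend is backend-specific and not shown in this section.
>>> from secretflow.ml.nn.sl.strategy_dispatcher import register_strategy
>>> @register_strategy(strategy_name='my_split_nn', backend='tensorflow')
>>> class MyStrategy:
>>>     ...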