ez_transfer.optimizers¶
adam_weight_decay_optimizer¶
Base class to make optimizers weight decay ready.
class easytransfer.optimizers.adam_weight_decay_optimizer.AdamWeightDecayOptimizer(learning_rate, weight_decay_rate=0.0, beta_1=0.9, beta_2=0.999, epsilon=1e-06, exclude_from_weight_decay=None, name='AdamWeightDecayOptimizer')[source]¶
Bases: tensorflow.python.training.optimizer.Optimizer
A basic Adam optimizer that includes "correct" L2 weight decay.
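A minimal usage sketch in TensorFlow 1.x graph mode (not taken from the EasyTransfer docs): the toy variable, loss, and excluded name patterns are illustrative assumptions; only the constructor arguments follow the signature above.

import tensorflow as tf
from easytransfer.optimizers.adam_weight_decay_optimizer import AdamWeightDecayOptimizer

# Toy variable and loss, for illustration only.
weights = tf.get_variable("kernel", shape=[4, 2], initializer=tf.zeros_initializer())
loss = tf.reduce_sum(tf.square(weights - 1.0))

optimizer = AdamWeightDecayOptimizer(
    learning_rate=1e-4,
    weight_decay_rate=0.01,
    # Assumed: variables whose names match these patterns are skipped by the decay;
    # bias and LayerNorm parameters are the usual candidates.
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])

grads_and_vars = optimizer.compute_gradients(loss, tf.trainable_variables())
train_op = optimizer.apply_gradients(
    grads_and_vars, global_step=tf.train.get_or_create_global_step())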
lamb_weight_decay_optimizer¶
Base class to make optimizers weight decay ready.
class easytransfer.optimizers.lamb_weight_decay_optimizer.LambWeightDecayOptimizer(weight_decay_rate, exclude_from_weight_decay=None, exclude_from_layer_adaptation=None, **kwargs)[source]¶
Bases: tensorflow.python.training.adam.AdamOptimizer
apply_gradients(grads_and_vars, global_step=None, name=None, decay_var_list=None)[source]¶
Apply gradients to variables and decay the variables.
This function is the same as Optimizer.apply_gradients, except that it allows specifying the variables to decay via decay_var_list. If decay_var_list is None, all variables in var_list are decayed.
For more information see the documentation of Optimizer.apply_gradients.
Parameters:
- grads_and_vars -- List of (gradient, variable) pairs as returned by compute_gradients().
- global_step -- Optional Variable to increment by one after the variables have been updated.
- name -- Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
- decay_var_list -- Optional list of variables to decay.
Returns: An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step.
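A minimal sketch of apply_gradients with decay_var_list in TensorFlow 1.x graph mode: the toy variables and loss are illustrative, and forwarding extra keyword arguments such as learning_rate to the underlying AdamOptimizer is an assumption based on the **kwargs in the signature, not something stated in the docs.

import tensorflow as tf
from easytransfer.optimizers.lamb_weight_decay_optimizer import LambWeightDecayOptimizer

# Toy variables and loss, for illustration only.
kernel = tf.get_variable("kernel", shape=[4, 2], initializer=tf.ones_initializer())
bias = tf.get_variable("bias", shape=[2], initializer=tf.zeros_initializer())
loss = tf.reduce_sum(tf.square(kernel)) + tf.reduce_sum(tf.square(bias))

# Assumed: **kwargs (e.g. learning_rate) are passed through to AdamOptimizer.
optimizer = LambWeightDecayOptimizer(
    weight_decay_rate=0.01,
    exclude_from_weight_decay=["bias"],
    learning_rate=1e-3)

grads_and_vars = optimizer.compute_gradients(loss)
# Only the kernel is decayed here; with decay_var_list=None every variable
# in grads_and_vars would be decayed.
train_op = optimizer.apply_gradients(
    grads_and_vars,
    global_step=tf.train.get_or_create_global_step(),
    decay_var_list=[kernel])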