ez_transfer.app_zoo

Feature Extractor
class easytransfer.app_zoo.feature_extractor.BertFeatureExtractor(**kwargs)
    BERT feature extraction model (for prediction only).

    build_logits(features, mode)
        Build the BERT feature extraction graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - pooled_output (Tensor): the output after pooling, shape [None, 768]
            - all_hidden_outputs (Tensor): the last hidden outputs of the whole sequence, shape [None, seq_len, hidden_size]
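A minimal usage sketch of the two outputs. The constructor arguments and the contents of features depend on your pipeline configuration and are assumptions here, not part of the documented API:

    # Hypothetical sketch: `model_kwargs` and `features` are placeholders for
    # your own pipeline config; only build_logits and its outputs are documented.
    from easytransfer.app_zoo.feature_extractor import BertFeatureExtractor

    model_kwargs = {}  # placeholder: filled from your config in practice
    model = BertFeatureExtractor(**model_kwargs)
    pooled_output, all_hidden_outputs = model.build_logits(features, mode)
    # pooled_output:      [None, 768]                  -- one vector per example
    # all_hidden_outputs: [None, seq_len, hidden_size] -- one vector per token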
Text Classify

class easytransfer.app_zoo.text_classify.BaseTextClassify(**kwargs)
class easytransfer.app_zoo.text_classify.BertTextClassify(**kwargs)
    BERT text classification model.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "multi_label": False,
        "num_labels": 2,
        "max_num_labels": 5,
        "dropout_rate": 0.1
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT text classification graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
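Since every app_zoo model exposes its defaults through default_model_params(), a common pattern is to copy the defaults and override selected keys. A minimal sketch; how params is consumed downstream depends on your pipeline and is not shown:

    from easytransfer.app_zoo.text_classify import BertTextClassify

    params = BertTextClassify.default_model_params()
    # -> {"pretrain_model_name_or_path": "pai-bert-base-zh", "multi_label": False,
    #     "num_labels": 2, "max_num_labels": 5, "dropout_rate": 0.1}
    params["num_labels"] = 5  # e.g. switch to a 5-way classifier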
class easytransfer.app_zoo.text_classify.TextCNNClassify(**kwargs)
    TextCNN text classification model.

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the TextCNN text classification graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
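For orientation, the standard TextCNN pattern is parallel 1-D convolutions over word embeddings followed by max-over-time pooling and a dense layer. The sketch below illustrates that pattern only; it is not EasyTransfer's internal implementation, and all sizes are placeholder assumptions:

    import tensorflow as tf

    def textcnn_logits(token_ids, vocab_size=20000, embed_dim=300,
                       filter_sizes=(2, 3, 4), num_filters=100, num_labels=2):
        # token_ids: [batch, seq_len] int tensor of vocabulary indices
        emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(token_ids)
        pooled = [
            tf.reduce_max(  # max-over-time pooling per filter size
                tf.keras.layers.Conv1D(num_filters, k, activation="relu")(emb),
                axis=1)
            for k in filter_sizes
        ]
        h = tf.concat(pooled, axis=-1)               # [batch, len(filter_sizes) * num_filters]
        return tf.keras.layers.Dense(num_labels)(h)  # [batch, num_labels]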
Text Match¶
-
class
easytransfer.app_zoo.text_match.
BaseTextMatch
(**kwargs)[source]¶ Basic Text Match Model
-
static
default_model_params
()[source]¶ Get default model required parameters
Returns: key/value pair of default model required parameters Return type: default_param_dict (dict)
-
build_eval_metrics
(logits, labels)[source]¶ Building evaluation metrics while evaluating
Parameters: predict_output (tuple) -- (logits, _) Returns: - A dict with tf.metrics op
- (mse) for regression
- (accuracy, auc, f1) for binary categories
- (accuracy, macro-f1, micro-f1) for multiple categories
Return type: ret_dict (dict)
-
static
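The metric choice above depends only on the task type. A plain-Python sketch of that branching, illustrative rather than the library's code:

    def pick_metrics(num_labels, is_regression=False):
        # Mirrors the metric families listed above, per task type.
        if is_regression:
            return ["mse"]
        if num_labels == 2:
            return ["accuracy", "auc", "f1"]
        return ["accuracy", "macro-f1", "micro-f1"]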
class easytransfer.app_zoo.text_match.BertTextMatch(**kwargs)
    Text match model based on BERT-like pretrained models.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "num_labels": 2,
        "dropout_rate": 0.1
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT text match graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
class easytransfer.app_zoo.text_match.BertTextMatchTwoTower(**kwargs)
    Text match model based on BERT-like pretrained models; two-tower architecture for learning embeddings.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "num_labels": 2
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT two-tower text match graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
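The two-tower idea is to encode the two sentences independently and score the pair by the similarity of the resulting embeddings. A minimal sketch of that scoring step; the encoder internals and the exact scoring head of BertTextMatchTwoTower are not documented above, so treat this as an assumption:

    import tensorflow as tf

    def two_tower_score(emb_a, emb_b):
        # emb_a, emb_b: [batch, hidden] pooled embeddings from the two towers
        a = tf.math.l2_normalize(emb_a, axis=-1)
        b = tf.math.l2_normalize(emb_b, axis=-1)
        return tf.reduce_sum(a * b, axis=-1)  # cosine similarity, shape [batch]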
class easytransfer.app_zoo.text_match.DAMTextMatch(**kwargs)
    Text match model based on the Decomposable Attention Model (DAM).

    Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit. A Decomposable Attention Model for Natural Language Inference, EMNLP 2016.

    default_param_dict = {
        "max_vocab_size": 20000,
        "embedding_size": 300,
        "hidden_size": 200,
        "num_labels": 2,
        "first_sequence_length": 50,
        "second_sequence_length": 50,
        "pretrain_word_embedding_name_or_path": "",
        "fix_embedding": False
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the DAM text match graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
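The cited paper's attend-compare-aggregate skeleton, sketched with plain tensors. The feed-forward networks F, G, H that the paper wraps around each step, and EasyTransfer's own projection sizes, are omitted; this illustrates the attention mechanism only:

    import tensorflow as tf

    def dam_sketch(a, b, num_labels=2):
        # a: [batch, len_a, d], b: [batch, len_b, d] word embeddings
        scores = tf.matmul(a, b, transpose_b=True)           # [batch, len_a, len_b]
        beta = tf.matmul(tf.nn.softmax(scores, axis=2), b)   # b aligned to each a-token
        alpha = tf.matmul(tf.nn.softmax(scores, axis=1), a,  # a aligned to each b-token
                          transpose_a=True)
        v1 = tf.reduce_sum(tf.concat([a, beta], axis=-1), axis=1)   # aggregate over tokens
        v2 = tf.reduce_sum(tf.concat([b, alpha], axis=-1), axis=1)
        return tf.keras.layers.Dense(num_labels)(tf.concat([v1, v2], axis=-1))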
class easytransfer.app_zoo.text_match.DAMPlusTextMatch(**kwargs)
    Text match model based on the DAM Plus model (Alibaba PAI Group).

    default_param_dict = {
        "max_vocab_size": 20000,
        "embedding_size": 300,
        "hidden_size": 200,
        "num_labels": 2,
        "first_sequence_length": 50,
        "second_sequence_length": 50,
        "pretrain_word_embedding_name_or_path": "",
        "fix_embedding": False
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the DAM Plus text match graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
class easytransfer.app_zoo.text_match.BiCNNTextMatch(**kwargs)
    Text match model based on the BiCNN model (Alibaba PAI Group).

    default_param_dict = {
        "max_vocab_size": 20000,
        "embedding_size": 300,
        "hidden_size": 200,
        "num_labels": 2,
        "first_sequence_length": 50,
        "second_sequence_length": 50,
        "pretrain_word_embedding_name_or_path": "",
        "fix_embedding": False
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BiCNN text match graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, num_labels]
            - label_ids (Tensor): label ids, shape [None]
class easytransfer.app_zoo.text_match.HCNNTextMatch(**kwargs)
    Text match model based on the Hybrid Context CNN (HCNN).

    Minghui Qiu, Yang Liu, Feng Ji, Wei Zhou, Jun Huang, et al. Transfer Learning for Context-Aware Question Matching in Information-seeking Conversation Systems, ACL 2018.

    default_param_dict = {
        "max_vocab_size": 20000,
        "embedding_size": 300,
        "hidden_size": 300,
        "num_labels": 2,
        "first_sequence_length": 64,
        "second_sequence_length": 64,
        "pretrain_word_embedding_name_or_path": "",
        "fix_embedding": False,
        "l2_reg": 0.0004,
        "filter_size": 4
    }
Sequence Labeling

class easytransfer.app_zoo.sequence_labeling.BaseSequenceLabeling(**kwargs)

    build_eval_metrics(logits, labels)
        Build the evaluation metrics used during evaluation.

        Parameters:
            - logits (Tensor): shape [None, seq_length, num_labels]
            - labels (Tensor): shape [None, seq_length]

        Returns:
            ret_dict (dict): a dict of tf.metrics ops (py_accuracy, py_micro_f1, py_macro_f1)
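A quick shape check for these metrics: per-token predictions come from an argmax over the last logits axis and are compared element-wise against the label ids. A self-contained numpy illustration with assumed sizes:

    import numpy as np

    logits = np.random.randn(8, 128, 9)              # [batch, seq_length, num_labels] (assumed sizes)
    labels = np.random.randint(0, 9, size=(8, 128))  # [batch, seq_length]
    predictions = logits.argmax(axis=-1)             # [8, 128], same shape as labels
    token_accuracy = (predictions == labels).mean()  # scalar, analogous to py_accuracy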
class easytransfer.app_zoo.sequence_labeling.BertSequenceLabeling(**kwargs)
    BERT sequence labeling model.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "dropout_rate": 0.1
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT sequence labeling graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (Tensor): the output of the last dense layer, shape [None, sequence_length, num_labels]
            - label_ids (Tensor): label ids, shape [None, sequence_length]
Text Comprehension

class easytransfer.app_zoo.text_comprehension.BaseTextComprehension(**kwargs)
class easytransfer.app_zoo.text_comprehension.BERTTextComprehension(**kwargs)
    BERT text comprehension model.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "multi_label": False,
        "num_labels": 2,
        "max_query_length": 64,
        "doc_stride": 128
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT text comprehension graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (tuple): (start_logits, end_logits), the outputs of the last dense layer; two tensors of shape [None, num_labels]
            - label_ids (tuple): (start_positions, end_positions); two tensors of shape [None]
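Downstream, (start_logits, end_logits) are typically decoded into an answer span by picking the best-scoring (start, end) pair with start <= end. A sketch of that standard post-processing, assuming per-token start/end scores for a single example; this decoding step is not part of the documented API:

    import numpy as np

    def best_span(start_logits, end_logits, max_answer_len=30):
        # start_logits, end_logits: [seq_len] scores for one example
        best, best_score = (0, 0), -np.inf
        for s in range(len(start_logits)):
            for e in range(s, min(s + max_answer_len, len(end_logits))):
                score = start_logits[s] + end_logits[e]
                if score > best_score:
                    best_score, best = score, (s, e)
        return best  # (start_index, end_index) of the predicted answer span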
class easytransfer.app_zoo.text_comprehension.BERTTextHAEComprehension(**kwargs)
    BERT-HAE text comprehension model.

    default_param_dict = {
        "pretrain_model_name_or_path": "pai-bert-base-zh",
        "multi_label": False,
        "num_labels": 2,
        "dropout_rate": 0.1,
        "max_query_length": 64
    }

    static default_model_params()
        Get the model's default parameters.

        Returns:
            default_param_dict (dict): key/value pairs of the model's default parameters

    build_logits(features, mode=None)
        Build the BERT-HAE text comprehension graph.

        Parameters:
            - features (OrderedDict): a dict mapping raw inputs to tensors
            - mode (bool): tells the model whether it is in training mode

        Returns:
            - logits (tuple): (start_logits, end_logits), the outputs of the last dense layer; two tensors of shape [None, num_labels]
            - label_ids (tuple): (start_positions, end_positions); two tensors of shape [None]