ez_transfer.model_zoo

bert
class easytransfer.model_zoo.modeling_bert.BertConfig(vocab_size, hidden_size, intermediate_size, num_hidden_layers, num_attention_heads, max_position_embeddings, type_vocab_size, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, initializer_range=0.02, **kwargs)

Configuration for BERT.
Parameters:
vocab_size -- Vocabulary size of input_ids in BertModel.
hidden_size -- Size of the encoder layers and the pooler layer.
num_hidden_layers -- Number of hidden layers in the Transformer encoder.
num_attention_heads -- Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size -- The size of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_dropout_prob -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob -- The dropout ratio for the attention probabilities.
max_position_embeddings -- The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size -- The vocabulary size of the token_type_ids passed into BertModel.
initializer_range -- The standard deviation of the truncated_normal_initializer used for initializing all weight matrices.
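To make these fields concrete, here is a minimal, library-independent sketch that mirrors the BertConfig fields with the widely published BERT-base hyperparameters. This is an illustrative stand-in, not EasyTransfer's actual class; the `head_size` helper is my own addition showing the usual constraint that hidden_size divides evenly across attention heads.

```python
from dataclasses import dataclass

@dataclass
class BertConfigSketch:
    # Mirrors the BertConfig fields above; defaults follow the
    # published BERT-base (uncased, English) hyperparameters.
    vocab_size: int = 30522
    hidden_size: int = 768
    intermediate_size: int = 3072
    num_hidden_layers: int = 12
    num_attention_heads: int = 12
    max_position_embeddings: int = 512
    type_vocab_size: int = 2
    hidden_dropout_prob: float = 0.1
    attention_probs_dropout_prob: float = 0.1
    initializer_range: float = 0.02

    def head_size(self) -> int:
        # hidden_size must divide evenly across the attention heads.
        assert self.hidden_size % self.num_attention_heads == 0
        return self.hidden_size // self.num_attention_heads

config = BertConfigSketch()
per_head = config.head_size()  # 768 // 12 == 64
```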
class easytransfer.model_zoo.modeling_bert.BertPreTrainedModel(config, **kwargs)

config_class
    alias of BertConfig
call(inputs, masked_lm_positions=None, **kwargs)

Parameters:
inputs -- [input_ids, input_mask, segment_ids]
masked_lm_positions -- Positions of the masked tokens; pass these when computing masked-LM outputs.

Returns:
sequence_output, pooled_output
Examples:

Supported pretrained model names:
google-bert-tiny-zh, google-bert-tiny-en, google-bert-small-zh, google-bert-small-en,
google-bert-base-zh, google-bert-base-en, google-bert-large-zh, google-bert-large-en,
pai-bert-tiny-zh, pai-bert-tiny-en, pai-bert-small-zh, pai-bert-small-en,
pai-bert-base-zh, pai-bert-base-en, pai-bert-large-zh, pai-bert-large-en

    model = model_zoo.get_pretrained_model('google-bert-base-zh')
    outputs = model([input_ids, input_mask, segment_ids], mode=mode)
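The `[input_ids, input_mask, segment_ids]` triple follows the standard BERT input convention. The helper below is an illustrative sketch (not part of model_zoo) showing how the mask and segment ids are typically derived for a padded single-sentence input; the function name and the padding id 0 are assumptions.

```python
def build_bert_inputs(token_ids, max_seq_length):
    """Pad a list of token ids and derive input_mask / segment_ids.

    input_mask is 1 for real tokens and 0 for padding;
    segment_ids is all 0 for a single-sentence input.
    """
    if len(token_ids) > max_seq_length:
        token_ids = token_ids[:max_seq_length]
    pad_len = max_seq_length - len(token_ids)
    input_ids = list(token_ids) + [0] * pad_len
    input_mask = [1] * len(token_ids) + [0] * pad_len
    segment_ids = [0] * max_seq_length
    return input_ids, input_mask, segment_ids

# e.g. [CLS]-token-[SEP] padded to length 8
ids, mask, segs = build_bert_inputs([101, 2023, 102], 8)
```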
roberta
RoBERTa reuses the BERT configuration class, easytransfer.model_zoo.modeling_bert.BertConfig, documented in the bert section above.
class easytransfer.model_zoo.modeling_roberta.RobertaPreTrainedModel(config, **kwargs)

config_class
    alias of BertConfig
call(inputs, masked_lm_positions=None, **kwargs)

Parameters:
inputs -- [input_ids, input_mask, segment_ids]
masked_lm_positions -- Positions of the masked tokens; pass these when computing masked-LM outputs.

Returns:
sequence_output, pooled_output
Examples:

Supported pretrained model names:
hit-roberta-base-zh, hit-roberta-large-zh, pai-roberta-base-zh, pai-roberta-large-zh

    model = model_zoo.get_pretrained_model('hit-roberta-base-zh')
    outputs = model([input_ids, input_mask, segment_ids], mode=mode)
albert
class easytransfer.model_zoo.modeling_albert.AlbertConfig(vocab_size, embedding_size, hidden_size, intermediate_size, num_hidden_layers, num_attention_heads, max_position_embeddings, type_vocab_size, hidden_dropout_prob=0, attention_probs_dropout_prob=0, initializer_range=0.02, **kwargs)

Configuration for ALBERT.
Parameters:
vocab_size -- Vocabulary size of input_ids in AlbertModel.
embedding_size -- Size of the vocabulary embeddings (factorized separately from hidden_size).
hidden_size -- Size of the encoder layers and the pooler layer.
num_hidden_layers -- Number of hidden layers in the Transformer encoder.
num_attention_heads -- Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size -- The size of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_dropout_prob -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob -- The dropout ratio for the attention probabilities.
max_position_embeddings -- The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size -- The vocabulary size of the token_type_ids passed into AlbertModel.
initializer_range -- The standard deviation of the truncated_normal_initializer used for initializing all weight matrices.
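The separate embedding_size is what enables ALBERT's factorized embedding parameterization: token ids are embedded at embedding_size and then projected up to hidden_size, which shrinks the embedding table. A back-of-the-envelope sketch of the savings, using illustrative round numbers (30k vocabulary, the commonly cited embedding_size of 128):

```python
def embedding_params(vocab_size, hidden_size, embedding_size=None):
    # BERT-style table: one vocab_size x hidden_size matrix.
    if embedding_size is None:
        return vocab_size * hidden_size
    # ALBERT factorization: a vocab_size x embedding_size table
    # plus an embedding_size x hidden_size projection.
    return vocab_size * embedding_size + embedding_size * hidden_size

bert_style = embedding_params(30000, 768)                        # 23,040,000
albert_style = embedding_params(30000, 768, embedding_size=128)  # 3,938,304
```

With these numbers the factorization cuts embedding parameters by roughly a factor of six.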
class easytransfer.model_zoo.modeling_albert.AlbertPreTrainedModel(config, **kwargs)

config_class
    alias of AlbertConfig
call(inputs, masked_lm_positions=None, **kwargs)

Parameters:
inputs -- [input_ids, input_mask, segment_ids]
masked_lm_positions -- Positions of the masked tokens; pass these when computing masked-LM outputs.

Returns:
sequence_output, pooled_output
Examples:

Supported pretrained model names:
google-albert-base-zh, google-albert-base-en, google-albert-large-zh, google-albert-large-en,
google-albert-xlarge-zh, google-albert-xlarge-en, google-albert-xxlarge-zh, google-albert-xxlarge-en,
pai-albert-base-zh, pai-albert-base-en, pai-albert-large-zh, pai-albert-large-en,
pai-albert-xlarge-zh, pai-albert-xlarge-en, pai-albert-xxlarge-zh, pai-albert-xxlarge-en

    model = model_zoo.get_pretrained_model('google-albert-base-zh')
    outputs = model([input_ids, input_mask, segment_ids], mode=mode)
imagebert
class easytransfer.model_zoo.modeling_imagebert.ImageBertConfig(vocab_size, hidden_size, intermediate_size, num_hidden_layers, num_attention_heads, max_position_embeddings, type_vocab_size, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, initializer_range=0.02, patch_type_vocab_size=2, patch_feature_size=2048, max_patch_position_embeddings=64, **kwargs)

Configuration for ImageBERT.
Parameters:
vocab_size -- Vocabulary size of input_ids in BertModel.
hidden_size -- Size of the encoder layers and the pooler layer.
num_hidden_layers -- Number of hidden layers in the Transformer encoder.
num_attention_heads -- Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size -- The size of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_dropout_prob -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob -- The dropout ratio for the attention probabilities.
max_position_embeddings -- The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size -- The vocabulary size of the token_type_ids passed into BertModel.
initializer_range -- The standard deviation of the truncated_normal_initializer used for initializing all weight matrices.
patch_type_vocab_size -- The vocabulary size of the patch type ids (the image-side counterpart of type_vocab_size).
patch_feature_size -- Dimensionality of each input image patch feature vector.
max_patch_position_embeddings -- The maximum number of image patch positions the model supports.
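The patch-side parameters imply the expected shapes of the image inputs passed to call() below: image_feature carries up to max_patch_position_embeddings patch vectors of patch_feature_size floats each, and image_mask marks which patch slots hold real patches. A small shape-checking sketch (illustrative; the function and argument names are mine, not EasyTransfer API):

```python
def check_image_inputs(image_feature, image_mask,
                       patch_feature_size=2048,
                       max_patch_positions=64):
    """Validate a [num_patches, patch_feature_size] feature matrix
    against its 0/1 patch mask, per the config defaults above."""
    num_patches = len(image_feature)
    assert num_patches <= max_patch_positions
    assert all(len(row) == patch_feature_size for row in image_feature)
    assert len(image_mask) == num_patches
    assert all(m in (0, 1) for m in image_mask)
    return num_patches

# four patch slots, the last one masked out as padding
feats = [[0.0] * 2048 for _ in range(4)]
mask = [1, 1, 1, 0]
n = check_image_inputs(feats, mask)
```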
class easytransfer.model_zoo.modeling_imagebert.ImageBertPreTrainedModel(config, **kwargs)

config_class
    alias of ImageBertConfig
call(input_ids, input_mask=None, segment_ids=None, masked_lm_positions=None, image_feature=None, image_mask=None, masked_patch_positions=None, **kwargs)

Examples:

    model = model_zoo.get_pretrained_model('icbu-imagebert-small-en')
    mlm_logits, nsp_logits, mpm_logits, target_raw_patch_features = model(
        input_ids,
        input_mask=input_mask,
        segment_ids=token_type_ids,
        image_feature=image_feature,
        image_mask=image_mask,
        masked_lm_positions=lm_positions,
        masked_patch_positions=masked_patch_positions,
        output_features=False,
        mode=mode)