models.embeddings
class TokenAndPositionEmbedding(keras.src.layers.layer.Layer):
Token and Position Embedding layer for Transformer models.
This layer combines token embeddings and positional embeddings to provide input embeddings for Transformer models.
Parameters:
- maxlen (int): Maximum length of the input sequence.
- vocab_size (int): Size of the vocabulary.
- embed_dim (int): Dimensionality of the embedding vectors.
Example:
>>> embed = TokenAndPositionEmbedding(64, 8008, 1280)
>>> input_sentence = keras.ops.ones((1, 10), dtype="int32")  # token indices
>>> output = embed(input_sentence)
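In this example, output has shape (1, 10, 1280): batch size 1, sequence length 10, and one embed_dim-sized vector per input token. Note that the sequence length (10) only needs to be at most maxlen (64).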
TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
Initializes the TokenAndPositionEmbedding layer.
Args:
- maxlen (int): Maximum length of the input sequence.
- vocab_size (int): Size of the vocabulary.
- embed_dim (int): Dimensionality of the embedding vectors.
def call(self, x):
Executes the forward pass of the TokenAndPositionEmbedding layer.
Args:
- x: Input tensor representing token indices.
Returns:
- Tensor: Output tensor of shape (batch_size, sequence_length, embed_dim) representing the combined token and position embeddings.
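The layer's body is not reproduced on this page. As a rough sketch of how a layer matching this documentation is commonly written (the sub-layer names token_emb and pos_emb are assumptions, not taken from the source), it might look like:

import keras
from keras import layers, ops

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim):
        super().__init__()
        # One embedding table for token ids, one for absolute positions 0..maxlen-1.
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        # x: integer token indices of shape (batch_size, seq_len), with seq_len <= maxlen.
        seq_len = ops.shape(x)[-1]
        positions = ops.arange(0, seq_len)       # (seq_len,)
        pos_vectors = self.pos_emb(positions)    # (seq_len, embed_dim)
        tok_vectors = self.token_emb(x)          # (batch_size, seq_len, embed_dim)
        # Broadcast the position vectors across the batch dimension and add.
        return tok_vectors + pos_vectors

Deriving the position indices from ops.shape(x) lets the layer accept sequences shorter than maxlen, as in the example above (sequence length 10 vs. maxlen 64).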
Inherited Members
- keras.src.layers.layer.Layer
- get_build_config
- build_from_config
- add_variable
- add_weight
- trainable
- variables
- trainable_variables
- non_trainable_variables
- weights
- trainable_weights
- non_trainable_weights
- metrics
- metrics_variables
- get_weights
- set_weights
- dtype
- compute_dtype
- variable_dtype
- input_dtype
- supports_masking
- stateless_call
- add_loss
- losses
- save_own_variables
- load_own_variables
- count_params
- get_config
- keras.src.ops.operation.Operation
- from_config
- input
- output