tf.contrib.seq2seq


Modules

None

Classes

AttentionMechanism:

AttentionWrapper:

Wraps another RNNCell with attention.

AttentionWrapperState:

namedtuple storing the state of a AttentionWrapper.

BahdanauAttention:

Implements Bahdanau-style (additive) attention.
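The additive scoring underlying BahdanauAttention can be sketched in plain NumPy. This illustrates only the math, score_j = v^T tanh(W_q q + W_k k_j), not the tf.contrib API; all names, shapes, and sizes here are illustrative:

```python
import numpy as np

def additive_attention_scores(query, keys, w_query, w_keys, v):
    """Bahdanau-style (additive) scores: v^T tanh(W_q q + W_k k_j)."""
    # query: [query_units], keys: [time, key_units]
    projected = np.tanh(query @ w_query + keys @ w_keys)  # [time, attn_units]
    return projected @ v  # one score per memory time step: [time]

rng = np.random.default_rng(0)
query = rng.standard_normal(4)        # decoder state
keys = rng.standard_normal((5, 6))    # encoder memory, 5 time steps
scores = additive_attention_scores(
    query, keys,
    w_query=rng.standard_normal((4, 8)),
    w_keys=rng.standard_normal((6, 8)),
    v=rng.standard_normal(8))
alignments = np.exp(scores) / np.exp(scores).sum()  # softmax over time
```

In the real class, the projections are trained layers and the softmax (or an alternative probability function) is applied by the mechanism.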

BahdanauMonotonicAttention:

Monotonic attention mechanism with Bahdanau-style energy function.

BasicDecoder:

Basic sampling decoder.

BasicDecoderOutput:

BeamSearchDecoder:

BeamSearch sampling decoder.

BeamSearchDecoderOutput:

BeamSearchDecoderState:

CustomHelper:

Base abstract class that allows the user to customize sampling.

Decoder:

An RNN Decoder abstract interface object.

FinalBeamSearchDecoderOutput:

Final outputs returned by the beam search after all decoding is finished.

GreedyEmbeddingHelper:

A helper for use during inference. Uses the argmax of the output (treated as logits) and passes the result through an embedding layer to get the next input.

Helper:

Interface for implementing sampling in seq2seq decoders.

InferenceHelper:

A helper to use during inference with a custom sampling function.

LuongAttention:

Implements Luong-style (multiplicative) attention scoring.
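For contrast with the additive form, the multiplicative scoring underlying LuongAttention can be sketched the same way. This shows only the bilinear form score_j = q^T W k_j, not the tf.contrib API; names and shapes are illustrative:

```python
import numpy as np

def multiplicative_attention_scores(query, keys, w=None):
    """Luong-style (multiplicative) scores: q^T W k_j.

    With w=None this reduces to a plain dot product (requires
    query and keys to share the same depth).
    """
    # query: [query_units], keys: [time, key_units], w: [query_units, key_units]
    projected = keys if w is None else keys @ w.T  # [time, query_units]
    return projected @ query                       # [time]

rng = np.random.default_rng(1)
query = rng.standard_normal(4)
keys = rng.standard_normal((5, 6))
scores = multiplicative_attention_scores(query, keys, w=rng.standard_normal((4, 6)))
alignments = np.exp(scores) / np.exp(scores).sum()  # softmax over time
```

Multiplicative scoring needs no extra nonlinearity or score vector, which is why it is usually cheaper than the additive form.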

LuongMonotonicAttention:

Monotonic attention mechanism with Luong-style energy function.

SampleEmbeddingHelper:

A helper for use during inference. Uses sampling (from a distribution) instead of argmax and passes the result through an embedding layer to get the next input.

ScheduledEmbeddingTrainingHelper:

A training helper that adds scheduled sampling.

ScheduledOutputTrainingHelper:

A training helper that adds scheduled sampling directly to outputs.

TrainingHelper:

A helper for use during training. Only reads inputs.

Functions

dynamic_decode(…):

Perform dynamic decoding with decoder.

gather_tree(…):

Calculates the full beams from the per-step ids and parent beam ids.
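The backward trace that gather_tree performs can be sketched for a single batch entry. This assumes parent_ids[t, b] indexes the step-(t-1) beam that beam b at step t was expanded from; it is a NumPy illustration of the idea, not the TF op:

```python
import numpy as np

def gather_tree(step_ids, parent_ids):
    """Reconstruct full beams by following parent pointers backward in time.

    step_ids, parent_ids: [max_time, beam_width] arrays for one batch entry.
    Returns an array of the same shape where column b holds beam b's tokens.
    """
    max_time, beam_width = step_ids.shape
    beams = np.zeros_like(step_ids)
    beams[-1] = step_ids[-1]          # final tokens are already per-beam
    parents = parent_ids[-1]          # which earlier beam each final beam came from
    for t in range(max_time - 2, -1, -1):
        beams[t] = step_ids[t, parents]
        parents = parent_ids[t, parents]
    return beams

step_ids = np.array([[1, 2], [3, 4], [5, 6]])
parent_ids = np.array([[0, 0], [0, 0], [1, 0]])
full_beams = gather_tree(step_ids, parent_ids)
```

Here beam 0 ends in token 5 but was expanded from beam 1 at the previous step, so its full sequence is [1, 4, 5] rather than a single column of step_ids.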

hardmax(…):

Returns batched one-hot vectors.
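The effect of hardmax is easy to state in NumPy: each row of the input becomes a one-hot vector at its argmax. A minimal sketch of the computation (not the TF op itself):

```python
import numpy as np

def hardmax(logits):
    """Batched one-hot vectors: 1.0 at each row's argmax, 0.0 elsewhere."""
    logits = np.asarray(logits)
    one_hot = np.zeros_like(logits, dtype=float)
    one_hot[np.arange(logits.shape[0]), logits.argmax(axis=-1)] = 1.0
    return one_hot
```

This is the "hard" counterpart of softmax, useful when alignments should commit to a single memory position.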

monotonic_attention(…):

Compute monotonic attention distribution from choosing probabilities.

safe_cumprod(…):

Computes cumprod of x in logspace using cumsum to avoid underflow.
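The log-space trick behind safe_cumprod can be sketched in a few lines: clip the inputs away from zero, take logs, cumulatively sum, and exponentiate. A NumPy illustration of the idea (the clipping epsilon here is an assumed value, not the op's default):

```python
import numpy as np

def safe_cumprod(x, eps=1e-10):
    """Cumulative product computed as exp(cumsum(log(clip(x)))).

    Summing logs avoids the underflow that multiplying many
    probabilities in [0, 1] directly would cause.
    """
    x = np.clip(np.asarray(x, dtype=float), eps, 1.0)
    return np.exp(np.cumsum(np.log(x)))
```

This is what makes long products of attention "stop" probabilities numerically stable in the monotonic attention mechanisms above.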

sequence_loss(…):

Weighted cross-entropy loss for a sequence of logits.
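The computation sequence_loss performs, with its default averaging over both batch and time, can be sketched in NumPy. This illustrates the math only; the real function takes logits of shape [batch, time, vocab] plus per-position weights, and the averaging behavior is configurable:

```python
import numpy as np

def sequence_loss(logits, targets, weights):
    """Weighted softmax cross-entropy averaged over batch and time.

    logits: [batch, time, vocab], targets: [batch, time] int ids,
    weights: [batch, time] (e.g. a padding mask).
    """
    # numerically stable log-softmax over the vocabulary axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    batch, time = targets.shape
    # negative log-likelihood of each target token
    nll = -log_probs[np.arange(batch)[:, None], np.arange(time)[None, :], targets]
    return (nll * weights).sum() / weights.sum()

# uniform logits over a 3-word vocab -> loss is log(3) per token
loss = sequence_loss(np.zeros((1, 2, 3)), np.array([[0, 1]]), np.ones((1, 2)))
```

Zeroing the weights at padded positions is the usual way to keep padding from contributing to the loss.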

tile_batch(…):

Tile the batch dimension of a (possibly nested structure of) tensor(s) t.
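For a plain (non-nested) tensor, the tiling that tile_batch performs amounts to repeating each batch entry multiplier times along axis 0, so that output[i * multiplier + j] == t[i]. A NumPy sketch of that single-tensor case (the real function also recurses into nested structures):

```python
import numpy as np

def tile_batch(t, multiplier):
    """Repeat each batch entry `multiplier` times along the batch axis,
    giving output[i * multiplier + j] = t[i]."""
    return np.repeat(np.asarray(t), multiplier, axis=0)

tiled = tile_batch([[1, 2], [3, 4]], 2)
```

This layout is what BeamSearchDecoder expects for encoder outputs and initial states: each batch entry's memory appears once per beam, in consecutive rows.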