There is plenty of material on Seq2Seq out there, so here is just a brief introduction.
As the name suggests, it is a model that predicts one sequence from another sequence, built on an encoder-decoder framework. Here is a figure first:
The figure above shows a typical seq2seq question-answering exchange. Q: "Are you free tomorrow?" A: "Yes, what's up?" Each word is one element of a sequence. (OK, that was a bit wordy.) In short, whenever you believe the target elements are related to one another, and the inputs are related as well, seq2seq is a candidate model. Why do I stress the targets first and only then the inputs? I think you will see why shortly!
Here is a more detailed figure; let's talk about the internals, which I personally find the more interesting part.
Each circle is an RNN cell, usually an LSTM or a GRU. (Why these? To mitigate vanishing gradients and the loss of long-range information. Why do they mitigate vanishing gradients? Because of the gating and the additive state update. You're not an interviewer, are you?) Personally I feel GRU is slightly better: fewer parameters, so less prone to overfitting. For someone like me who can't tune hyperparameters, fewer is better...
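To make the "gating plus additive update" point concrete, here is a minimal scalar GRU step in plain Python. The weight dictionary `w` is an illustrative placeholder of my own, not trained values: the point is that when the update gate z saturates near 1, the old state passes through almost unchanged, which is why gradients survive over long spans.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    # Minimal scalar GRU update (weights w are illustrative, not trained).
    z = sigmoid(w['wz'] * x + w['uz'] * h_prev)   # update gate
    r = sigmoid(w['wr'] * x + w['ur'] * h_prev)   # reset gate
    h_tilde = math.tanh(w['wh'] * x + w['uh'] * (r * h_prev))
    # The new state is a gated convex combination: with z near 1 the previous
    # state is carried forward almost untouched (the additive/shortcut path).
    return z * h_prev + (1.0 - z) * h_tilde

# With a strongly positive update-gate weight, the state barely changes:
h = gru_step(1.0, 0.0, dict(wz=0.0, uz=50.0, wr=0.0, ur=0.0, wh=0.0, uh=0.0))
```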
If you are not familiar with them, start with this introduction to RNNs (LSTM, GRU):
www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Looking at the diagram, the vector C in the middle is produced by encoding the inputs, so it aggregates the information of every input token. For example, if the input words are "an alcoholic drink made from malt and hops", then C represents "beer"!
Compared with shallow models, deep models build higher-level features layer by layer and spare you manual feature engineering; that advantage shows clearly here.
Take C: it can stand for the whole sentence formed by the input words, which amounts to dimensionality reduction plus the extraction of more abstract features (even an article of several sentences can be embedded through C). Under the Distributional Hypothesis there are also neural language models such as word2vec and doc2vec for learning embeddings. Compared with them, seq2seq takes word order into account, at the cost of more parameters... There are also C&W's SENNA, which trains word embeddings indirectly ("Natural Language Processing (Almost) from Scratch"), and M&H's HLBL ("A scalable hierarchical distributed language model"); both contain ideas worth learning from, e.g. using a hierarchical structure in the final layer to reduce lookup time complexity. (Getting a bit off topic...)
Now a few words about the decoder. As the figure shows, the targets are connected to one another, which may remind you of ResNet's shortcut connections, the trick that makes very deep networks trainable. In my view, the defining feature of Seq2Seq is exactly this: the outputs are allowed to depend on each other. If the relation between consecutive outputs is easy to fit, seq2seq will do well, much as ResNet reformulates learning as a residual problem, making parameter updates more sensitive and deep models easier to train, for a big gain. After all, given enough data, depth drives accuracy.
(This figure is just for illustration.)
One limitation: C is the same for every target. Hence the attention mechanism, which gives each target its own C, one context per output, so to speak. This adds parameters, but with enough data the results are impressive. Have you considered, though, what happens if the input sequence has little influence on the targets and you still use attention? I'll leave that as food for thought; happy to discuss.
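To make "each target gets its own C" concrete, here is a pure-Python sketch of one common scoring choice, dot-product attention (additive scores are another option in the literature): the current decoder state scores every encoder state, the scores are softmaxed into weights, and the per-step context is the weighted sum of encoder states.

```python
import math

def attention_context(encoder_states, decoder_state):
    # Score each encoder state against the current decoder state (dot product),
    # softmax the scores into weights, then take the weighted sum.
    scores = [sum(h_i * s_i for h_i, s_i in zip(h, decoder_state))
              for h in encoder_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(encoder_states[0])
    return [sum(w * h[d] for w, h in zip(weights, encoder_states))
            for d in range(dim)]

# The decoder state [5, 0] matches the first encoder state best, so the
# context leans toward the first encoder state.
c = attention_context([[1.0, 0.0], [0.0, 1.0]], [5.0, 0.0])
```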
Implementation in TensorFlow
I recently started learning TensorFlow; wow, maybe I'm just slow, but it felt hard to use. Most seq2seq examples out there are chatbots, i.e. both inputs and outputs are discrete, which does not match my application, and material is scarce, so I ended up reading the source code.
Let me first walk through the chatbot implementation; turning it into numeric prediction (e.g. predicting asset prices from features such as P/E ratio and net asset value) is then straightforward!
Seq2Seq model training:
Encoder: input_sequences ----> (RNN) ----> C (cell state)
Decoder: C + the target at step i ----> (RNN) ----> prediction of the target at step i+1
Key point: during training, the decoder's input at each step is the ground-truth target.
Difference at inference time: the decoder's input is its own output from the previous step.
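This training/inference difference (teacher forcing vs. autoregressive decoding) can be sketched in a few lines of plain Python; `step_fn` stands in for the RNN cell, and all names here are mine:

```python
def decode(step_fn, first_input, targets=None, max_steps=5):
    """step_fn(x) -> prediction. During training we feed the ground-truth
    target at each step (teacher forcing); at inference we feed back the
    model's own previous output."""
    x, outputs = first_input, []
    steps = len(targets) if targets is not None else max_steps
    for t in range(steps):
        y = step_fn(x)
        outputs.append(y)
        x = targets[t] if targets is not None else y  # the only difference
    return outputs

# A toy "cell" that just adds 1 to its input:
outs_train = decode(lambda x: x + 1, 0, targets=[10, 20, 30])  # teacher forcing
outs_infer = decode(lambda x: x + 1, 0, max_steps=3)           # autoregressive
```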
Chatbot model pipeline:
seq2seq source: github.com/google/seq2seq
Since a chatbot works on dialogue, every word is discrete, so each word must be embedded before being fed to the model. The decoder output is usually passed through a fully connected (Dense) layer plus a softmax, and the word with the highest probability is taken as the final result; beyond that, only the input/output embedding needs handling. The source code also decodes with beam search (a greedy, dynamic-programming-flavored algorithm): keep the few highest-probability outputs at each step, expand each of them, and in the end pick the output sequence with the best joint probability.
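Here is a toy version of the beam-search idea in plain Python; the `next_probs` function is a made-up stand-in for the decoder's softmax distribution:

```python
import math

def beam_search(next_probs, start, beam_width=2, steps=3):
    # Each beam is (tokens, log_prob). At every step extend each beam with
    # every candidate token, then keep the beam_width best by joint log-prob.
    beams = [([start], 0.0)]
    for _ in range(steps):
        candidates = []
        for tokens, lp in beams:
            for tok, p in next_probs(tokens).items():
                candidates.append((tokens + [tok], lp + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]  # best sequence by joint probability

# Toy distribution that always prefers 'a':
probs = lambda tokens: {'a': 0.6, 'b': 0.4}
best = beam_search(probs, '<go>', beam_width=2, steps=3)
```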
There is also variable-length handling. The usual strategy is to pick the maximum length and pad everything else with zeros, which can waste a lot of resources (say one sequence has length 100000 while the rest average 10). The bucket strategy instead defines a few length intervals and pads each sequence up to its bucket, saving resources; a nice bit of engineering.
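A minimal sketch of the bucketing idea in plain Python (bucket sizes and the drop-if-too-long policy are my own simplifications):

```python
def bucket_and_pad(sequences, buckets, pad=0):
    # Assign each sequence to the smallest bucket that fits it and pad to that
    # bucket's length; sequences longer than every bucket are simply dropped
    # here (real implementations may truncate instead).
    out = {b: [] for b in buckets}
    for seq in sequences:
        for b in sorted(buckets):
            if len(seq) <= b:
                out[b].append(seq + [pad] * (b - len(seq)))
                break
    return out

batches = bucket_and_pad([[1], [1, 2, 3], [1, 2, 3, 4, 5]], buckets=[2, 5])
```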
Let's talk it through by implementing a simple version of the Seq2Seq model. (First time using this editor, and I didn't pick markdown mode...)
Following the chatbot pipeline above:
Building the encoder:
1. Preprocessing
Store all words in a dictionary (map rare words to <UNK>, and add <GO>, <EOS>, <PAD> as start, stop, and padding symbols), and build a one-to-one mapping between words and ids. Suppose a question has at most 10 words: a sentence with only 8 words needs 2 <PAD> symbols appended. Answers get <GO> and <EOS> added at the front and back; suppose they have at most 20 words.
For a batch of 32, inputs has shape (32, 10) and targets has shape (32, 20).
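The preprocessing step can be sketched in plain Python; the function names are mine, and <UNK> marks unknown words:

```python
def build_vocab(sentences, specials=("<PAD>", "<GO>", "<EOS>", "<UNK>")):
    # ids 0..3 go to the special symbols; real words are numbered in
    # first-seen order after them
    word2id = {w: i for i, w in enumerate(specials)}
    for sent in sentences:
        for w in sent.split():
            word2id.setdefault(w, len(word2id))
    return word2id

def encode(sentence, word2id, max_len):
    # map words to ids (unknown -> <UNK>), truncate, then pad with <PAD>
    ids = [word2id.get(w, word2id["<UNK>"]) for w in sentence.split()]
    ids = ids[:max_len]
    return ids + [word2id["<PAD>"]] * (max_len - len(ids))

vocab = build_vocab(["are you free", "yes"])
row = encode("are you free", vocab, 5)
```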
2. Building the encoder
The encoder is simple: build an RNN and take its final cell state as the decoder's initial cell state. (Without markdown I'll keep the code simple.)
import tensorflow as tf

## Define the inputs (word ids, hence tf.int32)
inputs = tf.placeholder(tf.int32, shape=[batch_size, max_input_length])
## The inputs must be embedded into vectors before being fed to the model;
## define the embedding tensors
with tf.variable_scope('embedding'):
    # A Vocabulary_size x embedding_size matrix; embedding_size is the
    # dimensionality of the word vectors
    encoder_embeddings = tf.Variable(
        tf.random_uniform([Vocabulary_size, embedding_size], -1.0, 1.0),
        name='encoder_embedding')
    decoder_embeddings = tf.Variable(
        tf.random_uniform([Vocabulary_size, embedding_size], -1.0, 1.0),
        name='decoder_embedding')
# Turn the raw ids into embedded inputs; in TensorFlow this is commonly
# pinned to the CPU
with tf.device('/cpu:0'):
    # embed is what we feed to the RNN; this returns a tensor of shape
    # (batch_size, max_input_length, embedding_size), equivalent to multiplying
    # each word's one-hot vector by the embedding matrix
    embed = tf.nn.embedding_lookup(encoder_embeddings, inputs)
## Define the encoder cell, following the usual TensorFlow RNN workflow
with tf.variable_scope('encoder'):
    # here output_dim = 20; rnn_layers is the number of stacked layers
    encoder_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.nn.rnn_cell.BasicLSTMCell(output_dim) for _ in range(rnn_layers)])
## Get the encoder's final cell state via dynamic_rnn
_, encoder_final_state = tf.nn.dynamic_rnn(encoder_cell, embed, dtype=tf.float32)
That's the encoder done. The decoder must be treated separately for training and inference; the details differ a little, as introduced above. Let's get to it!
3. Building the decoder
Notes
To build the decoder, look at the inputs of the dynamic_decode function. You generally need three pieces: a helper (supplies the decoder's inputs and initialization, differing between training and inference), BasicDecoder (implements one decoding step), and dynamic_decode (runs the whole loop). See helper.py, basic_decoder.py and decoder.py in the source; I'll walk through the key parts here.
helper
Let's cover the simple ones: TrainingHelper, used during training, and GreedyEmbeddingHelper, used during inference.
## Notes ##
"""
The helper supplies the decoder's initial input and the input for each following step.
Only the important methods are shown; properties defined in the abstract base
class are omitted to save space.
"""
class TrainingHelper(Helper):
  # Inherits from the abstract Helper class, which constrains the methods and
  # properties of this class
  def __init__(self, inputs, sequence_length, time_major=False, name=None):
    with ops.name_scope(name, "TrainingHelper", [inputs, sequence_length]):
      inputs = ops.convert_to_tensor(inputs, name="inputs")
      # time_major transposes the original shape
      # (batch_size, max_input_length, embedding_size) into
      # (max_input_length, batch_size, embedding_size) to speed up training
      if not time_major:
        inputs = nest.map_structure(_transpose_batch_time, inputs)
      # After the transpose, each step reads the same time index across the
      # whole batch; _unstack_ta splits the 3-D tensor into max_input_length
      # 2-D (batch_size, embedding_size) tensors
      self._input_tas = nest.map_structure(_unstack_ta, inputs)
      # sequence_length controls the number of steps; it is a 1-D tensor
      self._sequence_length = ops.convert_to_tensor(
          sequence_length, name="sequence_length")
      if self._sequence_length.get_shape().ndims != 1:
        raise ValueError(
            "Expected sequence_length to be a vector, but received shape: %s" %
            self._sequence_length.get_shape())
      # As the name suggests: once every sequence has been fully consumed,
      # zeros are fed to complete the remaining iterations
      self._zero_inputs = nest.map_structure(
          lambda inp: array_ops.zeros_like(inp[0, :]), inputs)
      self._batch_size = array_ops.size(sequence_length)

  # Get the initial input (the first target)
  def initialize(self, name=None):
    with ops.name_scope(name, "TrainingHelperInitialize"):
      # check which sequences in this batch have length 0
      finished = math_ops.equal(0, self._sequence_length)
      # if all of them are, feeding zeros is enough
      all_finished = math_ops.reduce_all(finished)
      # cond() is a control-flow op: if all_finished == True it emits the zero
      # inputs, otherwise the first target (remember _input_tas?)
      next_inputs = control_flow_ops.cond(
          all_finished, lambda: self._zero_inputs,
          lambda: nest.map_structure(lambda inp: inp.read(0), self._input_tas))
      return (finished, next_inputs)

  # Map the output to an output id: the decoder usually ends with a softmax,
  # the most probable id is taken, and the mapping defined earlier turns the
  # id back into a concrete word
  def sample(self, time, outputs, name=None, **unused_kwargs):
    with ops.name_scope(name, "TrainingHelperSample", [time, outputs]):
      sample_ids = math_ops.cast(
          math_ops.argmax(outputs, axis=-1), dtypes.int32)
      return sample_ids

  # Get the next input (next_target and next_cell_state)
  def next_inputs(self, time, outputs, state, name=None, **unused_kwargs):
    """next_inputs_fn for TrainingHelper."""
    with ops.name_scope(name, "TrainingHelperNextInputs",
                        [time, outputs, state]):
      next_time = time + 1
      # is the next step past the end of some sequence?
      finished = (next_time >= self._sequence_length)
      all_finished = math_ops.reduce_all(finished)
      # read the next target; a named function is needed because the call
      # below requires one
      def read_from_ta(inp):
        return inp.read(next_time)
      next_inputs = control_flow_ops.cond(
          all_finished, lambda: self._zero_inputs,
          lambda: nest.map_structure(read_from_ta, self._input_tas))
      return (finished, next_inputs, state)
"""
以上是trainning过程的helper,下面介绍一下inferring过程的helper,主要区别就是next_inputs获取的是上一个预测的结果
inferring通过检查start_token和end_token来判断停止
"""
class GreedyEmbeddingHelper(Helper):
  def __init__(self, embedding, start_tokens, end_token):
    # embedding must be callable
    if callable(embedding):
      self._embedding_fn = embedding
    else:
      self._embedding_fn = (
          lambda ids: embedding_ops.embedding_lookup(embedding, ids))
    self._start_tokens = ops.convert_to_tensor(
        start_tokens, dtype=dtypes.int32, name="start_tokens")
    self._end_token = ops.convert_to_tensor(
        end_token, dtype=dtypes.int32, name="end_token")
    if self._start_tokens.get_shape().ndims != 1:
      raise ValueError("start_tokens must be a vector")
    self._batch_size = array_ops.size(start_tokens)
    if self._end_token.get_shape().ndims != 0:
      raise ValueError("end_token must be a scalar")
    # the initial input is the start token <GO>, defined by our preprocessing
    self._start_inputs = self._embedding_fn(self._start_tokens)

  # initialization is much simpler here
  def initialize(self, name=None):
    finished = array_ops.tile([False], [self._batch_size])
    return (finished, self._start_inputs)

  # like the training version: return the most likely output id
  def sample(self, time, outputs, state, name=None):
    del time, state  # unused by sample_fn
    if not isinstance(outputs, ops.Tensor):
      raise TypeError("Expected outputs to be a single Tensor, got: %s" %
                      type(outputs))
    sample_ids = math_ops.cast(
        math_ops.argmax(outputs, axis=-1), dtypes.int32)
    return sample_ids

  # next_inputs is also driven by finished; when there is a next input, just
  # look up the embedding vector for the best id we just sampled in the
  # embedding matrix and feed it as the next input
  def next_inputs(self, time, outputs, state, sample_ids, name=None):
    del time, outputs  # unused by next_inputs_fn
    finished = math_ops.equal(sample_ids, self._end_token)
    all_finished = math_ops.reduce_all(finished)
    next_inputs = control_flow_ops.cond(
        all_finished,
        # If we're finished, the next_inputs value doesn't matter
        lambda: self._start_inputs,
        lambda: self._embedding_fn(sample_ids))
    return (finished, next_inputs, state)
BasicDecoder
Again just the core content, mainly tensorflow.contrib.seq2seq.BasicDecoder.
class BasicDecoder(decoder.Decoder):
  def __init__(self, cell, helper, initial_state, output_layer=None):
    if not rnn_cell_impl._like_rnncell(cell):  # pylint: disable=protected-access
      raise TypeError("cell must be an RNNCell, received: %s" % type(cell))
    if not isinstance(helper, helper_py.Helper):
      raise TypeError("helper must be a Helper, received: %s" % type(helper))
    if (output_layer is not None
        and not isinstance(output_layer, layers_base.Layer)):
      raise TypeError(
          "output_layer must be a Layer, received: %s" % type(output_layer))
    # the decoder's cell
    self._cell = cell
    # the helper, chosen according to training or inference
    self._helper = helper
    # this is encoder_final_state, i.e. the C vector
    self._initial_state = initial_state
    # usually a Dense layer to predict the output
    self._output_layer = output_layer

  # Initialization: returns the helper's initialization (finished, first_target)
  # plus encoder_final_state
  def initialize(self, name=None):
    return self._helper.initialize() + (self._initial_state,)

  # one decoding step
  def step(self, time, inputs, state, name=None):
    # one caveat: the encoder's output dimension must match the decoder's
    with ops.name_scope(name, "BasicDecoderStep", (time, inputs, state)):
      # given one input and state, produce the corresponding output and state
      cell_outputs, cell_state = self._cell(inputs, state)
      if self._output_layer is not None:
        # run the cell output through the Dense layer to predict the output
        cell_outputs = self._output_layer(cell_outputs)
      # map the Dense result to the corresponding id
      sample_ids = self._helper.sample(
          time=time, outputs=cell_outputs, state=cell_state)
      # get the next input via the helper's next_inputs; remember everything
      # is time-major
      (finished, next_inputs, next_state) = self._helper.next_inputs(
          time=time,
          outputs=cell_outputs,
          state=cell_state,
          sample_ids=sample_ids)
      # the predefined output structure
      outputs = BasicDecoderOutput(cell_outputs, sample_ids)
      return (outputs, next_state, next_inputs, finished)
dynamic_decode
Again the core content. The code is long, so parts like the raised errors are removed; see the source for the full version.
# takes a decoder instance; the other arguments are optional, see the source
def dynamic_decode(decoder,
                   output_time_major=False,
                   impute_finished=False,
                   maximum_iterations=None,
                   parallel_iterations=32,
                   swap_memory=False,
                   scope=None):
  # get the initial inputs from the decoder's initialization
  initial_finished, initial_inputs, initial_state = decoder.initialize()
  # create zero outputs matching the shape of the output tensors
  zero_outputs = _create_zero_outputs(decoder.output_size,
                                      decoder.output_dtype,
                                      decoder.batch_size)
  # if the maximum number of iterations is 0, the loop never starts;
  # remember that iteration is driven by the finished vector
  if maximum_iterations is not None:
    initial_finished = math_ops.logical_or(
        initial_finished, 0 >= maximum_iterations)
  # initialize sequence_lengths to all zeros
  initial_sequence_lengths = array_ops.zeros_like(
      initial_finished, dtype=dtypes.int32)
  # initialize time = 0
  initial_time = constant_op.constant(0, dtype=dtypes.int32)

  def _shape(batch_size, from_shape):
    if not isinstance(from_shape, tensor_shape.TensorShape):
      return tensor_shape.TensorShape(None)
    else:
      batch_size = tensor_util.constant_value(
          ops.convert_to_tensor(batch_size, name="batch_size"))
      return tensor_shape.TensorShape([batch_size]).concatenate(from_shape)

  def _create_ta(s, d):
    return tensor_array_ops.TensorArray(
        dtype=d, size=0, dynamic_size=True,
        element_shape=_shape(decoder.batch_size, s))

  # build the output TensorArrays; useful for inspecting the output structure
  initial_outputs_ta = nest.map_structure(_create_ta, decoder.output_size,
                                          decoder.output_dtype)

  # loop condition: have all sequences finished?
  def condition(unused_time, unused_outputs_ta, unused_state, unused_inputs,
                finished, unused_sequence_lengths):
    return math_ops.logical_not(math_ops.reduce_all(finished))
  # the loop body: one model iteration
  def body(time, outputs_ta, state, inputs, finished, sequence_lengths):
    # run one iteration via the decoder's step
    (next_outputs, decoder_state, next_inputs,
     decoder_finished) = decoder.step(time, inputs, state)
    # decide whether we are finished at the next step
    next_finished = math_ops.logical_or(decoder_finished, finished)
    # also stop once the maximum number of iterations is exceeded
    if maximum_iterations is not None:
      next_finished = math_ops.logical_or(
          next_finished, time + 1 >= maximum_iterations)
    # array_ops.where(condition, x, y) picks x where the condition is true and
    # y elsewhere, element-wise; here it records a sequence's length the moment
    # that sequence ends
    next_sequence_lengths = array_ops.where(
        math_ops.logical_and(math_ops.logical_not(finished), next_finished),
        array_ops.fill(array_ops.shape(sequence_lengths), time + 1),
        sequence_lengths)
    nest.assert_same_structure(state, decoder_state)
    nest.assert_same_structure(outputs_ta, next_outputs)
    nest.assert_same_structure(inputs, next_inputs)
    # once a sequence has finished, emit zeros for it
    if impute_finished:
      emit = nest.map_structure(
          lambda out, zero: array_ops.where(finished, zero, out),
          next_outputs, zero_outputs)
    else:
      emit = next_outputs
    # for shorter sequences in the batch, the extra iterations just carry the
    # last cell state forward while the corresponding outputs are zero
    def _maybe_copy_state(new, cur):
      if isinstance(cur, tensor_array_ops.TensorArray):
        pass_through = True
      else:
        new.set_shape(cur.shape)
        pass_through = (new.shape.ndims == 0)
      return new if pass_through else array_ops.where(finished, cur, new)
    if impute_finished:
      next_state = nest.map_structure(_maybe_copy_state, decoder_state, state)
    else:
      next_state = decoder_state
    outputs_ta = nest.map_structure(lambda ta, out: ta.write(time, out),
                                    outputs_ta, emit)
    return (time + 1, outputs_ta, next_state, next_inputs, next_finished,
            next_sequence_lengths)
  # run the loop from the initial values; the results are stored in res
  res = control_flow_ops.while_loop(
      condition,
      body,
      loop_vars=[
          initial_time, initial_outputs_ta, initial_state, initial_inputs,
          initial_finished, initial_sequence_lengths,
      ],
      parallel_iterations=parallel_iterations,
      swap_memory=swap_memory)
  final_outputs_ta = res[1]
  final_state = res[2]
  final_sequence_lengths = res[5]
  # remember the time-major transpose at the beginning? Undo it here, back to
  # (batch_size, output_length, output_embedding_size)
  final_outputs = nest.map_structure(lambda ta: ta.stack(), final_outputs_ta)
  # this part involves beam search, which I'll skip
  try:
    final_outputs, final_state = decoder.finalize(
        final_outputs, final_state, final_sequence_lengths)
  except NotImplementedError:
    pass
  if not output_time_major:
    final_outputs = nest.map_structure(_transpose_batch_time, final_outputs)
  return final_outputs, final_state, final_sequence_lengths
The three files are separate, so a perfectly connected narrative is hard; if you want to dig in, read them yourself and it will stick better. In short: the helper supplies the model's inputs and parses its outputs for each mode, BasicDecoder implements one iteration, and dynamic_decode controls how many iterations the model runs. Reading through them once is also good practice with TensorFlow's data structures and basic operations.
Now let's continue from the encoder and finish the decoder. (Was that interlude too long?)
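Stripped of all the TensorFlow machinery, the division of labor among the three pieces looks like this in plain Python; the class and function names mimic the real API but are toy stand-ins of my own:

```python
class ToyHelper:
    # provides the first input and maps each output to the next input
    # (teacher forcing, like TrainingHelper)
    def __init__(self, targets):
        self.targets = targets
    def initialize(self):
        return False, self.targets[0]
    def next_inputs(self, time, output):
        finished = time + 1 >= len(self.targets)
        nxt = None if finished else self.targets[time + 1]
        return finished, nxt

class ToyDecoder:
    # one step: cell(input, state) -> (output, next state), then ask the
    # helper for the next input (like BasicDecoder.step)
    def __init__(self, cell, helper, initial_state):
        self.cell, self.helper, self.state = cell, helper, initial_state
    def step(self, time, inp):
        out, self.state = self.cell(inp, self.state)
        finished, nxt = self.helper.next_inputs(time, out)
        return out, nxt, finished

def toy_dynamic_decode(decoder):
    # drives the loop until the helper says everything is finished
    # (like dynamic_decode's while_loop)
    finished, inp = decoder.helper.initialize()
    outputs, time = [], 0
    while not finished:
        out, inp, finished = decoder.step(time, inp)
        outputs.append(out)
        time += 1
    return outputs

# A toy "cell" that adds the state to its input and keeps the state fixed:
cell = lambda x, s: (x + s, s)
outs = toy_dynamic_decode(ToyDecoder(cell, ToyHelper([1, 2, 3]), 10))
```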
Building the decoder for the chatbot:
# define the helper
if is_inferring:
    # inference uses GreedyEmbeddingHelper
    start_token = tf.placeholder(tf.int32, shape=[None], name='start_token')  # <GO>
    end_token = tf.placeholder(tf.int32, name='end_token')                    # <EOS>
    helper = GreedyEmbeddingHelper(decoder_embeddings, start_token, end_token)
else:
    # output_max_length = 20
    target_ = tf.placeholder(tf.int32, shape=[batch_size, output_max_length], name='target_')
    decoder_length = tf.placeholder(tf.int32, shape=[batch_size], name='decoder_length')
    with tf.device('/cpu:0'):
        target_embed = tf.nn.embedding_lookup(decoder_embeddings, target_)
    helper = TrainingHelper(target_embed, decoder_length)
# define the decoder for dynamic_decode
with tf.variable_scope('decoder'):
    decoder_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.nn.rnn_cell.BasicLSTMCell(output_dim) for _ in range(rnn_layers)])
    # encoder_final_state is the decoder's initial state; the cell output goes
    # through a Dense layer (from tensorflow.python.layers.core) to predict the output
    decoder = BasicDecoder(decoder_cell, helper, encoder_final_state, Dense(Vocabulary_size))
# with the helper and decoder defined, dynamic_decode produces the results;
# logits holds each batch's logit outputs and the corresponding sampled ids,
# and final_seq_lengths gives the length of each sequence in the batch
logits, decoder_final_state, final_seq_lengths = dynamic_decode(decoder)
# now we can define the loss for training and the prediction model
if is_inferring:
    target_pre = tf.nn.softmax(logits.rnn_output)
else:
    target_y = tf.reshape(target_, [-1])
    logits_y = tf.reshape(logits.rnn_output, [-1, Vocabulary_size])
    # define the loss (the targets are ids, hence the sparse variant)
    cost = tf.losses.sparse_softmax_cross_entropy(target_y, logits_y)
    # clip gradients to guard against exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
    # define the optimizer
    optimizer = tf.train.AdamOptimizer(0.01)
    train_step = optimizer.apply_gradients(zip(grads, tvars))
That's the basic chatbot structure. On top of it you can add an attention mechanism, a hierarchical structure, bidirectional RNNs, and so on. Within this framework, you could even drop the RNN for encoding and decoding and use high-performance CNNs instead, perhaps with a gating mechanism to control how the CNN passes the sequence along; think about it!
My task involves no discrete values, so no embedding is needed. I adapted the structure for it; can you guess what the main changes are?
- First, remove the methods that are not needed from the inherited abstract classes, e.g. the helper's sample;
- GreedyEmbeddingHelper can be rewritten entirely along the lines of TrainingHelper; just make next_inputs the previous prediction, and note that the arguments to control_flow_ops.cond() must be callable;
- Initialize GreedyEmbeddingHelper's input to zeros; start_token and end_token are no longer needed;
- BasicDecoder no longer needs the Dense layer; output cell_output directly. Mind the output type: it was tf.int32 and must become tf.float32, otherwise an error is raised; also drop sample_ids from BasicDecoderOutput;
- dynamic_decode no longer needs beam search etc.;
- watch out for the instance checks.
Straight to the code.
"""
额发现太长了,放一些关键代码吧
"""
##Helper trainingHelper基本不用改
class GreedyEmbeddingHelper(Helper):
    def __init__(self, seq_length, target_output, time_major=False):
        # seq_length defines the length of the decoder's output
        self._sequence_length = seq_length
        target_output = ops.convert_to_tensor(target_output, name="tar_out")
        if not time_major:
            target_output = nest.map_structure(_transpose_batch_time, target_output)
        self._batch_size = array_ops.size(seq_length)
        # the initial input is a zero vector instead of a start token
        self._start_inputs = nest.map_structure(
            lambda inp: array_ops.zeros_like(inp[0, :]), target_output)

    def initialize(self, name=None):
        finished = array_ops.tile([False], [self._batch_size])
        return (finished, self._start_inputs)

    def next_inputs(self, time, outputs, state, name=None):
        next_time = time + 1
        finished = (next_time >= self._sequence_length)
        all_finished = math_ops.reduce_all(finished)
        # feed the previous prediction back as the next input;
        # cond() requires callables, hence the lambdas
        self.out = outputs
        next_inputs = control_flow_ops.cond(
            all_finished, lambda: self._start_inputs,
            lambda: self.out)
        return (finished, next_inputs, state)
## BasicDecoder
class BasicDecoderOutput(
        collections.namedtuple("BasicDecoderOutput", ("rnn_output",))):
    pass

class BasicDecoder(decoder.Decoder):
    """Basic sampling decoder."""
    def __init__(self, cell, helper, initial_state):
        self._cell = cell
        self._helper = helper
        self._initial_state = initial_state

    def initialize(self, name=None):
        return self._helper.initialize() + (self._initial_state,)

    def step(self, time, inputs, state, name=None):
        with ops.name_scope(name, "BasicDecoderStep", (time, inputs, state)):
            cell_outputs, cell_state = self._cell(inputs, state)
            (finished, next_inputs, next_state) = self._helper.next_inputs(
                time=time, outputs=cell_outputs, state=cell_state)
            outputs = BasicDecoderOutput(cell_outputs)
            return (outputs, next_state, next_inputs, finished)
# dynamic_decode
"""
The main work is done by the helper and BasicDecoder, so little changes here.
The catch is that dynamic_decode runs type checks like isinstance(decoder, Decoder),
and since the modified classes live under __main__, those checks fail;
I simply commented them out...
"""
And with that, the whole pipeline is done. This is my first write-up; thanks for reading, and I hope it helps!
This is not my field, so there are surely plenty of mistakes, and some of the personal opinions here may well be wrong; please do point them out.
The holiday is almost over, sigh...
If I get the chance, next time I'll cover building a Turing-test-worthy QA bot with GAN + RL!