Paper Today:
'Incorporating Copying Mechanism in Sequence-to-Sequence Learning'
This paper develops a model called COPYNET, which incorporates a 'copy mechanism' into sequence-to-sequence generation.
In human language communication, there are many situations where we use a 'copy mechanism', such as repeating a proper noun (e.g. a person's name) from the other speaker's utterance in a dialogue.
In order to make a machine generate such dialogue, there are two things to do:
- First, identify which segment of the input should be copied.
- Second, decide where the copied segment should be placed in the output.
Currently the popular neural approaches are the seq2seq (encoder-decoder) model and seq2seq augmented with an attention mechanism.
COPYNET is also an encoder-decoder model, but it takes a different strategy from other neural network based models: attention-based seq2seq emphasizes 'understanding' of the source, while the copy operation in COPYNET demands high 'literal fidelity', reproducing source segments verbatim.
The decoder introduces three main improvements.
Prediction:
Each output word is predicted from a mixture of two probabilistic modes, the generate mode and the copy mode. This lets the model pick proper subsequences from the source and even output OOV words by copying them.
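A minimal NumPy sketch of this mixture (the toy scores, vocabulary, and function name here are assumptions for illustration; in the real model the generate-mode and copy-mode scores are computed from the decoder state and encoder annotations):

```python
import numpy as np

def copynet_predict(gen_scores, copy_scores, vocab, source_tokens):
    """Toy sketch of CopyNet's prediction step: one shared softmax over
    generate-mode scores (vocabulary entries) and copy-mode scores
    (source positions), so the two modes compete for probability mass."""
    scores = np.concatenate([gen_scores, copy_scores])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    # Extended vocabulary: known words plus any OOV tokens in the source.
    p = {w: pr for w, pr in zip(vocab, probs[:len(vocab)])}
    for tok, pr in zip(source_tokens, probs[len(vocab):]):
        p[tok] = p.get(tok, 0.0) + pr  # copy prob adds to generate prob
    return p

vocab = ["hello", "my", "name", "is", "<unk>"]
source = ["my", "name", "is", "Tony"]          # "Tony" is out-of-vocabulary
gen_scores = np.array([0.1, 0.2, 0.3, 0.1, 0.0])
copy_scores = np.array([0.5, 0.4, 0.2, 2.0])   # copy mode favors "Tony"
p = copynet_predict(gen_scores, copy_scores, vocab, source)
```

Because copy-mode scores are attached to source positions, an OOV word like "Tony" can still receive probability even though it has no generate-mode entry.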
State Update:
Besides the usual word-embedding feedback, they design a selective read for the copy mode, which lets the model notice the location of the word it just copied from the source.
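A toy sketch of the selective read (names and shapes here are illustrative assumptions): the encoder hidden states at the positions where the previous output token occurs in the source are averaged, weighted by their copy probabilities.

```python
import numpy as np

def selective_read(prev_token, source_tokens, copy_probs, encoder_states):
    """If the previous output token occurs in the source, return a
    copy-probability-weighted average of the encoder states at those
    positions; otherwise a zero vector. This feeds location information
    back into the decoder state update."""
    mask = np.array([tok == prev_token for tok in source_tokens], float)
    weights = mask * copy_probs
    if weights.sum() == 0.0:    # previous token was generated, not copied
        return np.zeros(encoder_states.shape[1])
    weights = weights / weights.sum()
    return weights @ encoder_states

source = ["a", "b", "a"]
copy_probs = np.array([0.2, 0.1, 0.6])
states = np.eye(3)              # one toy hidden state per source position
v = selective_read("a", source, copy_probs, states)  # blends positions 0 and 2
```

When the previous token was copied from a unique source position, this vector points the decoder at exactly that position, which is what makes copying contiguous subsequences easy.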
Reading Memory M:
Reading the encoder memory M combines content-based addressing (the attentive read) with location-based addressing (the selective read).
In the experiments, the model performs well on tasks such as text summarization and single-turn dialogue generation.