Original paper
https://arxiv.org/pdf/1609.03499.pdf
Key sentences:
At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample.
Temporal Convolution.
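A minimal sketch (PyTorch assumed; not the official WaveNet code) of the point in the quoted passage: a causal 1-D convolution pads only on the left, so during training one forward pass over the known ground truth scores every timestep in parallel, while generation must loop, feeding each predicted sample back in. The class name `CausalConv1d` and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Conv1d that only sees past timesteps (left padding, no look-ahead)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        x = F.pad(x, (self.left_pad, 0))        # pad on the left only
        return self.conv(x)

layer = CausalConv1d(1, 1, kernel_size=2, dilation=1)

# Training: ground truth x is fully known, so predictions for all
# timesteps come out of a single parallel forward pass.
x = torch.randn(1, 1, 16)                       # toy ground-truth waveform
y_all = layer(x)                                # predictions for all 16 steps at once

# Generation: sequential. Each predicted sample is appended to the input
# before the next sample can be predicted.
generated = torch.zeros(1, 1, 1)                # seed sample
for _ in range(15):
    next_sample = layer(generated)[:, :, -1:]   # only the newest output is needed
    generated = torch.cat([generated, next_sample], dim=2)
```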
Summary:
The philosophy of causal convolution embodied in WaveNet is most practical (arguably only truly necessary) in text-to-speech, where samples must be generated autoregressively. For tasks such as multi-step time-series forecasting, an ordinary CNN without causal convolution can also do the job, as sketched below.
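A minimal sketch (PyTorch assumed) of that last claim: when the full history window is available and the model predicts the whole future horizon in one shot, an ordinary "same"-padded CNN works without any causal masking or feedback loop. The `window`/`horizon` sizes and layer widths are hypothetical.

```python
import torch
import torch.nn as nn

window, horizon = 64, 8                          # illustrative window and forecast horizon

direct_forecaster = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),  # "same" padding: each output sees both sides
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * window, horizon),             # predict all future steps at once
)

history = torch.randn(1, 1, window)              # past observations
forecast = direct_forecaster(history)            # shape (1, horizon), no autoregressive loop
```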