I also had some of the same questions you did about the unclear encoder/decoder state concepts.
Teacher forcing seems to be the idea that, during training, the decoder is fed the ground-truth previous words rather than its own earlier predictions.
The GIF example just confused me, since it passes words like "cat" and "with" that aren't part of the example at all…
but I think the idea is that if you have the sentence "the ball bounced twice",
every word that is predicted uses the ground-truth words before it, so:
(start) → "the"
"the" → "ball"
"the ball" → "bounced"
"the ball bounced" → "twice"
Training on the ground-truth previous words helps the seq2seq model learn to predict the next word.
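Here's a minimal pure-Python sketch of what I mean by teacher forcing (the toy dictionary "model" and names like `decoder_step` are made up for illustration, not the lesson's actual code): at each step, the decoder's input is the ground-truth previous word, not whatever the model predicted a step earlier.

```python
# Target sentence the decoder should learn to produce.
target = ["the", "ball", "bounced", "twice"]

# Toy stand-in for a trained decoder: maps the previous
# ground-truth word to a predicted next word.
toy_model = {
    "<start>": "the",
    "the": "ball",
    "ball": "bounced",
    "bounced": "twice",
}

def decoder_step(prev_token, model):
    # One decoding step: predict the next word from the previous one.
    return model.get(prev_token, "<unk>")

# Teacher forcing: the input at step t is the GROUND-TRUTH token
# from step t-1, shifted right with a start token -- NOT the
# model's own previous prediction.
inputs = ["<start>"] + target[:-1]
predictions = [decoder_step(tok, toy_model) for tok in inputs]

print(predictions)  # ['the', 'ball', 'bounced', 'twice']
```

The point of the shift is that even if the model predicted a wrong word at step 2, step 3 would still receive the correct word "ball" during training, which is (as I understand it) what keeps training stable.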
Again, based on the GIF example, I don't understand how it does this.
I'd like to know the answers to your encoder/decoder questions as well. When I ran the final code that is provided, the lesson says to compare the outputs and see if the words translated properly, but I didn't see a single English/Spanish word from the provided list. I'd say my code was wrong, except none of it was my code. It was provided!