Encoder Decoder Sequences: How Long is Too Long?


In machine learning we often deal with problems where the input is a sequence and the output is also a sequence. We call such a problem a sequence-to-sequence problem. A popular architecture for sequence-to-sequence problems is the encoder-decoder architecture: the encoder converts the variable-length input sequence into a fixed-length context vector, and the decoder converts that fixed-length context vector into a variable-length output sequence. I have read enough of the literature (for one example, take a look at this article) that says the disadvantage of the encoder-decoder architecture is that, irrespective of the length of the input sequence, the encoder must squeeze everything it has read into a context vector of the same fixed size…
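To make that bottleneck concrete, here is a minimal Keras sketch of the architecture just described. The vocabulary sizes and dimensions (VOCAB_IN, VOCAB_OUT, EMBED_DIM, CONTEXT_DIM) are made-up placeholders for illustration, not values from the article.

```python
# A minimal encoder-decoder sketch, assuming toy vocabulary sizes and
# dimensions chosen purely for illustration.
import tensorflow as tf

VOCAB_IN, VOCAB_OUT = 5000, 5000   # assumed vocabulary sizes
EMBED_DIM, CONTEXT_DIM = 64, 128   # assumed embedding / context sizes

# Encoder: variable-length input sequence -> fixed-length context vector.
encoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB_IN, EMBED_DIM, mask_zero=True)(encoder_inputs)
# The final hidden and cell states of the LSTM act as the context vector:
# each has size CONTEXT_DIM no matter how long the input sequence was.
_, state_h, state_c = tf.keras.layers.LSTM(CONTEXT_DIM, return_state=True)(x)

# Decoder: fixed-length context vector -> variable-length output sequence.
decoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")
y = tf.keras.layers.Embedding(VOCAB_OUT, EMBED_DIM, mask_zero=True)(decoder_inputs)
y = tf.keras.layers.LSTM(CONTEXT_DIM, return_sequences=True)(
    y, initial_state=[state_h, state_c])
outputs = tf.keras.layers.Dense(VOCAB_OUT, activation="softmax")(y)

model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
model.summary()
```

Notice that `state_h` and `state_c` are the only things the decoder receives from the encoder, and their shape is fixed at `CONTEXT_DIM` whether the input is ten tokens or a thousand. That fixed capacity is exactly the limitation the title asks about.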

Amaresh Patnaik is a researcher in the areas of Machine Learning and Natural Language Processing. He also works in learning and development for professionals in the IT services industry, and he uses Python and TensorFlow for his research work. In these columns he shares his experiences with the intention of helping the reader understand concepts and solve problems. He is based in Mumbai, India.
