Machine Learning

Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning



  • arXiv


    Context information plays an important role in human language understanding, and it also helps machines learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. Concretely, we build an encoder-decoder architecture with an RNN encoder and a CNN decoder, and we combine a suite of effective designs that significantly improve model efficiency while also achieving better performance. Our model is trained on two large unlabeled corpora, and in both cases we evaluate transferability on a set of downstream language-understanding tasks. We empirically show that our model is simple and fast, yet produces rich sentence representations that excel in these downstream tasks.
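    The post contains only the abstract, so the following is a minimal PyTorch sketch of the asymmetric design it describes: an RNN encoder that compresses a sentence into a fixed-size vector, paired with a CNN decoder that predicts a neighbouring (context) sentence. All module names, dimensionalities, the GRU choice, and the embedding-level reconstruction target are assumptions for illustration, not the authors' released implementation.

    # Sketch of an asymmetric encoder-decoder (assumed details, not the paper's code):
    # an RNN (GRU) encoder yields a sentence vector; a non-autoregressive CNN
    # decoder expands that vector into predicted word embeddings for a context
    # sentence. Dimensions and names are illustrative.
    import torch
    import torch.nn as nn

    class RNNEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=300, hid_dim=600):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, tokens):                    # tokens: (B, T)
            _, h_n = self.gru(self.embed(tokens))     # h_n: (1, B, H)
            return h_n.squeeze(0)                     # sentence vector: (B, H)

    class CNNDecoder(nn.Module):
        """Broadcast the sentence vector over the target length, then
        refine it with stacked 1-D convolutions (no recurrence)."""
        def __init__(self, hid_dim=600, emb_dim=300, max_len=30):
            super().__init__()
            self.max_len = max_len
            self.convs = nn.Sequential(
                nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(hid_dim, emb_dim, kernel_size=3, padding=1),
            )

        def forward(self, z):                         # z: (B, H)
            x = z.unsqueeze(2).expand(-1, -1, self.max_len)   # (B, H, L)
            return self.convs(x).transpose(1, 2)      # predictions: (B, L, E)

    # Usage: encode a sentence, decode its context, and train with an
    # embedding-level reconstruction loss (e.g. MSE against target embeddings).
    enc, dec = RNNEncoder(vocab_size=20000), CNNDecoder()
    z = enc(torch.randint(0, 20000, (8, 25)))         # batch of 8 sentences
    pred = dec(z)                                     # (8, 30, 300)

    One plausible reading of the efficiency claim is that a decoder like this predicts all target positions in parallel rather than one token at a time, which is where a CNN decoder can gain speed over an autoregressive RNN decoder.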

    Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning
    by Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
    https://arxiv.org/pdf/1710.10380v2.pdf
