Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling

  • arXiv

    Recurrent neural networks have shown remarkable success in modeling sequences. However, low-resource situations still adversely affect the generalizability of these models. We introduce a new family of models, called Lattice Recurrent Units (LRU), to address the challenge of learning deep multi-layer recurrent models with limited resources. LRU models achieve this goal by creating distinct (but coupled) flows of information inside the units: a first flow along the time dimension and a second flow along the depth dimension. The architecture also offers symmetry in how information flows horizontally and vertically. We analyze the effects of decoupling three components of the LRU model: the Reset Gate, the Update Gate, and the Projected State. We evaluate this family of new LRU models on computational convergence rates and statistical efficiency. Our experiments are performed on four publicly available datasets, comparing against Grid-LSTM and Recurrent Highway Networks. Our results show that LRU models achieve better empirical computational convergence rates and statistical efficiency, along with learning more accurate language models.

    Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency for Sequence Modeling
    by Chaitanya Ahuja, Louis-Philippe Morency
    https://arxiv.org/pdf/1710.02254v1.pdf
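
    For readers who want a concrete picture of the two coupled information flows, below is a rough PyTorch sketch of a GRU-style cell whose Reset Gate, Update Gate, and Projected State each see both the time-direction state and the depth-direction state, and which emits a new state along each direction. The class name, the single linear layer per gate, and the way the joint state is split back into the two directions are illustrative assumptions, not the paper's exact formulation.

        # Rough sketch only: a GRU-style cell with coupled time/depth flows.
        # The exact coupling and parameterization in the LRU paper may differ.
        import torch
        import torch.nn as nn

        class LatticeRecurrentUnitSketch(nn.Module):
            def __init__(self, hidden_size: int):
                super().__init__()
                # Each gate sees the concatenated time- and depth-direction states.
                self.reset = nn.Linear(2 * hidden_size, 2 * hidden_size)
                self.update = nn.Linear(2 * hidden_size, 2 * hidden_size)
                self.project = nn.Linear(2 * hidden_size, 2 * hidden_size)
                self.hidden_size = hidden_size

            def forward(self, h_time, h_depth):
                # h_time: state from the previous time step (same layer)
                # h_depth: state from the layer below (same time step)
                h = torch.cat([h_time, h_depth], dim=-1)
                r = torch.sigmoid(self.reset(h))           # reset gates, both flows
                z = torch.sigmoid(self.update(h))          # update gates, both flows
                h_tilde = torch.tanh(self.project(r * h))  # projected (candidate) state
                h_new = (1 - z) * h + z * h_tilde
                # Split the joint state back into the time and depth directions.
                return h_new.split(self.hidden_size, dim=-1)

        # Example: one cell step with batch size 4 and hidden size 8.
        cell = LatticeRecurrentUnitSketch(hidden_size=8)
        out_time, out_depth = cell(torch.zeros(4, 8), torch.randn(4, 8))

    The point of the sketch is the symmetry: the same gating machinery produces both the horizontal (time) and vertical (depth) outputs, which is the property the abstract highlights.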
