
Combining Representation Learning with Logic for Language Processing






    The current state of the art in many natural language processing and automated knowledge base completion tasks is held by representation learning methods that learn distributed vector representations of symbols via gradient-based optimization. Such methods require few or no hand-crafted features, avoiding the need for most preprocessing steps and task-specific assumptions. However, representation learning often requires a large amount of annotated training data to generalize well to unseen data. This labeled training data is provided by human annotators, who often use formal logic as the language for specifying annotations. This thesis investigates different combinations of representation learning methods with logic, both to reduce the need for annotated training data and to improve generalization.

    Combining Representation Learning with Logic for Language Processing
    by Tim Rocktäschel
    https://arxiv.org/pdf/1712.09687v1.pdf
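As a rough illustration of what "learning distributed vector representations of symbols via gradient-based optimization" means for knowledge base completion, here is a toy sketch. The data, entity/relation names, and the DistMult-style multiplicative scorer are illustrative assumptions, not the models from the thesis: each symbol gets a dense vector, a triple (subject, relation, object) is scored by sum(e_s * w_r * e_o), and plain gradient descent on a logistic loss pushes true triples above corrupted ones.

```python
import numpy as np

# Hypothetical toy knowledge base (not from the thesis).
rng = np.random.default_rng(0)
dim = 8
entities = {"lisbon": 0, "portugal": 1, "berlin": 2, "germany": 3}
relations = {"capital_of": 0}
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

positives = [("lisbon", "capital_of", "portugal"),
             ("berlin", "capital_of", "germany")]
negatives = [("lisbon", "capital_of", "germany"),    # corrupted objects
             ("berlin", "capital_of", "portugal")]

def score(s, r, o):
    # DistMult-style trilinear score: sum over dimensions of e_s * w_r * e_o
    return float(np.sum(E[entities[s]] * R[relations[r]] * E[entities[o]]))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(500):
    for triples, label in ((positives, 1.0), (negatives, 0.0)):
        for s, r, o in triples:
            si, ri, oi = entities[s], relations[r], entities[o]
            # gradient of the logistic loss with respect to the score
            g = sigmoid(score(s, r, o)) - label
            # chain rule through the trilinear product, one update per vector
            gs = g * R[ri] * E[oi]
            gr = g * E[si] * E[oi]
            go = g * E[si] * R[ri]
            E[si] -= lr * gs
            R[ri] -= lr * gr
            E[oi] -= lr * go
```

After training, a true triple such as ("lisbon", "capital_of", "portugal") scores higher than its corruption ("lisbon", "capital_of", "germany") even though no hand-crafted features were supplied, which is the property the abstract attributes to representation learning in general.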
