Machine Learning

Overcoming Language Variation in Sentiment Analysis with Social Attention

Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author's position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracy of sentiment analysis on Twitter and on review data.
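
Concretely, the architecture can be read as a mixture of experts: each basis model is a full sentiment classifier, and the attention weights over the basis models are computed from the author's social-network embedding rather than from the text. Here is a minimal PyTorch sketch of that idea. It is not the paper's exact architecture; the class and parameter names (SocialAttention, text_dim, social_dim, num_basis) are illustrative, the basis models are reduced to linear heads over precomputed text features, and the author representation is assumed to be a pretrained node embedding of the social graph.

    import torch
    import torch.nn as nn

    class SocialAttention(nn.Module):
        """Attention-weighted mixture of basis sentiment classifiers,
        with weights derived from the author's social embedding."""

        def __init__(self, text_dim, social_dim, num_basis, num_classes):
            super().__init__()
            # One linear classification head per basis model.
            self.basis = nn.ModuleList(
                [nn.Linear(text_dim, num_classes) for _ in range(num_basis)]
            )
            # Maps a social embedding to attention scores over basis models.
            self.attn = nn.Linear(social_dim, num_basis)

        def forward(self, text_vec, social_vec):
            # Socially close authors get similar weight vectors, which
            # smooths the classification function across the network.
            weights = torch.softmax(self.attn(social_vec), dim=-1)  # (B, K)
            # Per-basis logits stacked along a new basis axis: (B, K, C).
            logits = torch.stack(
                [head(text_vec) for head in self.basis], dim=1
            )
            # Attention-weighted mixture of the basis predictions.
            return (weights.unsqueeze(-1) * logits).sum(dim=1)  # (B, C)

    # Toy usage: 2 authors, 100-dim text features, 64-dim node embeddings.
    model = SocialAttention(text_dim=100, social_dim=64, num_basis=4, num_classes=3)
    out = model(torch.randn(2, 100), torch.randn(2, 64))  # shape (2, 3)

Because the attention weights are a smooth function of the social embedding, neighboring authors in the network receive similar mixtures of basis classifiers, which is what allows the model to personalize predictions even for authors with no labeled data.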

Overcoming Language Variation in Sentiment Analysis with Social Attention
by Yi Yang and Jacob Eisenstein
https://arxiv.org/pdf/1511.06052v4.pdf
