
Swish: a Self-Gated Activation Function




    The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose a new activation function, named Swish, which is simply $f(x) = x \cdot \text{sigmoid}(x)$. Our experiments show that Swish tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
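    The abstract's definition $f(x) = x \cdot \text{sigmoid}(x)$ is straightforward to implement directly; the following is a minimal NumPy sketch (the function names here are illustrative, not from the paper):

    ```python
    import numpy as np

    def swish(x):
        # Swish as defined in the abstract: f(x) = x * sigmoid(x),
        # where sigmoid(x) = 1 / (1 + exp(-x)).
        return x / (1.0 + np.exp(-x))

    # For large positive x, sigmoid(x) -> 1, so swish(x) -> x (ReLU-like);
    # for large negative x, swish(x) -> 0, but the function stays smooth
    # and non-monotonic near zero, unlike ReLU.
    ```

    Because the element-wise form matches ReLU's, dropping this in place of `np.maximum(0, x)` (or the framework equivalent) requires no other architectural changes, which is the ease-of-replacement point the abstract makes.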

    Swish: a Self-Gated Activation Function
    by Prajit Ramachandran, Barret Zoph, Quoc V. Le
