Swish: a Self-Gated Activation Function

The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose a new activation function, named Swish, which is simply $f(x) = x \cdot \text{sigmoid}(x)$. Our experiments show that Swish tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
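The formula $f(x) = x \cdot \text{sigmoid}(x)$ can be sketched in plain Python for comparison against ReLU (the helper names below are ours, not the paper's):

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def swish(x):
    """Swish activation as defined in the abstract: f(x) = x * sigmoid(x)."""
    return x * sigmoid(x)

def relu(x):
    """ReLU for comparison: f(x) = max(0, x)."""
    return max(0.0, x)

# Unlike ReLU, Swish is smooth and non-monotonic: for negative inputs
# it dips slightly below zero instead of clamping exactly to 0.
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.4f}  swish={swish(x):+.4f}")
```

For large positive inputs Swish approaches the identity (since sigmoid saturates at 1), which is part of why swapping it in for ReLU requires no other changes to the network.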
by Prajit Ramachandran, Barret Zoph, Quoc V. Le
https://arxiv.org/pdf/1710.05941v1.pdf