Machine Learning

Let's keep it simple: using simple architectures to outperform deeper and more complex architectures



  • arXiv

    Major winning Convolutional Neural Networks (CNNs), such as AlexNet, VGGNet, ResNet, and GoogLeNet, contain tens to hundreds of millions of parameters, which impose considerable computation and memory overhead. This limits their practical use for training and for deployment on memory- or compute-constrained systems. In contrast, the lightweight architectures proposed to address this issue mainly suffer from low accuracy. These inefficiencies mostly stem from ad hoc design procedures. We propose a simple architecture, called SimpleNet, based on a set of design principles, and we empirically show that SimpleNet provides a good trade-off between computational/memory efficiency and accuracy. Our simple 13-layer architecture outperforms most deeper and more complex architectures to date, such as VGGNet, ResNet, and GoogLeNet, on several well-known benchmarks while having 2 to 25 times fewer parameters and operations. This makes it well suited for embedded systems or systems with computational and memory limitations. We achieve state-of-the-art results on CIFAR-10, outperforming several heavier architectures, near state-of-the-art results on MNIST, and competitive results on CIFAR-100 and SVHN. Models are made available at: https://github.com/Coderx7/SimpleNet
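
    For intuition, here is a minimal PyTorch sketch of what a plain 13-layer convolutional stack in this spirit might look like: a homogeneous sequence of 3x3 convolution / batch-norm / ReLU blocks with occasional max pooling and a single linear classifier on top. The filter widths, the pooling positions, and the SimpleNetSketch name are illustrative assumptions, not the authors' published configuration; see the linked repository for the actual models.

    # A rough sketch of a plain 13-layer CNN (assumed configuration, not the
    # published SimpleNet): stacked 3x3 conv -> batch norm -> ReLU blocks,
    # a few max-pooling stages, global average pooling, one linear classifier.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch, kernel_size=3):
        # 3x3 convolution with 'same' padding, followed by BN and ReLU
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class SimpleNetSketch(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # 13 convolutional blocks; the widths below are illustrative assumptions
            widths = [64, 128, 128, 128, 128, 128, 256, 256, 256, 512, 512, 256, 256]
            layers, in_ch = [], 3
            for i, w in enumerate(widths):
                layers.append(conv_block(in_ch, w))
                in_ch = w
                if i in (4, 7, 9):            # downsample a few times (assumed positions)
                    layers.append(nn.MaxPool2d(2))
            self.features = nn.Sequential(*layers)
            self.classifier = nn.Linear(in_ch, num_classes)

        def forward(self, x):
            x = self.features(x)
            x = x.mean(dim=(2, 3))            # global average pooling
            return self.classifier(x)

    if __name__ == "__main__":
        model = SimpleNetSketch(num_classes=10)
        dummy = torch.randn(1, 3, 32, 32)     # CIFAR-10 sized input
        print(model(dummy).shape)             # torch.Size([1, 10])
        print(sum(p.numel() for p in model.parameters()), "parameters")

    Running the script prints the output shape and the total parameter count, which is a quick way to sanity-check the parameter budget that a plain 13-layer design of this kind implies.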

    Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures
    by Seyyed Hossein Hasanpour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou
    https://arxiv.org/pdf/1608.06037v5.pdf
