Machine Learning

IDK Cascades: Fast Deep Learning by Learning not to Overthink




  • arXiv


    Advances in deep learning have led to substantial increases in prediction accuracy but have been accompanied by increases in the cost of rendering predictions. We conjecture that for a majority of real-world inputs, recent deep learning models effectively “over-think” on simple inputs. In this paper we revisit the question of how to effectively build model cascades to reduce prediction costs. While classic cascade techniques primarily leverage class asymmetry to reduce cost, we extend this approach to arbitrary multi-class prediction tasks. We introduce the “I Don’t Know” (IDK) prediction cascades framework, a general framework for composing a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search-based methods for constructing cascades as well as a new cost-aware objective within this framework. We evaluate these techniques on a range of both benchmark and real-world datasets and demonstrate that prediction cascades can reduce computation by 37%, resulting in up to 1.6x speedups in image classification tasks over state-of-the-art models without a loss in accuracy. Furthermore, on a driving motion prediction task evaluated on a large-scale autonomous driving dataset, prediction cascades achieved 95% accuracy when combined with human experts, while requiring human intervention on less than 30% of the queries.
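
    For intuition, below is a minimal sketch of how such a cascade might route inputs at inference time (assuming PyTorch; `fast_model`, `slow_model`, and the fixed confidence threshold are hypothetical stand-ins for the learned IDK gate and cost-aware objective described in the paper):

    # Hypothetical sketch of a two-model prediction cascade with an "IDK" gate.
    # The fast model answers when it is confident; otherwise it defers
    # ("I Don't Know") to the slower, more accurate model. The paper learns
    # this gate with a cost-aware objective; here a fixed softmax-confidence
    # threshold stands in for that learned component.
    import torch
    import torch.nn.functional as F

    def idk_cascade_predict(fast_model, slow_model, x, idk_threshold=0.9):
        """Predict a batch x, deferring low-confidence inputs to slow_model."""
        with torch.no_grad():
            fast_probs = F.softmax(fast_model(x), dim=1)
            confidence, preds = fast_probs.max(dim=1)
            defer = confidence < idk_threshold      # "IDK" mask
            if defer.any():                         # pay for the big model only when needed
                slow_logits = slow_model(x[defer])
                preds[defer] = slow_logits.argmax(dim=1)
        return preds, defer

    The fraction of inputs in the `defer` mask is what the cost-aware objective trades off against accuracy: the lower the deferral rate, the larger the speedup from the cascade.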

    IDK Cascades: Fast Deep Learning by Learning not to Overthink
    by Xin Wang, Yujia Luo, Daniel Crankshaw, Alexey Tumanov, Fisher Yu, Joseph E. Gonzalez
    https://arxiv.org/pdf/1706.00885v3.pdf
