IDK Cascades: Fast Deep Learning by Learning not to Overthink

Advances in deep learning have led to substantial increases in prediction accuracy but have been accompanied by increases in the cost of rendering predictions. We conjecture that for a majority of real-world inputs, the recent advances in deep learning have created models that effectively "overthink" on simple inputs. In this paper we revisit the question of how to effectively build model cascades to reduce prediction costs. While classic cascade techniques primarily leverage class asymmetry to reduce cost, we extend this approach to arbitrary multi-class prediction tasks. We introduce the "I Don't Know" (IDK) prediction cascades framework, a general framework for composing a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search-based methods for constructing cascades as well as a new cost-aware objective within this framework. We evaluate these techniques on a range of both benchmark and real-world datasets and demonstrate that prediction cascades can reduce computation by 37%, resulting in up to 1.6x speedups in image classification tasks over state-of-the-art models without a loss in accuracy. Furthermore, on a driving motion prediction task evaluated on a large-scale autonomous driving dataset, prediction cascades achieved 95% accuracy when combined with human experts, while requiring human intervention on less than 30% of the queries.
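The core idea of an IDK cascade, as a rough sketch: a cheap model answers when it is confident enough, and otherwise emits "I Don't Know" and defers to a more expensive model (or a human expert). The snippet below is an illustrative simplification only; the function names, toy models, and fixed confidence thresholds are my own assumptions, and the paper's actual cascade construction and cost-aware training objective are richer than this.

```python
# Illustrative sketch of an IDK prediction cascade (not the paper's exact
# algorithm). Each model returns a (label, confidence) pair; models are
# ordered cheapest-first, and each non-final model has an IDK threshold.

def idk_cascade(x, models, thresholds):
    """Run the cascade on input x.

    models:     list of callables x -> (label, confidence), cheapest first
    thresholds: per-model confidence cutoffs for all but the last model
    """
    for predict, tau in zip(models[:-1], thresholds):
        label, conf = predict(x)
        if conf >= tau:          # confident enough: stop early, save compute
            return label, "fast"
        # below threshold: the model effectively says "I Don't Know",
        # so we fall through to the next (more expensive) model
    label, _ = models[-1](x)     # the final model (or human expert) always answers
    return label, "full"

# Toy stand-ins: the fast model is only confident on "easy" inputs.
fast = lambda x: ("cat", 0.95) if x == "easy" else ("dog", 0.40)
slow = lambda x: ("cat", 0.99)

print(idk_cascade("easy", [fast, slow], [0.9]))  # answered by the fast model
print(idk_cascade("hard", [fast, slow], [0.9]))  # deferred to the slow model
```

The human-in-the-loop result in the abstract fits the same shape: the "last model" is simply a human expert, and the sub-30% intervention rate corresponds to how often the learned thresholds trigger deferral.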
by Xin Wang, Yujia Luo, Daniel Crankshaw, Alexey Tumanov, Fisher Yu, Joseph E. Gonzalez
https://arxiv.org/pdf/1706.00885v3.pdf