Bandits with Knapsacks

Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called "bandits with knapsacks", that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems. We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel "balanced exploration" paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.
by Ashwinkumar Badanidiyuru, Robert Kleinberg, Aleksandrs Slivkins
https://arxiv.org/pdf/1305.2545v8.pdf
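To make the setting concrete, here is a minimal sketch of the bandits-with-knapsacks model with a single resource: each arm yields a stochastic reward and consumes a stochastic amount of the resource, and play stops when either the budget or the time horizon runs out. The policy below is a naive UCB-style heuristic on the estimated reward-to-cost ratio, chosen only for illustration; it is not the paper's balanced-exploration or primal-dual algorithm, and the means, budget, and horizon are made-up parameters.

```python
import math
import random


def simulate_bwk(mu_reward, mu_cost, budget, horizon, seed=0):
    """Illustrative bandits-with-knapsacks simulation (one resource).

    Arm i gives a Bernoulli(mu_reward[i]) reward and consumes one unit
    of the resource with probability mu_cost[i]. The learner stops when
    the budget or the time horizon is exhausted. Returns total reward.
    """
    rng = random.Random(seed)
    k = len(mu_reward)
    pulls = [0] * k
    rew_sum = [0.0] * k
    cost_sum = [0.0] * k
    total_reward = 0.0
    remaining = budget

    for t in range(horizon):
        if remaining <= 0:
            break  # resource budget exhausted
        if t < k:
            arm = t  # play each arm once to initialize estimates
        else:
            # Optimistic reward estimate over pessimistic cost estimate:
            # a crude stand-in for a proper BwK algorithm.
            def ucb_ratio(i):
                bonus = math.sqrt(2.0 * math.log(t + 1) / pulls[i])
                r_hat = rew_sum[i] / pulls[i] + bonus
                c_hat = max(cost_sum[i] / pulls[i] - bonus, 1e-6)
                return r_hat / c_hat

            arm = max(range(k), key=ucb_ratio)

        r = 1.0 if rng.random() < mu_reward[arm] else 0.0
        c = 1.0 if rng.random() < mu_cost[arm] else 0.0
        pulls[arm] += 1
        rew_sum[arm] += r
        cost_sum[arm] += c
        total_reward += r
        remaining -= c

    return total_reward
```

Note the extra stopping condition relative to a classical bandit loop: the run can end on the budget rather than the horizon, which is exactly why the best fixed arm need not be optimal here (a policy may want to mix cheap and expensive arms to stretch the budget).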