Title: Playing Congestion Games with Bandit Feedbacks
Authors: Chen, Po-An
Lu, Chi-Jen
Published under NCTU affiliation
National Chiao Tung University
Keywords: Mirror-descent algorithm;No-regret dynamics;Convergence
Issue Date: 1-Jan-2015
Abstract: Almost all convergence results for repeated games in which each player adopts a specific "no-regret" learning algorithm, such as multiplicative updates or the more general mirror-descent algorithms, are known only in the more generous information model, in which each player is assumed to have access to the costs of all possible choices, even the unchosen ones, at each time step. This assumption may in general be too strong; a more realistic one is captured by the bandit model, in which each player at each time step knows only the cost of her currently chosen path, but not that of any unchosen one. Can convergence still be achieved in this more challenging bandit model? We answer this question positively. While existing bandit algorithms do not seem to work here, we develop a new family of bandit algorithms based on the mirror-descent algorithm that provides such a guarantee in atomic congestion games.
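To make the bandit-feedback setting described in the abstract concrete, below is a minimal Python sketch, not the paper's algorithm: each player maintains weights over her paths, samples a path, observes only that path's cost, and performs an entropic mirror-descent (multiplicative-weights) update on an importance-weighted cost estimate. All names and parameters here (BanditMirrorDescentPlayer, eta, the toy two-link congestion cost) are hypothetical illustrations under the assumption that per-round path costs lie in [0, 1].

```python
import math
import random

# Illustrative sketch only (not the paper's new family of bandit algorithms):
# a single player running an entropic mirror-descent / multiplicative-weights
# update with bandit feedback. Only the chosen path's cost is observed; the
# unobserved costs are replaced by an importance-weighted estimate that is
# unbiased in expectation.
class BanditMirrorDescentPlayer:
    def __init__(self, num_paths, eta=0.05):
        self.num_paths = num_paths
        self.eta = eta                      # hypothetical learning rate
        self.weights = [1.0] * num_paths    # uniform start over paths

    def sample_path(self):
        """Sample a path from the current mixed strategy."""
        total = sum(self.weights)
        probs = [w / total for w in self.weights]
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i, probs
        return self.num_paths - 1, probs

    def update(self, chosen_path, observed_cost, probs):
        """Mirror-descent step on an importance-weighted cost estimate:
        the chosen path gets estimate cost / prob, every other path gets 0."""
        estimated_cost = observed_cost / probs[chosen_path]
        self.weights[chosen_path] *= math.exp(-self.eta * estimated_cost)


# Toy usage: two parallel links shared by several such players; each link's
# cost is its load divided by the number of players (a congestion effect).
if __name__ == "__main__":
    players = [BanditMirrorDescentPlayer(num_paths=2) for _ in range(10)]
    for t in range(2000):
        choices = []
        for pl in players:
            path, probs = pl.sample_path()
            choices.append((pl, path, probs))
        loads = [sum(1 for _, path, _ in choices if path == i) for i in range(2)]
        for pl, path, probs in choices:
            cost = loads[path] / len(players)   # normalized cost in [0, 1]
            pl.update(path, cost, probs)
```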
URI: http://hdl.handle.net/11536/151714
ISBN: 978-1-4503-3413-6
Journal: PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS & MULTIAGENT SYSTEMS (AAMAS'15)
Begin Page: 1721
End Page: 1722
Appears in Collections: Conference Papers