
Restless Bandits: Indexability, Whittle Index Computation and Learning

Friday, July 9, 2021, 10:15 to 11:15
Zoom (online)

Virtual Informal Systems Seminar (VISS), Centre for Intelligent Machines (CIM) and Groupe d'études et de recherche en analyse des décisions (GERAD)


Meeting ID: 910 7928 6959
Passcode: VISS

Speaker: Nima Akbarzadeh, PhD Candidate, Electrical and Computer Engineering, McGill University

Abstract:
Restless bandits are a class of sequential resource allocation problems concerned with allocating one or more resources among several alternative processes, where the evolution of each process depends on the resource allocated to it. In 1988, Whittle proposed an index heuristic for restless bandit problems which has since become a popular solution approach due to its simplicity and strong empirical performance. The Whittle index heuristic is applicable only if the model satisfies a technical condition known as indexability. We present two general sufficient conditions for indexability, identify simpler-to-verify refinements of these conditions for fully observable models, and identify a class of indexable models among partially observable ones. We show that a generalization of the adaptive greedy algorithm computes the Whittle index for all indexable restless bandits. Finally, we discuss a learning problem in restless bandits in which the dynamics of all processes are initially unknown. The learning algorithm uses a Thompson sampling approach to estimate the unknown parameters and uses the estimated Whittle indices to choose actions. We provide an upper bound on the regret with respect to an oracle that knows the true parameters.
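
The adaptive greedy algorithm from the talk is not reproduced here; as a rough illustration of the quantity it computes, the following Python sketch estimates the Whittle index of a single arm by brute force. All names and toy numbers (P0, P1, r0, r1) are hypothetical, and the sketch assumes a discounted, indexable arm whose index lies in a known bracket: it bisects on the passive-action subsidy until the arm is indifferent between acting and resting, which is the defining property of the Whittle index.

    import numpy as np

    def q_values(P0, P1, r0, r1, lam, gamma=0.95, iters=5000, tol=1e-10):
        # Value iteration for one arm's two-action MDP, where the passive
        # action (0) earns an extra subsidy `lam` on top of its reward.
        V = np.zeros(len(r0))
        for _ in range(iters):
            q0 = r0 + lam + gamma * (P0 @ V)   # passive action + subsidy
            q1 = r1 + gamma * (P1 @ V)         # active action
            V_new = np.maximum(q0, q1)
            if np.max(np.abs(V_new - V)) < tol:
                break
            V = V_new
        return q0, q1

    def whittle_index(P0, P1, r0, r1, state, lo=-10.0, hi=10.0, tol=1e-6):
        # Bisect on the subsidy until the arm is indifferent between resting
        # and acting in `state`; that break-even subsidy is the Whittle index
        # (this assumes the arm is indexable and the index lies in [lo, hi]).
        while hi - lo > tol:
            lam = 0.5 * (lo + hi)
            q0, q1 = q_values(P0, P1, r0, r1, lam)
            if q1[state] > q0[state]:
                lo = lam    # acting still preferred: the subsidy must rise
            else:
                hi = lam
        return 0.5 * (lo + hi)

    # Toy two-state arm; all numbers are made up for illustration.
    P0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # transitions when resting
    P1 = np.array([[0.5, 0.5], [0.1, 0.9]])   # transitions when acting
    r0 = np.zeros(2)                          # reward when resting
    r1 = np.array([0.2, 1.0])                 # reward when acting
    print([round(whittle_index(P0, P1, r0, r1, s), 4) for s in range(2)])

In the learning setting mentioned at the end of the abstract, a Thompson sampling agent would periodically resample the transition matrices from its posterior and recompute such indices under the sampled parameters to choose which arms to activate.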

Bio:
Nima Akbarzadeh is a Ph.D. candidate in electrical and computer engineering at McGill University. He received the B.Sc. degree in electrical and computer engineering from Shiraz University, Iran, in 2014, and the M.Sc. degree in electrical and electronics engineering from Bilkent University, Turkey, in 2017. He is a recipient of a 2020 FRQNT Ph.D. scholarship. His research interests include stochastic control, reinforcement learning, and multi-armed bandits.
