Aaron Smith (University of Ottawa)
TITLE / TITRE
Free lunches and subsampling Monte Carlo

It is well-known that the performance of MCMC algorithms degrades quickly when targeting computationally expensive posterior distributions, including the posteriors of even simple models when the dataset is large. This has motivated the search for MCMC variants that scale well to large datasets. One simple approach, taken by several research groups, is to look at only a subsample of the data at every step. This method is known to work quite well for optimization, and variants of stochastic gradient descent are the workhorse of modern machine learning. In this talk, we focus on a simple "no-free-lunch" result showing that no algorithm of this sort can provide substantial speedups for Bayesian computation. We briefly sketch the main steps of the proof, illustrate how these generic results apply to realistic statistical problems and proposed algorithms, and discuss some special examples that avoid our generic results and provide a free (or at least cheap) lunch. We also mention recent work "in both directions": extending our basic conclusion to some non-reversible chains, and showing explicitly how it can be avoided for more complex posteriors. (Based on joint work with Patrick Conrad, Andrew Davis, James Johndrow, Zonghao Li, Youssef Marzouk, Natesh Pillai, Pengfei Wang and Azeem Zaman.)

PLACE / LIEU
Hybrid - CRM, Salle / Room 5340, Pavillon André Aisenstadt
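To make the "subsample at every step" idea concrete, here is a minimal illustrative sketch of naive subsampled Metropolis-Hastings for a toy Gaussian-mean model. The model, the flat prior, and all parameter values are assumptions for illustration only; this is a generic instance of the class of algorithms the talk discusses, not the speaker's method. Each iteration evaluates the log-likelihood on a minibatch and rescales it by N/m in place of the full-data evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from a normal with unknown mean (toy model, assumed).
N = 10_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def subsampled_log_post(theta, batch):
    """Estimate the log-posterior from a subsample, rescaled by N / len(batch).

    A flat prior is assumed, so this is just the rescaled minibatch log-likelihood.
    """
    return (N / len(batch)) * np.sum(-0.5 * (batch - theta) ** 2)

def subsampled_mh(n_iter=2000, batch_size=100, step=0.05):
    """Naive subsampled Metropolis-Hastings: each step sees only a minibatch."""
    theta = 0.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        batch = rng.choice(data, size=batch_size, replace=False)
        prop = theta + step * rng.normal()
        # Accept/reject using the *estimated* log-posterior on the same batch;
        # this estimate is noisy, which is exactly the source of the trouble.
        log_ratio = subsampled_log_post(prop, batch) - subsampled_log_post(theta, batch)
        if np.log(rng.uniform()) < log_ratio:
            theta = prop
        chain[t] = theta
    return chain

chain = subsampled_mh()
```

Because the acceptance ratio is computed from a noisy minibatch estimate rather than the full likelihood, the resulting chain generally targets a perturbed distribution with inflated variance; the no-free-lunch result discussed in the talk formalizes why such schemes cannot yield substantial speedups in general.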