Bandit Algorithms
- Authors:
- Tor Lattimore, University of Alberta
- Csaba Szepesvári, University of Alberta
- Date Published: July 2020
- Availability: Available
- Format: Hardback
- ISBN: 9781108486828
Other available formats:
eBook
Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
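To make the setting concrete, here is a minimal, self-contained sketch of the kind of algorithm the book analyses: an upper confidence bound strategy for a finite-armed stochastic bandit (the subject of Chapter 7). The Bernoulli arm means and the exploration constant are illustrative assumptions following the classical UCB1 rule, not a transcription of any specific algorithm in the text.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run a UCB-style learner on a Bernoulli bandit for `horizon` rounds.

    arm_means holds the true mean reward of each arm; the learner never
    sees these and must learn from sampled rewards alone.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of times each arm has been played
    totals = [0.0] * k    # cumulative reward observed from each arm

    def pull(i):
        r = 1.0 if rng.random() < arm_means[i] else 0.0
        counts[i] += 1
        totals[i] += r
        return r

    reward = 0.0
    # Play each arm once to initialise the empirical means.
    for i in range(k):
        reward += pull(i)
    for t in range(k, horizon):
        # Choose the arm maximising empirical mean plus exploration bonus.
        i = max(range(k), key=lambda a: totals[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward += pull(i)
    return reward, counts

reward, counts = ucb1([0.3, 0.5, 0.7], horizon=2000)
```

Over time the exploration bonus shrinks for frequently played arms, so play concentrates on the arm with the highest mean while suboptimal arms are pulled only logarithmically often — the behaviour whose regret the book bounds precisely.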
- Over 350 exercises, many with solutions, provide an excellent resource for a graduate course or self-study
- Presents mathematical rigor alongside intuitive explanations of the techniques and approaches
- This self-contained text is both broad and complete
Reviews & endorsements
'This year marks the 68th anniversary of 'multi-armed bandits' introduced by Herbert Robbins in 1952, and the 35th anniversary of his 1985 paper with me that advanced multi-armed bandit theory in new directions via the concept of 'regret' and a sharp asymptotic lower bound for the regret. This vibrant subject has attracted important multidisciplinary developments and applications. Bandit Algorithms gives it a comprehensive and up-to-date treatment, and meets the need for such books in instruction and research in the subject, as in a new course on contextual bandits and recommendation technology that I am developing at Stanford.' Tze L. Lai, Stanford University
'This is a timely book on the theory of multi-armed bandits, covering a very broad range of basic and advanced topics. The rigorous treatment combined with intuition makes it an ideal resource for anyone interested in the mathematical and algorithmic foundations of a fascinating and rapidly growing field of research.' Nicolò Cesa-Bianchi, University of Milan
'The field of bandit algorithms, in its modern form, and driven by prominent new applications, has been taking off in multiple directions. The book by Lattimore and Szepesvári is a timely contribution that will become a standard reference on the subject. The book offers a thorough exposition of an enormous amount of material, neatly organized in digestible pieces. It is mathematically rigorous, but also pleasant to read, rich in intuition and historical notes, and without superfluous details. Highly recommended.' John Tsitsiklis, Massachusetts Institute of Technology
Product details
- Date Published: July 2020
- Format: Hardback
- ISBN: 9781108486828
- Length: 536 pages
- Dimensions: 252 x 182 x 32 mm
- Weight: 1.07 kg
- Availability: Available
Table of Contents
1. Introduction
2. Foundations of probability
3. Stochastic processes and Markov chains
4. Finite-armed stochastic bandits
5. Concentration of measure
6. The explore-then-commit algorithm
7. The upper confidence bound algorithm
8. The upper confidence bound algorithm: asymptotic optimality
9. The upper confidence bound algorithm: minimax optimality
10. The upper confidence bound algorithm: Bernoulli noise
11. The Exp3 algorithm
12. The Exp3-IX algorithm
13. Lower bounds: basic ideas
14. Foundations of information theory
15. Minimax lower bounds
16. Asymptotic and instance dependent lower bounds
17. High probability lower bounds
18. Contextual bandits
19. Stochastic linear bandits
20. Confidence bounds for least squares estimators
21. Optimal design for least squares estimators
22. Stochastic linear bandits with finitely many arms
23. Stochastic linear bandits with sparsity
24. Minimax lower bounds for stochastic linear bandits
25. Asymptotic lower bounds for stochastic linear bandits
26. Foundations of convex analysis
27. Exp3 for adversarial linear bandits
28. Follow the regularized leader and mirror descent
29. The relation between adversarial and stochastic linear bandits
30. Combinatorial bandits
31. Non-stationary bandits
32. Ranking
33. Pure exploration
34. Foundations of Bayesian learning
35. Bayesian bandits
36. Thompson sampling
37. Partial monitoring
38. Markov decision processes.