Bandit Algorithms

  • Date Published: July 2020
  • availability: Available
  • format: Hardback
  • isbn: 9781108486828

Other available formats:
eBook



Description
Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.

    • Over 350 exercises, many with solutions, provide an excellent resource for a graduate course or self-study
    • Presents mathematical rigor alongside intuitive explanations of the techniques and approaches
    • This self-contained text is both broad and complete
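As a generic illustration of the finite-armed stochastic bandit setting and the upper confidence bound algorithm covered in the book's early chapters, here is a minimal UCB1 sketch in Python. This is not code from the book; the Bernoulli reward model, the confidence-bonus constant and the function name are illustrative assumptions.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run UCB1 on a finite-armed stochastic bandit with Bernoulli
    rewards given by `means`; return the total reward collected.

    Illustrative sketch only, not code from the book.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of times each arm has been pulled
    totals = [0.0] * k    # cumulative reward observed per arm
    reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialise estimates
        else:
            # pick the arm maximising empirical mean + exploration bonus
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward
```

On a two-armed instance with a large gap, e.g. `ucb1([0.9, 0.1], 2000)`, the total reward approaches what the best arm alone would give, reflecting the logarithmic regret the book analyses.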

    Reviews & endorsements

    'This year marks the 68th anniversary of 'multi-armed bandits' introduced by Herbert Robbins in 1952, and the 35th anniversary of his 1985 paper with me that advanced multi-armed bandit theory in new directions via the concept of 'regret' and a sharp asymptotic lower bound for the regret. This vibrant subject has attracted important multidisciplinary developments and applications. Bandit Algorithms gives it a comprehensive and up-to-date treatment, and meets the need for such books in instruction and research in the subject, as in a new course on contextual bandits and recommendation technology that I am developing at Stanford.' Tze L. Lai, Stanford University

    'This is a timely book on the theory of multi-armed bandits, covering a very broad range of basic and advanced topics. The rigorous treatment combined with intuition makes it an ideal resource for anyone interested in the mathematical and algorithmic foundations of a fascinating and rapidly growing field of research.' Nicolò Cesa-Bianchi, University of Milan

    'The field of bandit algorithms, in its modern form, and driven by prominent new applications, has been taking off in multiple directions. The book by Lattimore and Szepesvári is a timely contribution that will become a standard reference on the subject. The book offers a thorough exposition of an enormous amount of material, neatly organized in digestible pieces. It is mathematically rigorous, but also pleasant to read, rich in intuition and historical notes, and without superfluous details. Highly recommended.' John Tsitsiklis, Massachusetts Institute of Technology



Product details

    • length: 536 pages
    • dimensions: 252 x 182 x 32 mm
    • weight: 1.07kg
  • Table of Contents

    1. Introduction
    2. Foundations of probability
    3. Stochastic processes and Markov chains
    4. Finite-armed stochastic bandits
    5. Concentration of measure
    6. The explore-then-commit algorithm
    7. The upper confidence bound algorithm
    8. The upper confidence bound algorithm: asymptotic optimality
    9. The upper confidence bound algorithm: minimax optimality
    10. The upper confidence bound algorithm: Bernoulli noise
    11. The Exp3 algorithm
    12. The Exp3-IX algorithm
    13. Lower bounds: basic ideas
    14. Foundations of information theory
    15. Minimax lower bounds
    16. Asymptotic and instance dependent lower bounds
    17. High probability lower bounds
    18. Contextual bandits
    19. Stochastic linear bandits
    20. Confidence bounds for least squares estimators
    21. Optimal design for least squares estimators
    22. Stochastic linear bandits with finitely many arms
    23. Stochastic linear bandits with sparsity
    24. Minimax lower bounds for stochastic linear bandits
    25. Asymptotic lower bounds for stochastic linear bandits
    26. Foundations of convex analysis
    27. Exp3 for adversarial linear bandits
    28. Follow the regularized leader and mirror descent
    29. The relation between adversarial and stochastic linear bandits
    30. Combinatorial bandits
    31. Non-stationary bandits
    32. Ranking
    33. Pure exploration
    34. Foundations of Bayesian learning
    35. Bayesian bandits
    36. Thompson sampling
    37. Partial monitoring
    38. Markov decision processes

  • Authors

    Tor Lattimore, University of Alberta
    Tor Lattimore is a research scientist at DeepMind. His research is focused on decision making in the face of uncertainty, including bandit algorithms and reinforcement learning. Before joining DeepMind he was an assistant professor at Indiana University and a postdoctoral fellow at the University of Alberta.

    Csaba Szepesvári, University of Alberta
    Csaba Szepesvári is a Professor in the Department of Computing Science at the University of Alberta and a Principal Investigator of the Alberta Machine Intelligence Institute. He also leads the 'Foundations' team at DeepMind. He has co-authored a book on nonlinear approximate adaptive controllers and authored a book on reinforcement learning, in addition to publishing over 200 journal and conference papers. He is an action editor of the Journal of Machine Learning Research.

