class: center, middle, inverse, title-slide

.title[
# Multi-armed bandits
]
.author[
### Lars Relund Nielsen
]

---
layout: true

---
## Learning outcomes

* Define a k-armed bandit and understand the nature of the problem.
* Define the reward of an action (action-reward).
* Describe different methods for estimating the action-reward.
* Explain the differences between exploration and exploitation.
* Formulate an `\(\epsilon\)`-greedy algorithm for selecting the next action.
* Interpret the sample-average (variable step-size) versus exponential recency-weighted average (constant step-size) action-reward estimation.
* Argue why we might use a constant step-size in the case of non-stationarity.
* Understand the effect of optimistic initial values.
* Formulate an upper confidence bound action selection algorithm.

---
## The k-armed bandit problem

.pull-left[
* Multi-armed bandits attempt to find the best action (among `\(k\)` actions) by learning through trial and error.
* The name derives from “one-armed bandit,” a slang term for a slot machine.
* After each choice, you receive a reward drawn from an unknown probability distribution.
* The objective is to maximise the total reward over some time period.
* A natural strategy: try the actions at random for a while (exploration) and then start playing the higher-paying ones (exploitation).
]

.pull-right[
* The agent observes samples of the reward of an action and can use these to estimate the expected reward of that action.
* The process has only a single state `\(\Rightarrow\)` find a policy that chooses the action with the highest expected reward.

<img src="img/bandit.png" width="100%" style="display: block; margin: auto;" />
]

---
## Possible application

* Multi-armed bandits can be used in e.g. [digital advertising](https://research.facebook.com/blog/2021/4/auto-placement-of-ad-campaigns-using-multi-armed-bandits/).
* You are an advertiser seeking to optimise which ad (among `\(k\)` possible ads) to show a visitor on a particular website.
* Your goal is to maximise the number of clicks over time.

<img src="img/bandit-choose.png" width="80%" style="display: block; margin: auto;" />

---
## Estimating the value of an action

* We want to estimate the expected reward of an action `\(q_*(a) = \mathbb{E}[R_t | A_t = a]\)`.
* Assume that at time `\(t\)` action `\(a\)` has been chosen `\(N_t(a)\)` times. Then the estimated action value is
`$$Q_t(a) = \frac{R_1+R_2+\cdots+R_{N_t(a)}}{N_t(a)}.$$`
* Storing `\(Q_t(a)\)` this way is cumbersome since memory and computation requirements grow over time. Instead an *incremental* approach is better.
* If we assume that `\(N_t(a) = n-1\)` and set `\(Q_t(a) = Q_n\)`, then `\(Q_{n+1}\)` is
`$$Q_{n+1} = Q_n + \frac{1}{n} \left[R_n - Q_n\right].$$`
* We can update the estimate of the value of `\(a\)` using the previous estimate, the observed reward and the number of times the action has been chosen.

---
## Incremental value estimation

* A greedy approach for selecting the next action is
`\begin{equation} A_t = \arg\max_a Q_t(a). \end{equation}`
Here `\(\arg\max_a\)` means the value of `\(a\)` for which `\(Q_t(a)\)` is maximised.
* A pure greedy approach does not explore other actions.
* For exploration, use an `\(\varepsilon\)`-greedy approach: with probability `\(\varepsilon\)` we take a random draw among all of the actions (choosing each action with equal probability).

---
## Let us try to code the algorithm

- We need two classes: the agent and the environment (a minimal sketch is given below).
- The agent class stores
  - `\(Q_t(a)\)` (a vector),
  - `\(N_t(a)\)` (a vector),
  - a method (function) that selects the next action based on an `\(\varepsilon\)`-greedy approach,
  - a method (function) that updates `\(Q_t(a)\)` using the incremental formula.

--

- The environment class stores
  - the true mean rewards `\(\mu(a)\)` (in a real-life application these are unknown),
  - a method (function) that returns a sample reward from `\(R \sim N(\mu(a),1)\)`.
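A minimal sketch in base R of the two parts and the `\(\varepsilon\)`-greedy loop (for illustration only: the arm count `k`, the value of `epsilon`, and the use of plain vectors and functions instead of classes are assumptions, not the course's implementation):

```r
set.seed(42)
k <- 10                      # number of arms (assumed for illustration)
mu <- rnorm(k)               # environment: true (unknown) mean rewards
epsilon <- 0.1               # exploration probability (illustrative value)

Q <- rep(0, k)               # agent: estimated action values Q_t(a)
N <- rep(0, k)               # agent: times each action has been chosen, N_t(a)

# epsilon-greedy action selection
select_action <- function(Q, epsilon) {
  if (runif(1) < epsilon) sample(length(Q), 1)   # explore: uniform random action
  else which.max(Q)                              # exploit: greedy action
}

# environment: sample a reward R ~ N(mu(a), 1)
sample_reward <- function(a, mu) rnorm(1, mean = mu[a], sd = 1)

# run the bandit for 1000 time steps
for (t in 1:1000) {
  a <- select_action(Q, epsilon)
  r <- sample_reward(a, mu)
  N[a] <- N[a] + 1
  Q[a] <- Q[a] + (1 / N[a]) * (r - Q[a])   # incremental (sample-average) update
}
```

Note that `which.max()` breaks ties by picking the first maximal action; an implementation may instead break ties at random.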
---
## The role of the step-size

* In general we update the reward estimate of an action using
`\begin{equation} Q_{n+1} = Q_n + \alpha_n(a) \left[R_n - Q_n\right]. \end{equation}`
* The sample average uses `\(\alpha_n(a) = 1/n\)`; however, other choices of `\(\alpha_n(a)\)` are possible. In general we will converge to the true reward if
`\begin{equation} \sum_n \alpha_n(a) = \infty \quad\quad \mathsf{and} \quad\quad \sum_n \alpha_n(a)^2 < \infty. \end{equation}`
That is, the step-sizes must be large enough to recover from initial fluctuations, but small enough that the estimates eventually converge.

---
## The role of the step-size (2)

* Non-stationary process: the expected reward of an action changes over time.
* Here convergence is undesirable, and we may want to use a constant step-size `\(\alpha_n(a) = \alpha \in (0, 1]\)` instead:
`\begin{align} Q_{n+1} &= Q_n + \alpha \left[R_n - Q_n\right] \nonumber \\ &= \alpha R_n + (1 - \alpha)Q_n \nonumber \\ & \vdots \nonumber \\ &= (1-\alpha)^n Q_1 + \sum_{i=1}^{n} \alpha (1 - \alpha)^{n-i} R_i \end{align}`
* More recent rewards are given more weight.
* The weight given to each reward decays exponentially into the past (exponential recency-weighted average); see the sketch below.
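A small sketch of the constant step-size update (the helper name `update_constant` and the value `alpha = 0.1` are illustrative assumptions); compare it with the sample-average update used in the earlier sketch:

```r
# exponential recency-weighted average: the most recent reward gets weight alpha,
# and the weight of older rewards decays by a factor (1 - alpha) per step
update_constant <- function(Q, a, r, alpha = 0.1) {
  Q[a] <- Q[a] + alpha * (r - Q[a])
  Q
}
```

In the simulation loop one would then call `Q <- update_constant(Q, a, r)` instead of the sample-average update.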
---
## Upper-Confidence Bound Action Selection

* The `\(\epsilon\)`-greedy algorithm chooses the action to explore with equal probability among all actions in an exploration step.
* It would be better to select among the non-greedy actions according to their potential for being optimal.
* This can be done by taking into account both how close their estimates are to being maximal and the uncertainty in those estimates.
* One way to do this is to select actions using the *upper-confidence bound*:
`\begin{equation} A_t = \arg\max_a \left(Q_t(a) + c\sqrt{\frac{\ln t}{N_t(a)}}\right), \end{equation}`
where `\(N_t(a)\)` is the number of times action `\(a\)` has been selected so far.

---
## The square root term is a measure of the uncertainty

.left-column-wide[
* The more time has passed, and the less we have sampled an action, the higher the upper confidence bound.
* As the number of time steps increases, the denominator dominates the numerator because the `\(\ln\)` term flattens out.
* Each time we select an action our uncertainty decreases because `\(N_t(a)\)` in the denominator increases.
* If `\(N_t(a) = 0\)`, we consider `\(a\)` a maximal action, i.e. we select first among actions with `\(N_t(a) = 0\)`.
* The parameter `\(c > 0\)` controls the degree of exploration. A higher `\(c\)` puts more weight on the uncertainty (see the sketch below).
]

.right-column-small[
<img src="img/bandit-srt-1.png" width="100%" style="display: block; margin: auto;" />
]
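A hedged base-R sketch of UCB action selection (the function name `select_action_ucb` and the default `c = 2` are illustrative choices, not the course's code). It reuses the `Q` and `N` vectors from the `\(\varepsilon\)`-greedy sketch and replaces `select_action()` in the simulation loop:

```r
# UCB action selection: pick the action with the highest upper confidence bound
select_action_ucb <- function(Q, N, t, c = 2) {
  if (any(N == 0)) return(which(N == 0)[1])   # untried actions are considered maximal
  ucb <- Q + c * sqrt(log(t) / N)             # Q_t(a) + c * sqrt(ln(t) / N_t(a))
  which.max(ucb)                              # ties broken by the first maximal action
}
```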