[Repost] Recommendations with Thompson Sampling (Part II)

 

Recommendations with Thompson Sampling

06/05/2014

by Sergey Feldman

This is the second in a series of three blog posts on bandits for recommendation systems.

If you read the first post, you should now have a good idea of the challenges in building a good algorithm for dishing out recommendations in the bandit setting. The most important challenge is to balance exploitation with exploration. That is, we have two somewhat conflicting goals: (a) quickly find the best arm to pull and (b) pull the best arm as often as possible. What I dubbed the naive algorithm in the preceding blog post fulfilled these two goals in a direct way: explore for a while, and then exploit forever. It was an OK approach, but we found that more sophisticated approaches, like the UCB family of bandit algorithms, had significantly better performance and no parameters to tune.

In this post, we'll introduce another technique: Thompson sampling (also known as probability matching). This has been well covered elsewhere, but mostly for the binary reward case (zeros and ones). I'll also go over Thompson sampling in the log-normal reward case, and offer some approximations that work for any reward distribution.

Before I define Thompson sampling, let's build up some intuition. If we had an infinite number of pulls, we would know exactly what the expected rewards are, and there would be no reason to ever explore. With a finite number of pulls, we have to explore, because we are uncertain about which arm is best. The right machinery for quantifying uncertainty is the probability distribution. The UCB algorithms are implicitly using a probability distribution, but only one number from it: the upper confidence bound. In Bayesian thinking, we want to use the entire probability distribution. In the preceding post I defined \(p_a(r)\), the probability distribution from which rewards are drawn. That's what controls the bandit. It would be great if we could estimate this entire distribution, but we don't need to. Why? Because all we care about is its mean \( \mu_a \). What we will do is encode our uncertainty about \( \mu_a \) in the probability distribution \( p(\mu_a | \text{data}_a ) \), and then use that probability distribution to decide when to explore and when to exploit.

You may be confused by now because there are two related probability distributions floating around, so let's review:

  1. \(p_a(r)\) - the probability distribution that bandit a uses to generate rewards when it is pulled.
  2. \( p(\mu_a | \text{data}_a ) \) - the probability distribution of where we think the mean of \(p_a(r)\) is, after observing some data.

With Thompson sampling you keep around a probability distribution \( p(\mu_a | \text{data}_a ) \) that encodes your belief about where the expected reward \( \mu_a \) is for arm a. For the simple coin-flip case, we can use the convenient Beta distribution, and the distribution at round t after seeing \(S_{a,t}\) successes and \(F_{a,t}\) failures for arm a is simply:

                           \( p(\mu_a|\text{data}_a) = \text{Beta}(S_{a,t} + 1,F_{a,t} + 1) \),

where the added 1's come from a convenient uniform \(\text{Beta}(1,1)\) prior.

So now we have a distribution that encodes our uncertainty about where the true expected reward \( \mu_a \) is. What's the actual algorithm? Here it is, in all its simple glory (a short code sketch follows the two steps):

  1. Draw a random sample from \( p(\mu_a | \text{data}_a ) \)  for each arm a .
  2. Pull the arm which has the largest drawn sample.
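
A minimal Python sketch of these two steps for the Bernoulli (click / no-click) case might look like the following; the arm count, the simulated click-through rates, and all variable names are illustrative assumptions, not taken from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

n_arms = 3
true_ctr = np.array([0.04, 0.05, 0.06])    # unknown to the algorithm; illustrative values
successes = np.zeros(n_arms)               # S_{a,t}: observed clicks per arm
failures = np.zeros(n_arms)                # F_{a,t}: observed non-clicks per arm

for t in range(10_000):
    # Step 1: draw one sample from each arm's posterior Beta(S + 1, F + 1).
    samples = rng.beta(successes + 1, failures + 1)
    # Step 2: pull the arm with the largest drawn sample.
    arm = int(np.argmax(samples))
    reward = float(rng.random() < true_ctr[arm])   # simulated Bernoulli reward
    successes[arm] += reward
    failures[arm] += 1.0 - reward

print(successes + failures)   # pull counts; most pulls concentrate on the best arm
```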

That's it!  It turns out that this approach is a very natural way to balance exploration and exploitation.  Here is the same simulation from last time, comparing the algorithms from the preceding blog post to Thompson Sampling:

Normal Approximation

The Bernoulli case is well known and well understood. But what happens when you want to maximize, say, revenue instead of click-through rate? To find out, I coded up a log-normal bandit where each arm pays out strictly positive rewards drawn from a log-normal distribution (the code is messy, so I won't be posting it). For Thompson sampling, I used a full posterior with priors over the log-normal parameters μ and σ (note that these are not the mean and standard deviation of the log-normal itself), and for UCB I used the modified Cox method of computing confidence bounds for the log-normal distribution. The normal approximation to exact Thompson sampling is (using Central Limit Theorem arguments):

             \( p(\mu_a|\text{data}_a) = \mathcal{N}\left(\hat{\mu}_{a,t},\frac{\hat{\sigma}^2_{a,t}}{ N_{a,t}} \right) \),

where \(\hat{\mu}_{a,t}\) and \(\hat{\sigma}^2_{a,t}\) are the sample mean and sample variance, respectively, of the rewards observed from arm a at round t, and \(N_{a,t}\) is the number of times arm a has been pulled at round t.
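
A minimal sketch of arm selection under this normal approximation might look like the following; the function name, the rule for handling under-sampled arms, and the example data are my own assumptions:

```python
import numpy as np

def choose_arm(rewards_per_arm, rng):
    """Sample each arm's mean from N(mu_hat, sigma_hat^2 / N) and pick the largest."""
    sampled = []
    for rewards in rewards_per_arm:
        n = len(rewards)
        if n < 2:
            # Not enough data for a sample variance yet, so pull this arm to explore.
            return len(sampled)
        mu_hat = np.mean(rewards)
        var_hat = np.var(rewards, ddof=1)                  # sample variance
        sampled.append(rng.normal(mu_hat, np.sqrt(var_hat / n)))
    return int(np.argmax(sampled))

# Example with three arms; for the log-normal experiment these would be log-rewards.
rng = np.random.default_rng(0)
history = [np.array([1.0, 1.2]), np.array([0.9, 1.5, 1.1]), np.array([1.3])]
print(choose_arm(history, rng))   # prints 2: arm 2 has fewer than two observations
```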

The UCB normal approximation is identical to that in the previous simulation.  For all algorithms, I used the log of the observed rewards to compute sufficient statistics.  The results:

Observations:

  1. Epsilon-Greedy is still the worst performer.
  2. The normal approximations are slightly worse than their hand-designed counterparts - green is worse than orange and gray is worse than purple.
  3. UCB is doing better than Thompson sampling over this horizon, but Thompson sampling may be poised to do better in the long run.

The Trouble with UCB

In the above experiments UCB beat out Thompson sampling. It sounds like a great algorithm and performs well in simulations, but it has a key weakness when you actually get down to productionizing your bandit algorithm. Let's say that you aren't Google and you have limited computational resources, which means that you can only update your observed data in batch every 2 hours. In this delayed-batch case, UCB will pull the same arm every time for those 2 hours, because it is deterministic in the absence of immediate updates. Thompson sampling, on the other hand, relies on random samples, which will be different every round even if the distributions aren't updated for a while; UCB needs the distributions to be updated every single round to work properly.

Given that the simulated performance differences between Thompson sampling and UCB are small, I heartily recommend Thompson sampling over UCB; it will work in a larger variety of practical cases.

Avoiding Trouble with Thompson Sampling

RichRelevance sees gobs of data. This is usually great, but for Thompson sampling it can mean a subtle pitfall. To understand this point, first note that the variance of a Beta distribution with parameters α and β is:

                             \( \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} \).

For our recommender system, α is the number of successes (clicks) and β is the number of failures (non-clicks). As the amount of total data α+β goes up, the variance shrinks, and quickly. After a while, the posteriors will be so narrow that exploration will effectively cease. This may sound good - after all, didn't we learn what the best lever to pull is? In practice, we're dealing with a moving target, so it is a good idea to put an upper bound on α+β so that exploration can continue indefinitely. For details, see Section 7.2.3.
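
One simple way to enforce such a bound is to rescale the success and failure counts whenever their sum exceeds a cap, which preserves the posterior mean while keeping the variance from collapsing. Here is a hedged sketch, with the cap value and names chosen purely for illustration:

```python
def capped_beta_params(successes, failures, cap=1000.0):
    """Shrink the counts proportionally so that successes + failures <= cap."""
    total = successes + failures
    if total <= cap:
        return successes, failures
    scale = cap / total          # same ratio of clicks to non-clicks, smaller counts
    return successes * scale, failures * scale

# The Thompson draw is then Beta(s + 1, f + 1) with the capped counts,
# so the posterior stays wide enough to keep exploring indefinitely.
```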

Analogously, if you're optimizing for revenue instead of click-through rate and using a normal approximation, you can compute sample means and sample variances in an incremental fashion, with decay. This will ensure that older samples have less influence than newer ones and allow you to track changing means and variances.
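
One possible incremental form uses an exponentially decayed variant of Welford's online mean and variance update; the decay factor and the exact update below are my assumptions rather than something taken from the post's reference:

```python
class DecayedStats:
    """Exponentially decayed running mean and variance for one arm's rewards."""

    def __init__(self, gamma=0.999):
        self.gamma = gamma      # per-observation decay factor (closer to 1 = longer memory)
        self.w = 0.0            # effective (decayed) sample count
        self.mean = 0.0
        self.m2 = 0.0           # decayed sum of squared deviations from the mean

    def update(self, x):
        self.w = self.gamma * self.w + 1.0
        self.m2 = self.gamma * self.m2
        delta = x - self.mean
        self.mean += delta / self.w
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.w if self.w > 1.0 else 0.0
```

Because the effective sample count self.w is bounded above by 1 / (1 - gamma), the corresponding normal-approximation posterior never collapses, which has the same effect as capping α+β in the Beta case.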

Coming up next: contextual bandits!

About the author:

Sergey Feldman is a data scientist & machine learning cowboy with the RichRelevance Analytics team. He was born in Ukraine, moved with his family to Skokie, Illinois at age 10, and now lives in Seattle. In 2012 he obtained his machine learning PhD from the University of Washington. Sergey loves random forests and thinks the Fourier transform is pure magic.

Reposted from: https://www.cnblogs.com/breezedeus/p/3775339.html
