Thompson sampling

In artificial intelligence, Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.

Description

Consider a set of contexts $\mathcal{X}$, a set of actions $\mathcal{A}$, and rewards in $\mathbb{R}$. In each round, the player obtains a context $x \in \mathcal{X}$, plays an action $a \in \mathcal{A}$ and receives a reward $r \in \mathbb{R}$ following a distribution that depends on the context and the issued action. The aim of the player is to play actions so as to maximize the cumulative reward.

The elements of Thompson sampling are as follows:

  1. a likelihood function $P(r \mid \theta, a, x)$;
  2. a set $\Theta$ of parameters $\theta$ of the distribution of $r$;
  3. a prior distribution $P(\theta)$ on these parameters;
  4. past observation triplets $\mathcal{D} = \{(x; a; r)\}$;
  5. a posterior distribution $P(\theta \mid \mathcal{D}) \propto P(\mathcal{D} \mid \theta)\, P(\theta)$, where $P(\mathcal{D} \mid \theta)$ is the likelihood function.

Thompson sampling consists of playing the action $a \in \mathcal{A}$ according to the probability that it maximizes the expected reward, i.e. with probability

$$\int \mathbb{I}\left[\mathbb{E}(r \mid a, x, \theta) = \max_{a'} \mathbb{E}(r \mid a', x, \theta)\right] P(\theta \mid \mathcal{D})\, d\theta,$$

where $\mathbb{I}$ is the indicator function.

In practice, the rule is implemented by sampling, in each round, a parameter $\theta$ from the posterior $P(\theta \mid \mathcal{D})$, and choosing the action $a$ that maximizes $\mathbb{E}[r \mid \theta, a, x]$, i.e. the expected reward given the sampled parameter, the action and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round and then acts optimally according to them.
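
As a concrete illustration of this sampling rule (not part of the original description), the following is a minimal sketch for a context-free Bernoulli bandit, where Beta(1, 1) priors make the posterior update exact; the arm probabilities and round count are hypothetical.

```python
import numpy as np

def thompson_sampling_bernoulli(true_probs, n_rounds=1000, seed=0):
    """Minimal Thompson sampling for a Bernoulli multi-armed bandit.

    Each arm's unknown success probability gets a Beta(1, 1) prior;
    each observed reward (0 or 1) updates the corresponding Beta posterior.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(true_probs)
    successes = np.ones(n_arms)   # Beta alpha parameters (prior pseudo-counts)
    failures = np.ones(n_arms)    # Beta beta parameters (prior pseudo-counts)
    total_reward = 0

    for _ in range(n_rounds):
        # Sample one parameter per arm from its posterior ...
        theta = rng.beta(successes, failures)
        # ... and act greedily with respect to the sampled beliefs.
        arm = int(np.argmax(theta))
        reward = rng.random() < true_probs[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return int(total_reward)

# Example: three arms with unknown success probabilities.
print(thompson_sampling_bernoulli([0.2, 0.5, 0.7]))
```

Over many rounds the sampled beliefs concentrate on the best arm, so the rule explores early and exploits later without any explicit exploration schedule.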

History

Thompson sampling was originally described by Thompson in 1933, but was largely ignored by the artificial intelligence community. It was subsequently rediscovered numerous times independently in the context of reinforcement learning. A first proof of convergence for the bandit case was shown in 1997. The first application to Markov decision processes came in 2000. A related approach (see Bayesian control rule below) was published in 2010, and in the same year it was shown that Thompson sampling is instantaneously self-correcting. Asymptotic convergence results for contextual bandits were published in 2011. Nowadays, Thompson sampling is widely used in online learning problems: it has been applied to A/B testing in website design and online advertising, it has formed the basis for accelerated learning in decentralized decision making, and a Double Thompson Sampling (D-TS) algorithm has been proposed for dueling bandits, a variant of the traditional multi-armed bandit problem in which feedback comes in the form of pairwise comparisons.

Probability matching

Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
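
The following is a minimal sketch of the 60%/40% example above; the label names and sample size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class base rates observed in the training set (the 60%/40% example above).
base_rates = {"positive": 0.6, "negative": 0.4}

# Probability matching: each unlabeled example is assigned a label with
# probability equal to that label's base rate, rather than always
# predicting the majority class.
labels = list(base_rates.keys())
predictions = rng.choice(labels, size=10, p=list(base_rates.values()))
print(predictions)
```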

Bayesian control rule

A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as the Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.

The setup is as follows. Let $a_1, a_2, \ldots, a_T$ be the actions issued by an agent up to time $T$, and let $o_1, o_2, \ldots, o_T$ be the observations gathered by the agent up to time $T$. Then, the agent issues the action $a_{T+1}$ with probability:

$$P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T}),$$

where the "hat"-notation a ^ t denotes the fact that a t is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs θ Θ over its behaviors, then the Bayesian control rule becomes

$$P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T}) = \int_{\Theta} P(a_{T+1} \mid \theta, \hat{a}_{1:T}, o_{1:T})\, P(\theta \mid \hat{a}_{1:T}, o_{1:T})\, d\theta,$$

where $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$ is the posterior distribution over the parameter $\theta$ given actions $a_{1:T}$ and observations $o_{1:T}$.

In practice, the Bayesian control rule amounts to sampling, in each time step, a parameter $\theta$ from the posterior distribution $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$, where the posterior is computed using Bayes' rule by considering only the (causal) likelihoods of the observations $o_1, o_2, \ldots, o_T$ and ignoring the (causal) likelihoods of the actions $a_1, a_2, \ldots, a_T$, and then sampling the action $a_{T+1}$ from the action distribution $P(a_{T+1} \mid \theta, \hat{a}_{1:T}, o_{1:T})$.
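
The sketch below is not from the original publication; it illustrates the sampling loop just described for a hypothetical two-action problem with two candidate parameters $\theta$, each pairing an observation model with the policy an agent would adopt if that model were true. Note that the posterior update uses only the observation likelihoods, treating the executed actions as interventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical candidate parameters theta: each pairs a model of the
# observation probabilities (per action) with the behaviour adopted if
# that model were the true one.
models = [
    {"obs_prob": np.array([0.8, 0.3]), "policy": np.array([1.0, 0.0])},  # prefers action 0
    {"obs_prob": np.array([0.3, 0.8]), "policy": np.array([0.0, 1.0])},  # prefers action 1
]
log_post = np.zeros(len(models))        # uniform prior over theta
true_obs_prob = np.array([0.3, 0.8])    # the environment matches the second model

def posterior(log_post):
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

for t in range(200):
    # Sample theta from the posterior, then sample the action from P(a | theta, ...).
    theta = rng.choice(len(models), p=posterior(log_post))
    action = rng.choice(2, p=models[theta]["policy"])

    # The environment responds to the intervened action.
    obs = rng.random() < true_obs_prob[action]

    # Bayes update with the observation likelihood only; the executed action is
    # a causal intervention and contributes no likelihood term of its own.
    for i, m in enumerate(models):
        p_obs = m["obs_prob"][action]
        log_post[i] += np.log(p_obs if obs else 1.0 - p_obs)

print("posterior over candidate behaviours:", np.round(posterior(log_post), 3))
```

As the posterior concentrates on the candidate that best predicts the observations, the agent increasingly acts according to that candidate's behaviour, mirroring the Thompson sampling rule above.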
