
Expander walk sampling


In the mathematical discipline of graph theory, the expander walk sampling theorem states that sampling vertices in an expander graph by doing a random walk is almost as good as sampling the vertices independently from a uniform distribution. The earliest version of this theorem is due to Ajtai, Komlós & Szemerédi (1987), and the more general version is typically attributed to Gillman (1998).


Statement

Let $G = (V, E)$ be an expander graph with normalized second-largest eigenvalue $\lambda$. Let $n$ denote the number of vertices in $G$. Let $f : V \to [0, 1]$ be a function on the vertices of $G$. Let $\mu = \mathbb{E}[f]$ denote the mean of $f$, i.e. $\mu = \frac{1}{n} \sum_{v \in V} f(v)$. Then, if we let $Y_0, Y_1, \ldots, Y_k$ denote the vertices encountered in a $k$-step random walk on $G$ starting at a random vertex $Y_0$, we have the following for all $\gamma > 0$:

\[
\Pr\left[ \frac{1}{k} \sum_{i=0}^{k} f(Y_i) - \mu > \gamma \right] \le e^{-\Omega(\gamma^2 (1 - \lambda) k)}.
\]

Here the $\Omega$ hides an absolute constant $1/10$. An identical bound holds in the other direction:

\[
\Pr\left[ \frac{1}{k} \sum_{i=0}^{k} f(Y_i) - \mu < -\gamma \right] \le e^{-\Omega(\gamma^2 (1 - \lambda) k)}.
\]
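
The concentration above can be checked empirically. The following is a minimal simulation sketch, not taken from the article: it builds a random $2c$-regular multigraph (a good expander with high probability), estimates $\mu$ along a $k$-step walk started at a uniformly random vertex, and compares the result with the true mean and with independent uniform samples. The helper names (`make_expander`, `walk_estimate`) and all parameter values are illustrative assumptions.

```python
# Illustrative sketch: estimate the mean of f along an expander random walk
# versus independent uniform sampling.  The graph, the function f, and the
# parameters below are hypothetical choices, not taken from the article.
import random

def make_expander(n, c, rng):
    """Adjacency lists of a random 2c-regular multigraph: the union of c random
    permutations, each contributing one out-edge and one in-edge per vertex.
    Such graphs are good expanders with high probability."""
    adj = [[] for _ in range(n)]
    for _ in range(c):
        perm = list(range(n))
        rng.shuffle(perm)
        for v in range(n):
            adj[v].append(perm[v])   # edge v -> perm[v]
            adj[perm[v]].append(v)   # reverse edge, so every degree is 2c
    return adj

def walk_estimate(adj, f, k, rng):
    """Average of f(Y_0), ..., f(Y_k) along a k-step walk from a random start."""
    v = rng.randrange(len(adj))
    total = f(v)
    for _ in range(k):
        v = rng.choice(adj[v])
        total += f(v)
    return total / (k + 1)

def independent_estimate(n, f, k, rng):
    """Average of k + 1 independent uniform samples, for comparison."""
    return sum(f(rng.randrange(n)) for _ in range(k + 1)) / (k + 1)

if __name__ == "__main__":
    rng = random.Random(0)
    n, c, k = 10_000, 4, 200                    # 8-regular multigraph, 200-step walk
    adj = make_expander(n, c, rng)
    f = lambda v: 1.0 if v % 3 == 0 else 0.0    # true mean is roughly 1/3
    mu = sum(f(v) for v in range(n)) / n
    print("true mean      :", mu)
    print("walk estimate  :", walk_estimate(adj, f, k, rng))
    print("i.i.d. estimate:", independent_estimate(n, f, k, rng))
```

On a well-connected graph the walk estimate concentrates around $\mu$ at a rate comparable to independent sampling, which is the content of the theorem.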

Uses

This theorem is useful for randomness reduction in the study of derandomization. Sampling from an expander walk is an example of a randomness-efficient sampler. Note that the number of bits used in sampling $k$ independent samples from $f$ is $k \log n$, whereas if we sample from an infinite family of constant-degree expanders this costs only $\log n + O(k)$ bits. Such families exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky-Phillips-Sarnak.
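
As a rough back-of-the-envelope illustration of this cost comparison (a sketch assuming a degree-$d$ expander and $\lceil \log_2 \cdot \rceil$ bits per uniform choice; the specific numbers below are hypothetical):

```python
# Illustrative randomness-cost comparison: k independent uniform samples from n
# vertices cost about k * log2(n) bits, while a k-step walk on a degree-d
# expander costs log2(n) bits for the start vertex plus log2(d) bits per step,
# i.e. log n + O(k).  All parameter values below are made up for illustration.
from math import ceil, log2

def bits_independent(n, k):
    return k * ceil(log2(n))

def bits_expander_walk(n, d, k):
    return ceil(log2(n)) + k * ceil(log2(d))

n, d, k = 2**20, 8, 200
print("independent sampling:", bits_independent(n, k), "bits")
print("expander walk       :", bits_expander_walk(n, d, k), "bits")
```

The gap grows linearly in $k$: the walk pays $\log_2 n$ bits once for the start vertex and only $\log_2 d$ bits per step, while independent sampling pays $\log_2 n$ bits for every sample.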
