Computational indistinguishability

In computational complexity, if $\{D_n\}_{n \in \mathbb{N}}$ and $\{E_n\}_{n \in \mathbb{N}}$ are two distribution ensembles indexed by a security parameter $n$ (which usually refers to the length of the input), then we say they are computationally indistinguishable if for any non-uniform probabilistic polynomial-time algorithm $A$, the following quantity is a negligible function in $n$:

$$\delta(n) = \left|\, \Pr_{x \leftarrow D_n}[A(x) = 1] - \Pr_{x \leftarrow E_n}[A(x) = 1] \,\right|.$$
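Here "negligible" has its standard meaning: $\delta$ eventually shrinks below the inverse of every polynomial. For reference (this is the standard definition, spelled out rather than quoted from the text above):

```latex
% delta is negligible in n iff for every c > 0 there is a cutoff n_0
% beyond which delta(n) stays below n^{-c}:
\forall c > 0 \;\; \exists n_0 \in \mathbb{N} \;\; \forall n \ge n_0 : \quad \delta(n) < n^{-c}
```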

This is denoted $D_n \approx E_n$. In other words, no efficient algorithm $A$ behaves significantly differently when given samples from $D_n$ rather than $E_n$, in the limit as $n \to \infty$. Another interpretation of computational indistinguishability is that polynomial-time algorithms actively trying to distinguish the two ensembles cannot do so: any such algorithm will perform only negligibly better than random guessing.
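As a rough illustration of the quantity $\delta(n)$, the following Python sketch estimates a distinguisher's advantage by Monte Carlo sampling. The ensembles and the distinguisher here (uniform $n$-bit strings versus even-parity strings, tested by a parity check) are hypothetical toy choices, not part of the definition.

```python
import random

def advantage(distinguisher, sample_d, sample_e, trials=100_000):
    # Monte Carlo estimate of
    #   delta = |Pr[A(x)=1 : x ~ D_n] - Pr[A(x)=1 : x ~ E_n]|
    hits_d = sum(distinguisher(sample_d()) for _ in range(trials))
    hits_e = sum(distinguisher(sample_e()) for _ in range(trials))
    return abs(hits_d - hits_e) / trials

n = 16  # toy security parameter

def sample_d():
    # D_n: uniform n-bit strings
    return random.getrandbits(n)

def sample_e():
    # E_n: uniform n-bit strings of even parity (rejection sampling)
    while True:
        x = random.getrandbits(n)
        if bin(x).count("1") % 2 == 0:
            return x

def parity_test(x):
    # Distinguisher that outputs 1 exactly on even-parity inputs; it
    # separates these two toy ensembles with advantage about 1/2, so
    # they are clearly NOT computationally indistinguishable.
    return bin(x).count("1") % 2 == 0

print(advantage(parity_test, sample_d, sample_e))  # roughly 0.5
```

For genuinely indistinguishable ensembles, every polynomial-time distinguisher's estimated advantage would instead vanish faster than any inverse polynomial in $n$.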

Implicit in the definition is the condition that the algorithm $A$ must decide based on a single sample from one of the distributions. One might instead conceive of a situation in which the distinguishing algorithm could access as many samples as it needed. Accordingly, two ensembles that cannot be distinguished by polynomial-time algorithms even when looking at multiple samples are deemed indistinguishable by polynomial-time sampling. If the polynomial-time algorithm can generate samples in polynomial time, or has access to a random oracle that generates samples for it, then indistinguishability by polynomial-time sampling is equivalent to computational indistinguishability.
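To see why multiple-sample access matters, here is a hedged toy sketch (biased coins, an illustration chosen for this article rather than taken from it): a per-sample statistical gap of $\varepsilon$ that a single-sample test barely detects can be amplified by a distinguisher that examines many samples at once.

```python
import random
from statistics import mean

# Toy pair of distributions: Bernoulli(1/2) versus Bernoulli(1/2 + eps).
# A single sample gives advantage exactly eps; a distinguisher that sees
# k samples can threshold the empirical mean and amplify that gap.
# (Here eps is a fixed constant, not a negligible function of a security
# parameter; the point is only the amplification effect.)
eps, k, trials = 0.05, 2_000, 1_000
threshold = 0.5 + eps / 2

def mean_test(samples):
    # Output 1 when the empirical mean lies above the midpoint threshold.
    return mean(samples) > threshold

def acceptance_rate(p):
    hits = 0
    for _ in range(trials):
        samples = [1 if random.random() < p else 0 for _ in range(k)]
        hits += mean_test(samples)
    return hits / trials

adv = abs(acceptance_rate(0.5 + eps) - acceptance_rate(0.5))
print(adv)  # far above eps: many samples amplify a small per-sample gap
```

This is why the single-sample and multiple-sample notions are stated separately, and why their equivalence requires the distinguisher to be able to generate further samples on its own.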
