
Extensions of Fisher's method


In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) should be statistically independent.

Dependent statistics

A principal limitation of Fisher's method is that it is designed exclusively to combine independent p-values, which makes it unreliable when the p-values are dependent. A number of methods have been developed to extend its utility to this case.

Brown's method

Fisher showed that, for k independent p-values, minus twice the sum of their logarithms follows a χ²-distribution with 2k degrees of freedom:

X = −2 ∑_{i=1}^{k} log_e(p_i) ~ χ²(2k).
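As a sketch of the base method, the statistic above can be computed with only the standard library: for 2k degrees of freedom (an even integer), the χ² survival function has a closed form, so no external package is needed. The function name `fisher_combine` is illustrative, not from the source.

```python
import math

def fisher_combine(p_values):
    """Fisher's method for combining independent p-values.

    X = -2 * sum(log_e(p_i)) follows a chi-square distribution with
    2k degrees of freedom under the joint null, provided the
    p-values are independent.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    # For even degrees of freedom 2k, the chi-square survival
    # function has the closed form:
    #   P(chi2_{2k} > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x / 2.0
    combined_p = math.exp(-half) * sum(
        half**j / math.factorial(j) for j in range(k)
    )
    return x, combined_p
```

For a single p-value the combined p-value reduces to the input itself, which is a quick sanity check on the formula.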

In the case that these p-values are not independent, Brown proposed approximating X by a scaled χ²-distribution, c·χ²(k′), with k′ degrees of freedom and scale factor c.

The mean and variance of this scaled χ2 variable are:

E[c χ²(k′)] = c k′,  Var[c χ²(k′)] = 2 c² k′.

The scale c and degrees of freedom k′ are chosen so that these match the true mean and variance of X; the approximation is therefore accurate up to the first two moments.
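The moment-matching step can be sketched directly from the two equations above. Given the mean and variance of X under dependence (the variance must be supplied or estimated separately), solving c·k′ = E[X] and 2c²k′ = Var[X] gives closed forms for c and k′. The function name `brown_parameters` is illustrative.

```python
def brown_parameters(mean_x, var_x):
    """Moment matching for Brown's method.

    Given the mean and variance of X = -2 * sum(log_e(p_i)) under
    dependence, choose c and k' so that c * chi2(k') has the same
    first two moments:
        E[c chi2(k')]   = c * k'        = mean_x
        Var[c chi2(k')] = 2 * c**2 * k' = var_x
    """
    c = var_x / (2.0 * mean_x)
    k_prime = 2.0 * mean_x**2 / var_x
    return c, k_prime
```

The combined p-value is then P(χ²(k′) > X / c), which can be evaluated with any χ² survival function that accepts non-integer degrees of freedom. In the independent case (mean 2k, variance 4k) the formulas recover c = 1 and k′ = 2k, i.e. Fisher's original distribution.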

Kost's method: t approximation

Note that the method requires the covariance structure of the test statistics to be known up to a scalar multiplicative constant. See reference 2.
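Kost and McDermott's contribution is commonly cited as a cubic polynomial fit approximating the covariance between pairs of −2·log_e(p) terms as a function of the correlation ρ between the underlying test statistics. The coefficients below are the widely reproduced fitted values and should be treated as an approximation, not exact constants; the function names are illustrative.

```python
def kost_cov(rho):
    """Cubic polynomial approximation (as commonly attributed to
    Kost and McDermott) to cov(-2 log p_i, -2 log p_j) when the
    underlying test statistics have correlation rho. The
    coefficients are a published polynomial fit, not exact values.
    """
    return 3.263 * rho + 0.710 * rho**2 + 0.027 * rho**3

def brown_var(corr):
    """Variance of X = -2 * sum(log_e(p_i)) for a given correlation
    matrix (list of lists): the independent-case variance 4k plus
    the approximated pairwise covariance terms.
    """
    k = len(corr)
    var = 4.0 * k
    for i in range(k):
        for j in range(i + 1, k):
            var += 2.0 * kost_cov(corr[i][j])
    return var
```

A consistency check on the fit: at ρ = 1 the polynomial gives 3.263 + 0.710 + 0.027 = 4.0, matching the variance of a single −2·log_e(p) term, and at ρ = 0 it gives 0, recovering the independent case.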

References

Extensions of Fisher's method, Wikipedia.
