In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the familywise error rate. When all null hypotheses are true, the method provides familywise error control that is exact for tests that are stochastically independent, is conservative for tests that are positively dependent, and is liberal for tests that are negatively dependent. It is credited to a 1967 paper by the statistician and probabilist Zbyněk Šidák.
Given m different null hypotheses and a familywise alpha level of $\alpha$, each null hypothesis is rejected whose p-value is lower than

$$\alpha_{\mathrm{SID}} = 1 - (1 - \alpha)^{1/m}.$$
This test produces a familywise Type I error rate of exactly $\alpha$ when the tests are independent of each other and all null hypotheses are true. It is less stringent than the Bonferroni correction, but only slightly. For example, for $\alpha = 0.05$ and $m = 10$, the Bonferroni-adjusted level is 0.005 and the Šidák-adjusted level is approximately 0.005116.
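These adjusted levels are easy to reproduce numerically. The following is a minimal Python sketch, not taken from the source: the function names and the example p-values are illustrative assumptions, chosen only to show the Šidák and Bonferroni per-test levels and the rejection rule in action.

```python
# Minimal sketch of the Šidák and Bonferroni per-test significance levels.
# Function names and example p-values are illustrative assumptions.

def sidak_level(alpha, m):
    """Per-test level 1 - (1 - alpha)**(1/m)."""
    return 1 - (1 - alpha) ** (1 / m)

def bonferroni_level(alpha, m):
    """Per-test level alpha / m."""
    return alpha / m

alpha, m = 0.05, 10
print(sidak_level(alpha, m))       # ~0.005116
print(bonferroni_level(alpha, m))  # 0.005

# Applying the Šidák rejection rule to a hypothetical set of p-values:
p_values = [0.001, 0.011, 0.013, 0.20]
threshold = sidak_level(alpha, len(p_values))  # ~0.01274 for 4 tests
print([p < threshold for p in p_values])       # [True, True, False, False]
```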
One can also compute confidence intervals matching the test decision using the Šidák correction by using $100(1 - \alpha)^{1/m}\%$ confidence intervals.
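Continuing the $\alpha = 0.05$, $m = 10$ example, each individual interval would be a $100(1 - 0.05)^{1/10}\% \approx 99.49\%$ confidence interval. A one-line Python check of that arithmetic (the variable names are illustrative):

```python
# Confidence level of each individual interval matching the Šidák decision,
# continuing the alpha = 0.05, m = 10 example above.
alpha, m = 0.05, 10
per_interval_confidence = 100 * (1 - alpha) ** (1 / m)
print(per_interval_confidence)  # ~99.49, i.e. 99.49% confidence intervals
```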
The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be $\alpha_1$; then the probability that at least one of the tests is significant under this threshold is 1 minus the probability that none of them are significant. Since the tests are assumed to be independent, the probability that none of them is significant is the product of the probabilities that each of them is not significant, namely $(1 - \alpha_1)^m$, so the probability that at least one is significant is $1 - (1 - \alpha_1)^m$. We want this probability to equal $\alpha$, the significance level for the entire series of tests. Solving for $\alpha_1$ gives

$$\alpha_1 = 1 - (1 - \alpha)^{1/m}.$$
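The claim that the familywise error rate equals $\alpha$ for independent tests under the global null can also be checked by simulation. The sketch below is not from the source; NumPy, the seed, and the simulation sizes are assumptions. It draws independent uniform p-values (their distribution under a true null) and estimates how often at least one falls below the Šidák threshold.

```python
# Monte Carlo sketch: under the global null with independent tests,
# p-values are Uniform(0, 1), and the familywise error rate with the
# Šidák threshold should come out close to alpha.
import numpy as np

rng = np.random.default_rng(0)
alpha, m, n_sim = 0.05, 10, 200_000

threshold = 1 - (1 - alpha) ** (1 / m)
p = rng.uniform(size=(n_sim, m))             # independent null p-values
fwer = np.mean((p < threshold).any(axis=1))  # P(at least one rejection)
print(fwer)  # approximately 0.05
```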
Šidák correction for t-test