In statistics, when performing multiple comparisons, the term false positive ratio, also known as the false alarm ratio, usually refers to the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification).
Contents
- Definition
- Classification of multiple hypothesis tests
- Difference from type I error rate and other close terms
- References
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio.
Definition
The false positive rate is

FPR = FP / (FP + TN) = FP / N,

where FP is the number of false positives, TN is the number of true negatives and N = FP + TN is the total number of negatives.
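The definition above can be sketched in a few lines of Python; the function name and sample labels are illustrative, not part of any library.

```python
# Minimal sketch: false positive rate from binary labels and predictions,
# where 0 marks an actual negative and 1 an actual positive.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): false positives over all actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

# Four actual negatives, one of them wrongly flagged positive.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 0]
print(false_positive_rate(y_true, y_pred))  # 0.25
```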
The level of significance used to test each hypothesis is set based on the form of inference (simultaneous inference vs. selective inference) and its supporting criteria (for example, FWER or FDR), which were pre-determined by the researcher.
When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply

V / m_0,

the number of false positives V divided by the number of true null hypotheses m_0.
Since V is a random variable and m_0 is a constant (V ≤ m_0), the false positive ratio is itself a random variable, taking values between 0 and 1.
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio, expressed by FPR = E[V / m_0].
It is worth noticing that the two definitions ("false positive ratio" / "false positive rate") are somewhat interchangeable. For example, in the referenced article V / m_0 serves as the false positive "rate" rather than as its expectation.
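The relation between the ratio V / m_0 (a random variable) and the rate E[V / m_0] (its expectation) can be illustrated by simulation. The sketch below assumes all m_0 hypotheses are true nulls, so p-values are uniform on [0, 1], and each is tested at level alpha; the average of many simulated ratios should then be close to alpha.

```python
import random

def false_positive_ratio(m0, alpha, rng):
    """One realization of V / m0: under a true null, the p-value is uniform."""
    v = sum(1 for _ in range(m0) if rng.random() < alpha)  # V: false rejections
    return v / m0

rng = random.Random(0)
ratios = [false_positive_ratio(m0=100, alpha=0.05, rng=rng) for _ in range(2000)]
# Each ratio is random; averaging them estimates the rate E[V / m0].
print(sum(ratios) / len(ratios))  # close to alpha = 0.05
```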
Classification of multiple hypothesis tests
The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables:
|  | Null hypothesis is true | Alternative hypothesis is true | Total |
| --- | --- | --- | --- |
| Test declared significant | V | S | R |
| Test declared non-significant | U | T | m − R |
| Total | m_0 | m − m_0 | m |

In m hypothesis tests of which m_0 are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
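One way to make these outcomes concrete is to simulate m tests and tally the counts. The p-value distributions below (uniform under true nulls, concentrated near zero under false nulls) are illustrative assumptions for the sketch, not a fixed convention.

```python
import random

rng = random.Random(1)
m, m0, alpha = 1000, 800, 0.05

p_null = [rng.random() for _ in range(m0)]           # true nulls: uniform p-values
p_alt = [rng.random() * 0.1 for _ in range(m - m0)]  # false nulls: small p-values

V = sum(p < alpha for p in p_null)   # true nulls rejected (false positives)
U = m0 - V                           # true nulls not rejected
S = sum(p < alpha for p in p_alt)    # false nulls rejected (true positives)
T = (m - m0) - S                     # false nulls not rejected
R = V + S                            # total rejections: the observable count
print(V, S, R)
```

In practice only R is observed; V, S, U, and T depend on which null hypotheses are actually true, which is unknown.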
Difference from "type I error rate" and other close terms
While the false positive rate is mathematically equal to the type I error rate, it is viewed as a separate term for the following reasons:
- The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate under the assumption that all null hypotheses are true, and its choice is thus somewhat arbitrary (e.g. 5%, 1%).
- The false positive rate, in contrast, is a post-hoc quantity: it depends on the actual (unknown) combination of true and false null hypotheses, and is therefore not directly set by the researcher.
- In addition, "false positive rate" is commonly used for medical tests or diagnostic devices, while "type I error" is a term associated with statistical hypothesis tests.
The false positive rate should also not be confused with the familywise error rate, which is defined as FWER = Pr(V ≥ 1), the probability of making at least one false rejection among all the tests.
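The contrast is easiest to see for independent tests: testing m_0 true nulls at level alpha gives FWER = 1 − (1 − alpha)^m_0, which grows toward 1 as m_0 increases even though the per-test false positive rate stays at alpha. A quick check of that closed form (the independence assumption is for illustration only):

```python
alpha = 0.05
for m0 in (1, 10, 100):
    # Pr(V >= 1) = 1 - Pr(no false rejection) for m0 independent level-alpha tests
    fwer = 1 - (1 - alpha) ** m0
    print(m0, round(fwer, 3))  # 0.05, then 0.401, then 0.994
```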
Lastly, it is important to note the profound difference between the false positive rate and the false discovery rate: while the first is defined as E[V / m_0], the second is defined as E[V / R] (with V / R taken to be 0 when R = 0).
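The difference shows up numerically. The sketch below uses illustrative assumptions (900 true nulls tested at level 0.05 and, for simplicity, all 100 false nulls always rejected) to estimate both quantities side by side:

```python
import random

rng = random.Random(2)
m0, m1, alpha, reps = 900, 100, 0.05, 2000
fpr_samples, fdr_samples = [], []
for _ in range(reps):
    V = sum(rng.random() < alpha for _ in range(m0))  # false rejections of true nulls
    S = m1                                            # assume every false null is rejected
    R = V + S                                         # total rejections
    fpr_samples.append(V / m0)                        # false positive ratio
    fdr_samples.append(V / R if R else 0.0)           # false discovery proportion

print(round(sum(fpr_samples) / reps, 3))  # approx alpha = 0.05
print(round(sum(fdr_samples) / reps, 3))  # approx 45 / 145, about 0.31
```

The two averages differ because the false positive rate divides by the (fixed) number of true nulls, while the false discovery rate divides by the number of rejections.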