In statistics (classical test theory), Cronbach's alpha (α) is a coefficient of internal consistency. It is commonly used as an estimate of the reliability of a psychometric test for a sample of examinees.
It has been proposed that α can be viewed as the expected correlation of two tests that measure the same construct. By using this definition, it is implicitly assumed that the average correlation of a set of items is an accurate estimate of the average correlation of all items that pertain to a certain construct.
It was first named alpha by Lee Cronbach in 1951, as he had intended to continue with further coefficients. The measure can be viewed as an extension of the Kuder–Richardson Formula 20 (KR-20), which is an equivalent measure for dichotomous items. Alpha is not robust against missing data. Several other Greek letters have been used by later researchers to designate other measures used in a similar context. Somewhat related is the average variance extracted (AVE).
This article discusses the use of Cronbach's α as an index of the internal consistency of test scores.
Definition
Suppose that we measure a quantity which is a sum of $K$ components (items): $X = Y_1 + Y_2 + \cdots + Y_K$. Cronbach's $\alpha$ is defined as

$$\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K}\sigma^2_{Y_i}}{\sigma^2_X}\right)$$

where $\sigma^2_X$ is the variance of the observed total test scores, and $\sigma^2_{Y_i}$ the variance of component $i$ for the current sample of persons.
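As an illustrative sketch of this definition (NumPy; the function name and data are made up for the example, not part of the original text):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of components K
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances sigma^2_{Y_i}
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores sigma^2_X
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses of 5 persons to 3 Likert-type items.
scores = [[2, 3, 3],
          [4, 4, 5],
          [1, 2, 1],
          [5, 5, 4],
          [3, 3, 3]]
print(round(cronbach_alpha(scores), 3))  # 0.948
```

Any consistent variance convention (sample or population) may be used, since the same `ddof` appears in numerator and denominator.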
If the items are scored 0 and 1, a shortcut formula is

$$\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} P_i Q_i}{\sigma^2_X}\right)$$

where $P_i$ is the proportion scoring 1 on item $i$, and $Q_i = 1 - P_i$; this is the Kuder–Richardson Formula 20 (KR-20).
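A minimal sketch of the dichotomous shortcut (NumPy, made-up right/wrong data; population variances are used throughout so the item variances are exactly $P_iQ_i$):

```python
import numpy as np

def kr20(scores):
    """KR-20 for a persons-by-items matrix of 0/1 scores (ddof=0 throughout)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    p = scores.mean(axis=0)                 # P_i: proportion scoring 1 on item i
    q = 1.0 - p                             # Q_i = 1 - P_i
    total_var = scores.sum(axis=1).var()    # sigma^2_X (population variance)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical right/wrong responses of 5 persons to 3 items.
binary = [[1, 1, 1],
          [1, 1, 0],
          [1, 0, 0],
          [0, 0, 0],
          [1, 1, 1]]
print(round(kr20(binary), 3))  # 0.794
```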
Alternatively, Cronbach's $\alpha$ can be defined as

$$\alpha = \frac{K\bar{c}}{\bar{v} + (K-1)\bar{c}}$$

where $K$ is as above, $\bar{v}$ the average variance of the components, and $\bar{c}$ the average of all inter-item covariances between the components across the current sample of persons.
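The average-covariance form can be checked numerically against the variance-based form; a sketch on illustrative made-up data:

```python
import numpy as np

# Hypothetical 5-persons x 3-items score matrix.
scores = np.array([[2., 3., 3.],
                   [4., 4., 5.],
                   [1., 2., 1.],
                   [5., 5., 4.],
                   [3., 3., 3.]])
k = scores.shape[1]
cov = np.cov(scores, rowvar=False)          # K x K covariance matrix
v_bar = np.diag(cov).mean()                 # average item variance
c_bar = cov[~np.eye(k, dtype=bool)].mean()  # average inter-item covariance
alpha = (k * c_bar) / (v_bar + (k - 1) * c_bar)
print(round(alpha, 3))  # 0.948 -- same value as the variance-based formula
```

The agreement is exact because $\sigma^2_X$ equals the sum of all entries of the item covariance matrix.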
The standardized Cronbach's alpha can be defined as

$$\alpha_{\text{standardized}} = \frac{K\bar{r}}{1 + (K-1)\bar{r}}$$

where $K$ is as above and $\bar{r}$ the mean of the $K(K-1)/2$ non-redundant correlation coefficients (i.e., the mean of the off-diagonal elements of the correlation matrix).
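A short sketch of the standardized coefficient, computed from the correlation matrix of the same kind of made-up data:

```python
import numpy as np

# Hypothetical 5-persons x 3-items score matrix.
scores = np.array([[2., 3., 3.],
                   [4., 4., 5.],
                   [1., 2., 1.],
                   [5., 5., 4.],
                   [3., 3., 3.]])
k = scores.shape[1]
corr = np.corrcoef(scores, rowvar=False)     # K x K correlation matrix
r_bar = corr[~np.eye(k, dtype=bool)].mean()  # mean off-diagonal correlation
alpha_std = (k * r_bar) / (1 + (k - 1) * r_bar)
print(round(alpha_std, 3))  # 0.958
```

The standardized value differs from the raw value whenever the items have unequal variances, as they do here.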
Cronbach's $\alpha$ is related conceptually to the Spearman–Brown prediction formula. Both arise from the basic classical test theory result that the reliability of test scores is the ratio of true-score variance to total-score (true plus error) variance:

$$\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X}$$
The theoretical value of alpha varies from zero to one, since it is the ratio of two variances. However, depending on the estimation procedure used, estimates of alpha can take on any value less than or equal to 1, including negative values, although only positive values make sense. Higher values of alpha are more desirable. Some professionals, as a rule of thumb, require a reliability of 0.70 or higher (obtained on a substantial sample) before they will use an instrument. Although Nunnally (1978) is often cited for this rule, he never actually stated that 0.7 is a reasonable threshold in advanced research projects. Obviously, this rule should be applied with caution when the assumptions underlying alpha (such as essential tau-equivalence) are violated.
Reported test reliabilities consequently vary widely. In the case of psychometric tests, most fall within the range of 0.75 to 0.83, with at least one claiming a Cronbach's alpha above 0.90.
Internal consistency
Cronbach's alpha will generally increase as the intercorrelations among test items increase, and is thus known as an internal consistency estimate of reliability of test scores. Because intercorrelations among test items are maximized when all items measure the same construct, Cronbach's alpha is widely believed to indirectly indicate the degree to which a set of items measures a single unidimensional latent construct. It is easy to show, however, that tests with the same test length and variance, but different underlying factorial structures, can result in the same values of Cronbach's alpha. Indeed, several investigators have shown that alpha can take on quite high values even when the set of items measures several unrelated latent constructs. As a result, alpha is most appropriately used when the items measure different substantive areas within a single construct. When the set of items measures more than one construct, coefficient ω_hierarchical is more appropriate.
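The point that alpha can be high even for a multidimensional item set can be illustrated with a small simulation (NumPy; the sample size, loadings, and seed are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two independent latent constructs; each drives 3 of the 6 items.
f1 = rng.standard_normal(n)
f2 = rng.standard_normal(n)
noise = rng.standard_normal((n, 6))
items = np.empty((n, 6))
items[:, :3] = np.sqrt(0.9) * f1[:, None] + np.sqrt(0.1) * noise[:, :3]
items[:, 3:] = np.sqrt(0.9) * f2[:, None] + np.sqrt(0.1) * noise[:, 3:]

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
print(round(alpha, 2))  # around 0.77 in expectation, despite two unrelated constructs
```

Despite the scale measuring two completely unrelated constructs, alpha comfortably clears the common 0.70 rule of thumb, because the strong within-cluster covariances dominate the average.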
Alpha treats any covariance among items as true-score variance, even if items covary for spurious reasons. For example, alpha can be artificially inflated by making scales which consist of superficial changes to the wording within a set of items or by analyzing speeded tests.
A commonly accepted rule for describing internal consistency using Cronbach's alpha is as follows, though a greater number of items in the test can artificially inflate the value of alpha and a sample with a narrow range can deflate it, so this rule should be used with caution:

| Cronbach's alpha | Internal consistency |
|---|---|
| α ≥ 0.9 | Excellent |
| 0.8 ≤ α < 0.9 | Good |
| 0.7 ≤ α < 0.8 | Acceptable |
| 0.6 ≤ α < 0.7 | Questionable |
| 0.5 ≤ α < 0.6 | Poor |
| α < 0.5 | Unacceptable |
Generalizability theory
Cronbach and others generalized some basic assumptions of classical test theory in their generalizability theory. If this theory is applied to test construction, then it is assumed that the items that constitute the test are a random sample from a larger universe of items. The expected score of a person in the universe is called the universe score, analogous to a true score. The generalizability is defined analogously as the variance of the universe scores divided by the variance of the observable scores, analogous to the concept of reliability in classical test theory. In this theory, Cronbach's alpha is an unbiased estimate of the generalizability. For this to be true, the assumptions of essential tau-equivalence and uncorrelated errors must hold.
Intraclass correlation
Cronbach's alpha is said to be equal to the stepped-up consistency version of the intraclass correlation coefficient, which is commonly used in observational studies. But this is only conditionally true. In terms of variance components, the condition is, for item sampling, that the item (or, in the case of ratings, rater) variance component equals zero. If this variance component is negative, alpha will underestimate the stepped-up intraclass correlation coefficient; if it is positive, alpha will overestimate it.
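For a complete, fully crossed persons × items layout, the stepped-up consistency ICC can be computed from two-way ANOVA mean squares as (MS_persons − MS_error) / MS_persons, and on such data it coincides numerically with alpha; a sketch on illustrative made-up data:

```python
import numpy as np

# Hypothetical 5-persons x 3-items score matrix.
X = np.array([[2., 3., 3.],
              [4., 4., 5.],
              [1., 2., 1.],
              [5., 5., 4.],
              [3., 3., 3.]])
n, k = X.shape
grand = X.mean()
ss_persons = k * ((X.mean(axis=1) - grand) ** 2).sum()  # between-persons SS
ss_items = n * ((X.mean(axis=0) - grand) ** 2).sum()    # between-items SS
ss_total = ((X - grand) ** 2).sum()
ms_persons = ss_persons / (n - 1)
ms_error = (ss_total - ss_persons - ss_items) / ((n - 1) * (k - 1))
icc_ck = (ms_persons - ms_error) / ms_persons           # stepped-up consistency ICC
print(round(icc_ck, 3))  # 0.948 -- identical to Cronbach's alpha for these data
```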
Factor analysis
Cronbach's alpha also has a theoretical relation with factor analysis. As shown by Zinbarg, Revelle, Yovel and Li, alpha may be expressed as a function of the parameters of the hierarchical factor analysis model, which allows for a general factor that is common to all of the items of a measure in addition to group factors that are common to some but not all of the items. Alpha may be seen to be quite complexly determined from this perspective. That is, alpha is sensitive not only to general factor saturation in a scale but also to group factor saturation and even to variance in the scale scores arising from variability in the factor loadings. Coefficient ω_hierarchical has a much more straightforward interpretation as the proportion of observed variance in the scale scores that is due to the general factor common to all of the items comprising the scale.