Cross-battery assessment refers to the process by which psychologists use information from multiple test batteries (i.e., various IQ tests) to help guide diagnostic decisions and to gain a fuller picture of an individual’s cognitive abilities than can be ascertained through the use of single-battery assessments. The cross-battery approach (XBA) was first introduced in the late 1990s by Dawn Flanagan, Samuel Ortiz and Kevin McGrew. It offers practitioners the means to make systematic, valid and up-to-date interpretations of intelligence batteries and to augment them with other tests in a way that is consistent with the empirically supported Cattell–Horn–Carroll (CHC) theory of cognitive abilities.
Application of the XBA Approach
It is recommended that practitioners adhere to several guiding principles to ensure that XBA procedures are psychometrically and theoretically sound. First, select the intelligence battery that best addresses the referral concerns. Second, use subtests and clusters or composites from a single battery whenever possible to represent the broad CHC abilities (i.e., use actual norms whenever possible). Third, construct CHC broad and narrow ability clusters through acceptable methods, such as CHC theory-driven factor analyses or expert-consensus content-validity studies. Fourth, when two or more qualitatively different indicators of a broad ability of interest are not assessed or available on the core battery, supplement the core battery with broad-ability indicators from another battery. Fifth, when crossing batteries, select tests that were developed and normed within a few years of one another. Sixth, to minimize the effect of spurious differences between test scores, select tests from the smallest possible number of batteries. Underlining the importance of these considerations is the fact that overzealous application of the XBA approach by some practitioners has led to cases of misuse that produced erroneous and misleading results.
Implementation of the XBA Approach Step-by-Step
- Select primary intelligence battery for assessment
- Identify represented CHC abilities
- Select tests to measure CHC abilities not measured by the primary battery
- Administer the primary battery (and any other supplemental tests)
- Enter data into the XBA DMIA (provided in "Essentials of Cross-Battery Assessment: Second Edition")
- Follow XBA guidelines
The "Seven Deadly Sins" in SLD Evaluation
Specific learning disability (SLD) is the most commonly identified disability among school-aged children. According to Flanagan, Ortiz and Alfonso, a diagnosis of SLD requires that the following criteria be met, in order: a deficit in academic functioning is determined; the academic difficulties are not due to secondary exclusionary factors (e.g., neurological issues); a deficit in cognitive ability is determined; exclusionary factors are reviewed again to confirm that the academic and cognitive deficits are not due to secondary factors; underachievement is established; and the academic deficits are shown to have a negative effect on daily life. Flanagan, Ortiz and Alfonso suggest "seven deadly sins" as a metaphor for the misconceptions surrounding SLD evaluation that continue to undermine its reliability and validity.
1. Relentless searching for ipsative or intra-individual discrepancies
One of the most common practices in SLD evaluations is ipsatizing scores. Ipsatized scores are obtained by subtracting an individual's average score across subtests from each individual subtest score, expressing each score as a deviation from the person's own mean. Scores that deviate from that personal mean are then treated as clinically important indicators of relative weaknesses (lower) or relative strengths (higher), and weaknesses are taken as evidence of SLD. This approach focuses only on discrepancies that exist within the individual. However, the vast majority of people do not have flat cognitive profiles; instead, they show significant variability across their cognitive ability scores. The assumption that a person who obtains a certain score in one domain will show similar ability in all domains is erroneous. Instead of searching for discrepancies wherever they might be found, theory should guide comparisons between sub-tests.
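The ipsatizing computation described above can be sketched as follows; the subtest names and scores here are hypothetical and not drawn from any real battery:

```python
# Hypothetical standard scores (mean 100, SD 15) for one examinee.
scores = {
    "verbal_comprehension": 112,
    "fluid_reasoning": 104,
    "working_memory": 88,
    "processing_speed": 96,
}

# Ipsatize: subtract the person's own mean from each score, so each
# value reflects a *relative* strength (+) or weakness (-) within the profile.
personal_mean = sum(scores.values()) / len(scores)
ipsatized = {name: score - personal_mean for name, score in scores.items()}

for name, deviation in ipsatized.items():
    print(f"{name}: {deviation:+.1f}")
```

Note that deviations from one's own mean always sum to zero, so any non-flat profile necessarily shows both "relative strengths" and "relative weaknesses" — which is why such deviations alone cannot be evidence of SLD.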
2. Failure to distinguish between a relative weakness and a normative weakness
A lower score does not automatically gain clinical significance simply because the discrepancy has been determined to be real (statistically significant). Statistical significance only means that the difference between the two scores is unlikely to be due to chance (i.e., that they are genuinely different from one another); it does not mean that the difference is clinically meaningful or indicative of impairment.
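The distinction can be illustrated with the standard psychometric formula for the reliability of a difference between two scores; the scores and reliability coefficients below are assumed for illustration only:

```python
import math

# Illustrative values: two standard scores (mean 100, SD 15) with assumed
# reliability coefficients; none of these come from a real test battery.
sd = 15.0
r_a, r_b = 0.90, 0.85
score_a, score_b = 108, 90

# Standard error of measurement for each score: SEM = SD * sqrt(1 - r)
sem_a = sd * math.sqrt(1 - r_a)
sem_b = sd * math.sqrt(1 - r_b)

# Standard error of the difference between the two scores
se_diff = math.sqrt(sem_a**2 + sem_b**2)

# Critical difference at the .05 level (two-tailed, z = 1.96)
critical = 1.96 * se_diff

diff = abs(score_a - score_b)
print(f"difference = {diff}, critical value = {critical:.1f}")
```

Here the 18-point difference exceeds the critical value, so it is statistically reliable — yet the lower score (90) is still well within one standard deviation of the population mean, i.e., not a normative weakness.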
3. Obsession with the severe discrepancy calculation
The ability-achievement discrepancy has been regarded as so central to definitions and diagnostic criteria of SLD that practitioners often resort to calculating discrepancies among every pair of scores obtained in an evaluation. Given the number of discrepancies available to calculate, it would be surprising if at least one significant discrepancy were not found. A significant ability-achievement discrepancy should be neither synonymous with nor a necessary condition for an SLD diagnosis.
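The multiple-comparison problem behind this point can be made concrete with a back-of-the-envelope calculation. Assuming each comparison independently carries a 5% false-positive rate (a simplification, since real subtest scores are correlated):

```python
# Probability of finding at least one spurious "significant" discrepancy
# grows rapidly with the number of pairwise comparisons performed.
alpha = 0.05
for n_comparisons in (1, 10, 20, 45):  # 45 = all pairs among 10 subtests
    p_at_least_one = 1 - (1 - alpha) ** n_comparisons
    print(f"{n_comparisons:2d} comparisons -> "
          f"P(at least one chance finding) = {p_at_least_one:.2f}")
```

With 45 pairwise comparisons, the chance of at least one spurious "significant" result is roughly 90% — finding *some* discrepancy is nearly guaranteed.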
4. Belief that IQ is a near perfect predictor of potential
The emphasis on the ability-achievement discrepancy was likely fostered by the notion that IQ and other global ability composites are near-perfect predictors of an individual's academic achievement. In reality, global ability scores such as the FSIQ account for only about 35 to 50% of the total achievement variance, which leaves about 50 to 65% of the variance unexplained. Practitioners must therefore recognize that other important factors, beyond global ability, explain significant variance in achievement.
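The 35-to-50% figure follows directly from squaring the correlation between ability and achievement scores; a correlation in the 0.60-0.70 range (an assumed, illustrative figure) gives:

```python
# Variance in achievement explained by a global ability score is the
# squared correlation (r^2) between the two measures.
for r in (0.60, 0.65, 0.70):
    explained = r ** 2
    print(f"r = {r:.2f}: {explained:.0%} explained, "
          f"{1 - explained:.0%} unexplained")
```

Correlations of 0.60 to 0.70 thus explain only 36% to 49% of achievement variance, consistent with the range cited above.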
5. Failure to apply current theory and research
In evaluating SLD, practitioners may not always be aware of, or able to implement, procedures based on modern theory and research. As a result, practitioners often fail to apply contemporary psychometric theory and current SLD research that would aid in the identification and diagnosis of SLD.
6. Over-reliance on findings from a single sub-test
Diagnostic decisions are often based on the results of a single sub-test score or on scores used to screen individuals. Such single scores may not be suitable for diagnosis or other high-stakes decision making. A fundamental principle of psychometrics is that a single sub-test cannot, by itself, be considered a reliable indicator of the construct it is intended to measure. One sub-test is therefore not sufficient to indicate the presence of an SLD or other impairment.
7. Belief that aptitude and ability are the same
Aptitude and ability are two concepts that are often conflated, and it is important to differentiate between them given the shift toward understanding SLD in terms of the difference between ability and aptitude. When evaluating SLD, examining aptitude is important because aptitudes are the abilities associated with long-term academic outcomes.