In metrology, measurement uncertainty is a non-negative parameter characterizing the dispersion of the values attributed to a measured quantity. All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value.
Contents
- Background
- Indirect measurement
- Propagation of distributions
- Type A and Type B evaluation of uncertainty
- Sensitivity coefficients
- Uncertainty evaluation
- Models with any number of output quantities
- Alternative perspective
- References
The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice for the value of the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., a mean, median, or mode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.
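Expressed as a short sketch (the numbers are purely illustrative), the last statement amounts to:

```python
# Relative measurement uncertainty (dimensionless), defined only when the measured value is nonzero.
def relative_uncertainty(u_y: float, y: float) -> float:
    return u_y / abs(y)

print(relative_uncertainty(u_y=0.05, y=9.81))  # about 0.0051, i.e. roughly 0.5 %
```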
Background
The purpose of measurement is to provide information about a quantity of interest – a measurand. For example, the measurand might be the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.
No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.
The dispersion of the measured values would relate to how well the measurement is performed. Their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value. The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value. However, this information would not generally be adequate.
The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values.
Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. ASME B89.7.3.1, Guidelines for Decision Rules in Determining Conformance to Specifications addresses the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification. ASME B89.7.3.2, Guidelines for the Evaluation of Dimensional Measurement Uncertainty, provides a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty. ASME B89.7.3.3, Guidelines for Assessing the Reliability of Dimensional Measurement Uncertainty Statements, examines how to resolve disagreements over the magnitude of the measurement uncertainty statement. ASME B89.7.4, Measurement Uncertainty and Conformance Testing: Risk Analysis, provides guidance on the risks involved in any product acceptance/rejection decision.
The "Guide to the Expression of Uncertainty in Measurement", commonly known as the GUM, is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs), by international laboratory accreditation standards such as ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories which is required for international laboratory Accreditation, and employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.
Indirect measurement
The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand.
There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured.
Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement when the alignment of the measuring instrument is not perfectly vertical, or when the ambient temperature differs from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example that the lack of alignment is at most 0.001° and that the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.
As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.
The items required by a measurement model to define a measurand are known as input quantities. The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand.
Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X1, ..., XN, about which information is available, by a measurement model in the form

Y = f(X1, ..., XN),

where f is known as the measurement function. A general expression for a measurement model is

h(Y, X1, ..., XN) = 0.

It is taken that a procedure exists for calculating Y given X1, ..., XN, and that Y is uniquely defined by this equation.
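As a minimal sketch (not taken from the GUM; all names and numbers are illustrative), a measurement model of the bathroom-scale kind might look as follows, with the measured spring extension, a spring stiffness obtained from calibration, and a zero-offset correction as input quantities:

```python
def measurement_model(extension_m: float, stiffness_n_per_m: float, offset_kg: float) -> float:
    """Return the measurand (mass in kg) from estimates of the input quantities."""
    g = 9.81  # local acceleration due to gravity, m/s^2, treated here as exactly known
    return stiffness_n_per_m * extension_m / g - offset_kg

# Estimates of the input quantities yield an estimate y of the measurand Y.
y = measurement_model(extension_m=0.012, stiffness_n_per_m=60_000.0, offset_kg=0.2)
print(y)
```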
Propagation of distributions
The true values of the input quantities X1, ..., XN are unknown. In the GUM approach, X1, ..., XN are characterized by probability distributions and treated mathematically as random variables. These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X1, ..., XN. Sometimes, some or all of X1, ..., XN are interrelated, and the relevant distributions, which are known as joint, apply to these quantities taken together.
Consider estimates x1, ..., xN, respectively, of the input quantities X1, ..., XN, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on. The probability distributions characterizing X1, ..., XN are chosen such that the estimates x1, ..., xN, respectively, are the expectations of X1, ..., XN. Moreover, for the ith input quantity, consider a standard uncertainty, given the symbol u(xi), defined as the standard deviation of the input quantity Xi. This standard uncertainty is said to be associated with the corresponding estimate xi.
The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the Xi and also to Y. In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the Xi. The determination of the probability distribution for Y from this information is known as the propagation of distributions.
As an illustration, consider a measurement model Y = X1 + X2 in the case that X1 and X2 are each characterized by a (different) rectangular, or uniform, probability distribution. Y then has a symmetric trapezoidal probability distribution.
Once the input quantities X1, ..., XN have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate of Y, and the standard deviation of Y as the standard uncertainty associated with this estimate.
Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability there is more than one coverage interval: the probabilistically symmetric coverage interval, for which the probabilities of a value lying to the left and to the right of the interval are equal, and the shortest coverage interval, whose length is least over all coverage intervals having the same coverage probability.
Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person rather than that of a motor car that is being measured, both constitute prior knowledge about the possible values of the measurand. Such additional information can be used to provide a probability distribution for Y that gives a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y.
Type A and Type B evaluation of uncertainty
Knowledge about an input quantity Xi is inferred from repeated measured values ("Type A evaluation of uncertainty"), or from scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty").
In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity X, given repeated measured values of it obtained independently, is a Gaussian distribution. X then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average. When the uncertainty is evaluated from a small number of measured values, the corresponding distribution can be taken as a t-distribution. Other considerations apply when the measured values are not obtained independently.
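As a small sketch, assuming independent repeated readings of some quantity (the values below are invented), a Type A evaluation reduces to computing the mean and the standard deviation of the mean:

```python
import statistics

# Hypothetical repeated, independent readings of an input quantity (arbitrary units).
readings = [10.03, 9.98, 10.05, 10.01, 9.97, 10.02]

n = len(readings)
x = statistics.mean(readings)    # estimate of the input quantity
s = statistics.stdev(readings)   # sample standard deviation (n - 1 divisor)
u_x = s / n ** 0.5               # standard uncertainty: standard deviation of the average

print(f"estimate = {x:.4f}, standard uncertainty = {u_x:.4f} ({n - 1} degrees of freedom)")
```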
For a Type B evaluation of uncertainty, often the only available information is that X lies in a specified interval [a, b]. In such a case, knowledge of the quantity can be characterized by a rectangular (uniform) probability distribution with limits a and b. If different information were available, a probability distribution consistent with that information would be used.
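For such a rectangular distribution the expectation is the midpoint of the interval and the variance is (b − a)²/12, so the standard uncertainty follows directly. A minimal sketch with illustrative limits:

```python
import math

# Hypothetical Type B information: the quantity is only known to lie in [a, b].
a, b = 19.8, 20.2

x = (a + b) / 2                      # estimate: expectation of the rectangular distribution
u_x = (b - a) / (2 * math.sqrt(3))   # standard uncertainty: square root of (b - a)^2 / 12

print(f"estimate = {x}, standard uncertainty = {u_x:.4f}")
```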
Sensitivity coefficients
Sensitivity coefficients c1, ..., cN describe how the estimate y of the output quantity Y would be influenced by small changes in the estimates x1, ..., xN of the input quantities X1, ..., XN. For the measurement model Y = f(X1, ..., XN), the sensitivity coefficient ci equals the first-order partial derivative of f with respect to Xi, evaluated at the estimates x1, ..., xN. For the linear measurement model

Y = c1 X1 + ... + cN XN,

with X1, ..., XN independent, a change in xi equal to u(xi) would give a change ci u(xi) in y. This statement would generally be approximate for a measurement model Y = f(X1, ..., XN). The relative magnitudes of the terms |ci| u(xi) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y.
The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the terms |ci| u(xi), but by these terms combined in quadrature, namely by the (generally approximate) expression

u²(y) = c1² u²(x1) + ... + cN² u²(xN),
which is known as the law of propagation of uncertainty.
When the input quantities Xi contain dependencies, the above formula is augmented by terms containing covariances, which may increase or decrease u(y).
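A minimal sketch of the law of propagation of uncertainty for uncorrelated inputs, in which the sensitivity coefficients are approximated by central finite differences of a generic measurement function (the model and numbers used in the example are illustrative):

```python
from typing import Callable, Sequence

def combined_standard_uncertainty(
    f: Callable[..., float],
    estimates: Sequence[float],
    uncertainties: Sequence[float],
) -> float:
    """Return u(y) = sqrt(sum_i (c_i * u(x_i))**2), with c_i = df/dx_i at the estimates."""
    u_y_squared = 0.0
    for i, (x_i, u_i) in enumerate(zip(estimates, uncertainties)):
        h = u_i if u_i > 0 else 1e-8              # finite-difference step size
        plus = list(estimates); plus[i] = x_i + h
        minus = list(estimates); minus[i] = x_i - h
        c_i = (f(*plus) - f(*minus)) / (2 * h)    # sensitivity coefficient c_i
        u_y_squared += (c_i * u_i) ** 2
    return u_y_squared ** 0.5

# Example: resistance from a voltage and a current measurement, R = V / I.
u_R = combined_standard_uncertainty(lambda V, I: V / I, [5.0, 0.1], [0.02, 0.001])
print(u_R)
```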
Uncertainty evaluation
The main stages of uncertainty evaluation are formulation and calculation, the latter consisting of propagation and summarizing. The formulation stage consists of
- defining the output quantity Y (the measurand),
- identifying the input quantities on which Y depends,
- developing a measurement model relating Y to the input quantities, and
- on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent).
The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain
- the expectation of Y, taken as an estimate y of Y,
- the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
- a coverage interval containing Y with a specified coverage probability.
The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including
- the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
- analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y, and
- a Monte Carlo method, in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values.
For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
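A small sketch of approach 3), using the model Y = X1 + X2 discussed earlier with assumed rectangular distributions for the two input quantities (the limits are illustrative); a histogram of the draws would show the symmetric trapezoidal shape mentioned above:

```python
import random
import statistics

# Monte Carlo propagation of distributions for the model Y = X1 + X2.
M = 100_000
draws = [random.uniform(1.0, 2.0) + random.uniform(0.5, 2.5) for _ in range(M)]

y = statistics.mean(draws)     # estimate of Y
u_y = statistics.stdev(draws)  # standard uncertainty associated with y
draws.sort()
low, high = draws[int(0.025 * M)], draws[int(0.975 * M)]  # 95 % probabilistically symmetric coverage interval

print(f"y = {y:.3f}, u(y) = {u_y:.3f}, 95 % coverage interval = [{low:.3f}, {high:.3f}]")
```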
Models with any number of output quantities
When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended. The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available.
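As a sketch of the multivariate case (a hypothetical model with two output quantities computed from the same inputs, so that the outputs are correlated and require a joint description), a Monte Carlo calculation can also deliver the covariance between the outputs:

```python
import random
import statistics

# Hypothetical bivariate measurement model: Y1 = X1 + X2 and Y2 = X1 - X2 share inputs.
M = 50_000
samples = [(random.gauss(10.0, 0.2), random.gauss(4.0, 0.1)) for _ in range(M)]
y1 = [x1 + x2 for x1, x2 in samples]
y2 = [x1 - x2 for x1, x2 in samples]

m1, m2 = statistics.mean(y1), statistics.mean(y2)
u1, u2 = statistics.stdev(y1), statistics.stdev(y2)
# Off-diagonal element of the output uncertainty (covariance) matrix.
cov12 = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / (M - 1)

print(f"estimates ({m1:.3f}, {m2:.3f}), uncertainties ({u1:.3f}, {u2:.3f}), covariance {cov12:.4f}")
```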
Alternative perspective
Most of this article represents the most common view of measurement uncertainty, which assumes that random variables are proper mathematical models for uncertain quantities and simple probability distributions are sufficient for representing all forms of measurement uncertainties. In some situations, however, a mathematical interval rather than a probability distribution might be a better model of uncertainty. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.
A more robust representation of measurement uncertainty in such cases can be fashioned from intervals. An interval [a,b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a + b)/2, b] with probability one half, and within any subinterval of [a,b] with probability equal to the width of the subinterval divided by b – a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
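A minimal sketch of propagating such intervals through a simple sum (interval arithmetic on assumed bounds; unlike a uniform distribution, this tracks only the enclosure and attaches no probability to sub-intervals of the result):

```python
# Interval propagation for a sum of two measured quantities, each known only to lie
# within stated bounds (the bounds here are illustrative).
def add_intervals(a: tuple[float, float], b: tuple[float, float]) -> tuple[float, float]:
    return (a[0] + b[0], a[1] + b[1])

x1 = (9.8, 10.2)   # e.g. a reading reported with a plus-minus range
x2 = (4.9, 5.3)
print(add_intervals(x1, x2))   # -> (14.7, 15.5): the sum lies somewhere in this interval
```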