Test and Measurement Equipment (T&ME) must be calibrated periodically to ensure that it operates within its specified parameters and, if it does not, aligned so that it performs within its design specifications. The uncertainty of the calibration system used to calibrate the T&ME should not add appreciable error to this process.
The calibration process usually involves comparing the T&ME to a standard that performs like functions with better accuracy. The ratio of the accuracy of the Unit Under Test (UUT) to the accuracy of the standard is known as the Test Accuracy Ratio (TAR). However, this ratio does not consider other potential sources of error in the calibration process.
Errors in the calibration process are associated not only with the specifications of the standard, but can also come from sources such as environmental variations, other devices used in the calibration process, technician error, and so on. These errors should be identified and quantified to produce an estimate of the calibration uncertainty, typically stated at a 95% confidence level (k=2). The ratio of the accuracy of the UUT to the estimated calibration uncertainty is known as the Test Uncertainty Ratio (TUR). This ratio is more reliable because it accounts for sources of error in the calibration process that the TAR does not.
Also important is the selection of the test points. These should be chosen carefully in order to give a high degree of confidence that the UUT is operating within its specified parameters. The TUR should be large enough to provide reliability of the calibration.
Some quality standards attempt to define what this ratio should be. ANSI/NCSL Z540-1-1994 states, “The laboratory shall ensure that calibration uncertainties are sufficiently small so that the adequacy of the measurement is not affected.” It also states, “Collective uncertainty of the measurement standards shall not exceed 25% of the acceptable tolerance (e.g., manufacturer specifications).” This 25% equates to a TUR of 4:1. Other quality standards have recommended TURs as high as 10:1. For some, a TUR of 3:1, 2:1, or even 1:1 is acceptable. Any of these may be acceptable to a specific user who understands the risks involved with lower TURs or builds them into his or her measurement process. When accepting a TUR of less than 4:1, it is important to consider where in the UUT's tolerance band its “As Found” reading is determined to lie and, more important, where the UUT is left at the end of the calibration process.
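The 25% requirement and the 4:1 TUR are the same test stated two ways: an uncertainty no greater than one quarter of the tolerance is exactly a ratio of at least 4:1. A small illustrative check (the function name is an assumption, not part of the standard):

```python
def meets_collective_uncertainty_limit(uut_tolerance: float,
                                       calibration_uncertainty: float,
                                       required_tur: float = 4.0) -> bool:
    """True when the calibration uncertainty does not exceed
    1/required_tur of the UUT tolerance (25% for a 4:1 TUR)."""
    return calibration_uncertainty <= uut_tolerance / required_tur

print(meets_collective_uncertainty_limit(1.0, 0.25))  # 25% of tolerance → True
print(meets_collective_uncertainty_limit(1.0, 0.40))  # 2.5:1 TUR → False
```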
A 4:1 TUR is the target for which most high-quality calibration labs strive. It is the point at which the in-tolerance probability stays at 100% the longest, with the best economies of scale. In some cases, however, a 4:1 TUR may be unachievable. Factors that can force a TUR of less than 4:1 include:
• Limited availability of adequate standards
• T&ME whose technology is approaching the intrinsic level of the specific discipline
The user may accept the higher risk associated with the achievable TUR (e.g., 2:1) rather than demanding a 4:1 TUR. In cases where a 4:1 TUR is necessary, the calibration provider may incur a substantial capital expense to purchase the appropriate laboratory standards. This can raise the price of calibration, which is the other alternative: choosing to pay higher costs for better measurement assurance (and reduced risk).