Safety & Quality Standard

What is electrical calibration?

Electrical calibration refers to the process of verifying the performance of, or adjusting, any instrument that measures or tests electrical parameters. This discipline is usually referred to as dc and low frequency electrical metrology. Principal parameters include voltage, current, resistance, inductance, capacitance, time, and frequency. Other parameters, including electrical power and phase, also fall within this segment of metrology. Ratiometric comparisons are often performed as well, in which a known value of a parameter is compared to an unknown value of the same parameter.

Electrical calibration involves the use of precise devices that evaluate the performance of key properties for other devices, called units under test (UUTs). Because these precise devices have thoroughly known performance characteristics compared to the UUT, it is possible to evaluate the UUT's performance and to adjust it to identify or minimize errors. Typically, the performance of such precision devices should be four or more times better than that of the UUT.
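
As a rough illustration, this rule can be checked directly from published specifications. The short Python sketch below uses hypothetical numbers (a UUT specified at ±50 ppm against a standard specified at ±10 ppm); the function name is ours, not any instrument's.

def accuracy_ratio(uut_spec: float, standard_spec: float) -> float:
    """Ratio of the UUT's specified uncertainty to the standard's."""
    return uut_spec / standard_spec

# Hypothetical specs: UUT at +/-50 ppm, standard at +/-10 ppm
ratio = accuracy_ratio(uut_spec=50e-6, standard_spec=10e-6)
print(f"Accuracy ratio: {ratio:.1f}:1")  # 5.0:1
assert ratio >= 4, "standard is not sufficiently better than the UUT"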

These precision devices fall into two broad categories. Electrical signal sources are often referred to as either calibrators or standards. Precision measurement devices are often classified as precision digital multimeters, measurement standards, or ratio bridges.

Calibrators and standards

A calibrator is usually able to provide a wide range of precision output signals. Its voltage settings, for example, can range from as little as a few microvolts, through several decades of millivolts and volts, up to a common maximum of about one kilovolt.

Additionally, modern calibrators commonly provide outputs for several different electrical functions, such as voltage, resistance, and current.

A standard is considered to be even more precise than a calibrator, capable of performance about four or more times better. This improved performance usually comes with reduced flexibility, however. Often a standard can provide only one electrical function, and with only a single output setting or a few output settings.

Precision digital multimeters, measurement standards, and ratio bridges

Precision digital multimeters (DMMs) provide excellent measurement performance for various electrical parameters across a number of decades of values. Measurement functions usually include voltage, current, and resistance; less commonly, frequency, capacitance, and others are also included. The higher-performance category of measurement device is termed a measurement standard or, in some cases, a ratio bridge. These devices commonly have fewer functions but better performance than precision DMMs.

For proper calibration, there are additional needs beyond simply using precision devices to evaluate the UUT. These precision devices must themselves be routinely calibrated in a manner that agrees with (or is traceable to) an international standard of the parameter being evaluated. This traceability is evidenced through an unbroken chain of documented comparisons with increasingly better standards, a chain that ultimately includes a recognized national, international, or intrinsic standard.
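
The notion of an unbroken chain can be made concrete as data. The Python sketch below models a chain of documented comparisons with hypothetical instrument names and certificate numbers, and checks that each standard in the chain is itself the instrument calibrated at the next level up.

from dataclasses import dataclass

@dataclass
class CalibrationRecord:
    instrument: str            # the device that was calibrated
    calibrated_against: str    # the better standard used
    certificate: str           # document evidencing the comparison

# Hypothetical chain, from the working instrument up to the national level
chain = [
    CalibrationRecord("bench DMM", "multifunction calibrator", "C-1042"),
    CalibrationRecord("multifunction calibrator", "dc voltage standard", "C-0981"),
    CalibrationRecord("dc voltage standard", "national measurement institute", "NMI-2231"),
]

# Unbroken means each record's standard appears as the next record's instrument
for lower, upper in zip(chain, chain[1:]):
    assert lower.calibrated_against == upper.instrument, "broken chain"
print("Chain is documented and unbroken up to the national standard.")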

Selecting a solution

The workload for dc and low frequency ac electrical metrology can include a wide variety of test and measurement instruments: multimeters (analog or digital, bench or handheld), oscilloscopes, ScopeMeter® Test Tools, power and energy meters, RTD and thermocouple thermometers, process instrumentation, data loggers, strip and chart recorders, and more. The functions of the calibrator must cover all, or essentially all, of the test equipment functions being calibrated. Most importantly, the calibrator must have better performance than the test specifications of the workload require.

The normal rule of thumb is that a calibrator or standard must be four or more times better than the specification being evaluated, whether for performance verification or for an adjustment process.

Successful selection of a calibrator involves a thorough analysis of the specifications of both the equipment to be tested and the calibrating standards. This analysis is usually based on the manufacturer's recommended specifications for the tested equipment versus the specifications of the calibrating instrumentation.
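
As a sketch of such an analysis, the Python example below evaluates a common "ppm of output plus floor" specification form at a single test point for both the tested equipment and the calibrator, then computes the resulting accuracy ratio. All numbers are hypothetical, not drawn from any data sheet.

def absolute_uncertainty(output: float, ppm_of_output: float, floor: float) -> float:
    """Uncertainty, in the output's units, for a 'ppm of output + floor' spec."""
    return output * ppm_of_output * 1e-6 + floor

test_point = 10.0  # volts dc

uut_u = absolute_uncertainty(test_point, ppm_of_output=35, floor=5e-6)   # hypothetical UUT spec
cal_u = absolute_uncertainty(test_point, ppm_of_output=6, floor=1.5e-6)  # hypothetical calibrator spec

print(f"UUT:        +/-{uut_u * 1e6:.1f} uV")  # +/-355.0 uV
print(f"Calibrator: +/-{cal_u * 1e6:.1f} uV")  # +/-61.5 uV
print(f"Ratio:      {uut_u / cal_u:.1f}:1")    # about 5.8:1, meeting the 4:1 rule

In practice this comparison is repeated at every test point on every function and range of the workload, since the ratio can vary considerably across a specification table.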

The term compliance describes the amount of electrical drive a calibrating source can provide to the electrical load created by the measuring instrument being tested. A calibrator can provide only a limited drive without compromising the accuracy of its signal. Certain test instruments (for example, certain analog meters and panel meters) present relatively large loads and require significant electrical drive from the calibrator. Therefore, compliance is an important consideration when evaluating specification performance.
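
A basic compliance check follows from Ohm's law. In the hypothetical Python sketch below, a calibrator sources a voltage into the resistance presented by an analog meter, and the current the load draws is compared against an assumed compliance limit; both the load and the limit are invented for illustration.

test_voltage = 10.0         # volts to be sourced
meter_resistance = 1_000.0  # ohms; analog meters can present low-impedance loads
compliance_limit = 0.020    # amperes the source can supply at full accuracy (assumed)

required_current = test_voltage / meter_resistance  # Ohm's law: I = V / R
print(f"Load draws {required_current * 1e3:.1f} mA")  # 10.0 mA

if required_current <= compliance_limit:
    print("Within compliance: the calibrator can drive this load.")
else:
    print("Exceeds compliance: the source may not maintain its stated accuracy.")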

Measurement and sourcing

Calibration involves both types of applications: those requiring precision measurement and those requiring precision sourcing. A precision source is used to test a measurement instrument, and a precision measurement device is used to test a sourcing instrument. It is considered a requirement of good metrology to have a quality assurance program that incorporates a routine check of the lab's calibrating instrumentation. Because of this, a lab should be equipped with precision measurement instruments whose performance specifications are similar, in both functionality and specification, to those of the lab's precision sourcing instruments. Routine intercomparison will then ensure confidence in the consistency of the lab's instruments and will detect problems early, so proper corrective actions can be taken. Hence, it is considered best practice to have and use both categories of instrumentation in a calibration facility, as illustrated below.
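
Such a routine intercomparison can be reduced to a simple acceptance test: source a value with the lab's calibrator, measure it with the lab's precision DMM, and confirm that the difference falls within the two instruments' combined specifications. The Python sketch below illustrates one way to do this, combining hypothetical specifications by root sum of squares.

import math

sourced = 10.000000   # volts set on the calibrator
measured = 10.000045  # volts read by the precision DMM

# Hypothetical "ppm of output + floor" specs at this test point
cal_u = 10.0 * 6e-6 + 1.5e-6  # calibrator
dmm_u = 10.0 * 8e-6 + 2e-6    # DMM

combined = math.sqrt(cal_u**2 + dmm_u**2)  # root-sum-square combination
difference = abs(measured - sourced)

print(f"Difference: {difference * 1e6:.1f} uV, limit: {combined * 1e6:.1f} uV")
if difference > combined:
    print("Disagreement detected: investigate both instruments before further use.")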