Terms such as resolution, accuracy, linearity, and repeatability are frequently misunderstood despite their common use. Engineers need to understand this terminology fully before making a purchase decision, since these terms are important factors in the selection of many precision measuring instruments such as displacement sensors. Unfortunately, direct comparisons are not always possible, because not all displacement sensor specifications are presented in the same way.
When selecting displacement and distance sensors, and other measuring instruments, the terminology applied is critical, though it can be confusing. Engineers who get this part wrong may pay more than necessary for over-specified sensors. Conversely, if the displacement sensor does not satisfy the required specification, a product or control system may lack critical performance.
One of the most frequently misunderstood and poorly-defined descriptions of performance is resolution. A simple resolution statement in a datasheet does not always provide enough information for a fully informed sensor selection.
The resolution of a sensor is defined as the smallest possible change it can detect in the quantity that it is measuring. Resolution is different from accuracy. A low resolution sensor can be accurate for some applications, and an inaccurate sensor could have a high resolution.
In practice, resolution is determined by the signal-to-noise ratio over the acquired frequency range. The least significant digit in a digital display will often fluctuate, indicating that changes of that magnitude are only just resolved. Resolution is closely related to measurement precision.
The primary factor limiting the smallest possible measurement in a sensor’s output is electrical noise. For instance, if a sensor has 10 µm of noise in its output, a 5 µm displacement cannot be measured. The resolution of the selected sensor must therefore be substantially finer than the smallest required measurement.
For best results, the resolution should be at least 10 times finer than the smallest measurement required. Additionally, resolution is only meaningful within the context of the system bandwidth, the application, the unit of measure, and the measurement method used by the sensor manufacturer.
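The noise-floor reasoning above can be sketched as a simple check. This is a minimal illustration, not a manufacturer's method: the readings, the `resolvable` helper, and the 10x `margin` are all assumptions for the example, with RMS noise estimated as the standard deviation of readings taken against a stationary target.

```python
import statistics

def resolvable(samples_um, required_measurement_um, margin=10.0):
    """Check whether a sensor's noise floor supports a required measurement.

    samples_um: readings (in micrometres) taken with the target held still,
    so their spread reflects electrical noise alone. Per the rule of thumb
    in the text, the noise floor should be at least `margin` times smaller
    than the smallest measurement that must be made.
    """
    noise_rms = statistics.pstdev(samples_um)  # RMS noise about the mean
    return noise_rms * margin <= required_measurement_um

# Hypothetical readings: ~0.15 µm RMS noise around a 100 µm reading.
readings = [100.0, 100.2, 99.8, 100.1, 99.9, 100.0, 100.2, 99.8]
print(resolvable(readings, required_measurement_um=5.0))  # True: 1.5 µm <= 5 µm
print(resolvable(readings, required_measurement_um=0.5))  # False: 1.5 µm > 0.5 µm
```

A real selection exercise would use the manufacturer's stated resolution at the relevant bandwidth rather than a handful of samples, but the arithmetic is the same.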
The accuracy of a displacement sensor describes the maximum measuring error considering all of the factors that have an influence on the real measurement value. These factors include the resolution, linearity, long-term stability, temperature stability, and a statistical error (which can be removed by calculation).
Repeatability is a quantitative specification of the deviation of mutually independent measurements, which are taken under the same conditions. It defines how good the electrical output is for the same input if tried repeatedly under the same conditions. For displacement sensors, repeatability is a measure of the sensor’s stability over time.
At fast sample rates there is less time available to average each measurement, so sample-to-sample repeatability can be poorer. Lowering the sample rate improves repeatability, although this does not continue indefinitely. Beyond some slower sample rate, repeatability will start to worsen as long-term drift in the components and temperature variations cause changes in the sensor’s output.
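The averaging effect described above can be demonstrated with simulated readings. The noise model and sample counts here are assumptions for illustration only; the point is that averaging more raw samples per output value (i.e., a slower effective sample rate) shrinks the sample-to-sample spread, until drift (not modelled here) takes over.

```python
import random
import statistics

random.seed(0)

def reading(noise_rms=0.5):
    # Hypothetical sensor: true position 10.0 plus Gaussian electrical noise.
    return 10.0 + random.gauss(0.0, noise_rms)

def averaged_reading(n):
    # Slower effective sample rate: average n raw samples per output value.
    return sum(reading() for _ in range(n)) / n

# Sample-to-sample repeatability as the standard deviation of repeated outputs.
fast = statistics.pstdev(averaged_reading(1) for _ in range(2000))
slow = statistics.pstdev(averaged_reading(16) for _ in range(2000))
print(fast > slow)  # averaging improves repeatability, roughly as 1/sqrt(n)
```

In a real sensor the improvement eventually stops, because long-term drift and temperature changes dominate at very slow rates.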
Signal-to-Noise Ratio (SNR)
SNR describes the quality of a transmitted useful signal relative to the background noise, and often limits the accuracy of the measurements being taken. Any data transmission gives rise to noise. The greater the separation between the useful signal and the noise, the more stably the transmitted data can be reconstructed from the signal.
For instance, during digital sampling, if the useful signal power and the noise power become too close, an incorrect value may be detected and the information may be corrupted. The SNR is determined by dividing the mean useful signal power by the mean noise power. It is a ratio of the detected powers (not amplitudes) and is usually expressed in decibels.
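The power-ratio definition above translates directly into the usual decibel formula, SNR(dB) = 10·log10(P_signal / P_noise). A minimal sketch (the example power values are arbitrary):

```python
import math

def snr_db(signal_power, noise_power):
    """SNR from mean powers (not amplitudes), expressed in decibels."""
    return 10.0 * math.log10(signal_power / noise_power)

# Signal power 1000x the noise power gives 30 dB.
print(snr_db(1.0, 0.001))
```

Note that if amplitudes rather than powers were being compared, the factor would be 20 rather than 10, since power scales with the square of amplitude.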
Generally, the SNR definition refers to electrical powers in the output of a detector or sensor. It is common in optical measurements that a light beam impinging on a photodetector such as a photodiode, generates a photocurrent proportional to the optical power with some electronic noise. Depending on the situation, the SNR may be restricted either by noise generated by the sensor electronics or by optical noise effects.
The non-linearity or linearity of the sensor is the maximum deviation between an ideal straight-line characteristic and the real characteristic. It is usually provided as a percentage of full-scale output (% FSO) or a percentage of the measuring range.
In many applications, the actual measurement accuracy largely depends on the sensor non-linearity. Users often quote a device’s resolution when the linearity figure is what is needed. The linearity figure is usually about 10 to 20 times larger than the resolution, so an incorrectly specified measurement sensor will under-perform.
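Non-linearity as defined above can be computed from calibration data: fit a straight line to the input/output pairs and take the worst deviation as a percentage of full-scale output. The least-squares fit and the calibration values below are assumptions for illustration; manufacturers may use a different reference line (e.g., end-point or best-fit-through-zero).

```python
def nonlinearity_pct_fso(inputs, outputs, full_scale_output):
    """Max deviation from the least-squares straight line, as % of FSO."""
    n = len(inputs)
    mx = sum(inputs) / n
    my = sum(outputs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
             / sum((x - mx) ** 2 for x in inputs))
    intercept = my - slope * mx
    worst = max(abs(y - (slope * x + intercept))
                for x, y in zip(inputs, outputs))
    return 100.0 * worst / full_scale_output

# Hypothetical calibration: 0-10 mm range, 0-10 V output with a slight bow.
x = [0, 2.5, 5, 7.5, 10]
y = [0.00, 2.52, 5.03, 7.52, 10.00]  # mid-range readings sit slightly high
print(nonlinearity_pct_fso(x, y, full_scale_output=10.0))
```

For this data the worst residual is 16 mV against a 10 V full scale, i.e., about 0.16% FSO, which is indeed an order of magnitude larger than a typical resolution figure.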
The stability of measurement systems or sensors can change over time, despite the use of high quality components. Long-term stability quantifies the possible change in the output signal over a certain time period, with ambient conditions and the input quantity held constant. This figure is typically stated in % FSO / month.
Many suppliers of low-cost laser sensors do not state the ‘temperature stability’ of their sensors in the technical datasheet. So how is the actual measurement error determined or how are the results corrected to account for this? Typically, measurement errors can be as high as 400 ppm/K, which can significantly impact the measurement accuracy.
In contrast, high performance laser sensor suppliers are more likely to state a sensor’s temperature stability on the datasheet. In addition, they also provide active temperature compensation algorithms for the sensor, improving temperature stability to 100 ppm/K or better.
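The practical impact of these ppm/K figures is easy to quantify. The sketch below assumes the stability figure is referenced to the measuring range (datasheets may instead reference full-scale output); range and temperature change are chosen for illustration.

```python
def temperature_error(measuring_range_mm, delta_t_kelvin, stability_ppm_per_k):
    """Worst-case output drift (in mm) over a temperature change.

    Assumes the ppm/K stability figure is relative to the measuring range.
    """
    return measuring_range_mm * stability_ppm_per_k * 1e-6 * delta_t_kelvin

# 10 mm range, 5 K warm-up: uncompensated 400 ppm/K vs compensated 100 ppm/K.
print(temperature_error(10.0, 5.0, 400))  # 0.02 mm, i.e. 20 um of drift
print(temperature_error(10.0, 5.0, 100))  # 0.005 mm, i.e. 5 um of drift
```

A 20 µm drift would swamp a micrometre-level measurement, which is why an unstated temperature stability figure should prompt questions to the supplier.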
The measuring range is defined as the region in which the measurement object must be placed for the sensor to satisfy its specified technical data. The extremes of this region are known as the start and end of the measuring range.
Some sensors have a free space between the front of the sensor and the start of the measuring range. For a contact sensor, the measuring range is the distance between the minimum and maximum mechanically possible positions of the measurement object relative to the sensor mounting.
The offset distance of a sensor varies between suppliers and between sensor principles. It corresponds to the distance between the edge of the sensor and the start of the measuring range or the center of the range.
Response time is defined as the period from the time of the event to the signal output; it is deemed to have elapsed when 90% of the final signal output is reached. Many sensor datasheets do not state the response time, and it is incorrect to assume that it equals the stated measurement frequency or speed. Often, the response time varies with the position of the measurement object.
For instance, the response time can be substantially longer than the stated measurement frequency or speed, if the measurement object is initially out of the measuring range and later moves into the range.
Additionally, if the measurement object is in the measuring range but moves quickly over a large percentage of the range, say more than 50%, then the response time will be more than the stated measurement speed. This can cause problems, especially in closed loop control applications or applications in which fast moving individual components move through the measuring range, as in a production process.
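The 90% criterion described above can be applied to a recorded step response. This is an illustrative sketch with made-up sample data, assuming a step event at t=0 and an output rising from zero toward its final value.

```python
def response_time_90(times, outputs, final_value):
    """Time at which the output first reaches 90% of its final value.

    Assumes a step event at t=0 and an output rising from 0 toward
    final_value; returns None if the output never settles in the record.
    """
    threshold = 0.9 * final_value
    for t, v in zip(times, outputs):
        if v >= threshold:
            return t
    return None

# Hypothetical step response sampled every 1 ms.
t_ms = [0, 1, 2, 3, 4, 5]
out = [0.0, 4.0, 7.0, 8.5, 9.2, 9.8]
print(response_time_90(t_ms, out, final_value=10.0))  # 4: first sample >= 9.0
```

Applied to real data, this makes it easy to see that the 90% point can arrive several sample periods after the event, i.e., later than the stated measurement speed would suggest.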
This information has been sourced, reviewed and adapted from materials provided by Micro Epsilon.
For more information on this source, please visit Micro Epsilon.