Pixels in OLED (organic light-emitting diode), miniLED, and microLED displays are individual emitters, each producing its own light. As a result, these displays may exhibit pixel-to-pixel variability in luminance and color output.
This variability appears as non-uniformity, or mura, across the display, leading to low yields of high-quality displays as well as rejection of costly components and expensive rework. Automated visual inspection has proven effective for identifying imperfections such as non-uniformity while maintaining the rapid cycle times, quantitative pass/fail results, and reduced operational costs necessary for commercialization and mass production.
For emissive displays, pixel- and subpixel-level measurement techniques have enabled calibration of display uniformity by measuring, identifying, and correcting the luminance output of each pixel. This procedure, known as pixel uniformity correction or 'demura', depends on precise pixel-level luminance measurement to calculate accurate correction coefficients for each pixel. Guaranteeing precise measurement for qualification and correction at the pixel level becomes more challenging as the resolution of emissive displays increases and pixels become smaller, more numerous, and more closely spaced.
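The article does not detail how correction coefficients are derived, but the basic idea can be sketched as a per-pixel gain map that raises dim pixels and lowers bright ones toward a uniform target. The function name, default target, and gain cap below are illustrative assumptions, not Radiant's implementation:

```python
import numpy as np

def demura_gains(measured, target=None, max_gain=2.0):
    """Sketch of per-pixel correction coefficients from a luminance map.

    measured: 2-D array of measured per-pixel luminance values.
    target:   desired uniform luminance; defaults to the median of the map.
    max_gain: illustrative cap to avoid over-driving very dim pixels.
    """
    measured = np.asarray(measured, dtype=float)
    if target is None:
        target = np.median(measured)
    # Pixels dimmer than the target get gain > 1; brighter pixels get gain < 1.
    return np.clip(target / np.maximum(measured, 1e-9), 0.0, max_gain)

# Example: a 2 x 2 luminance map with one dim and one bright pixel
gains = demura_gains([[90.0, 100.0], [110.0, 100.0]], target=100.0)
```

The accuracy of these coefficients is only as good as the per-pixel luminance values fed into them, which is the motivation for the measurement techniques discussed below.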
The latest smartphone displays contain anywhere from two million to over four million pixels (with three to four times as many subpixels) at 400 to over 500 pixels per inch (PPI). In all measurement scenarios, image-based display measurement systems must apply several image sensor pixels across each display pixel to improve the repeatability and accuracy of measured pixel-level luminance values. However, covering and isolating each display pixel with multiple image sensor pixels is demanding, especially when the sheer quantity of display pixels limits the sensor resolution available per pixel. Using multiple images to measure a display can improve the effective imaging resolution per pixel, but single-image measurement is key to correcting a display at reduced takt times, supporting efficient high-volume production and the commercialization of new display types.
Pixel-level measurement depends first and foremost on pixel registration, a technique of dynamically locating and setting a region of interest (ROI) around each pixel in the measurement image. This technique was first patented (US7907154B2)1 for measuring individual LED pixels in large-format outdoor screens, where measurements are conducted over long time periods and multi-image measurement of a single display is common. In these applications, ROI are configured as a uniform grid aligned to the measurement system's image sensor array. This is suitable for multi-image measurement, which enhances sensor resolution per display pixel. However, smaller displays (e.g., smartwatches, phones, and microdisplays) demand single-image measurement to achieve suitable production speeds, which decreases the sensor resolution available per display pixel. In these cases, it is unlikely that a display pixel's center will align with the center of a sensor pixel, increasing moiré effects at each display pixel and reducing the capacity of the ROI to accurately cover and isolate each display pixel.
A new method of pixel registration and measurement uses fractional image sensor pixels to enhance pixel-level measurement precision for high-resolution displays. This technique establishes a display pixel ROI based on a floating-point boundary (as opposed to centering the ROI on an image sensor pixel). The method then isolates the fractional sensor pixel area contained within the ROI to calculate a measurement, improving the accuracy of measured luminance values over conventional (whole pixel) techniques for display qualification and demura. The advantage of the fractional pixel method is precise display pixel or subpixel measurement at reduced imaging resolution, allowing the measurement system to be optimized for cost effectiveness and higher testing speeds.
Fractional Pixel Method
Because multiple image sensor pixels per display pixel are required for precise pixel-level luminance measurement, the center of each display pixel in an image will not necessarily align with the center of a single image sensor pixel (see Figures 1 and 2, where the center of the display pixel is actually the intersection of four image sensor pixels). An ROI centered on an image sensor pixel can exclude considerable areas of the display pixel (see Figure 1, left), particularly when few image sensor pixels are applied. In contrast to conventional whole pixel methods, a fractional technique establishes an ROI that better approximates the boundaries of a display pixel by weighting the mean luminance values across image sensor pixels to identify a center of gravity for each display pixel (Figure 1, right).
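The center-of-gravity refinement can be sketched as a luminance-weighted centroid over a small search window around an approximate pixel location. This is a minimal illustration assuming a NumPy image array; the function name and window size are hypothetical, not Radiant's published API:

```python
import numpy as np

def refine_pixel_center(image, x0, y0, win=2):
    """Refine an integer display-pixel location (x0, y0) to a floating-point
    center of gravity, weighted by luminance within a small search window."""
    patch = image[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1].astype(float)
    ys, xs = np.mgrid[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1]
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# A bright display pixel whose true center falls between sensor pixels:
img = np.zeros((10, 10))
img[4:6, 4:6] = 100.0               # 2 x 2 bright block, center at (4.5, 4.5)
cx, cy = refine_pixel_center(img, 4, 4, win=2)
```

Note that the refined center (4.5, 4.5) is a floating-point coordinate between sensor pixels, which is exactly the situation a whole pixel ROI cannot represent.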
Figure 1. Illustration of the effect of centering a display pixel ROI on a single image sensor pixel (left) where significant area of the display pixel may be excluded from measurement, versus weighting luminance values of fractional pixels to determine the true center of the display pixel for alignment of the ROI (right). Image Credit: Radiant Vision Systems
In other words, a floating-point representation (i.e., a fraction) of a number of sensor pixels is used to determine the ROI. Consequently, a fractional pixel method can register closely spaced pixels and subpixels more precisely than whole pixel methods in a single-image measurement, where limitations are set by the imaging system resolution. To establish the centroid of a display pixel ROI, the fractional pixel method takes a single display pixel location as its primary input, then scans the display to set preliminary ROI for all display pixels of the device based on pixel pitch. These ROI are refined by calculating the center of gravity of each display pixel, weighted toward the highest average luminance across image sensor pixels. The ROI is then configured as a bounded area surrounding the display pixel center, and each image sensor pixel is determined to be wholly inside, partially inside, or outside the ROI. To determine an overall value for the ROI, the fractional luminance values of the image sensor pixels within the ROI are summed (i.e., yielding the luminance per display pixel). See Figure 2.
Figure 2. Illustration of a traditional whole pixel measurement method versus the fractional pixel method. In the traditional method (left), display pixels are measured using 100% of the data from sensor pixels whose area is more than 50% inside the ROI, and 0% of the data from sensor pixels whose area is less than 50% inside the ROI. Using the fractional pixel method (right), display pixels are measured using a percentage of data based on the percentage of sensor pixel area inside the ROI. Image Credit: Radiant Vision Systems
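The two weighting rules contrasted in Figure 2 can be sketched as follows, assuming a circular ROI as in the patent figures. The circle-square overlap is approximated here by subsampling each sensor pixel, which is an illustrative simplification rather than the patented computation:

```python
import numpy as np

def fraction_inside(px, py, cx, cy, r, n=32):
    """Approximate the fraction of the unit sensor pixel at (px, py) lying
    inside a circular ROI of radius r centered at (cx, cy), by subsampling."""
    offs = (np.arange(n) + 0.5) / n
    xs, ys = np.meshgrid(px + offs, py + offs)
    return float(((xs - cx) ** 2 + (ys - cy) ** 2 <= r * r).mean())

def roi_luminance(image, cx, cy, r, fractional=True):
    """Sum luminance inside a circular ROI using either fractional sensor
    pixel weights, or the whole pixel rule (>50% inside = 100%, else 0%)."""
    total = 0.0
    for py in range(int(np.floor(cy - r)), int(np.ceil(cy + r))):
        for px in range(int(np.floor(cx - r)), int(np.ceil(cx + r))):
            f = fraction_inside(px, py, cx, cy, r)
            w = f if fractional else (1.0 if f > 0.5 else 0.0)
            total += w * image[py, px]
    return total

# On a uniform field of 1s, the fractional sum approaches the ROI area (pi * r^2),
# while the whole pixel sum is quantized to a whole number of sensor pixels.
flat = np.ones((12, 12))
frac = roi_luminance(flat, 6.0, 6.0, 2.0, fractional=True)
whole = roi_luminance(flat, 6.0, 6.0, 2.0, fractional=False)
```

The uniform-field case makes the difference easy to see: the whole pixel rule can only count sensor pixels in integer steps, so any ROI that does not land neatly on the sensor grid gains or loses luminance at its boundary.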
The fractional pixel method has a considerable influence on accuracy when pixel-level luminance is measured on high-resolution, pixel-dense displays. As display resolutions increase and single-image measurement resolution is restricted, a measurement image may include image sensor pixels that fall partly inside and partly outside a single display pixel ROI. This increases the probability of measurement error, depending on which luminance values are included in or excluded from the total measurement of a target pixel.
Moreover, an individual image sensor pixel that falls partly inside an ROI may contribute values from both the target display pixel and a neighboring pixel. It is therefore vital that ROI be centered accurately on each display pixel, and that image sensor pixels positioned partly within the display pixel ROI be weighted to ensure that neighboring pixel values are not factored into the measurement of the target pixel.
Figures 3 and 4 depict the fractional pixel measurement method, as described in US Patent 10971044.2

Figure 3. A portion of a display measurement image with ROI whose center is aligned to the display pixel center. Image Credit: Radiant Vision Systems
Figure 4. A schematic illustration of display pixels (560) and a single circular ROI (570) having a center (572) and a radius (R). Some of the sensor pixels (560b) are wholly contained within the ROI, while other sensor pixels (560a) are partially inside and partially outside the ROI. Image Credit: Radiant Vision Systems
Evaluating Method Effectiveness
To assess the accuracy of the fractional pixel measurement method, Radiant Vision Systems carried out an experiment using measurements of an OLED device. The experiment compared pixel-level luminance measurements from the whole and fractional pixel methods against a reference measurement. To establish reference pixel-level luminance values, a Radiant Vision Systems ProMetric® Imaging Photometer with a microscope objective lens was used to capture an ultra-high-resolution measurement image of an OLED display with 577 pixels per inch (subframed to an area at the center of the display, 150 H x 200 W display pixels). See Figure 5.

Figure 5. Reference measurement image of an OLED display with ROI measuring 30 x 30 image sensor pixels per display pixel. Image Credit: Radiant Vision Systems

Figure 6. Down-sampled measurement image representing a typical measurement resolution with ROI measuring 3.2 x 3.2 image sensor pixels per display pixel. Image Credit: Radiant Vision Systems
As shown in the reference measurement image (Figure 5), display pixel ROI are 30 image sensor pixels in diameter (30 x 30 sensor pixels per display pixel). To simulate a standard measurement resolution, a measurement image of the same OLED display was created by down-sampling the high-resolution reference image. In the down-sampled image (Figure 6), display pixel ROI are 3.2 image sensor pixels in diameter (3.2 x 3.2 sensor pixels per display pixel).
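The down-sampling step can be sketched as block averaging of sensor pixels. Note this simplified version only supports an integer reduction factor; the experiment's 30 to 3.2 pixels-per-display-pixel ratio implies a non-integer resampling that would require interpolation, so the code below is an illustration of the principle only:

```python
import numpy as np

def block_downsample(image, factor):
    """Down-sample a measurement image by averaging factor x factor blocks
    of sensor pixels (integer factor only; trailing rows/columns that do not
    fill a whole block are dropped)."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    return (image[:h, :w]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

# Example: reduce a 4 x 4 image to 2 x 2 by averaging 2 x 2 blocks
hi_res = np.arange(16.0).reshape(4, 4)
lo_res = block_downsample(hi_res, 2)
```

Averaging (rather than decimating) mimics how a lower-resolution sensor integrates light over a larger area per pixel.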
Each horizontal row of pixels was measured using the reference measurement image (Figure 5), supplying the true luminance values of the display pixels. Luminance values were then taken for each row of pixels in the down-sampled measurement image (Figure 6) using the whole and fractional pixel methods, giving the luminance values for a standard measurement scenario.

Figure 7. Synthetic measurement showing pixel-level luminance of the reference measurement image. Measurement image shown in “false color” scale to represent luminance values. Image Credit: Radiant Vision Systems

Figure 8. Synthetic measurement showing pixel-level luminance measured by the whole pixel method. Image Credit: Radiant Vision Systems

Figure 9. Synthetic measurement showing pixel-level luminance measured by the fractional pixel method. Image Credit: Radiant Vision Systems
Synthetic measurement images were produced from the values measured by the reference, whole, and fractional pixel methods (Figures 7 through 9). In a synthetic image, each image pixel records the luminance value of an individual display pixel (i.e., one-to-one image resolution to display resolution). These images show that luminance values measured using the whole pixel method (Figure 8) vary more significantly from pixel to pixel than the values measured by the reference (Figure 7) and the fractional pixel method (Figure 9), which appear very similar to one another.
The extreme deviations in the whole pixel method are due to inaccurate calculation of the luminance value for each display pixel ROI over multiple image sensor pixels, which may include neighboring display pixel values (the pixel's measured luminance is higher than its true luminance) or exclude target display pixel values (the pixel's measured luminance is lower than its true luminance).
Results
Figures 10 and 11 show the pixel-level luminance values measured for a single row of pixels using the whole and fractional pixel methods, compared with the reference (true) luminance of each pixel. In Figure 10, the fractional pixel measurements (solid orange line) closely match the reference (solid gray line), while the whole pixel measurements (dotted blue line) clearly diverge from the reference values.

Figure 10. Normalized luminance (Lv) measured by whole and fractional pixel measurement methods (using down-sampled image) and reference luminance (using reference image) for the same row of pixels (row 100 of 200). Image Credit: Radiant Vision Systems

Figure 11. Percentage error in luminance (ΔLv, %) measured by whole and fractional pixel measurement methods (using down-sampled image) versus reference luminance (using reference image) for the same row of pixels (row 100 of 200). Image Credit: Radiant Vision Systems
Figure 11 plots the percentage error in luminance (ΔLv, %) of each pixel measured by the whole and fractional methods relative to the reference luminance values. The pixel-level luminance values acquired using the fractional method deviate from the reference values by less than 2%, evidence of the accuracy of the fractional pixel measurement method. This demonstrates that, when applying a fractional pixel method, a measurement system with standard resolution can achieve pixel-level luminance accuracy comparable to an exceptionally high-resolution measurement system in a single image capture.
The whole pixel measurement method deviates from the reference values by as much as 10%. This deviation indicates misalignment of the measurement ROI with the display pixels, and the exclusion or inclusion of significant luminance data within each pixel's ROI. When correction (demura) is applied, inaccurate correction factors may be calculated from this data, yielding residual or additional mura in the 'corrected' display.
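The ΔLv error metric used in Figure 11 is straightforward to compute per pixel. The row values below are hypothetical placeholders, since the experiment's raw data is not published in this article:

```python
import numpy as np

def luminance_error_pct(measured, reference):
    """Per-pixel percentage luminance error (delta-Lv, %) versus reference."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * (measured - reference) / reference

# Hypothetical row of reference and fractional-method luminance values
reference = np.array([100.0, 102.0, 98.0, 101.0])
fractional = np.array([101.0, 101.5, 97.5, 100.0])
err = luminance_error_pct(fractional, reference)
max_err = np.abs(err).max()
```

A per-row maximum of this metric is what distinguishes the methods in the experiment: under 2% for the fractional method versus up to 10% for the whole pixel method.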
Conclusion and Impact
OLED, microLED, and other emissive displays present challenging measurement scenarios: their resolutions and pixel densities continue to increase dramatically relative to existing image-based measurement systems, and their inherent pixel-to-pixel luminance variability makes display correction an essential quality control measure. Developed for large displays and multi-image measurement scenarios, whole pixel methods are insufficient for measuring and correcting new display resolutions at required takt times. Whole pixel methods introduce considerable errors into the final measurement, which can appear as strong pixel-to-pixel fluctuations after demura, undermining the advantages of correction.
To the human eye, this effect is easily detected as poor calibration and non-uniform display luminance, which is often unacceptable to display manufacturers. The new fractional pixel method allows measurement systems with comparatively low sensor-to-display resolution to attain measurably greater accuracy than whole pixel methods, closely matching the precision of high-resolution systems in single-image measurement. As such, this method ensures the effectiveness of demura correction for an exceptionally high-quality display appearance, safeguarding manufacturing resources and streamlining production operations.
The fractional pixel method has been integrated into demura processes used by Radiant Vision Systems' OLED display customers, who have achieved considerable increases in production yield, advancing the commercialization of OLED displays in the international smart device market. These applications combine several Radiant measurement techniques to enhance the precision and accuracy of per-pixel data for displays of increasing resolution: the fractional pixel method, moiré removal using in-focus measurement and a proprietary moiré filter process, and a spaced-pixel pattern measurement method.3
References
- Rykowski, R, Albrecht, RE, inventors; Radiant Vision Systems LLC, assignee. Method and apparatus for on-site calibration of visual displays. United States patent US7907154B2. 2011 March 15.
- Pedeville GR, Rouse JH, inventors; Radiant Vision Systems LLC, assignee. Methods and systems for measuring electronic visual displays using fractional pixels. United States patent US10971044. 2021 April 6.
- Rykowski R, inventor; Radiant Vision Systems LLC, assignee. Methods and systems for measuring and correcting electronic visual displays. United States patent US9135851B2. 2015 September 15.
- DSCC Releases Latest OLED Forecast [Internet]. Display Supply Chain Consultants. 2018 [cited 22 November 2019]. Available from: https://www.displaysupplychain.com/blog/dscc-releases-latest-oled-forecast
- Micro-LED Market by Application [Internet]. MarketsandMarkets. 2019 [cited 22 November 2019]. Available from: https://www.marketsandmarkets.com/Market-Reports/micro-led-market-119830236.html
- Lindsay JW, Rhoads GB, inventors; Tektronix Inc, assignee. Focusing and screen calibration method for display screen coupled to video camera. United States patent US4754329A. 1988 June 28.
- Yuan JY, et al. A High‐Accuracy DeMura Algorithm Based on Sub‐Pixel Registration. SID Intl Symp Digest of Tech Papers. 2019;50(S1):459–461.
- Yang Q, Hoffman D, Smith P, Pfeiffer M, inventors; Brillian Corp, assignee. Testing liquid crystal microdisplays. United States patent US20030215129A1. 2003 November 20.
This information has been sourced, reviewed, and adapted from materials written by Gary R. Pedeville, Joshua H. Rouse, and Douglas F. Kreysar of Radiant Vision Systems, LLC.