CCD vs CMOS – Which is Better?

The relative advantages of CCD versus CMOS imagers have been debated many times, and the debate seems to have continued for as long as most people can recall, with no definitive conclusion in sight. Since the topic is not static, it is not surprising that a definitive answer is elusive. The evolution of technologies and markets affects not only what is technically feasible but also what is commercially viable. Imager applications are varied, with diverse and changing requirements; some are best served by CCD imagers, some by CMOS imagers. This article attempts to add some clarity to the discussion by assessing the various situations, elucidating some of the lesser-known technical trade-offs, and introducing cost considerations into the picture.


Teledyne DALSA CCD (left) and CMOS (right) image sensors

In the Beginning...

CMOS (complementary metal oxide semiconductor) and CCD (charge coupled device) image sensors are two different technologies used for capturing images digitally. Each imager has unique strengths and weaknesses, providing advantages in many different applications.

Both types of image sensor convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel area can be devoted to light capture, and the output's uniformity (a key factor in image quality) is high.

In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These added functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower, but the architecture is massively parallel, enabling high total bandwidth for high speed.
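The bandwidth consequence of this parallelism can be sketched with a back-of-the-envelope calculation. The sensor resolution, frame rate, and channel counts below are hypothetical, chosen only to illustrate how spreading the same total pixel rate across many outputs lowers the rate each amplifier must sustain.

```python
# Illustrative comparison of per-channel data rate for a serial (CCD-style)
# readout vs a column-parallel (CMOS-style) readout. All numbers hypothetical.

def per_channel_rate(width, height, fps, n_channels):
    """Pixel rate each output channel must sustain, in pixels per second."""
    total_rate = width * height * fps
    return total_rate / n_channels

# Hypothetical 2048 x 2048 sensor running at 100 frames per second.
W, H, FPS = 2048, 2048, 100

ccd_rate = per_channel_rate(W, H, FPS, n_channels=1)      # single output node
cmos_rate = per_channel_rate(W, H, FPS, n_channels=2048)  # one ADC per column

print(f"Single-output readout:   {ccd_rate / 1e6:.1f} Mpixel/s per channel")
print(f"Column-parallel readout: {cmos_rate / 1e6:.3f} Mpixel/s per channel")
```

With these assumed numbers, a single output node would have to run at roughly 419 Mpixel/s, while each of 2048 column circuits handles only about 0.2 Mpixel/s, three orders of magnitude slower.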


CCD and CMOS imagers both depend on the photoelectric effect to create electrical signal from light

Both CCDs and CMOS imagers were developed in the late 1960s and 1970s (DALSA founder Dr. Savvas Chamberlain was a pioneer in developing both technologies). CCD image sensors became dominant, mainly because they delivered far superior images with the fabrication technology available. CMOS image sensors, by contrast, required smaller features and better uniformity than silicon wafer foundries could deliver at the time. Not until the 1990s did lithography advance to the point where designers could again make a case for CMOS image sensors.

Renewed interest in CMOS imagers was based on expectations of camera-on-a-chip integration, lowered power consumption, and reduced fabrication costs from the reuse of mainstream logic and memory device fabrication. In reality, achieving these advantages in practice while simultaneously delivering high image quality has taken far more process adaptation, time, and money than original projections suggested, but CMOS image sensors have joined CCDs as mainstream, mature technology.

High Volume Imagers for Consumer Applications

With the promise of higher integration for smaller components and lower power consumption, CMOS designers concentrated their efforts on imagers for the highest-volume image sensor application in the world: mobile phones. A significant amount of investment went into developing and fine-tuning CMOS image sensors and the fabrication processes that manufacture them. As a result of this investment, image quality improved dramatically even as pixel sizes shrank. For high-volume consumer area and line scan imagers, CMOS imagers now outperform CCDs on nearly every performance parameter imaginable.

Mobile phones drive CMOS imager volume


Imagers for Machine Vision

In machine vision, CMOS area and line scan imagers rode the coattails of the massive mobile phone imager investment to displace CCDs. For most machine vision area and line scan applications, CCDs are now a technology of the past.


For machine vision, the performance benefit of CMOS image sensors over CCDs merits a short explanation. Speed and noise are the key parameters for machine vision.

CCD and CMOS imagers differ in how signals are converted from signal charge to an analog signal and finally to a digital signal. In CMOS area and line scan imagers, the front end of this data path is massively parallel, which allows each amplifier to have low bandwidth. By the time the signal reaches the data path bottleneck, usually the interface between the imager and the off-chip circuitry, CMOS data are firmly in the digital domain. High-speed CCDs have many parallel fast output channels, but not as many as high-speed CMOS imagers. Each CCD amplifier therefore has higher bandwidth, which results in higher noise. As a result, it is possible to design high-speed CMOS imagers with much lower noise than high-speed CCDs.
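The link between amplifier bandwidth and noise can be made concrete with a rough scaling argument: thermal read noise grows roughly with the square root of amplifier bandwidth, so many slow amplifiers read out more quietly than a few fast ones. The tap and column counts below are hypothetical illustrations, not figures from any specific device.

```python
import math

# Noise ~ sqrt(bandwidth) for thermally limited amplifiers (simplified model).
# All figures are hypothetical, chosen only to illustrate the scaling.

def relative_noise(bw_fast, bw_slow):
    """Noise ratio of a fast amplifier relative to a slow one, assuming noise ~ sqrt(BW)."""
    return math.sqrt(bw_fast / bw_slow)

total_rate = 1e9              # 1 Gpixel/s aggregate readout (hypothetical)
ccd_bw = total_rate / 16      # CCD with 16 output taps
cmos_bw = total_rate / 4096   # CMOS with 4096 column amplifiers

ratio = ccd_bw / cmos_bw
print(f"Each CCD amplifier runs {ratio:.0f}x faster")
print(f"-> roughly {relative_noise(ccd_bw, cmos_bw):.0f}x the read noise per amplifier")
```

Under these assumptions each CCD tap runs 256 times faster than a CMOS column amplifier, implying roughly 16 times the read noise, which is the qualitative advantage the text describes.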

Nevertheless, there are notable exceptions to this general statement.

Near Infrared Imagers

To image in the near infrared (700 to 1000 nm), imagers need a thicker photon absorption region, because infrared photons are absorbed deeper in silicon than visible photons.

Cracks in silicon solar cells are obvious with NIR imaging


Most CMOS imager fabrication processes are tuned for high-volume applications that image only in the visible. These imagers are fairly insensitive to the near infrared (NIR); in fact, they are deliberately designed to be as insensitive as possible in the NIR. Increasing the thickness of the substrate (more accurately, the epitaxial or "epi" layer) to improve infrared sensitivity reduces the imager's ability to resolve fine spatial features, unless the thicker epi is combined with higher pixel bias voltages or lower epi doping levels. Changing the voltage or epi doping, however, affects the operation of the CMOS analog and digital circuits.

It is possible to fabricate CCDs with thicker epi layers while preserving their ability to resolve fine spatial features. In certain near infrared CCDs, the epi is more than 100 microns thick, compared with the 5 to 10 micron thick epi in most CMOS imagers. The CCD pixel bias and epi doping also need to be modified for thicker epi, but the impact on CCD circuits is more easily managed than in CMOS.
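The effect of epi thickness on NIR sensitivity follows directly from Beer-Lambert absorption. The sketch below uses an absorption length of roughly 30 microns for silicon at 900 nm, which is an approximate textbook order of magnitude, not a measured value for any particular device; the epi thicknesses match the figures quoted above.

```python
import math

# Beer-Lambert: fraction of photons absorbed in a layer of thickness d is
#   1 - exp(-d / L), where L is the absorption length at that wavelength.

def absorbed_fraction(thickness_um, absorption_length_um):
    return 1.0 - math.exp(-thickness_um / absorption_length_um)

L_900NM = 30.0  # approx. absorption length of silicon at 900 nm, microns (rough value)

thin_epi = absorbed_fraction(10.0, L_900NM)    # typical CMOS epi thickness
thick_epi = absorbed_fraction(100.0, L_900NM)  # thick-epi NIR CCD

print(f"10 um epi absorbs  ~{thin_epi:.0%} of 900 nm photons")
print(f"100 um epi absorbs ~{thick_epi:.0%} of 900 nm photons")
```

With these rough numbers, a 10 micron epi captures under a third of the incident 900 nm photons, while a 100 micron epi captures well over 90 percent, which is why the thick-epi CCDs described above are so much more NIR sensitive.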

CCDs specifically designed to be highly sensitive in the near infrared are considerably more sensitive there than CMOS imagers.

Ultraviolet Imagers

Because ultraviolet photons are absorbed very close to the surface of silicon, UV imagers must not have nitride, polysilicon, or thick oxide layers that block the absorption of UV photons. Contemporary UV imagers are therefore backside-thinned, most with just a very thin anti-reflective (AR) coating over the silicon imaging surface.


Today's deep submicron lithography requires deep UV light for quality inspection

While backside thinning is now widely used in mobile imagers, UV response is not. To attain a stable UV response, the imager surface needs a specialty surface treatment, whether the imager is CCD or CMOS. Many backside-thinned imagers developed for visible imaging have thick oxide layers that can discolor and absorb UV after prolonged UV exposure. Certain backside-thinned imagers have imaging surfaces passivated by a highly doped boron layer that extends too deep into the silicon epi, so a large fraction of UV photo-generated electrons is lost to recombination.

Backside thinning and UV response can be achieved in all line scan imagers, but not in all area imagers. Global shutter area CCDs cannot be backside-thinned. The situation is better for CMOS area imagers, but not without trade-offs. CMOS area imagers with rolling shutter can be backside-thinned. Traditional CMOS global shutter area imagers have storage nodes in each pixel that must be shielded during thinning if these UV-sensitive imagers are also to image in the visible. In backside-thinned area imagers, part of the pixel cannot be effectively shielded from incident illumination without considerably degrading the imager's fill factor (the ratio of the light-sensitive area to the total pixel area). Other types of CMOS global shutter area imagers avoid light-sensitive storage nodes, but at the cost of higher noise, lower full well, rolling shutter, or a combination of these.

Time Delay and Integration Imagers

Besides area and line scan imagers, there is one other important imager type. Time delay and integration (TDI) imagers are typically used in machine vision and remote sensing. They function much like line scan imagers, except that a TDI has many lines, often hundreds. Each line captures a snapshot of the object as the image of the object moves past it. TDIs are very useful for weak signals because the multiple snapshots of the object are summed to generate a stronger signal.

TDI imagers combine multiple exposures synchronized with object motion


The multiple snapshots are summed differently by CCD and CMOS TDIs: CCDs combine signal charges, while CMOS TDIs combine voltage signals. In CCDs, the summing operation is noiseless; in CMOS, it is not. Beyond a certain number of rows, the noise from the summing operation accumulates to the point that even the most sophisticated CMOS TDI will be noisier than a modern CCD TDI.
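The noise accumulation argument can be sketched with a toy SNR model: charge-domain summing incurs one noisy read at the end, while voltage-domain summing incurs a noisy read at every stage, so its noise grows as the square root of the stage count. The stage count, signal, and read noise figures below are hypothetical, and the model deliberately ignores shot noise and other noise sources.

```python
import math

# Toy TDI SNR model. Per the text: CCD charge summing is noiseless and is
# followed by one noisy read; CMOS voltage summing adds read noise per stage.
# Shot noise and other sources are ignored; all numbers are hypothetical.

def snr_ccd_tdi(n_stages, signal_per_stage, read_noise):
    # Charge domain: one noisy read of the fully summed charge.
    return (n_stages * signal_per_stage) / read_noise

def snr_cmos_tdi(n_stages, signal_per_stage, read_noise):
    # Voltage domain: n independent noisy reads summed, noise grows as sqrt(n).
    return (n_stages * signal_per_stage) / (math.sqrt(n_stages) * read_noise)

N, s, sigma = 256, 2.0, 5.0  # 256 stages, 2 e-/stage signal, 5 e- read noise

print(f"CCD TDI SNR:  {snr_ccd_tdi(N, s, sigma):.1f}")
print(f"CMOS TDI SNR: {snr_cmos_tdi(N, s, sigma):.1f}")
```

With these assumed values the charge-domain TDI comes out 16 times (sqrt of 256) better in SNR, which is why row count is the deciding factor the text describes.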

A step in the right direction for CMOS TDIs is to mimic CCD TDIs by using CCD-like pixels that can sum charge; this can be referred to as a charge-domain CMOS TDI. While charge-domain CMOS TDIs are technically feasible, they require considerable investment to develop, tune, and perfect. CMOS area and line scan imagers are economical, but the same cannot be said for charge-domain CMOS TDIs. Mobile phones require neither TDI nor charge summing, so there is no coattail for CMOS TDIs to ride on.

Electron Multiplication

Electron multiplication CCDs (EMCCDs) are CCDs with structures that multiply the signal charge packet in a way that limits the noise added during the multiplication process, yielding a net signal-to-noise ratio (SNR) gain. In applications where the signal is so weak that it is barely above the imager noise floor, EMCCDs can detect signals that were previously indiscernible.

EMCCDs are useful for very low signal applications, typically in scientific imaging


In comparison to CMOS, EMCCDs are most beneficial when the imager does not need to run at high speed, because higher-speed operation increases CCD read noise. Even after the SNR improvement from electron multiplication, there may therefore not be much difference between a CMOS imager and an EMCCD, particularly against scientific CMOS imagers designed specifically for minimal read noise. High-speed EMCCDs also dissipate considerably more power than conventional imagers.
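The EMCCD trade-off can be illustrated with a simplified SNR model: the electron multiplication gain G amplifies the signal before the output amplifier, effectively dividing the read noise by G, but the stochastic multiplication adds an excess noise factor of about sqrt(2) on the shot noise. The signal and read noise values below are hypothetical illustrations.

```python
import math

# Simplified photon-transfer SNR model (input-referred, per pixel).
# Dark current and other noise sources are ignored; numbers are hypothetical.

def snr(signal_e, read_noise_e, gain=1.0, excess_factor=1.0):
    shot_var = (excess_factor ** 2) * signal_e   # shot noise variance, with EM excess noise
    read_var = (read_noise_e / gain) ** 2        # read noise, input-referred through the gain
    return signal_e / math.sqrt(shot_var + read_var)

weak_signal = 2.0   # electrons per pixel (hypothetical, near the noise floor)
read_noise = 10.0   # electrons rms (hypothetical)

print(f"Conventional readout SNR: {snr(weak_signal, read_noise):.2f}")
print(f"EM readout (G=1000) SNR:  "
      f"{snr(weak_signal, read_noise, gain=1000.0, excess_factor=math.sqrt(2)):.2f}")
```

With these assumptions, multiplication lifts a buried 2-electron signal from an SNR of about 0.2 to about 1.0; at large signals the excess noise factor makes the EMCCD worse, which is why EM gain only pays off near the noise floor.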

However, low-noise CMOS imagers may lack the UV, NIR, or TDI advantages of a CCD. Since the signal can then be much weaker, an EMCCD solution may still be superior on the whole even when the CMOS read noise is comparable to what an EMCCD can attain.

Cost Considerations

So far, the discussion has focused on the performance differences between CCD and CMOS imagers. But business decisions are rarely based purely on performance trade-offs. For many decision-makers, what matters more is value: the performance received for the price paid.

Leverage, volume, yield, and the number of devices per wafer all affect cost


As the cost picture can be complex, only a few important points are discussed.

First, leverage is key. Imagers that are readily available on the market will cost much less than a fully custom imager, whether CCD or CMOS. When customization is required, unless the change is minor, it is usually more economical to develop a custom CCD than a custom CMOS imager: CMOS uses more expensive deep submicron masks, and a CMOS device requires much more circuitry to design. Consequently, even in applications where a custom CMOS imager clearly performs better, the value proposition can still favor a custom CCD.

Second, volume matters. While the cost of developing a new CMOS imager is relatively high, CMOS imagers that can leverage larger economies of scale cost less per unit. At high volumes, a low unit cost can matter more financially than a low development cost.

Third, supply security is important. Being left with a product designed around an imager that is no longer available can be very costly. Even when one option offers a better value proposition, it may be wiser to choose the supplier, whether of CCDs or CMOS imagers, best able to produce the imager over the long term.


CCD vs CMOS – Which is Better?

Selecting the right imager for an application is not an easy task. Different applications have different requirements, and those requirements impose constraints that affect both price and performance. Given these complexities, it is impossible to make a universal statement about CCD versus CMOS imagers across all applications.

CMOS area and line scan imagers outperform CCDs in most visible imaging applications. TDI CCDs outperform CMOS TDIs in high-speed, low-light applications. When there is a need to image in the NIR, CCDs can be the better option for certain area and line scan applications. For imaging in the UV, the surface treatment after backside thinning is key, as is any global shutter requirement. The need for very low noise brings in new constraints, with CMOS generally still outperforming CCDs at high readout speeds. The price-performance trade-off can favor either CMOS or CCD imagers, depending on leverage, volume, and supply security.

Teledyne DALSA

This information has been sourced, reviewed and adapted from materials provided by Teledyne DALSA.

For more information on this source, please visit Teledyne DALSA.

