
ADUs per square micron

CrossoverManiac's picture
ADUs per square micron

While on this thread, I did some calculations of the full well capacity per square micron for the ASI183MM and ASI178MM CMOS cameras.  Both have a full well capacity of 15,000 electrons per pixel (IIRC, this is actually where linearity is set, so the actual FWC is a bit higher), with each pixel being 2.4 µm on a side, or 5.76 µm².  So both would hold 2604 electrons per square micron.  The SBIG ST-402ME NABG has a full well capacity of 100,000 electrons per 9 µm × 9 µm pixel, or 1235 electrons per square micron.  However, the ADU count is what is measured in the end for photometry.

While all of the results are in 16 bits, only the ST-402ME is a true 16-bit camera; the ASI183MM is 12-bit and the ASI178MM is 14-bit, with the ADU count converted in software.  So, if the ASI183MM natively reads 2000 ADUs, the number is boosted to 32,000 ADUs in 16 bits.  In effect, each unit in 12 bits is turned into 16 units in 16 bits (2^16/2^12 = 2^4 = 16).
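The 12-bit-to-16-bit scaling works out as a simple multiplication by 2^(16-12) = 16; a quick sketch using the 2000-ADU value from the paragraph above:

```python
# A 12-bit ADU value scaled into the 16-bit range: multiply by 2**(16-12) = 16.
raw_12bit = 2000
scaled_16bit = raw_12bit << (16 - 12)   # bit shift, same as raw_12bit * 16
print(scaled_16bit)  # 32000
```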

Translating full well capacity per square micron of pixel space into max ADUs per square micron: both ZWO cameras are set at unity gain, with 1 ADU = 2^-12 = 1/4096 of full well capacity (3.66 e-/ADU) for the ASI183MM, and 1 ADU = 2^-14 = 1/16384 of full well capacity (0.915 e-/ADU) for the ASI178MM.  The SBIG ST-402ME gain is set at 1.5 electrons per ADU, which is roughly 2^-16 = 1/65536 of full well capacity.

ASI183MM: 2604 e-/µm² ÷ 3.66 e-/ADU = 711 ADUs/µm²

ASI178MM: 2604 e-/µm² ÷ 0.915 e-/ADU = 2846 ADUs/µm²

ST-402ME: 1235 e-/µm² ÷ 1.5 e-/ADU = 823 ADUs/µm²
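The arithmetic above can be reproduced in a short script (the full-well, pixel-size, and gain figures are the ones quoted in this post, not manufacturer-verified specs; small differences from the rounded numbers above come from using exact gains rather than rounded e-/ADU values):

```python
# Electrons and max ADUs per square micron for the three cameras,
# using the full-well and pixel-size figures quoted in this thread.
cameras = {
    # name: (full well in e-, pixel size in um, gain in e-/ADU as set above)
    "ASI183MM": (15000, 2.4, 15000 / 4096),    # 12-bit ADC, 1 ADU = FWC/4096
    "ASI178MM": (15000, 2.4, 15000 / 16384),   # 14-bit ADC, 1 ADU = FWC/16384
    "ST-402ME": (100000, 9.0, 1.5),            # CCD gain set at 1.5 e-/ADU
}

for name, (full_well, pixel_um, gain) in cameras.items():
    area = pixel_um ** 2                 # pixel area in um^2
    e_per_um2 = full_well / area         # electrons per square micron
    adu_per_um2 = e_per_um2 / gain       # max ADUs per square micron
    print(f"{name}: {e_per_um2:.0f} e-/um^2, {adu_per_um2:.0f} ADU/um^2")
```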

All of these are in actual ADUs and not the altered ADU range for the CMOS cameras.  From what I can tell, the ASI178MM seems to have the largest range.

Tell me what you think.  Is there something more to this than calculating pixel area and bit depth?  Is there some other benefit to using CCD over CMOS?

Bikeman's picture
In theory it's also important

In theory it's also important to bring in quantum efficiency: how likely a photon that has made it to the sensor is to kick an electron loose, so to speak.  If we want to compare sensors, we want to start with the photons, which arrive at a certain rate per second per square micron on the focal plane, as the basis of the comparison.

But at least for the three cameras you are comparing, this doesn't change anything.  The ST-402ME seems to have a quite impressive QE, but the CMOS ZWO ASI cameras are now quite comparable.


WGR's picture
sCMOS cameras for Photometry

Hello CrossoverManiac:

The full well is one important spec for comparing cameras.  However, equally if not more important is the nearly 10x lower read noise in the sCMOS chips/cameras.  This results in a dynamic range of 96 dB rather than 75 dB for CCD, and in a signal-to-noise ratio of measurements nearly 10x higher, which translates to 3x lower error bars for photometry.  I have given a presentation at AAVSO Flagstaff, NEAF, SAS, and ESV2019 on this subject and have committed to talk on this at CCAS and AAS (poster) in January.  Perhaps an sCMOS workshop will be organized in the future; I would be glad to present if asked.  I have 2 years of observations with sCMOS, and there are some things that should be done differently.  These observations are in the AID.
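For reference, dynamic range in decibels is 20·log10(full well ÷ read noise).  The full-well and read-noise values below are assumptions I've picked purely to illustrate how numbers like 75 dB and 96 dB arise; they are not specs quoted in this thread:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: 20 * log10(full well / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Illustrative (assumed) values: a CCD with a 100k e- well and ~15 e- read
# noise versus an sCMOS with a 60k e- well and ~1 e- read noise.
print(f"CCD:   {dynamic_range_db(100_000, 15):.0f} dB")  # roughly mid-70s dB
print(f"sCMOS: {dynamic_range_db(60_000, 1):.0f} dB")    # roughly mid-90s dB
```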



CrossoverManiac's picture
However, equally, if not more

However, equally if not more important is the nearly 10x lower read noise in the sCMOS chips/cameras.  This results in a dynamic range of 96 dB rather than 75 dB for CCD, and in a signal-to-noise ratio of measurements nearly 10x higher, which translates to 3x lower error bars for photometry.


That's something I don't fully understand.  Read noise seems to be treated like the smallest unit of measurement on a ruler, as if the lower the read noise, the more range the camera has.  But read noise is something added onto the signal, and the signal grows vastly larger than the read noise the longer the exposure, until the read noise is negligible compared to something like dark noise; it is not an individual unit or resolution of measurement the way gain would be.

Bikeman's picture
Read Noise: all true, but one

Read noise: all true, but one really has to put everything into one picture and do the math for the concrete comparison, because pixel size does matter.

For simplicity, let's assume the star you want to do photometry on happens to be so bright that it covers the 9 micron × 9 micron pixel and fills its well (100k electrons) in a given time.  QE is equal among the cameras, so let's stick to electrons instead of photons.

Take the ASI178MM CMOS for comparison.  It has 2.4 micron pixels, so the same area is covered by 3.75 × 3.75 pixels, or (for simplicity's sake) ca. 14 pixels.  This is good, because it has a much smaller full well capacity per pixel (a factor of nearly 7), so conveniently we can assume the same integration time (the star light will not evenly illuminate all those pixels, but I think it's fair to assume none of the pixels would exceed 15k electrons).

So while the CCD has higher readout noise per pixel, we need to read out 14 of those tiny CMOS pixels for every readout of one of those huge ST-402ME pixels!  Now, this uncorrelated noise adds up in quadrature, not linearly, so the combined read-out noise of 14 pixels is not 14 times the readout noise of one pixel, but just sqrt(14), or close to 4 times the noise compared to reading just one pixel.
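The quadrature scaling is easy to check numerically.  The pixel sizes are the ones quoted in this thread; the read-noise value below is a made-up placeholder, since only the ratio matters:

```python
import math

read_noise = 2.0   # e- per pixel read; placeholder value, only the ratio matters

# Number of small CMOS pixels covering the area of one 9 um CCD pixel.
n_pixels = round((9.0 / 2.4) ** 2)

# Uncorrelated noise adds in quadrature: total = sqrt(sum of variances).
combined = math.sqrt(n_pixels * read_noise ** 2)
print(n_pixels, combined / read_noise)   # the ratio grows as sqrt(n), not n
```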

So in reality, taking the smaller pixels into account, the CMOS camera used here still has lower read-out noise, but it's more like a factor of 2–3.

In reality you would probably not fully saturate your full well capacity, to ensure linearity.  Let's assume you are in a regime where you can choose your exposure time to half-saturate your pixel, so about 50k electrons.  That means the unavoidable shot noise of the photons is on the order of 200 electrons!  So yes, the read-out noise gets pretty insignificant compared to the shot noise.
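Shot noise follows Poisson statistics, so the noise on N collected electrons is sqrt(N); at half of a 100k e- well that works out as:

```python
import math

signal_e = 50_000                  # half-saturated 100k e- well
shot_noise = math.sqrt(signal_e)   # Poisson statistics: sigma = sqrt(N)
print(f"shot noise ~ {shot_noise:.0f} e-")  # ~224 e-, dwarfing a few e- of read noise
```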

But if we are doing very faint targets, we might not be able to expose as long as the time needed to half-fill the well, because of tracking/guiding considerations; e.g., the faintest objects I have done so far give me just 3 or so photons per second!  Only in this regime will we get a significant advantage from lower read-out noise, and smaller error bars in photometry.  But if we can expose long enough to fill the wells pretty much to capacity, by definition we are *always* dominated by shot noise rather than equipment-specific read-out noise.

And we haven't even looked at dark current noise (which increases with total exposure time, no matter whether we do many short exposures or fewer longer ones), or the sky background noise, which plays a similar role to dark noise.  Anyway, this video of a talk by the author of SharpCap is excellent in explaining it all, especially the role light pollution plays in all this:



CrossoverManiac's picture
From what I gathered from the

From what I gathered from the video, read noise, while a discrete, one-time contribution to every subframe, sets the SNR (except for longer subframes) and thus sets the dynamic range.  Is that correct?


Edit: How do offset values in a CCD camera affect SNR and dynamic range?  According to this post from this CN forum thread, offset can eliminate sky background if set at the right value.

Bikeman's picture
That's not  exactly how I

That's not exactly how I understand the video.  For me the take-away message is to not worry about the single-subframe SNR (or single-frame dynamic range, if you want; this is what leads many into falsely believing that they will get a huge benefit from doing extremely long exposures).  Instead we should focus on the SNR of the stacked result.  And unless you make your subframes too short (the video explains how to compute what "too short" means, depending on all the factors including light pollution together with focal ratio), or you are dealing with extremely faint objects, read noise will never be your dominant noise source.



Bikeman's picture
Let's talk about gain

For completeness, here is a video by Robin Glover, author of SharpCap, that covers the slides skipped (for running out of time) in the original talk, dealing with gain settings:




Linear dynamic range

Hello, Heinz-Bernd,

I come late to this thread, so forgive me if I am off base in viewing Glover's second video from a photometrist's perspective.  Mostly the video is very good, but he neglects to mention linear dynamic range.  I suppose if one's purpose is to take pretty pictures, it is sufficient to simply avoid saturation, but for photometry that approach is disastrous.  Amplifiers exhibit nonlinear behavior when the input level approaches that which saturates the output, so it is necessary to make a series of measurements at each gain setting to characterize the linear dynamic range of the sensor.

In photometry, one wants to have an ADU count for the brightest star in the measurement set that is near the top of the linear range, and a gain setting that is high enough to produce a good SNR for the dimmest star in the set.

And of course in photometry it is best to produce good measurements on a single frame basis, so that the ensemble of images can be employed to estimate measurement quality statistics.



Bikeman's picture

Yeah, I agree, linearity is an issue for photometry but not so much for pretty pictures.

I wonder, though, whether in the real world it's an issue except at low gain settings near unity gain; my understanding is that linearity becomes an issue near saturation of the full well capacity (especially with sensors featuring anti-blooming gates), not saturation at the ADC range.  Does anyone have examples of significant non-linearity in CCD or CMOS cameras, and in what regime it shows up?




Ed Wiley_WEY's picture
Linearity testing

I had a big surprise.  My linearity graph looked good almost all the way to saturation (drop-off beginning past 60K).  But when I imaged a Landolt field, squeezing as much SNR out of the stars as possible (ADU counts near 60K), I got very variable results: 0.5 variation in estimates of the same Landolt stars over a time series of ten images in BVI.  I attributed this to non-linearity before obvious saturation, even before any obvious drop-off from the regression line, and I now set my NABG 1603-chip CCD camera to 50K max to ensure that I am still linear.  I attribute the seemingly good linearity test to not enough sampling at high ADU.  Fortunately my regular program variables never reach so high, so my past data are OK.


TRE's picture
I used a pulse generator, LEDs, and a white salad bowl

I used a pulse generator, a couple of white LEDs, and a white salad bowl over the business end of the optics, then took an average of all the pixels.  By establishing a usable pulse width and exposure time, then varying the pulse rate, I could make a very repeatable curve of the camera's response to light.  I checked (with a photodiode and oscilloscope) to make sure the light pulses stayed separated over the PRF range.  Then a linear fit to the data showed where the 1% nonlinearity point was.  That is the ADU count that I call saturated: about 47,500 analog-to-digital units for my ST8XME.  I generally stay well below it, but sometimes approach it when looking at red stars with the IR filter.
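The analysis step described here can be sketched as: fit a line to the clearly linear low-signal points, then find the first exposure where the measured ADU departs from the fit by more than 1%.  The data array below is synthetic, purely to illustrate the method; real data would come from the pulsed-LED runs:

```python
# Sketch of the linearity test: least-squares fit on the low-signal points,
# then flag the first measurement deviating from the line by more than 1%.
# These numbers are synthetic placeholders, not real camera data.
pulse_rate = [1, 2, 3, 4, 5, 6, 7, 8]            # relative light dose
mean_adu   = [6000, 12000, 18000, 24000, 30000,  # linear region
              35900, 41500, 46500]               # roll-off near saturation

# Least-squares line through the first n (clearly linear) points.
n = 5
sx = sum(pulse_rate[:n]); sy = sum(mean_adu[:n])
sxx = sum(x * x for x in pulse_rate[:n])
sxy = sum(x * y for x, y in zip(pulse_rate[:n], mean_adu[:n]))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

for x, y in zip(pulse_rate, mean_adu):
    predicted = slope * x + intercept
    if abs(y - predicted) / predicted > 0.01:
        print(f"1% nonlinearity point at ~{y} ADU")
        break
```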



AAVSO 49 Bay State Rd. Cambridge, MA 02138 617-354-0484