# Matching bit-depth with dynamic range

CrossoverManiac
Matching bit-depth with dynamic range

My question is about matching dynamic range with A/D conversion.  The way I have been doing this is by finding the dynamic range and converting it to the number of stops.

stops = log10(full well capacity ÷ read noise) ÷ log10(2)

So, if I have a camera with a full well capacity of 32,000 electrons and a read noise of 4 electrons, then the number of stops would be

stops = log10(32,000 electrons ÷ 4 electrons) ÷ log10(2)

stops = log10(8,000) ÷ log10(2)

stops = 12.97

So, it would be roughly 13 stops.  From this, I would conclude that the optimal A/D conversion would be 14 bits.  12 bits would mean losing precision, and 16 bits, while it would work, would be a tad overkill.  However, someone said my calculations were off by two bits, that a 14-bit A/D would cause a loss of information, and that only a 16-bit A/D would do that camera justice.  Is this true?
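In Python, the same arithmetic looks like this (the function and variable names are my own, not from any standard library):

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops (powers of two): log2(full well / read noise)."""
    return math.log10(full_well_e / read_noise_e) / math.log10(2)

# 32,000 e- full well, 4 e- read noise, as in the example above
print(f"{dynamic_range_stops(32_000, 4):.2f} stops")  # ~12.97
```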

SFS
https://www.analog.com/media
CrossoverManiac
From https://www.cloudynights

Also one other wrinkle with CCDs which folks ignore is read noise. Comparing output bits is absolutely useless. What you need to compare is the dynamic range of the sensor. There is no such thing as a 16 bit dynamic range.

For example, the SVX-H9 has 7e-12e read noise per the camera's manual, which means the bottom 3-4 bits (2^3 = 8) of the 16 bits are just noise, assuming unity gain. It only gets worse if you increase the gain. So the actual dynamic range for a single exposure is only 12-13 bits.

If we take a 14-bit ADC CMOS camera like the ASI178, you are comparing a 14-bit camera with <2e read noise to a camera with 12-13 bits of real dynamic range.

All the CMOS sensors have the same dynamic range as a CCD (typically in the 70-72 dB range). Also, you can very effectively use stacking for photometry, which I do quite regularly with excellent results. This whole debate is very misleading.
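A quick sketch of the dB-and-bits arithmetic behind that quote (the full-well and read-noise figures below are illustrative, not from any datasheet):

```python
import math

def dr_db(full_well_e, read_noise_e):
    """Dynamic range in decibels: 20 * log10(full well / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

def dr_bits(full_well_e, read_noise_e):
    """The same range expressed in bits: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative CCD numbers: ~27,000 e- full well, 7 e- read noise
print(f"{dr_db(27_000, 7):.1f} dB, {dr_bits(27_000, 7):.1f} bits")
```

With these numbers you land in the quoted 70-72 dB / ~12-bit regime, regardless of how many bits the ADC outputs.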

From what I understand, the actual amount of precision in either a CCD or CMOS is found by converting the read noise into an f-stop value and subtracting it from the bit depth.  Apparently, that's where the extra 2 bits for noise mentioned in other threads came from.

Richard Berry
Dynamic Range and Bit Depth in Images

The useful data range for an image sensor is limited at the top end by the full-well capacity, or more conservatively, by the upper end of the linear section of its response curve, and at the bottom end by the sum in quadrature of shot noise, read noise, thermal noise, pattern noise, and quantization noise. So you can do the numbers and determine the minimum number of bits needed to capture all of the information in the image. A sensible manufacturer might use an ADC with a few bits more than the strict engineering requirement.

However, unless you're dealing with a camera that saves images in a proprietary format, the data will be saved as a FITS file. FITS supports 8-bit integer and 16-bit integer data arrays, so whatever the bit depth of the image, each pixel is going to take two bytes in the file. If saved as 12-bit values, the range is 0 to 4095 and the top four bits will be 0s; if saved as 14-bit values, the range is 0 to 16383, with two 0 bits ahead of the value bits. Simply by placing the 0 bits after the value bits, the numbers will be stored in a 0 to 65535 range, apparently as 16-bit values. There is no storage penalty for doing this. It simply means that the measured gain (electrons per ADU) will be smaller, but the readout noise is measured in e- rms, not ADUs.

Once the data goes into an image processing program, it will be changed to a 32-bit floating-point number internally, so again there's no penalty for the storage format used to convey it from the camera to the computer.

Since most of the software we're using today was written for 16-bit CCD images, the images will "look and feel" native to the software. The software may contain hard-coded values based on the assumption that the image began life as a 16-bit CCD image, and perform operations based on that assumption.

Also, the next generation of CMOS cameras will be using clever methods to increase the dynamic range of their data. For example, each charge packet can be measured twice, once at a high gain and then at a low gain, and the values combined in some fashion to span a true 16-bit range, or a padded 14-bit or 15-bit range. Whether these methods will be sufficiently linear for photometry is something we need to be thinking about and testing for.
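Nobody outside the manufacturers knows exactly how the two reads will be merged, but a naive sketch of one possible scheme (all names, the gain ratio, and the switch point are invented for illustration) might look like:

```python
def combine_dual_gain(adu_high, adu_low, gain_ratio=16, switch_point=3500):
    """Naive dual-gain merge (illustrative only): use the high-gain sample
    for faint pixels, and switch to the scaled low-gain sample as the
    high-gain channel approaches saturation."""
    if adu_high < switch_point:
        return adu_high            # faint pixel: low read noise matters most
    return adu_low * gain_ratio    # bright pixel: low gain extends the range
```

Any nonlinearity at the switch point is exactly the kind of thing photometrists would need to test for.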

If anyone close to the industry can shed more light on this subject, please pipe up and tell us how 16-bit CMOS and sCMOS cameras will do the things that are forecast for the near future.

--Richard


Tonisee
If an image is taken with 12

If an image is taken with a 12-bit camera and a signal level of 4000 ADU is recorded at gain = 1 e-/ADU, then it means that 4000 electrons/photons were collected. Now if that value of 4000 ADU is written directly to a 16-bit FITS file, its value is still 4000 ADU and the gain is still 1 e-/ADU. However, when software does any kind of scaling (e.g. multiplying by 16 / bit-shifting by 4), the numerical gain value changes, but the reality (4000 collected photons) won't change, of course. :-)
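That example, sketched in Python (padding done as a 4-place bit shift, i.e. multiplying by 16; variable names are mine):

```python
# 12-bit camera, gain 1 e-/ADU: 4000 ADU recorded means ~4000 electrons.
raw_adu = 4000
gain_e_per_adu = 1.0

# Pad to the 16-bit FITS range by shifting the value bits up 4 places (x16).
padded_adu = raw_adu << 4           # 64000, now in a 0-65535 range
padded_gain = gain_e_per_adu / 16   # measured gain becomes 0.0625 e-/ADU

# The physics is unchanged: electrons = ADU * gain either way.
assert raw_adu * gain_e_per_adu == padded_adu * padded_gain  # 4000 e-
```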

Best wishes,
Tõnis

Richard Berry
You are correct. If the

You are correct. If the camera software "pads" a 0-4095 range of ADUs to 0-65535 ADUs by multiplying each pixel value by 16, the measured gain will be 16x smaller, so if you were to measure it, you would find it to be 0.0625 e-/ADU. The noise statistics would come out the same when converted back to read noise in electrons r.m.s., and the same for shot noise. You aren't going to fool nature.

If your goal is to measure the weakest signals possible, you might want to oversample the voltage coming from the nominally 12-bit sensor at 14 bits to ensure accurate characterization of the bias, readout, dark, pattern, and shot noise. At high values it won't matter: in a signal of 3600 electrons, the shot noise is sqrt(3600) = 60, dominating all other noise sources. But at low signal levels, the bias, dark current, and so on may depend not on the electron count from the sensor, but on the charge detector, amp noise, etc. The cell-phone engineer may differ from the science-camera engineer on the best bit depth for sampling.
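The shot-noise arithmetic, as a quick check in Python (function name is mine):

```python
import math

def shot_noise_e(signal_e):
    """Poisson shot noise in electrons: the square root of the count."""
    return math.sqrt(signal_e)

print(shot_noise_e(3600))                # 60.0 electrons
# Adding a 4 e- read noise in quadrature barely moves the total:
print(math.sqrt(60**2 + 4**2))           # ~60.13 electrons
```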

--Richard


arx
Defining precision

On 08/11/2020, Tim wrote: "From what I understand, the actual amount of precision in either a CCD or CMOS is found by converting the read noise into an f-stop value and subtracting from the bit depth."

I'm not at all clear how you define precision in a CCD or CMOS. Do you mean repeatability of the captured signal across the sensor? I calculate precision by taking a number of images of the same star field, measuring standard stars, calculating the instrumental magnitudes for those stars, then calculating the standard deviation of the difference between the instrumental magnitudes for pairs of standard stars. The lower the SD, the better the precision. Gain and exposure duration would be set to optimize the signal/noise ratio.

How does this relate to read noise, f-stops or bit depth? As for read noise, it would be swamped by shot noise.

Roy

Eric Dose
careful with precision

(1) Detectors do not have precision. Measurements have precision.

(2) The bit depth only gives you a (not the only) maximum limit on measurement precision from that detector.

CrossoverManiac
Are you referring to the

Are you referring to the dynamic range because that's something I brought up in an earlier thread?

Eric Dose
No.

No. Dynamic range refers to the instrument; precision refers to measurements.

arx

Tim, read the last paragraph in your post #3 from 8/11/2020. You clearly use the word "precision", and do not use the words "dynamic range" in that paragraph.

Roy

CrossoverManiac
That's because I've been

That's because I've been using the wrong terminology.  I should have said resolution of the magnitude of a target star.  For exoplanet transits, the drop in brightness is measured in millimagnitude.  That's why it's important to get a grasp of how dynamic range and A/D conversion would affect the measurement of a transit event.  If the magnitude resolution is too low, a transit event would go undetected.

arx
In my opinion, you are

In my opinion, you are writing about achieving sufficient precision in time series photometry of exoplanet transits, which will be determined by your telescope + camera setup and how you use it with particular targets.

Roy

Richard Berry
The Real Question You're Asking is...

It seems that the real "elephant in the room" behind this thread is expected accuracy of photometry from a 12-bit CMOS camera. Or perhaps I am reading too much between the lines.

This is covered by Craine, Tucker, Janesick, Howell, Henden, and others in their published writings. It may be covered in a document on the AAVSO website. You may want to check out Chapter 10 in The Handbook of Astronomical Image Processing, section 10.1.2.4, on statistical uncertainty in aperture photometry. The relationship between the various noise sources is spelled out, the contribution of each is defined, and the results are summarized in Equations 10.8 and 10.13, which yield an SNR for a star image.

There is probably at least one website that offers an Excel spreadsheet that evaluates this "CCD Equation" with inputs scaled to convenient telescope, camera, sky brightness, and magnitude units. (Anyone know where?) If not, perhaps it should be a project for someone in the Instrumentation & Equipment Section to create and document such a potentially relevant tool.
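Pending such a spreadsheet, here is a minimal Python sketch of the usual "CCD equation" (the parameter names and sample values are mine, not taken from the book):

```python
import math

def ccd_snr(star_e, sky_e_per_pix, dark_e_per_pix, read_noise_e, n_pix):
    """Classic 'CCD equation': star signal divided by the quadrature sum of
    star shot noise plus per-pixel sky, dark, and read noise over the
    measurement aperture."""
    noise = math.sqrt(star_e + n_pix * (sky_e_per_pix + dark_e_per_pix
                                        + read_noise_e**2))
    return star_e / noise

# Illustrative values: 50,000 e- star over a 50-pixel aperture,
# 100 e- sky and 5 e- dark per pixel, 4 e- read noise
print(f"SNR ~ {ccd_snr(50_000, 100, 5, 4, 50):.0f}")
```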

--Richard

arx

If you want to read about actual tests of the accuracy of a 12 bit CMOS camera imaging through a small refractor see post #54 and the attached document dated 2/13/2020 in the topic "CMOS cameras for photometry" in the Photometry forum.

Roy

CrossoverManiac
I will check it out but,

I will check it out but, unfortunately, I have caused some misunderstanding.  My question was not about accuracy but about magnitude resolution, as in how small a change in magnitude can be measured.  The drop in magnitude for an exoplanet transit is measured in millimags, and I'd like to know how to determine the smallest change in magnitude that can be detected by a camera, and whether there would be a loss in magnitude resolution by going with a 12-bit camera rather than a 16-bit camera.

Richard Berry
The SNR tells you the

The SNR tells you the magnitude resolution you can expect. A magnitude difference of 0.010 magnitudes is ~1%, for which you need an SNR of about 100. You would obviously like a higher SNR, if possible.

If you set your exposure so both the target star and its comps have large but unsaturated signal levels, and defocus a bit so the star covers enough pixels to collect more than 10,000 electrons (SNR ~100; this should be easy), then you can detect a 1% drop in light.
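The SNR-to-millimagnitude conversion is a one-liner, since sigma_m ≈ 2.5/ln(10)/SNR ≈ 1.0857/SNR (Python sketch, my naming):

```python
import math

def mag_uncertainty(snr):
    """Magnitude uncertainty from SNR: sigma_m = 2.5 / ln(10) / SNR."""
    return 2.5 / math.log(10) / snr

print(f"{mag_uncertainty(100) * 1000:.1f} mmag")  # ~10.9 mmag at SNR 100
```

So millimag-level transit depths demand SNRs well above 100, or stacking of many measurements.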

The example posted above suggests what can be done.

Richard

TRE
determine smallest change

It is straightforward to calculate the smallest detectable magnitude change, but quite a different matter to see it with your equipment. Brighter stars yield smaller detectable magnitude changes than dimmer stars. Aperture makes a difference. I found that the only way to determine it is to quit talking about it and do it. Try several exoplanets and see how it goes. Errors from other sources are likely to dominate. You get a bonus from time series photometry when you find out just how repeatable your measurements are.

Ray

Richard Berry
Here's a direct link to the

Here's a direct link to the paper TRE referenced above:

ZWO ASI1600MM for AAVSO Forum 13Feb2020.docx

This is a fine description of calibrating a system in terms of exposure, linearity, and the optimum defocus amount. If you convert your ADUs to electrons, you should find the electron statistics line up with your empirical determination. Great light curves!

--Richard

Richard Berry
I fished around trying to

I fished around trying to find that document in the Photometry Forum but could not find it by either title or date. Is there a search function I don't know about? Please could you post a direct link to it?
Also, what is the status of the proposed Guide/Best Practices for CMOS Photometry now?

--Richard

arx
Go to the Photometry forum,

Go to the Photometry forum, scroll down about 20 topics to find 'CMOS cameras for photometry', go to page 2 and scroll down to post #54.

arx