Hello! I'm trying to wrap my head around the systemic benefits/problems of binning (rather than one-off problems like cosmic ray hits and the chance overlap of hot pixels in the star image - we are supposed to look at an image anyway to detect these). I would appreciate guidance.
From what I've read (e.g., Arne's discussion of the QHY600), binning often reduces the dynamic range of a camera. For example, if the well depth is 100,000 e- per pixel, the binned 2x2 pixel still holds 100,000 e- rather than 400,000. As a result, binning reduces dynamic range and means that faint targets cannot be imaged that perhaps could be imaged in 1x1 mode. The benefit of binning would be better precision, since there is less read noise: the larger pixel is read once rather than the four smaller pixels being read separately.
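To put numbers on that, here is a quick back-of-the-envelope sketch using the 100,000 e- well from above (the read-noise value is an assumed placeholder, not a measurement of any particular camera):

```python
import math

full_well = 100_000   # e- per binned pixel, if the driver clips to the native well
pixels = 4            # 2x2 binning
read_noise = 5.0      # e- per native pixel read (assumed, illustrative)

# If the binned pixel still saturates at the native well depth, a star whose
# light now lands in 4x-larger pixels saturates at ~4x lower flux:
mag_loss_bright_end = 2.5 * math.log10(pixels)
print(f"bright-end penalty: {mag_loss_bright_end:.2f} mag")  # ~1.51 mag

# Dynamic range (well depth / read noise) expressed in magnitudes,
# for on-chip (CCD-style) binning vs. software (CMOS-style) summing:
dr_binned_ccd = 2.5 * math.log10(full_well / read_noise)
dr_binned_cmos = 2.5 * math.log10(full_well / (read_noise * math.sqrt(pixels)))
print(f"CCD 2x2: {dr_binned_ccd:.2f} mag   CMOS 2x2: {dr_binned_cmos:.2f} mag")
```

So with these assumptions the "1 mag? 2 mag?" question comes out around 1.5 mag at the bright end for 2x2 binning.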
So there is a trade-off with binning: reduced dynamic range against improved precision.
Has anyone done any tests to compare the practical significance of these two?
For example, one could image the same target under the same conditions (hopefully!) and find the precision/error range of the target with 1x1, 2x2, 3x3 binning, etc. Then obtain the same information for targets of various brightness, down to the limits of the sensor, at the same binning levels. Repeating this for different levels of oversampling (different telescopes? perhaps adding a focal extender?) would then yield additional information.
This would allow a practical answer to the question of how much binning actually reduces dynamic range (1 mag? 2 mag? etc.) vs. the improvement in photometric precision that binning actually produces at various magnitudes, and would let us determine when oversampling becomes a significant problem.
For example, maybe we can place an unbinned QHY600 on a Celestron C14 operating at f/10 and get good results. Maybe we can only place one on a Tak Epsilon 170 at f/2.8 and get good results. Maybe the limit is someplace in between. Best regards.
Just following up. I do not have experience or education in this area. I may be "barking up the wrong tree," and I'm afraid of making a mistake.
Does anyone have any data on how much binning might improve photometric precision compared with oversampling?
The best experiment that I can think of is to use a camera and focal length such that 2x2 binning (or even 3x3 binning) results in a FWHM of 2 to 3 binned pixels, consistent with the Nyquist criterion. Then repeat the image with 1x1 binning, which would result in oversampling. The error range/precision could then be obtained for the binned and unbinned images and compared.
Would the following work? Take two sequential images of M67 with the same integration time, filter, etc. - but one binned and one unbinned - with appropriate calibration of each image. Choose 3 to 4 standard stars to use as comps, then use those to get photometric magnitudes of other standard stars in the field at a variety of magnitudes. Since everything would be the same except for the time needed to change the camera settings to 2x2 binning (and, hopefully, the sky would be the same between integrations!), this should give information on how binning affects precision at different stellar magnitudes (and could perhaps be plotted as delta-precision vs. magnitude). I do not know whether color affects binning precision; if it does, that would need to be taken into account. And, of course, the camera's linearity at each binning level would need to be known.
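To show what the end product of that comparison might look like, here is a minimal sketch of the bookkeeping. The star list and error values below are made-up placeholders purely for illustration; real values would come from your photometry software's result tables for the two M67 frames:

```python
import numpy as np

# Hypothetical per-star outputs from the two frames: catalog magnitude plus
# the photometric error reported for each star in the unbinned (1x1) and
# binned (2x2) images. These numbers are placeholders, not measurements.
catalog_mag = np.array([10.2, 11.5, 12.8, 14.0, 15.1, 16.3])
err_1x1 = np.array([0.004, 0.006, 0.010, 0.018, 0.035, 0.080])
err_2x2 = np.array([0.004, 0.005, 0.008, 0.014, 0.026, 0.055])

# "DeltaPrecision vs. magnitude": positive values mean binning improved
# the precision at that brightness.
delta_precision = err_1x1 - err_2x2
for m, d in zip(catalog_mag, delta_precision):
    print(f"mag {m:5.1f}: improvement {d * 1000:+.1f} mmag")
```

Plotting delta_precision against catalog_mag would then show directly where (if anywhere) binning starts to pay off.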
I do not have a system that can do such an experiment, but I'm curious if anyone has performed something similar in the past.
Might there be another way to check how significant a change in precision might be when binning is used compared with unbinned images?
Thank you and best regards.
This is an interesting question.
Binning with a CCD camera is a noiseless operation (the read noise for one native pixel is the same as for the binned pixel). So if you are always oversampled (say, 5 pixels per FWHM or more), you can bin without a noise penalty, yet still have sufficient resolution of the seeing disk to perform good photometry. Every CCD camera that I know of implements binning by dividing the resultant sum by the number of native pixels in the sum, so that you get a 16-bit scaled output. Each binned pixel then holds, say, 4x the number of electrons.
Binning with a CMOS camera is a different situation. First of all, binning is done in software rather than on the sensor. So for 2x2 binning, the read noise increases by sqrt(4) = 2x. In some low-signal situations where read noise is dominant, such as very short exposures of faint targets without significant sky background, or spectroscopy, this increase in read noise will lower the signal-to-noise ratio. It will not compromise photometry of bright targets, as in that case the Poisson noise of the target dominates.
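A rough numerical illustration of that regime dependence, using a simplified CCD-equation SNR (the read noise, sky, and star counts are assumed placeholder values, not measurements of any particular camera):

```python
import math

def snr(star_e, sky_per_pix_e, read_noise_e, npix):
    # Simplified CCD-equation SNR: ignores dark current and digitization noise.
    return star_e / math.sqrt(star_e + npix * (sky_per_pix_e + read_noise_e ** 2))

rn = 3.0    # e- read noise per pixel read (assumed)
sky = 8.0   # e- sky per 2x2-binned pixel (assumed)
npix = 25   # binned pixels in the measuring aperture

# Faint star: on-chip (CCD-style) binning reads each binned pixel once at rn,
# while software (CMOS-style) binning sums four reads -> sqrt(4) = 2x rn.
faint_ccd = snr(500, sky, rn, npix)
faint_cmos = snr(500, sky, 2 * rn, npix)

# Bright star: the target's Poisson noise dominates and the penalty vanishes.
bright_ccd = snr(500_000, sky, rn, npix)
bright_cmos = snr(500_000, sky, 2 * rn, npix)

print(f"faint:  CCD {faint_ccd:.1f} vs CMOS {faint_cmos:.1f}")
print(f"bright: CCD {bright_ccd:.0f} vs CMOS {bright_cmos:.0f}")
```

With these numbers the faint star loses noticeably from the doubled read noise, while the bright star's SNR is essentially unchanged.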
In addition, with CMOS cameras, I've seen two different ways of handling binning, either by summing and then dividing appropriately to yield a 16-bit result (ZWO) or summing and then truncating to yield a 16-bit result (QHY). At least, that is how things show up in Maxim DL. Because of this, binning gets more complicated and you may want to adjust the CMOS gain factor to give the best results.
I think using the CCD Equation as given by Steve Howell and others is adequate to show the effect, based on target signal level and the various noise sources involved. A live test may have multiple factors, some external to the basic question, and so is probably not as useful. The effect is primarily seen in low-signal regimes, where the target's Poisson noise makes comparisons difficult. If you can account for all of the differences in the sky and camera calibration, an experimental test to compare with the theory can't hurt, but I'm not sure it is worth the effort!
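For anyone who wants to try that calculation, here is a minimal sketch of a Howell-style CCD equation. The exposure, rates, and camera numbers are placeholders, not any real system, and the dark-current term can be dropped if it is negligible:

```python
import math

def ccd_snr(star_rate, sky_rate, dark_rate, read_noise, npix, t):
    """CCD-equation SNR for a star producing star_rate e-/s in an aperture
    of npix pixels, with per-pixel sky and dark rates in e-/s."""
    signal = star_rate * t
    noise = math.sqrt(signal + npix * (sky_rate * t + dark_rate * t + read_noise ** 2))
    return signal / noise

def mag_err(snr):
    # Standard approximation: sigma_mag ~= 1.0857 / SNR
    return 1.0857 / snr

# Placeholder numbers: a 60 s exposure of a faintish star.
s = ccd_snr(star_rate=200, sky_rate=5, dark_rate=0.1, read_noise=3.5, npix=50, t=60)
print(f"SNR = {s:.1f}  ->  ~{mag_err(s):.4f} mag precision")
```

Re-running it with the read noise doubled (software 2x2 binning) and npix reduced by 4 shows the binned-vs-unbinned difference directly, without needing matched sky conditions.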
Thank you! I see that a similar question was just asked in the instrumentation section for CMOS - asking whether the main advantage of CMOS binning is reduced download time rather than noise reduction, weighed against the downside of reduced dynamic range.
With CCDs, we were encouraged to match the camera to the telescope. However, for CMOS, from what I've read in the forum posts, oversampling seems less concerning.
I bin my QHY600M 4x4 with summing of the pixels; that is possible with the newest driver from QHY. Before that, I binned 2x2 (the maximum the driver allowed) and then averaged 2x2 in MaxIm DL. This is to reduce storage needs, as even a 4x4 binned image is about 7.5 MB.
Compared to my FLI ML16803 (binned 3x3), I see a brightness gain of about a factor of 2. I run the QHY in mode 3 with gain 0 and offset 10.
My focal length on both scopes (for the QHY and the FLI) is 2.7 m. Download times are not an issue, as the CMOS readout takes about the same time as the FLI's 3x3-binned CCD readout.
I get about 500-1000 images a night with the FLI and a similar number with the QHY, depending on the target exposure.
So even with today's large disks, storage is an issue in my case; hence the binning.