I'd appreciate some help understanding the guidance in the current AAVSO CCD Photometry Guide regarding FWHM and pixel size (pages 17+ in the Guide). I'm evaluating new cameras, and I built some models and spreadsheets to predict candidate camera performance from my location, but my numbers don't match the advice in the Guide.
The Guide states,
"In order to get the best results you can out of your photometry, you should strive to sample such that the FWHM of your seeing disk is spread across two to three pixels. This will help to optimize the signal–to–noise ratio (SNR) and improve the accuracy of your measurements."
But when I do a noise analysis of various candidate cameras and optical configurations, I find that one of two situations usually applies:
- My target star is faint, and SNR is dominated by skyglow. As long as I keep the photometry aperture size (measured in arcseconds) constant, focal length doesn't seem to matter (and hence pixel size doesn't matter), because the total number of skyglow photons from a given patch of sky stays constant no matter what the focal length. The total number of star photons also stays the same, so the skyglow noise and the SNR are unchanged.
- My target star is bright enough to swamp skyglow. In this case SNR is determined strictly by Poisson counting statistics from the target star. Again, as long as I keep the photometry aperture size in arcseconds constant, it doesn't matter how many pixels fall inside that aperture: the total number of target-star photons remains constant as focal length varies, so the star's Poisson (shot) noise is constant and the SNR stays the same no matter how that patch of sky is divided into pixels.
The only situation where matching pixel size to FWHM seems to provide a benefit is when the observation is limited by read noise (which scales with pixel count, not with sky area). But with my local skyglow contributing something on the order of one electron/second per square arcsecond, in just a few seconds the skyglow contribution to total noise will make the read noise relatively unimportant.
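The two regimes above, plus the read-noise-limited case, can be sketched with the standard CCD SNR equation. This is a minimal illustration, not my actual spreadsheet; all the numbers (star electrons, sky rate, aperture area, read noise) are assumed values chosen only to show the shape of the effect:

```python
import math

def snr(star_e, sky_e_per_arcsec2, aperture_arcsec2, pixel_scale, read_noise_e):
    """Standard CCD SNR equation: S / sqrt(S + sky_total + n_pix * RN^2).

    Dark current ignored. Note that only the read-noise term depends on how
    many pixels the fixed angular aperture is divided into.
    """
    n_pix = aperture_arcsec2 / pixel_scale**2          # pixels in the aperture
    sky_total = sky_e_per_arcsec2 * aperture_arcsec2   # sky e-, fixed by sky area
    return star_e / math.sqrt(star_e + sky_total + n_pix * read_noise_e**2)

# Same star, same sky, same angular aperture; only the pixel scale (i.e., the
# focal length) changes.  Assumed: 50,000 star e-, 100 e-/arcsec^2 of sky,
# 80 arcsec^2 aperture, 3.6 e- read noise.
for scale in (2.0, 1.0, 0.5):   # arcsec/pixel: undersampled -> oversampled
    print(f"{scale:4.1f} arcsec/px  SNR = {snr(50_000, 100, 80, scale, 3.6):.1f}")
```

Running this shows the SNR barely moves as the pixel scale changes by a factor of four, because only the n_pix * RN^2 term grows; drive the read noise or sky rate to zero and the SNR becomes exactly independent of pixel scale, which is the behavior described in the two bullets above.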
The attached pair of tables shows the calculations for the same camera (QHY600M, Mode 1, Gain 56) for two different effective focal lengths giving a FWHM of almost 10 pixels compared to a FWHM of 4 pixels.
What are the assumptions behind the advice in the Guide? Is it based on observing from a site dark enough (or with a camera noisy enough) that read noise far exceeds skyglow noise and target shot noise? Or am I completely misunderstanding what's going on?
- Mark M
What are the assumptions behind the advice in the Guide?
Well, one of the first assumptions behind the CCD Guide is that it applies to CCDs. QHY600 is a CMOS. Its read noise and similar properties that you're relying on may differ a lot from those of CCDs.
But yes, in our age of larger pixel counts, the Guide might instead advise "seeing disk spread across at least two or three pixels". The main thing is to avoid undersampling: if a single pixel takes all of a star's photons, you cannot distinguish a target source from a hot pixel or a cosmic ray. Also, if each source's photons land on a single pixel, you have very little information about image quality.
Oversampling, that is, spreading the FWHM over much more than three pixels' diameter, is probably less bad than undersampling, if for no other reason than that oversampling collects more total light in a given image before a source's central pixels saturate. For a given camera and sensor size, the main downside to oversampling (longer focal length) is loss of image (and guider) field of view--in the end, you'll need to be comfortable with that trade-off as well.
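The saturation point is easy to quantify for an idealized Gaussian star image: the brightest pixel collects a flux proportional to the square of the pixel scale, so finer sampling directly buys saturation headroom. A hedged sketch, with purely illustrative numbers (1e6 total electrons, 3" seeing):

```python
import math

def peak_pixel_e(total_e, fwhm_arcsec, pixel_scale):
    """Approximate electrons landing in the brightest pixel of a Gaussian
    star image.  Valid when the FWHM spans at least a couple of pixels, so
    the peak surface brightness is roughly constant across one pixel."""
    sigma = fwhm_arcsec / 2.355                       # Gaussian sigma, arcsec
    peak_per_arcsec2 = total_e / (2 * math.pi * sigma**2)
    return peak_per_arcsec2 * pixel_scale**2          # flux in one pixel

# Same star, same 3" seeing, two samplings of the FWHM:
print(peak_pixel_e(1e6, 3.0, 1.5))   # FWHM over ~2 pixels
print(peak_pixel_e(1e6, 3.0, 0.3))   # FWHM over ~10 pixels
```

Under these assumptions the five-times-finer sampling cuts the peak pixel value by a factor of 25, so the oversampled image can integrate far longer (or capture a far brighter star) before the core saturates.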
Practical concerns like those above frequently dominate over purely statistical/Poisson concerns for realistic optical-train and camera pairings.
But yes, in our age of larger pixel counts, the Guide might instead advise "seeing disk spread across at least two or three pixels".
My understanding is that the term "seeing disk" means the actual image of the star, which is very different from the FWHM. I think it is easy to confuse the two when conjuring up a mental image of so many pixels spread across the FWHM.
FWHM is constant across a range of stellar magnitudes. Seeing disk diameter increases with decreasing magnitude (increasing brightness). The FWHM may be only 2-3 pixels, but the total number of pixels in the seeing disk (image) of a 'bright' star may be a couple of hundred in a well-focussed image.
Terminology in photometry is sometimes confusing. "Seeing disk" and "point spread function (PSF)" of a star are roughly comparable. If you look at Figure 9.5 in my old book, you will see that the basic star profile is similar to a Gaussian, and that atmospheric scattering extends that profile to very large angular distances. The apparent size of the star image on a CCD frame depends on the display dynamic range and on where the eye can no longer discern a difference between the background sky value and the star contribution. The seeing disk appears to be smaller for fainter stars because of this contrast issue, but in reality all stars in an image follow the same profile.
Because of this, we generally use a specific term, the full width at half maximum (FWHM), to define the "seeing" in a particular image. This value remains constant for all stars in the field of view (if you ignore optical distortions).
The determination of pixel sampling - the number of pixels across the FWHM - is an attempt to optimize the signal-to-noise ratio (SNR) without compromising the measurement technique. If you have too few pixels across the FWHM, there are practical problems:
- The star saturates earlier, so you cannot get as much flux in your measuring aperture and the SNR goes down.
- An analysis technique that fits a Gaussian to the star profile can have larger error.
- It is hard to tell hot pixels from stars.
- Centroiding can be erroneous.
Oversampling issues, on the other hand, are primarily related to the sensor:
- More pixels in the measuring aperture means a better chance of contamination from hot pixels, cosmic rays, dust, etc.
- More pixels means higher total read noise.
- More pixels means a smaller angular field of view, which is important for finding comparison stars.
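How many pixels fall across the FWHM follows directly from the plate-scale formula, scale["/px] = 206.265 x pixel[µm] / focal length[mm]. A small sketch, assuming 3" seeing and 3.76 µm pixels (a QHY600M-class sensor; both numbers are illustrative):

```python
def pixels_across_fwhm(fwhm_arcsec, pixel_um, focal_length_mm):
    """Plate scale in arcsec/pixel, then how many pixels span the FWHM."""
    scale = 206.265 * pixel_um / focal_length_mm   # arcsec per pixel
    return fwhm_arcsec / scale

# Assumed: 3" seeing, 3.76 um pixels
print(round(pixels_across_fwhm(3.0, 3.76, 400), 1))    # short focal length
print(round(pixels_across_fwhm(3.0, 3.76, 2000), 1))   # long focal length
```

With these assumptions the 400 mm configuration undersamples (under 2 pixels across the FWHM) while 2000 mm oversamples (nearly 8 pixels), which frames the trade-offs listed above.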
In general, undersampling is far worse than oversampling. As Mark mentions, sky glow often dominates, so the read-noise contribution in the measuring aperture may be less important. As Roy and Jerry have mentioned in the past, defocus (or oversampling) keeps the central peak down, and you can get far better dynamic range in your image. In addition, as Mark mentions, the inherently lower read noise of CMOS sensors also reduces that contribution in the measuring aperture.
That said, Mark's analysis of changing focal length is more complex than he describes. As long as you keep the telescope mirror or objective lens the same size, varying the focal length while using the same angular measuring aperture works as he describes. However, take note of the limitations I mention above.