Is defocusing necessary for a CCD with antiblooming disabled, and should it be done anyway even on cameras without an antiblooming feature?
No. I never defocus my non-antiblooming CCD (6303E chip).
I did measure the linear range of my chip (in all filters), and I always manage exposure times carefully to keep all relevant peak ADUs well below saturation. My photometry then comes out fine, nice and linear. For bright targets I often average multiple very short exposures. But I've never defocused images on purpose.
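That exposure juggling can be sketched roughly like this (a Python illustration of my own, not anyone's actual workflow; the bias level, 50,000 ADU ceiling, and 80% safety factor are made-up assumptions you would replace with your own measured values):

```python
# Rough sketch: scale a trial exposure so the brightest relevant star stays
# within a measured linearity ceiling. All ADU numbers here are assumptions.

def safe_exposure(trial_exposure_s, peak_adu, bias_adu=1000,
                  ceiling_adu=50000, safety=0.8):
    """Scale exposure so the peak signal stays well inside the linear range.

    Assumes the bias-subtracted signal scales linearly with exposure time.
    """
    signal = peak_adu - bias_adu                 # bias-subtracted peak signal
    budget = safety * (ceiling_adu - bias_adu)   # headroom we allow ourselves
    if signal <= budget:
        return trial_exposure_s                  # already safe
    return trial_exposure_s * budget / signal

# A 60 s trial peaking at 60,000 ADU gets cut back to stay within the budget.
print(round(safe_exposure(60, 60000), 1))  # → 39.9
```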
I recommend that you review Chapter 3 (CCD Camera) of the CCD Photometry Guide. It discusses matching CCD pixel size, image scale, and seeing, and covers when and why to defocus, if needed.
I often use a fluorite apo refractor for imaging and, because of its short focal length, had to defocus to get FWHM > 1 px. However, after many hours of work trying to derive useful transformation coefficients, I discovered that in defocused B-filter images the FWHM of stars shows a strong correlation with B-V colour. The apo refractor is non-apo when defocused! I still haven't resolved how to deal with this.
I now use a small script that bumps my guide star around a square to blur the star images, and have had no further problems.
Here is a video on blooming and the differences between CMOS and CCD sensors. The person who gave the photometry talk used a DSLR and emphasized defocusing; with this video I can see why, since DSLRs use CMOS chips, and why it's different with CCDs.
I also use a short focal length refractor and need to defocus to avoid undersampling. At a fixed focuser position the star image is obviously different in each filter. My focuser can apply a small offset when changing filters to try to get a similar FWHM in each.
Your script sounds very interesting. What mount do you have and what software do you use to control it? Cheers,
Hello Mark, I had the same problem with a short focal length APO. The focus is excellent for imaging, but defocusing is a disaster when doing photometry with a DSLR. My practice is to use a large defocus, mainly to improve the effective electron storage capacity and thereby the SNR. With the APO it's not possible to get the same defocus in all three colours: either red or blue ends up strongly saturated. As a result, simultaneous RGB photometry with a DSLR and an APO is impossible.
I went back to using photo lenses from 70 mm to 280 mm focal length. Their defocus is perfectly matched across R, G, and B, and their good "bokeh", a must in photography, helps the SNR by eliminating the spread of photons that other optics produce when defocused. This is achieved by introducing some spherical aberration into the optics, which is very good for us!
I likewise use a short focal length refractor, an ED, and experience differing degrees of defocus as a function of color; typically I defocus to achieve a radius of 9 pixels in green and 10 in blue. I do not regard this as being particularly pernicious, as I have come to realize that defocus enables considerable enhancement of the SNR, and that SNR is the sine qua non of quality photometry.
Sadly nobody has yet developed an expression for the Cramér-Rao bound on photometric accuracy as a function of SNR, but typically the error is inversely proportional to the SNR (equivalently, to the square root of the collected photon count in the photon-limited case). My current investigations are showing that to be true in the context of ensemble photometry. One can see how defocus enhances the SNR by first realizing that the photon noise is set by the flux (expressed, for example, as photons per square metre per second over the band), so that spreading the photons over a number of pixels has no effect on the photon noise per se but allows one to accumulate more photons while remaining within the linear dynamic range of the ADC. Photon arrivals are a Poisson process, with variance equal to the mean number of photons (so the noise is the square root of the count); collecting more photons therefore improves the SNR whenever the overall noise is dominated by photon noise, which is generally the case. Of course, this comes at a price, which is an increase in the background noise variance, since more pixels contribute sky and read noise to the aperture.
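The trade-off can be illustrated with a toy model (a Python sketch under simplifying assumptions of my own: the star is spread uniformly over n_pix pixels, exposure is limited only by the per-pixel full well, and the photon rate, sky, read-noise, and full-well numbers are purely illustrative):

```python
import math

# Toy model: a star delivering `rate` photons/s, spread uniformly over n_pix
# pixels. The longest usable exposure is set by the per-pixel full well, so
# defocusing (larger n_pix) permits collecting more photons per frame.

def photon_limited_snr(rate, n_pix, full_well, sky_per_pix_s=10.0, read_noise=10.0):
    t = full_well * n_pix / rate           # exposure that just fills the peak pixel
    n_star = rate * t                      # total star photons collected
    n_sky = sky_per_pix_s * t * n_pix      # background photons under the star
    noise = math.sqrt(n_star + n_sky + n_pix * read_noise**2)
    return n_star / noise

focused = photon_limited_snr(rate=1e5, n_pix=4, full_well=50_000)
defocused = photon_limited_snr(rate=1e5, n_pix=100, full_well=50_000)
print(f"focused SNR ~ {focused:.0f}, defocused SNR ~ {defocused:.0f}")
```

The defocused case wins despite the extra background variance, because the star photon count grows linearly with the number of pixels while the noise grows only as its square root.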
I have an iOptron iEQ45 and use MaxIm DL to control the mount, camera, and guider. (I use PHD2 for guiding my pretty pictures but haven't yet worked out how to script it.) The script contains default values for the main and guide scope focal lengths, the main and guide camera pixel sizes, and a desired blur size in the main camera; from these it calculates how far to move the guide star in the guide camera. I start my autoguiding and my science exposures, then start the 'blurguide' script. It uses the MaxIm GuiderMoveStar() function to shift the guide star every 2 to 5 seconds (set manually depending on the length of my science exposures), cycling continuously around a 3x3 square using an array of positions: (-1,-1), (-1,0), etc. The stars don't actually end up square in the final image, but they are sufficiently blurred to enable good photometry.
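Stripped of the MaxIm specifics, the core of the idea looks something like this (a Python sketch, not the actual Perl script; move_guide_star is a stand-in for whatever actually moves the star, e.g. MaxIm's GuiderMoveStar(), and the scale calculation is my reading of the description above):

```python
import itertools
import time

def blur_step(main_fl_mm, guide_fl_mm, main_pix_um, guide_pix_um, blur_main_pix):
    """Guide-camera step (guider pixels) giving `blur_main_pix` of blur in the main camera."""
    main_scale = main_pix_um / main_fl_mm     # proportional to arcsec per main-camera pixel
    guide_scale = guide_pix_um / guide_fl_mm  # proportional to arcsec per guide-camera pixel
    return blur_main_pix * main_scale / guide_scale

def dither_loop(step_pix, move_guide_star, cycles=1, dwell_s=0.0):
    """Cycle the guide star around a 3x3 square, pausing dwell_s between moves."""
    square = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (0, 0)]
    for dx, dy in itertools.islice(itertools.cycle(square), cycles * len(square)):
        move_guide_star(dx * step_pix, dy * step_pix)  # stand-in for GuiderMoveStar()
        time.sleep(dwell_s)

# Example with made-up optics: 1000/400 mm focal lengths, 9/7.4 um pixels,
# 2 px of desired blur in the main camera.
step = blur_step(1000, 400, 9, 7.4, 2)
dither_loop(step, lambda dx, dy: print(f"move {dx:+.2f}, {dy:+.2f}"))
```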
I use MaxIm DL to control a CCD camera, guider, and Celestron CGEM mount. When in sharp focus, the 80mm f/6 refractor produces star images that are undersampled by the Atik One 6.0 CCD camera. Your 'blurguide' script sounds like it would be very useful in this situation. Are you willing to share your script? Cheers,
Sorry to drop out for a bit there - things have been very busy here.
Sure, here's my script. It's in Perl but should be intelligible. It's probably also bad Perl; I'm only a hobbyist at coding, and it wasn't written for public consumption, so please excuse some of the quirks.