Statistical Analysis of "Bad" Images

Affiliation
American Association of Variable Star Observers (AAVSO)
Mon, 09/02/2013 - 00:34

Hello! I'm curious about the data that I obtain from what I would call "bad" images.

    I use MPO Connections for automated image acquisition. At times during a night's run, the autofocus fails for a particular variable, and I get donut images for those variables. Sometimes the images are oval or mildly trailed even though the focus otherwise appears fine.

    I believe I've read that as a star's image is defocused, the pixel light distribution departs from a Gaussian. When I analyze these types of images with enlarged apertures, the magnitudes appear to match what would be expected from the historical light curve, and the error estimates of the measurements still appear good.

    I'm wondering if it might be worthwhile/educational to check my setup to see how star magnitudes might vary with the amount of defocus from an evening's optimum FWHM, as well as how the amount of star trailing might affect star magnitude. I realize that there is discussion about how aperture size should be set to reflect the FWHM of an image, so I would need to try to control for such variables.

    I would think this type of analysis has been done before. Could folks suggest sources of information or papers that might have looked at these types of issues for "bad" images? Thank you and best regards.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)

A very interesting

A very interesting question! A number of the highest-precision photometry satellite missions intentionally defocussed to increase the signal-to-noise ratio, since most CCDs have a dynamic range of less than 32k-to-1 per pixel. By smoothing out the defocussed image over many more pixels (and exposing longer to compensate for the defocussing's effect on the peak pixel reading), higher signal-to-noise ratios are possible.
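As a rough back-of-the-envelope illustration of that gain (a sketch with invented numbers, not measurements): spreading the light over more pixels raises the total number of electrons you can collect before the brightest pixel saturates, and the photon-limited SNR grows as the square root of that total.

    # Rough illustration: defocusing raises the photon-limited SNR ceiling by
    # keeping the brightest pixel below full well. All numbers are invented.
    import math

    full_well = 32_000        # e- per pixel before saturation (the ~32k figure above)
    peak_fraction = {
        "focused":   0.30,    # assumed fraction of the star's light in the peak pixel
        "defocused": 0.02,    # assumed peak fraction for a donut image
    }

    for label, frac in peak_fraction.items():
        n_star = full_well / frac     # total star electrons before the peak saturates
        snr = math.sqrt(n_star)       # photon-noise limit: SNR = N/sqrt(N) = sqrt(N)
        print(f"{label:10s}: up to {n_star:9.0f} e-  ->  photon-limited SNR ~ {snr:4.0f}")

This ignores the sky, read-noise, and crowding penalties described below, which pull the other way.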

But this comes at the cost of crowding becoming a bigger issue (and of other effects that used to be smaller than the photon-noise uncertainty). If you are using an area on the detector that is 16 times larger, then that larger area needs to be free of stars that might have their light spread into the target star's area by defocussing. Clear sky around the target is obviously a little more precious when defocussing!

So defocussing is more likely to work well at high galactic latitudes and less likely to work well in the plane of the Milky Way or the galactic bulge.
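To put a rough number on the crowding risk, here is a small sketch that treats field stars as randomly (Poisson) placed; the star densities and footprint radii are purely illustrative assumptions.

    # Chance that at least one unrelated field star falls inside the enlarged
    # footprint of a defocused target, for a random (Poisson) star field.
    # The densities and radii below are illustrative assumptions only.
    import math

    def blend_probability(stars_per_sq_arcmin, footprint_radius_arcsec):
        area_sq_arcmin = math.pi * footprint_radius_arcsec**2 / 3600.0
        return 1.0 - math.exp(-stars_per_sq_arcmin * area_sq_arcmin)

    for region, density in [("high galactic latitude", 2.0), ("galactic plane", 30.0)]:
        for radius in (5.0, 20.0):    # focused vs heavily defocused footprint, arcsec
            p = blend_probability(density, radius)
            print(f"{region:22s}  radius {radius:4.1f} arcsec  P(blend) ~ {p:.0%}")

Even with made-up densities, the enlarged footprint quickly dominates in crowded regions.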

If you defocus by design, you can take advantage of the extra signal-to-noise. If it happens unintentionally, you are more at the mercy of crowding issues and sky noise limiting your peak signal-to-noise.

Cheers,

Doug

Affiliation
American Association of Variable Star Observers (AAVSO)
defocus

As Doug says, one of the main issues with defocusing is blending of star images.  If you are dealing with bright stars, the likelihood of a nearby bright star is pretty low, and you can defocus considerably.

Defocus is best handled by aperture photometry, as that does not make any assumptions about the star profile.  However, most software takes the cursor position and calculates the centroid of the object underneath that cursor, and the centroid calculation may assume knowledge about the star profile.  When I defocus to the point that the image doesn't look Gaussian, I often turn off centroiding and use my manual position as the center of the aperture.  Having the aperture drawn on the screen, as is the case for VPHOT, helps in visually ensuring that all of the star is within the aperture.
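A minimal sketch of that fixed-position approach, assuming Python with the photutils package and synthetic stand-in data (this is not VPHOT's or any other program's internal routine):

    # Fixed-position aperture photometry: the manual position is used as-is,
    # with no centroid refinement, and a generous aperture encloses the donut.
    import numpy as np
    from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:200, 0:200]
    data = rng.normal(100.0, 5.0, (200, 200))                    # flat "sky" background
    data += 4000.0 * np.exp(-((xx - 101.3)**2 + (yy - 87.6)**2) / (2 * 8.0**2))  # fake bloated star

    position = (101.3, 87.6)                       # cursor/manual position, no centroiding
    star_ap = CircularAperture(position, r=20.0)   # radius big enough for the whole profile
    sky_ann = CircularAnnulus(position, r_in=28.0, r_out=40.0)

    phot = aperture_photometry(data, [star_ap, sky_ann])
    sky_per_pix = phot["aperture_sum_1"][0] / sky_ann.area
    net = phot["aperture_sum_0"][0] - sky_per_pix * star_ap.area
    print(f"net counts: {net:.0f}   instrumental mag: {-2.5 * np.log10(net):.3f}")

The sum inside a big fixed aperture does not care what shape the profile has, provided all of the star (and no neighbour) lies inside it.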

Defocus means the star image falls on more pixels, so readnoise and dark current become larger contributors to the total noise of the image.  This is one reason why defocus is not a good technique for faint objects.  However, having the image fall over multiple pixels means any flatfielding random noise is lessened.  Note that flats should be acquired for the specific defocus that you use, as dust in the system needs to have the same focus in order to be removed properly.
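The way the extra pixels enter the error budget shows up directly in the usual CCD signal-to-noise estimate; a short sketch with invented numbers:

    # The usual CCD SNR estimate: the number of pixels under the aperture
    # multiplies the sky, dark, and read-noise terms. Numbers are invented.
    import math

    def ccd_snr(n_star, n_pix, sky_per_pix, dark_per_pix, read_noise):
        """SNR = N* / sqrt(N* + n_pix * (N_sky + N_dark + RN**2)), all in electrons."""
        return n_star / math.sqrt(n_star + n_pix * (sky_per_pix + dark_per_pix + read_noise**2))

    n_star = 50_000                               # same faint-ish star, same exposure
    for label, n_pix in [("in focus", 80), ("defocused", 1200)]:
        snr = ccd_snr(n_star, n_pix, sky_per_pix=40.0, dark_per_pix=5.0, read_noise=15.0)
        print(f"{label:10s}: ~{n_pix:4d} px under the aperture  ->  SNR ~ {snr:.0f}")

At a fixed exposure the faint star loses when defocused; the gain described in the earlier reply appears only when the peak pixel, rather than sky or read noise, is the limit and the exposure can be lengthened to match.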

I've used nights where the focus failed, and obtained good photometry.  I try to avoid such situations, but sometimes that particular night was an important one.  In general, I throw poorly focused or trailed images away rather than spending the time and energy to handle them properly.

Testing how well your system performs with differing levels of defocus would be an interesting experiment.  I don't recall seeing any published results from such tests.  One of the possible problems is the varying sky, making it difficult to separate outside effects from the defocus effect. Let us know how things go!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Suggestions for Data Acquisition and Analysis?

Hello! Thank you each for your comments. I would appreciate additional guidance on the design of this experiment and whether my choices would be appropriate. I have an 8" LX200 classic with an SBIG ST-402ME and BVIc filters, and I use MPO Connections and Canopus.

    As I thought about the suggestions, it seems to me that I would need to evaluate and try to separate two possible effects. The first is the change in magnitude error that might occur with defocus from the best FWHM at the time the images were taken. I'm presuming that this increase in scatter would be Gaussian in distribution, though I don't have anything to back that assertion up at present.

    The second is a systematic effect that might skew the data in a particular direction. I'm wondering whether, if such an effect exists, whatever its cause, it might show up in the final photometry equations as a change in the transformation coefficients, the first- and second-order extinction coefficients, and/or the zero point as the level of defocus increases?

    As I thought about how to acquire images to evaluate these effects, I thought that I might be able to image two standard Landolt fields, one at a high altitude and the other at a low altitude. If I repeat these images at each level of defocus (I am thinking of the optimum FWHM, 1.5x, 2x, 2.5x, and 3x FWHM) and through my photometry filters (B, V, and I), then I think that I might have a reasonable data set.
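    Just to gauge the size of that grid, a small sketch with assumed exposure times and overheads (all of the numbers are placeholders):

    # Size of the proposed grid: 2 Landolt fields x 3 filters x 5 defocus levels,
    # with assumed per-image exposure and overhead, just to gauge the run length.
    fields  = ["high Landolt field", "low Landolt field"]
    filters = ["B", "V", "I"]
    defocus = [1.0, 1.5, 2.0, 2.5, 3.0]        # multiples of the optimum FWHM
    images_per_setting = 3                     # assumed repeats at each setting
    exposure_s, overhead_s = 60.0, 20.0        # assumed exposure and download/slew overhead

    n_images = len(fields) * len(filters) * len(defocus) * images_per_setting
    hours = n_images * (exposure_s + overhead_s) / 3600.0
    print(f"{n_images} images, roughly {hours:.1f} hours before refocusing between levels")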

    I can use MPO Canopus with the Modified Hardie method and the transformation coefficient calculation routine to calculate the transformation coefficients, the first- and second-order extinction coefficients, and the zero points at each level of defocus.

    I think that this data set would take about three hours to obtain. I can choose a high Landolt field so that it transits the meridian during the run (altitude about 65 degrees). The low Landolt field (image capture would start at an altitude of about 30 degrees) would rise, decreasing the altitude difference from the high Landolt field and making the Modified Hardie method less accurate. However, if I repeated the run later in the evening, so that the initial low Landolt field would be crossing the meridian (becoming the high field) and the original high field would be declining (becoming the low field), would that provide a check on the data?
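    For the extinction piece, a stripped-down sketch of the two-field idea, in the spirit of the Modified Hardie method but not MPO Canopus's actual routine, with invented offsets:

    # Simplified two-field extinction estimate: compare the mean
    # (instrumental - catalog) magnitude of the standards in the high and
    # low fields observed close together in time. Offsets below are invented.
    import math

    def airmass(altitude_deg):
        """Plane-parallel approximation X = sec(z), adequate at these altitudes."""
        return 1.0 / math.cos(math.radians(90.0 - altitude_deg))

    high_field = {"alt": 65.0, "v_minus_V": -5.230}   # assumed mean v - V, high field
    low_field  = {"alt": 30.0, "v_minus_V": -5.080}   # assumed mean v - V, low field

    k_prime = (low_field["v_minus_V"] - high_field["v_minus_V"]) / (
        airmass(low_field["alt"]) - airmass(high_field["alt"]))
    print(f"X_high = {airmass(high_field['alt']):.3f}, X_low = {airmass(low_field['alt']):.3f}")
    print(f"first-order extinction k' ~ {k_prime:.3f} mag/airmass")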

    To analyze the data, I thought that plotting the transformation coefficients, first- and second-order extinction coefficients, and zero points against defocus (what multiple of FWHM was used) would identify any systematic changes that skew the data with defocus?

    I thought that the standard deviation of the transform fit (e.g., V - v = a(CI) + ZP) at each level of defocus would indicate the random scatter associated with defocus?
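    To make that last step concrete, a sketch of what the per-defocus-level fit might look like (placeholder data standing in for real measurements):

    # For each defocus level, fit the transform V - v = a*(B - V) + ZP to the
    # standards and record the slope, zero point, and residual scatter.
    # The "measurements" below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    defocus_levels = [("1.0 x FWHM", 0.010), ("2.0 x FWHM", 0.015), ("3.0 x FWHM", 0.025)]

    for label, scatter in defocus_levels:
        V  = rng.uniform(10.0, 13.0, 20)                    # catalog V of 20 standards
        BV = rng.uniform(0.2, 1.4, 20)                      # catalog B - V colors
        v  = V + 0.03 * BV - 21.5 + rng.normal(0.0, scatter, 20)   # fake instrumental mags

        a, zp = np.polyfit(BV, V - v, 1)                    # transform slope and zero point
        resid = (V - v) - (a * BV + zp)
        print(f"{label}: a = {a:+.3f}  ZP = {zp:.3f}  sigma = {resid.std(ddof=2):.3f}")

    Plotting a, ZP, and the extinction terms against the defocus level would show any systematic drift, while the residual sigma tracks the random scatter.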

    Thank you and best regards.

Mike