Stacking images for photometry

Affiliation
American Association of Variable Star Observers (AAVSO)
Thu, 01/10/2019 - 14:13

Hi All,

I'm kind of new to photometry and have a question about stacking images to bring out faint stars.

I've been doing astrophotography for years and have used average stacking or standard-deviation (1.5σ) rejection stacking to knock down the noise in images.

For photometry, what kind of stacking is used?

Thanks,

Nor

Affiliation
American Association of Variable Star Observers (AAVSO)
Just plain vanilla adding (averaging)

For photometry, you'll want to do just plain vanilla average stacking, with all fancy rejection features (sigma clipping, kappa-sigma, entropy-weighted, and other witchcraft) switched off. The stacking is supposed to mimic a longer exposure time, and the only correct way to do that is to add the responses of the individual (calibrated) exposures, shifting them so that the pixels showing the same area of sky land on top of each other ==> "co-adding" or "stacking". Assuming all exposures have the same exposure time, taking an average instead of a sum makes no difference to the final photometry result, so average stacking is OK.
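If it helps to see that concretely, here is a minimal sketch of plain average stacking with numpy and astropy, assuming the frames are already calibrated and aligned (the file names are made up for illustration):

```python
import numpy as np
from astropy.io import fits

# Hypothetical file list: calibrated (bias/dark/flat-corrected) and
# already aligned so the same sky pixel lands on the same array pixel.
frames = ["v1.fits", "v2.fits", "v3.fits"]

# Load each exposure as a float array and pile them into a 3-D cube.
cube = np.stack([fits.getdata(f).astype(np.float64) for f in frames])

# Plain average, no rejection: this is just the sum divided by N,
# so relative fluxes (and hence magnitudes) are preserved.
coadd = cube.mean(axis=0)

fits.writeto("coadd.fits", coadd, overwrite=True)
```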

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Just plain vanilla adding (averaging)

Thanks HB, I was kind of thinking that but wanted to make sure. I don't think I will need to do any stacking, but there may be occasions where it would be helpful to get a reading on a comp or check (K) star.

Cheers

 Nor

Affiliation
American Association of Variable Star Observers (AAVSO)
Great Question. Is stacking recommended?

I am new at this as well. It makes sense to me to average the flux of multiple samples taken close together. I took the exoplanet detection course and we did not stack - at least I don't think so; AstroImageJ may have done it during processing. Maybe stacking makes more sense for other types of photometry, or depending on cadence or on what we are trying to measure?

Hope you don't mind me jumping in. Can someone enlighten us on when stacking would benefit the precision of the data, and on best practices for using it?

Whoops...replies crossed. Anything else to consider?

Randall


Affiliation
American Association of Variable Star Observers (AAVSO)
Is stacking recommended?

Yes. 

If your target is relatively faint and you really need an exposure of 12 minutes to reach an acceptable SNR on the target, but your mount can only track reliably for two minutes, then stacking six two-minute exposures produces an image with (almost) the same SNR as a single 12-minute exposure. The main difference comes from the read noise, which is added once per image. Usually this is a small price to pay for the SNR gained by stacking.
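To put rough numbers on that, here is a back-of-the-envelope sketch; the signal, sky, and read-noise values are illustrative assumptions, not from this post:

```python
import numpy as np

# Illustrative numbers (assumptions for the sake of the example):
S = 50.0          # target signal, e-/s
B = 10.0          # sky background, e-/s (summed over the aperture)
read_noise = 8.0  # e- RMS per readout
t, N = 120.0, 6   # six 2-minute subs vs one 12-minute exposure

def snr(signal, sky, rn, t_exp, n_frames):
    # Simplified CCD equation: shot noise from target and sky grows
    # with total exposure time; read noise is paid once per frame.
    total = signal * t_exp * n_frames
    noise = np.sqrt(total + sky * t_exp * n_frames + n_frames * rn**2)
    return total / noise

print("one 12-min exposure:", snr(S, B, read_noise, N * t, 1))  # ~173
print("six 2-min stacked:  ", snr(S, B, read_noise, t, N))      # ~172
```

The two results come out nearly identical; the small gap is exactly the extra per-readout noise mentioned above.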

Stacking also provides a kind of insurance. If, instead of stacking, you decide to use guiding to produce the 12-minute image, any mishap (airplane trail, meteor, accidental bump of the scope) can ruin the image, and you have wasted 12 minutes. If you had stacked, only one of those images would be ruined; the stacked image (now 10 minutes total exposure) would likely still be usable.

Phil

Affiliation
American Association of Variable Star Observers (AAVSO)
I guess the answer is:

I guess the answer is: "stacking N exposures of exposure time t is OK for photometry if you really would prefer to take a single N*t exposure but something is preventing you from doing that" (e.g. your sensor would saturate, or get so close to saturation that the response isn't linear anymore; or your tracking method (if any) is not good enough to keep the target in approximately the same spot on the sensor; or your camera just won't let you expose that long; or ....). Otherwise, it's always better to take one single exposure than to combine many short ones, because every time your sensor is read out to create an image, it adds noise to the image, and that won't help.

Of course, if you are interested in catching changes in the lightcurve that happen on very short timescales, your single exposures need to be no longer than that timescale. Think of asteroid occultations, for example, where you need to tell very precisely when the occultation happens, or of measuring the "flickering" of certain stars.

However, if the changes in the lightcurve happen on timescales of minutes, hours, days, or years, then the total exposure time per data point (whether by stacking or a single exposure) will almost always be dictated by the need to catch enough photons from the target object.

As a rule of thumb, 1% error is considered quite OK for most amateur photometry tasks (detecting exoplanets is certainly one of the more demanding tasks and will usually require better precision, though). To get a measurement with at most 1% error, you have to catch at least 10,000 photons, even if your equipment is otherwise perfect, just from the randomness inherent in the number of photons hitting your telescope from a given source: for N detected photons, the Poisson scatter is sqrt(N), so the fractional error is 1/sqrt(N).
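That 10,000-photon figure follows directly from the Poisson statistics above, as this small sketch shows:

```python
import math

def photons_needed(frac_error):
    # Pure Poisson statistics: for N detected photons the scatter is
    # sqrt(N), so the fractional error is 1/sqrt(N). Solving
    # 1/sqrt(N) <= frac_error gives N >= 1/frac_error**2.
    return math.ceil(1.0 / frac_error ** 2)

print(photons_needed(0.01))   # 1% error -> 10000 photons
print(photons_needed(0.001))  # 0.1% (exoplanet-grade) -> 1000000
```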

CS

HB

Affiliation
American Association of Variable Star Observers (AAVSO)
Averaging for photometry:

Well, exoplanet photometry actually requires magnitude precision, not accuracy.

Another approach, when event timescale is much longer than imaging timescale (as for the LPVs I do), is to take 3+ sets of images, separately average each set, reduce each to magnitude, and take the mean or median of the magnitude results. You get some of the best of both worlds (i.e., of averaging for noise within each set, and rejection of outliers between them). This approach is tedious and storage-greedy for CCDs with their long readout times, but I predict it may become a norm (especially if each series can be averaged on-camera) for CMOS photometry--which is coming.