
Average multiple image data?

Average multiple image data?

Hi all,
Today I submitted my first AAVSO reports (AH Her, observer name: AHM), hurray...
However, I ran into a little problem that all of you have probably solved already :)
What do I do if I have multiple data images, let's say 10, of the same kind, just taken sequentially? I can easily process the 10 images, e.g. using AIP4Win (I don't know if VPHOT allows multiple-image processing), and can create an AAVSO extended report with AIP4Win.
But now all 10 measurements show up as a weird-looking cluster in the LCG after uploading through WebObs.
Would it make more sense to average all 10 photometry measurements (mean and standard deviation)? Of course the date/time would need to be averaged then, too (what would that be?).
Any comments will be appreciated!
Helmar (AHM)

Average multiple image data

First, I would ask why you are taking 10 images in a short time span. The main reason I can see is to improve the signal-to-noise ratio of the individual images, in which case I would average them by stacking the images. I don't know, but I suppose, AIP4Win will do this. I use MaxIm DL, which will do it, as will VPHOT (no particular limit on the number of images for VPHOT). The software should pick out an appropriate exposure time to report in the FITS header.

If the S/N for the individual images is OK (say 100 or better), I would take only three images and then spend the rest of the time on some other worthwhile target. VPHOT will handle multiple images in its time-series mode.

Jim Roe [ROE]

Average Multiple Image Data

You  can average the 10 images and take the standard deviation.  The average will be your data point and the standard deviation will be your estimated error.  The time should be the midpoint of all the images that you used in the average.
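As a sketch of the arithmetic described above (mean magnitude as the data point, sample standard deviation as the error, midpoint of the image times as the timestamp), here is a short Python example. All magnitudes and Julian Dates below are invented for illustration:

```python
# Sketch: collapsing N sequential photometry measurements into one data
# point, per the advice above. All numbers are made-up illustrative values.
import statistics

mags = [11.52, 11.48, 11.55, 11.50, 11.49]   # measured magnitudes (invented)
jds = [2455432.6100, 2455432.6115, 2455432.6130,
       2455432.6145, 2455432.6160]           # mid-exposure Julian Dates (invented)

mean_mag = statistics.mean(mags)             # reported magnitude
err = statistics.stdev(mags)                 # sample std dev (the 1/(N-1) form)
mid_jd = (min(jds) + max(jds)) / 2           # midpoint of the averaged images

print(f"JD {mid_jd:.4f}  mag {mean_mag:.3f} +/- {err:.3f}")
```

Note the midpoint here is taken between the first and last image times; for evenly spaced exposures this equals the mean of all the times.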

Averaging is essentially the same as stacking: stacking takes a pixel-by-pixel average. Some programs will calculate the midpoint when stacking images and include it in the FITS header. MaxIm does, and includes a keyword MIDPOINT. As I understand it, this is a nonstandard keyword, so not all programs may approach the issue the same way. When you use the MaxIm photometry tool to measure stacked images, it automatically uses the midpoint in the photometry report.

I don't remember what AIP does.

Either stacking the images or averaging the data is a good way to increase the SNR. When I am working with variables that change slowly, I usually use 3 to 5 images. Then I can calculate the standard deviation, which yields an estimate of my error.

Jim Jones, JJI





HQA's picture

AH Her is a Z Cam-type cataclysmic variable, with about a 6-hour orbital period and a 20-day outburst cycle. Most cataclysmics flicker, so taking a high-cadence time series is often reasonable for this class of variable star. However, high-cadence, low signal-to-noise data is of little value, except when it is either carefully correlated with other observers or used to look for periodic signals (such as the orbital period). Even with the orbital period, exposures shorter than, say, 4 minutes don't give you much astrophysical information, as 4 minutes is about 1/100 of the orbital period and gives adequate resolution of smoothly varying functions. Now, if there were an eclipse during the orbit, or if you were looking for the (faster) spin period of the white dwarf, then higher cadence might be useful.

Back to the original question: as Jim/Jim mention, taking the average and calculating the standard deviation is a good way to go - it doesn't give you as high a cadence, but improves the signal to noise and gives you a very good estimate of the measurement uncertainty.  Always use the midpoint time of the images that are averaged, and especially with highly variable objects like cataclysmics, be very careful that you have your computer clock set accurately.  You might also consider adding a note indicating how many images were averaged and over what time span, so that any researcher knows the history of the observation.



Thank you, Arne and Jim and Jim,

For your responses!

I think I have taken all your advice: I averaged (stacked) the images in VPHOT (until Jim mentioned it, I hadn't seen that option in VPHOT... thanks, Jim!). I deleted the "weird cluster" of my image results and replaced it with this one averaged measurement to reduce clutter :) in the database. VPHOT calculates the time "average" automatically, and so does AIP, nice...

It turns out that Mike Simonsen took data at almost the same time :/. The good part is, the magnitude values are very close to each other, too.

I have taken this series of images because I am currently trying to figure out how my telescope mount (LX-200 8" classic) and camera (ST-8) are performing. I have a homemade steel pier on a concrete "foundation" in my backyard, and these measurements are the first serious attempts. So far, so good (sky conditions were not so great: moon and some clouds present).

In case you want to look at the LCG curve for AH Her:

Again, thanks for your valuable comments!


How are you determining uncertainty?

How are you determining the uncertainty associated with the measurement you are reporting? You are averaging (taking the mean of) 10 images, and therefore you might think you should report the standard deviation of the mean [~1/SQRT(N) x the standard deviation of the underlying measurements]. Actually, you want to report the error of the sample from which the mean was derived. If your check star is close in magnitude, say within about +/- 1 magnitude, you can report the sample standard deviation [using the 1/(N-1) version of the standard deviation formula, STDEV.S in Excel] of the check star measurements. If the check star is dimmer than the target and the standard deviation of the target measurements is smaller than that of the check, you can even use the standard deviation of the target measurements. If neither of these methods is appropriate, or you aren't averaging at least 3 measurements (5 is better), you can use the error value that your program produces. Commonly, this is the result of the CCD error formula calculation, which doesn't include all sources of uncertainty.
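The distinction above (sample standard deviation vs. standard deviation of the mean) can be seen in a few lines of Python. The check-star magnitudes here are invented:

```python
# Sketch of the distinction above, with made-up check-star magnitudes.
# statistics.stdev uses the 1/(N-1) form, like Excel's STDEV.S; dividing
# by sqrt(N) gives the (smaller) standard deviation of the mean.
import math
import statistics

check_mags = [12.31, 12.34, 12.28, 12.33, 12.30]    # 5 check-star measurements (invented)

sample_sd = statistics.stdev(check_mags)             # scatter of one measurement
sd_of_mean = sample_sd / math.sqrt(len(check_mags))  # uncertainty of the average

# Per the advice above, report sample_sd, not the smaller sd_of_mean.
print(f"sample SD = {sample_sd:.4f}, SD of mean = {sd_of_mean:.4f}")
```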


Another thing to do is to check your flats. If you are using a light box or other artificial flat source, this can be important. Image the same star in a grid pattern (5 wide by 3 high, for example) across your camera's field of view using the same exposure time and filter. Pick a good night with good transparency and no clouds, particularly those thin, high cirrus clouds. Take several images, at least 5, at each position and average them. Even with an SBIG camera, pick a star that allows at least a 5-second exposure to avoid variations due to shutter action. Averaging exposures at the various points reduces scintillation effects. Then, after calibrating the images, including flat-field correction using a master flat composed of at least 10 flat images, see how close to the same net counts you get for each position. Highest to lowest net count should vary by less than 1%.

A flat light source that isn't flat will introduce a systematic error in your measurements that is a function of the relative positions of your target, comp, and check stars in the field of view. I have found from experience that flat fields often aren't flat. Light boxes often suffer from radial variation, and flat-field screens may have both radial and linear gradients. Sky flats can have gradients as well, plus you may need more than one night to get a sufficient number. The camera can't move at all between flats and your data images, and that can be a real problem with sky flats, particularly if you are using the internal guiding chip in the ST-8. You may have to rotate the camera when you move from target to target. If your LX is on a fork mount, at least you don't have to rotate 180 degrees when you flip the meridian, even though you remain on the same target. Then you have to take before-flip and after-flip flats.
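The grid test described above boils down to a simple peak-to-peak comparison of net counts. A minimal sketch (all 15 counts invented, one per grid position):

```python
# Sketch of the flatness check above: averaged net counts for the same
# star imaged at each point of a 5x3 grid. All values are invented.
# Highest to lowest should agree to better than ~1% if the flats are good.
net_counts = [
    101200, 100950, 101100, 100800, 101050,   # top row of the grid
    100900, 101000, 100850, 101150, 100700,   # middle row
    100750, 101300, 100600, 101250, 100980,   # bottom row
]

spread = (max(net_counts) - min(net_counts)) / min(net_counts)
flat_ok = spread < 0.01   # the ~1% criterion quoted above

print(f"peak-to-peak variation: {spread:.2%}, flat OK: {flat_ok}")
```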

Flat fielding is a necessary pain, particularly when you are taking flats at the "stupid time of night" in the early morning hours.


Brad Walter, WBY

Averaging Multiple Image Data

One thing I didn't see in any of the e-mails concerning stacking is that, when possible, you should use an alignment technique that moves whole pixels only and doesn't redistribute flux between pixels to get a better alignment. Some sophisticated alignment algorithms try to locate the centroid to sub-pixel accuracy, rather than just picking the center of a pixel, and then redistribute flux among neighboring pixels based on, essentially, a fitted function. That may provide better alignment, but it can introduce small errors in magnitude calculations. Some programs, including MaxIm, allow you a choice of methods. A manual one-star alignment, for example, does a shift only; it doesn't scale, resample, or rotate. For images taken within a short period, say an hour or less, you will get little rotation if your polar alignment is reasonably good. For longer time periods you may need to rotate as well as shift. I am not sure if MaxIm rotates by whole pixels only. The fancier techniques all resample and align to sub-pixel "accuracy."
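A toy 1-D example makes the point above concrete: a whole-pixel shift just relabels pixels, so an aperture sum is unchanged, while a sub-pixel shift by linear interpolation smears flux into neighboring pixels and changes the aperture sum slightly. Everything here (the pixel values, the two shift helpers, the 3-pixel aperture) is invented for illustration, not taken from any of the programs mentioned:

```python
# Toy 1-D "image" of a star centered on pixel 3 (all counts invented).
pixels = [0.0, 0.0, 50.0, 400.0, 50.0, 0.0, 0.0]

def shift_whole(img, n):
    """Shift right by n whole pixels, padding with zeros: no flux mixing."""
    return [0.0] * n + img[:len(img) - n]

def shift_subpixel(img, frac):
    """Shift right by a fractional pixel via linear interpolation:
    each output pixel blends a pixel with its left neighbor."""
    out = []
    for i in range(len(img)):
        left = img[i - 1] if i > 0 else 0.0
        out.append(frac * left + (1 - frac) * img[i])
    return out

def aperture(img, center):
    """Simple 3-pixel aperture sum around `center`."""
    return sum(img[center - 1:center + 2])

whole = shift_whole(pixels, 1)       # star now centered on pixel 4
sub = shift_subpixel(pixels, 0.4)    # star smeared by a 0.4-pixel shift

print(aperture(pixels, 3))   # 500.0
print(aperture(whole, 4))    # 500.0 -- whole-pixel shift: aperture sum unchanged
print(aperture(sub, 3))      # 480.0 -- sub-pixel shift: flux leaked past the aperture
```

Total flux is still conserved by the interpolation (nothing fell off the array edge here); it is the flux *within the aperture* that changes, which is exactly the small magnitude error described above.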

Brad Walter,  WBY

AAVSO 49 Bay State Rd. Cambridge, MA 02138 617-354-0484