time-series stacking; uncertainty

Affiliation: American Association of Variable Star Observers (AAVSO)
Fri, 08/09/2013 - 19:03

Hi everyone:

When doing a single-night time-series on a star: if I decide that I want to stack rather than submit a single data point for each image, does AAVSO have a preferred protocol for data points and uncertainty measurements?

As an example, let's say I have 20 images.  I could stack them in groups of four to get five data points: data point 1 is from images 1-4, data point 2 is from images 5-8, data point 3 from images 9-12 and so on.

But could I overlap the images, e.g. data point 1 from images 1-4, data point 2 from images 2-5, data point 3 from images 3-6, etc.?

For either of these scenarios, should my uncertainty for each submitted data point be determined from the entire night's set of images (e.g. the SD of the comp star over the whole session), or only from the subset of images that creates each point (e.g. for the first data point, the SD of the comp star for images 1-4)?

Thanks.

Steve Smith SSTB

Affiliation: American Association of Variable Star Observers (AAVSO)
time series averaging

Hi Steve,

Echoing what Matt said, the first case, where you average images 1-4, 5-8, 9-12, and so on, is the best approach; it is called "box averaging". The other scheme you mention is commonly called a "running average"; there, each datapoint depends on its neighbors, so the points are not statistically independent.
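A minimal sketch of the difference in Python with NumPy (the 20 magnitudes are made-up values, just for illustration):

    import numpy as np

    # 20 hypothetical instrumental magnitudes, one per frame.
    mags = np.array([12.01, 11.99, 12.02, 12.00, 11.98, 12.01, 12.03, 12.00,
                     11.99, 12.02, 12.01, 11.97, 12.00, 12.02, 11.99, 12.01,
                     12.00, 11.98, 12.02, 12.01])

    # Box average: disjoint bins of 4 -> 5 independent datapoints.
    box_points = mags.reshape(5, 4).mean(axis=1)

    # Running average: overlapping bins (1-4, 2-5, ...) -> 17 points,
    # each sharing 3 frames with its neighbor, so they are correlated.
    run_points = np.convolve(mags, np.ones(4) / 4.0, mode="valid")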

Regarding uncertainties: generate the uncertainty for each set of averaged measures from the comp/check star magnitudes in that set. That will give a more relevant uncertainty than a value based on all of the frames for the night.
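Continuing the sketch above, the per-bin error would come from the check-star magnitudes in that bin alone (values again made up):

    # Check-star magnitudes for the same 20 frames, binned in fours.
    check = np.array([11.51, 11.49, 11.50, 11.52, 11.48, 11.51, 11.50, 11.49,
                      11.52, 11.50, 11.49, 11.51, 11.50, 11.48, 11.52, 11.50,
                      11.49, 11.51, 11.50, 11.52]).reshape(5, 4)

    # Per-bin sample standard deviation (ddof=1 matches Excel's STDEV.S):
    # one uncertainty per submitted datapoint.
    per_bin_err = check.std(axis=1, ddof=1)

    # For contrast, the whole-night value, which is less relevant:
    whole_night_err = check.std(ddof=1)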

You should usually do this box averaging when you take many frames during the night but either the signal-to-noise of a single frame is low or the variable is not expected to vary on such a rapid timescale. For example, a Cepheid might have a 4-day period, or about 100 hours. The usual rule of thumb is that you only need about 100 measures per cycle to produce a good light curve for a smoothly periodic variable, which means you don't need to create more than one datapoint per hour. Any finer time resolution adds little information, while box averaging gives you a better uncertainty estimate, which in this case is more valuable than the extra resolution. Submitting observations of such a 4-day Cepheid with 5-second time resolution just boosts your observer totals without providing any real scientific return. There are exceptions to this, of course, but keep it in mind.
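As a back-of-the-envelope check of that rule of thumb (the 60-second exposure time is just an assumed example):

    period_hours = 4 * 24                # 4-day Cepheid, roughly 100 hours
    points_per_cycle = 100               # rule-of-thumb sampling
    cadence_hours = period_hours / points_per_cycle   # ~1 datapoint per hour

    frames_per_hour = 60                 # assuming 60 s exposures back to back
    frames_per_bin = int(frames_per_hour * cadence_hours)  # ~60 frames per box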

Arne

Affiliation: American Association of Variable Star Observers (AAVSO)
Uncertainty of Bin Averages

When calculating the uncertainty, you want to report the uncertainty of the underlying sample, not the uncertainty of the mean. You can use the sample standard deviation of the check-star magnitudes (using the 1/(N-1) factor, which is STDEV.S in Excel) if the check star is within about +/- 1 magnitude of the target. If not, and the target's STDEV.S is less than the check star's, you might want to use the standard deviation of the target-star magnitudes.

If the check star is much more than a magnitude different from the target and the target's standard deviation is significantly larger than the check star's, you may have to use the CCD-equation error value for the target given by many photometry programs. However, this often underestimates the total error. If you are doing ensemble photometry, you want to combine the CCD-equation error in quadrature with the zero-point error of your ensemble comps. The zero-point error is often larger than the CCD-equation error and should not be ignored. Many programs that do ensemble photometry use a least-squares fit of the comp photometric values to the values you enter for them (or a weighted least-squares fit if you weight the comps), so that the target magnitude determined from each comp is the same value. If I recall correctly, VPhot gives you either the differences or the RMS of the differences in the target values obtained using the comp-star magnitudes you enter, which is another way of determining the zero-point error.
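A sketch of those choices in Python with NumPy (all magnitudes and error values are invented placeholders, not measurements):

    import numpy as np

    # One bin of four frames; check- and target-star magnitudes (made up).
    check_mags  = np.array([11.51, 11.49, 11.50, 11.52])
    target_mags = np.array([12.03, 11.98, 12.01, 12.00])

    sd_check  = check_mags.std(ddof=1)    # STDEV.S, the 1/(N-1) form
    sd_target = target_mags.std(ddof=1)

    # Fallback when neither standard deviation is usable: combine the
    # CCD-equation error with the ensemble zero-point error in quadrature.
    ccd_eq_err = 0.008                    # from your photometry program (assumed)
    zero_point_err = 0.012                # RMS of comp-star residuals (assumed)
    total_err = np.sqrt(ccd_eq_err**2 + zero_point_err**2)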

In general, it is better to calculate the standard error empirically, directly from the magnitudes of the individual binned measurements, rather than indirectly from error-estimating formulas, since those formulas don't include all possible contributions to the error.

Brad Walter, WBY