Is it common to bin data for time-series observations? I've been observing a fairly bright star recently (mag 7.6 or so), and a 20 s cadence gives a good exposure level for my camera without saturating. I'm getting a good 600-1000 data points per night as is, and there's a bit of jitter to them. Is it better to send them all in, or bin in time somehow?
What type of variable is this target, and what is its period? About 100 data points per period is often reasonable to define the light curve adequately. Bin the data as appropriate.
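A quick way to size the bin factor from that rule of thumb — a minimal Python sketch using the 20 s cadence mentioned above and a made-up 6-hour period (not a real target's value):

```python
# Rough sizing for the "~100 points per period" rule of thumb.
# The 6-hour period below is a made-up example, not a real target's period.
def bin_factor(period_s, cadence_s, points_per_period=100):
    """How many consecutive points to average so one period keeps
    roughly `points_per_period` binned samples."""
    raw_points = period_s / cadence_s            # unbinned points per period
    return max(1, round(raw_points / points_per_period))

print(bin_factor(6 * 3600, 20))   # 1080 raw points per period -> bin ~11
```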
Three strategies come to mind.
1. Bin the data, i.e., average groups of (say) three or four observations. Mark Blackford has a spreadsheet to do this.
2. Increase the duration of the pause between images.
3. Defocus, if possible. This allows longer exposures without saturation.
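Strategy 1 in the list above is simple enough to sketch in a few lines of Python. This only illustrates the idea of averaging groups of points — it is not Mark Blackford's actual spreadsheet, and the magnitudes below are invented:

```python
import numpy as np

# Minimal sketch of strategy 1: average non-overlapping groups of n points.
# The times and magnitudes here are invented example values.
def bin_series(times, mags, n):
    """Average non-overlapping groups of n points; the ragged tail is dropped."""
    m = (len(times) // n) * n                      # largest multiple of n
    t = np.asarray(times[:m]).reshape(-1, n).mean(axis=1)
    y = np.asarray(mags[:m]).reshape(-1, n).mean(axis=1)
    return t, y

times = np.arange(10.0)                            # e.g. seconds since start
mags = np.array([7.6, 7.7, 7.5, 7.6, 7.8, 7.6, 7.7, 7.5, 7.6, 7.7])
t, y = bin_series(times, mags, 5)                  # 10 points -> 2 binned points
print(t, y)
```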
With reference to item 1) in your post, do you know how I might get hold of this spreadsheet? I am currently writing one myself, but it would be good to check it against Mark's before I use it.
Thanks for that info. There are some clear steps for learning what to do with exoplanet imaging. Specifically, determine whether you want to pursue transit modeling with AstroImageJ (AIJ) or Exotic. Start by looking at the Exoplanet Section webpage and reading the Exoplanet Observing Manual. It will give some thoughts about an appropriate cadence for exoplanet imaging.
Decide whether you want to analyze your data with AIJ and submit the Exoplanet Report to the AAVSO, as opposed to just submitting the data to the AAVSO International Database (AID). Alternatively, you can use Exotic to analyze your data and submit a report to both the AAVSO and NASA.
AIJ is more sophisticated and will let you test whether binning the raw data helps reduce the noise in the transit light curve. Exotic is easier to use, but it is mainly geared toward refining transit times.
IOW, it is better to learn how to analyze your data and submit high quality transit time info rather than simply dumping your data into the AID.
PS: My previous comment about the variable's period is the best way to think about how many data points to submit to the AID for other types of variables. Roy mentioned the things to consider when collecting the images. Just submitting lots of data for the sake of using all your images is not a good way to think about photometry; quality will generally trump quantity.

You already mentioned 'jitter' in your light curve. IF that 'jitter' is not real but just noise, binning may certainly smooth the light curve and reduce random error. IF that 'jitter' is real, then it is fair to submit all the raw data and let a researcher use it as appropriate.

BTW, if you upload your images to VPhot, you can analyze/bin your time-series data and see how well binning smooths the light curve and improves the random error/noise. You will quickly answer your own question about binning data.
I can bin in MaxIm, but I can't bin in VPhot. In VPhot I stack or average 10 to 100 images. In my estimation, that is time averaging.
Binning, to me, is making one pixel out of 4 or 9 or 16 pixels of a single image.
Here is the method for my madness:
For exoplanets I stack 10 or 20 images for magnitudes 7 to 13, and I overlap the stacks. 1000 images in a night becomes about 330 images for the night, with a decent cadence and an S/N 2 to 3 times better than a string of 20-second exposures. 300 images become about 100 images, etc. If the target and comps are bright, use a time delay between exposures. Uploading and stacking 1000 images every day gets tiring.
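The 1000-to-about-330 arithmetic above is consistent with a sliding ("overlap") stack advanced a few images at a time. The step of 3 below is my assumption, not something stated in the post; it is simply what makes the counts come out close:

```python
# Sketch of "overlap stacking" as a sliding window: a stack of `stack` images
# advanced by `step` images each time. The step of 3 is my assumption; it is
# what makes 1000 raw images come out near the ~330 stacks mentioned above.
def overlap_stack_count(n_images, stack, step):
    """Number of stacked frames a sliding window produces."""
    if n_images < stack:
        return 0
    return (n_images - stack) // step + 1

print(overlap_stack_count(1000, 10, 3))   # -> 331, i.e. "about 330"
```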
When I first began to stack, I wondered how many images to stack and how many was too many. A gut feeling said the optimum number was between zero and infinity. I made up a spreadsheet to compare S/N improvement vs. magnitude vs. number of images stacked. It works only for my 305 mm aperture.
It turns out that seven stacked images is generally a good number to begin with, but 10 is easier to manipulate in VPhot without getting cross-eyed. Things don't improve much above 20 stacked images for stars with S/N ~100. But for faint stars on an image, I routinely stack 30 or 40 images; the fainter you go, the larger the error. Another use for an average of 20 to 100 images is the I band for an LPV: it averages out the twinkle, giving one decent data point per night. Works pretty well for me.
When the measurement error is plotted against the number of images averaged, one gets a curve of diminishing returns, i.e., diminishing improvement in the VPhot-reported error.
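That diminishing-returns behavior is just the 1/√N law for averaging N independent, equal-quality measurements. A small sketch, with an invented 0.02 mag per-image error:

```python
import math

# Averaging N independent, equal-quality measurements reduces the random
# error by 1/sqrt(N) -- the diminishing-returns curve described above.
# The 0.02 mag per-image error is an invented example value.
def error_after_averaging(sigma_single, n):
    return sigma_single / math.sqrt(n)

sigma = 0.02
for n in (1, 7, 10, 20, 40):
    print(n, round(error_after_averaging(sigma, n), 4))
```

Note how going from 1 to 7 images roughly halves the error twice over, while going from 20 to 40 barely moves it — which matches the "don't bother much past 20" experience above.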
To make sure that I am not stacking too many images for each data point (under-sampling in time, or aliasing), I occasionally compare my stacked series to an unstacked series of the same rapidly changing variable. If the wave amplitude and phase are essentially the same, I use the stacked/averaged series to take advantage of the improved S/N. I have not found a light curve that changes so rapidly that it needs to be sampled 3X per minute. At most I find 12 to 20X per hour is adequate for most jobs, including Delta Scuti stars. I don't do FRBs, gravity flashes, or millisecond pulsars, so my cadence seems appropriate to my data.
Another benefit of stacking vs. a single long exposure is that shorter exposures keep me in the CCD's linear range. Another is that I can throw away a few satellite-streaked images and just average around them. And another is that I avoid artificially racking up big numbers of observations.
For the fast-sampling crowd, I want to work up the physics/engineering to see if signals and device sensitivities match up to get RF amplitude data below a couple of GHz. AAVSO can't use a billion data points per second, but it would be fun to do. Need more hours per day; could gain another 37 minutes 22 seconds per day working on Mars.
Ray et al:
This is an example of the complexity of our language. There are at least two distinct technical definitions of 'bin'.
1. Bin can mean 'combining native pixels' in your camera to create larger pixels. When you bin 2x2 in MaxIm, you combine 4 native pixels to create one larger pixel. This is a useful tool to match your pixel size to your seeing (FWHM).
2. Bin can alternatively mean 'combining data points' to average the individual values. You can, in fact, bin in VPhot. Look at the upper left corner of the Time Series Results page. Note the pull-down box labeled 'bins'. With this box, you can combine/average groups of data points/magnitudes (e.g., bins 2). This helps smooth out/reduce noise in your time-series light curve, which is very useful when scintillation is present in your short-exposure images.
3. Stack is a different but similar method of 'combining images' to average (or sum) the ADU counts in each of the corresponding pixels in the images. Nominally, stacking yields an SNR increased by the square root of the number (N) of images stacked together.
Both items 2 and 3 are considered 'averaging' in the time domain. Item 1 would be considered 'combining' in the spatial domain.
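The distinction between meaning 1 and meanings 2/3 is easy to show in code. Here is a minimal sketch of spatial 2x2 binning on a made-up 4x4 image, summing each block the way a camera combines charge:

```python
import numpy as np

# Meaning 1 in miniature: spatial 2x2 binning sums each 2x2 block of native
# pixels into one larger pixel (as a camera combines charge). The 4x4 image
# is a made-up example.
def bin_pixels_2x2(img):
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(bin_pixels_2x2(img))   # 4x4 -> 2x2; each output pixel sums 4 inputs
```

Time binning (meaning 2) would instead average along the observation axis of a 1-D series of magnitudes, leaving each image's pixels untouched.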
Never tried that. Always thought Bin 1 was a setting for un-binned data, so I left it at one.
Just tried it on a list of 17 stars in 112 images. Darned thing crashed with yellow screen-o-death. Maybe the list needs to be just one star?
If it works for a single target star, I suppose I'll need to bin 5 about 5 times to get a bin 20. Then do it 17 times. Or maybe stacking 20 is quicker?
Then I won't get so confused when I fire up the new camera that will bin 2x2 to avoid long upload times.