Hello! I've started to get the mechanics of taking images consistent. For example, for my time series of the Cepheid ASAS182612, the error range seems to be about 0.01 to 0.05 mag with single 60-second images.
Another example: T Pyx, being dimmer, has a larger error range, about 0.1 to 0.25 mag, with one data point having an error of about 0.5 mag. These are typically single images, or stacks of two 60-second images, taken through my 8-inch LX200 Classic with an ST-402ME and BVIc filters.
What kind of error range should I strive for with my data? With that target in mind, I could try to estimate the number of images to stack. As I plan my imaging runs, is there a way to estimate the number of images I should obtain, or to estimate the relationship I might expect between the number of images stacked and the error range for my system? I realize this would depend on the magnitude of the variable being imaged, and that there would be diminishing returns in the narrowing of the uncertainty range as more images are stacked. Thank you and best regards.
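(Adding a rough back-of-the-envelope sketch of the relationship I'm asking about, in case it helps frame the question. The usual first-order rule, assuming the per-image errors are random and statistically independent, is that the uncertainty of a stack of N images shrinks like 1/sqrt(N), which also captures the diminishing returns: halving the error takes four times as many frames. The function names here are just my own illustration, not from any particular package.)

```python
import math

def stacked_error(sigma_single, n_images):
    """Expected uncertainty after stacking n independent frames.

    Assumes the per-frame error is dominated by random noise
    (photon/read noise), so it scales as 1/sqrt(N). Systematic
    errors (flat-fielding, comparison-star issues) do NOT shrink
    this way and set a floor on the achievable uncertainty.
    """
    return sigma_single / math.sqrt(n_images)

def frames_needed(sigma_single, sigma_target):
    """Frames required to reach a target uncertainty, rounded up."""
    return math.ceil((sigma_single / sigma_target) ** 2)

# Example with a T Pyx-like frame: ~0.20 mag single-image error.
print(stacked_error(0.20, 4))     # stack of 4 -> 0.10 mag
print(frames_needed(0.20, 0.05))  # 0.05 mag target -> 16 frames
```

So, if this simple model holds for my system, going from 0.20 mag to 0.05 mag would take roughly 16 stacked frames, and improving further gets expensive quickly.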