Does anyone know how MaxIm DL computes magerr?
I *think* it's just some combination of 1/SNR (or one of the variations of 1/SNR) of the target and the comp, probably taken in quadrature.
If I really want to know my error, I use another method. If I am measuring a star that isn't varying very rapidly, I will take the average and standard deviation (sd) of 3 to 5 images. The sd is then my error.
If I am taking a time series of a rapidly varying star like a cataclysmic variable, I will calculate k-c (check minus comp) and take the running sd of k-c, three images at a time.
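That running-sd idea can be sketched in a few lines of Python (a minimal illustration of the procedure described above, not anyone's actual reduction script; the function name and window size are my own choices):

```python
import statistics

def running_kc_error(k_mags, c_mags, window=3):
    """Empirical per-point error for a time series: form k-c
    (check minus comp) and take the standard deviation over
    each run of `window` consecutive images."""
    kc = [k - c for k, c in zip(k_mags, c_mags)]
    return [statistics.stdev(kc[i:i + window])
            for i in range(len(kc) - window + 1)]
```

Because k and c are both constant stars, any scatter in k-c is measurement noise, which is what makes this a clean empirical error estimate.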
Usually I find that these error calculations are reasonably close to MaxIm's errors with MaxIm understating the error a bit.
I have never found a formula or a complete explanation. I can only assume they are using the CCD noise equation, but I don't know how complete a version they use or exactly how they do the background subtraction. If you e-mail them you might get an answer, and there is a MaxIm DL Yahoo support group where someone may be able to answer. The manual states that it "performs a careful background subtraction using median-mean techniques, and also takes partial pixels into account when integrating the light inside the measurement aperture." That is the most complete explanation I have found, and it is pretty cryptic.

In any case, I am sure it is measuring stochastic uncertainty (random error) only: no zero point error (you need several comps to estimate that) and no systematic errors from whatever cause, such as differential conditions across the field of view from focus, flat field errors, sky variations, and probably a dozen more things I haven't thought of.
The most complete version of the CCD error equation I have seen is in Steve Howell's book:

NOISE = SQRT(Nstar + npix*(1 + npix/nB)*(NS + Nd + Nr^2 + G^2*sigmaf^2))
All of the following flux values are in e-, not ADU, except for sigmaf, which is converted to e- by G:
Nstar = total flux from the star in the measurement aperture
npix = number of pixels in the measurement aperture
nB = number of pixels in the background annulus
NS = mean background sky level per pixel
Nd = mean dark current per pixel
Nr = read noise in electrons (the total for the integration, not the per-second value in the specs); notice this is squared inside the square-root radical because it is NOT Poisson noise
G = Gain of the camera
sigmaf = the digitizing noise of the A/D converter i.e. the error in discriminating an analog value between adjacent discrete ADU values.
So all of the N values except the read noise are Poisson variables, which means the standard deviation = SQRT of the mean value (N). The Poisson noise of the star signal is therefore SQRT(Nstar), and NS + Nd is the mean background Poisson flux per pixel from the sky and from dark current. But since Nr is the read noise itself, not a Poisson count, it is squared inside the square-root radical; the same is true for the digitization noise, which isn't a Poisson variable either.

All the noise terms except Nstar are per-pixel values determined in the background annulus, so they are multiplied by the number of pixels in the measurement circle times a correction factor (1 + npix/nB). This is a statistical correction: the error estimated from a sample underestimates the real error of the population, but the greater the number of background pixels used to estimate the error compared to the number of pixels in the measurement circle, the smaller the correction becomes.
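As a sanity check, here is a minimal Python sketch of that equation (variable names are mine; the sigma_f default of 1/sqrt(12) ≈ 0.289 ADU is the standard rms digitization error, which is an assumption, not something stated above):

```python
import math

def ccd_noise(n_star, npix, n_b, n_sky, n_dark, read_noise,
              gain, sigma_f=0.289):
    """Howell-style CCD noise equation. All inputs in electrons,
    except sigma_f (ADU), which is scaled to electrons by the gain."""
    # Per-pixel background terms: sky and dark are Poisson, so their
    # mean IS their variance; read noise and digitization noise are
    # not Poisson, so their sigmas are squared explicitly.
    per_pixel = n_sky + n_dark + read_noise**2 + (gain * sigma_f)**2
    # (1 + npix/n_b) corrects for estimating the background from a
    # finite sample of n_b annulus pixels.
    return math.sqrt(n_star + npix * (1 + npix / n_b) * per_pixel)

def snr(n_star, noise):
    """Signal-to-noise ratio of the stellar measurement."""
    return n_star / noise
```

With all background terms zeroed out, the result collapses to SQRT(Nstar), the pure Poisson floor, which is a useful check that the bookkeeping is right.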
OK, this gives the uncertainty (random error, or sigma) in electrons. It is different for each star you measure, and

SNR = Nstar/NOISE
So in differential photometry you are calculating MagT = rawMagT - rawMagC + seqMagC.
Each of the raw mags is -2.5*LOG(Nstar*(1 ± 1/SNR)) = -2.5*LOG(Nstar) - 2.5*LOG(1 ± 1/SNR), including stochastic uncertainty, which is the term containing 1/SNR.
As long as SNR >= 100, the uncertainty terms are -2.5*LOG(1 + 1/SNR) ≈ -1/SNR and -2.5*LOG(1 - 1/SNR) ≈ +1/SNR, to within less than 1 millimag. The approximation is within two millimags with SNR as low as 50. The total uncertainty from measuring the target and comp is SQRT(UNCERTT^2 + UNCERTC^2): independent random errors add in quadrature.
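Those last two steps are easy to verify numerically (a small sketch, with function names of my own choosing):

```python
import math

def mag_err_from_snr(snr):
    """Exact magnitude uncertainty from SNR: 2.5*log10(1 + 1/SNR).
    For SNR >= 100 this agrees with the simple 1/SNR rule to
    better than a millimag."""
    return 2.5 * math.log10(1 + 1.0 / snr)

def combined_err(err_target, err_comp):
    """Independent random errors add in quadrature."""
    return math.hypot(err_target, err_comp)
```

For example, at SNR = 100 the exact value is about 0.0108 mag versus 0.0100 mag from 1/SNR, consistent with the sub-millimag claim above.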
The thing you have to keep in mind is that the CCD noise equation almost always significantly underestimates the true uncertainty in your measurements. Take a series of, say, 10 V exposures of a standard field with multiple stars at SNR of 200 or better; NGC 7790 is well situated in the northern hemisphere right now. Use MaxIm to do differential photometry on the field, picking a central bright standard star as your comp. Take the mean and standard deviation of the 10 magnitude values for each star and see how that compares to the MaxIm magerr. The standard deviation calculated from the data points will probably be significantly larger than the error estimate MaxIm spits out. I would use the error estimate from the program only as a last resort, when I don't have at least 3 (and preferably 5) measurements of a similar-magnitude check star from which to empirically calculate the standard deviation of the magnitude measurements.
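That comparison is a one-liner per star once the magnitudes are tabulated. A minimal sketch (the function and its input layout are hypothetical, just to show the bookkeeping):

```python
import statistics

def empirical_vs_reported(mags_per_star, reported_errs):
    """Compare the empirical scatter of repeated measurements with
    the per-image error the software reports.
    mags_per_star: {star_id: [mag from each exposure]}
    reported_errs: {star_id: typical reported magerr}
    Returns {star_id: (mean_mag, empirical_sd, sd / reported_err)}."""
    out = {}
    for star, mags in mags_per_star.items():
        sd = statistics.stdev(mags)
        out[star] = (statistics.mean(mags), sd, sd / reported_errs[star])
    return out
```

A ratio well above 1 in the third slot is exactly the underestimate described above.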