Calculating Transformation Coefficients and the Use of Comp Stars
This is a continuation of a discussion that started in the thread "Which Software to use." Maxim DL is commonly used software for photometry, and the discussion digressed to systematic error that may be built into transformation coefficients derived using Maxim, since it doesn't provide "raw" magnitudes (defined as -2.5*LOG(netcounts*GAIN/EXPTIME)); it requires a comp star.
When calculating a transformation you plot, for example, V - vinst vs. B-V for stars spanning a wide range of colors. Maxim provides
vinst_target = vraw_target - vraw_comp + V_comp (capital "V" denotes the standard magnitude).
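To make the two definitions concrete, here is a minimal sketch of the raw magnitude defined above and Maxim's comp-referenced magnitude (Python; the function names and example numbers are mine, not Maxim's):

```python
import math

def raw_mag(net_counts, gain, exptime):
    """Raw instrumental magnitude: -2.5*log10(netcounts*gain/exptime)."""
    return -2.5 * math.log10(net_counts * gain / exptime)

def maxim_style_vinst(net_target, net_comp, V_comp, gain, exptime):
    """Comp-referenced magnitude: vinst = vraw_target - vraw_comp + V_comp."""
    return (raw_mag(net_target, gain, exptime)
            - raw_mag(net_comp, gain, exptime)
            + V_comp)

# Equal counts -> the target comes out at exactly the comp's standard magnitude.
print(maxim_style_vinst(10000, 10000, 12.0, 1.5, 60.0))  # 12.0
```

Note that gain and exposure time cancel in the vraw_target - vraw_comp difference, so only color-dependent effects survive in the comp-referenced value.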
Here is my concern using Maxim:
If the target and the comp are the same color, then whatever color-based systematic error exists (because my filter + camera passband is not rectangular and extinction is frequency dependent) is the same for both stars, and the error cancels out. However, since you are plotting stars of a wide range of colors on the same graph to derive a slope, you end up with different systematic errors for the different-color stars being measured.
If I haven’t goofed up somehow, that will cause the V-vinst vs. B-V curve to tilt because the systematic error will vary with the color difference between target and comp. Further, the amount of systematic error for a given color difference will vary with extinction and instrumental frequency response.
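To make the tilt concrete, here is a small numerical sketch (Python; the 0.05 color term, 0.20 zero point, comp color of 0.5, and 0.02 mag/mag systematic are all invented for illustration). Fitting V - vinst against B-V recovers the color term; adding a systematic proportional to the target-comp color difference shifts the fitted slope:

```python
def ls_slope(x, y):
    """Least-squares slope of y vs. x (plain Python, no libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# Invented standard-field data: assume a true color term of 0.05 and a
# zero point of 0.20, so v_inst = V - 0.05*(B-V) + 0.20.
BV = [0.0, 0.3, 0.6, 0.9, 1.2]
V = [11.0, 11.5, 12.0, 12.5, 13.0]
v_inst = [v - 0.05 * c + 0.20 for v, c in zip(V, BV)]

# Slope of V - vinst vs. B-V is the transformation coefficient.
slope = ls_slope(BV, [v - vi for v, vi in zip(V, v_inst)])  # ~0.05

# Hypothetical color-dependent systematic: 0.02 mag per mag of color
# difference from a comp at B-V = 0.5. The fitted slope tilts to ~0.03.
v_biased = [vi + 0.02 * (c - 0.5) for vi, c in zip(v_inst, BV)]
slope_biased = ls_slope(BV, [v - vb for v, vb in zip(V, v_biased)])
print(slope, slope_biased)
```

In this toy case a 0.02 mag/mag systematic changes the fitted coefficient from 0.05 to 0.03, which is the tilt described above.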
We are trying to derive coefficients to apply consistently over a reasonably long period of time to correct our instrumental response to the standard system. The other two factors vary from night to night and field to field. Variation due to extinction is removed by adjusting the vinst values to extra-atmospheric values, vinst_o. Systematic error due to the difference in color between the comp and the various measured stars wouldn't matter if I always used the same-color comp in future photometry: I would always be adding the same offset for a given color of target star, and the extinction coefficient with that offset included would remove it. But that isn't going to happen; comps will vary over a B-V range of 0.7 magnitudes, maybe more. However, if you just use the raw magnitudes of the individual stars, where mraw = -2.5*LOG(netcounts*gain/exptime), no systematic error is introduced by the color difference between the measured stars and the comp, since there is no comp.
Ok. So much for theoretical musing. I am reasonably sure that using the vinst_o values is important due to fairly large variability in extinction from position and sky quality. Arne’s book seems to confirm that. It states in several places that you convert instrumental values to extra-atmospheric values before applying transformations.
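For reference, the standard first-order conversion to extra-atmospheric values is vinst_o = vinst - k'X, where k' is the nightly extinction coefficient and X the airmass. A minimal sketch (plane-parallel sec(z) airmass approximation; second-order color-extinction terms omitted; the numbers are invented):

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation: X = sec(z), z = zenith angle."""
    return 1.0 / math.cos(math.radians(90.0 - altitude_deg))

def extra_atmospheric(v_inst, k_prime, altitude_deg):
    """First-order extinction correction: vinst_o = vinst - k' * X."""
    return v_inst - k_prime * airmass(altitude_deg)

# A star at 30 deg altitude (X = 2) with k' = 0.25 mag/airmass:
print(extra_atmospheric(12.0, 0.25, 30.0))  # ~11.5
```

The sec(z) form is adequate at moderate airmass; near the horizon a refined airmass formula would be needed, but that doesn't change the argument here.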
I am not at all certain that any systematic error in transformation coefficients introduced by the color of the comp star is significant. Since it is a factor that can be eliminated, I just decided to use a procedure that eliminates it. The question remains whether it is worth it.
Maybe I am just all wet. Maybe I am concerned about the insignificant. This seems to be a topic that would benefit from comment by HQA.