Comparison Star Uncertainties
I post this as a necessary clarification of this topic and as important information for the ongoing forum thread (Home » Forums » Variable Star Observing » VPHOT): Uncertainty estimates in ensemble photometry
What should be a clear point of understanding for all photometrists is how relatively unimportant the comparison star's uncertainty (as published in the photometry table) is to differential photometry under normal circumstances. The point of that process, after all, is to produce measures of changes in the target's magnitude, which are generally compiled into a subsequent light curve.
Let’s deal with a very simple situation. Assume a comp star with a V value of 12.50 (and for simplicity, again, we shall limit this example to two decimal places) with a published uncertainty of 0.10, which allows 21 potential values (from 12.40 to 12.60 in steps of 0.01) for what the “actual” value might be.
For simplicity's sake, I am only going to show 5 of those potential values for the purpose of illustration.
Please examine the attached table, which, I am embarrassed to say, I could not manage to format within this document.
Obviously, this shows that no matter where along the range of uncertainty the “actual” comp value might lie (and might be used for computing the target V value), the measured difference between the two observations is always going to be the same! Keep in perspective that the comp value's uncertainty does not vary between measurements, and that all observers who measure the target are going to measure the same 0.11 difference, everything else being equal (same time, for example), apart from their own measurement uncertainty, which is very important to the difference measurement.
The 0.11 difference between the two observations (which needs to be as precise as can be accomplished) is really the objective, not whether the “actual” target value might be 11.51 or 11.71 or something in between! Think about it.
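The point can be sketched numerically. This is a minimal illustration with assumed numbers (not the attached table's actual values): two raw differential measurements of the same target, evaluated against five candidate comp values spanning the published 12.50 ± 0.10 range.

```python
# Two (v - c) differential measurements of the target at two epochs.
# These are assumed numbers chosen so that, with C = 12.50, the target
# comes out near the 11.51 value mentioned above; they differ by 0.11.
diffs = (-0.99, -0.88)

# Five candidate "actual" comp values within the published uncertainty:
comp_candidates = [12.40, 12.45, 12.50, 12.55, 12.60]

deltas = []
for C in comp_candidates:
    V1 = diffs[0] + C  # target magnitude at epoch 1
    V2 = diffs[1] + C  # target magnitude at epoch 2
    # The absolute values V1 and V2 shift with the assumed comp value,
    # but the measured change between epochs does not:
    deltas.append(round(V2 - V1, 2))

# deltas is the same 0.11 for every candidate comp value,
# because the constant C cancels out of V2 - V1.
```

Whatever constant is added to both epochs cancels in the difference, which is the whole argument in two lines of arithmetic.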
As amateurs or professionals we want all the data we work with to be as accurate and precise as can be accomplished using the best tools for the purpose. I am not suggesting that high uncertainties are OK with comparison stars, only that, for the purpose of what we are trying to accomplish, they are probably not relevant to our main objective of measuring target variability.
Those of us on the sequence team are highly conscious of uncertainty values when we select comps. As I tell new members of the team: visual observers can accurately measure differences to 0.10, so you would think that CCD observers can achieve an uncertainty of at least 0.05 if not better, with ~0.01 being ideal; we should be conscious of this when selecting comparison stars.
However, we have to play the hand we are dealt; sometimes the only comp data we can access, at a given point in time, may have high uncertainties. Without getting into the topic of causes, covered quite well elsewhere, observers should keep in perspective that the availability of calibrated data keeps improving with both new surveys and updating of previous surveys.
Therefore, if the observer is dissatisfied, for any reason, with a portion of any existing sequence, or with the whole of the sequence, they can simply file a CHET. While CHETs are examined regularly by the team, there is not always an immediate solution available, but they all continue to be reviewed (currently and principally by our Team Leader, Tom Bretl) for a possible solution as time goes forward.
PS: IMO, if the observer's uncertainty is relatively low, a single comp star (close in color to the target and not too distant in airmass) plus a check star should produce an excellent differential photometric measurement; on the other hand, if an observer's uncertainty is relatively high, or the comp is distant in terms of color and airmass, then an ensemble might lend itself to statistically improving the measurement.
To simplify, and to be more blunt:
Comp star uncertainties (as posted within the photometry table) have no effect on the measured variability of any target. Variability depends only upon the changes that occur from measurement to measurement (and those measurements are affected by many potential "error" sources).
(Comment initially withdrawn)
Sorry, I cannot help it; the above and below posts are extremely useful to me in informing my future decisions concerning the sources of comparison stars. That said, I wish to remark on two topics: first, the posters' oversimplifications regarding statistics and signal processing, and second, the universal validity of relative magnitude measurements.
The simple examples cited to make the case that bias does not matter completely ignore the effects of noise, which can never be eliminated, and display a misapplication of the concept of uncertainty. Professionally made measurements of stellar magnitudes involve repeated trials, ideally dozens and dozens of measurements of each target, followed by the compilation of statistical quantities (the mean and standard deviation) which are then published. Having that much data allows one to test, at a specified level of significance, whether the deviations from the mean are represented by a particular distribution, and therefore to give meaning to the standard deviation in a probabilistic sense. It is absolutely untrue that a standard deviation of 0.10 means that the value can assume only 21 discrete values (with resolution 0.01). Moreover, it is even less correct to posit a standard deviation based on a single measurement, or on averages of averages, both commonplace in seqplot.
In my many decades working with customers like NRL and DARPA on sensor systems, I am quite familiar with the process of estimating system performance. In a radar, for example, we calculate Cramer-Rao bounds for angle, range, and range-rate resolution as part of the design process to ensure that adequate performance is potentially achievable. But we never delude ourselves into believing that the predicted performance will be realized, even if we have laboratory measurements of critical parameters, because we are aware that the real world will not be exactly in compliance with our model assumptions and will introduce artifacts outside the system model. That is why we do testing. The analogy here is that no matter how well you characterize your photometric system, it is foolish to believe that the actual performance will match the characterization.
Finally, we come to the utility of relative measurements. What the posters are saying is true under certain circumstances. In the search for exoplanets, for instance, professional researchers often employ ensemble photometry with dozens of comparison stars in their efforts to detect millimagnitude fluctuations due to a planet transiting a star in the field. And there may be other areas where measuring the actual magnitude is unimportant; I do not know. But I do know this: in the fields of my own interest the goal is to measure luminosity, and having bias errors frustrates that goal. What kinds of stars am I talking about? Cepheids, RR Lyraes, Type Ia supernovae. All of these are elements of the cosmic distance scale, and accurate knowledge of their luminosities is crucial to the accuracy of that scale. Then there are CVs, for which luminosity information is critical to developing a detailed understanding of the processes involved, the accretion rate, and the dynamical evolution of the stars. So to say that all one cares about is the period and relative amplitude of variables is a gross oversimplification. Your argument violates Einstein's maxim that "Everything should be made as simple as possible, but not simpler."
Adios, I'm out of here
The fundamental issue in making uncertainty estimates for our measurements is that the uncertainty of the comp star sequence magnitude has no effect on the uncertainty of our observations and should not be included in the observer’s uncertainty estimates. The reason is that the difference between the comp sequence magnitude and the “True” magnitude of the comp is a systematic error, not uncertainty. That difference, whatever it is, is a constant offset from the True magnitude, PROVIDED, of course, that it doesn’t result from variability.[1] The uncertainty of the comp star sequence magnitude is simply a quantitative statistical evaluation of the precision of the sequence magnitude, i.e., the smaller the uncertainty, the smaller the difference between the sequence magnitude and the True magnitude is likely to be.
I created a table that makes the same point as Tim's, but in a slightly different format, tracking the flow of a basic magnitude calculation. This was clearer for me, and I also want to emphasize the distinction between the "True" magnitude row of the table and the other rows containing sequence magnitudes offset from the True magnitude. I think it is clear that the magnitude offset of the comp from the True magnitude has no effect on the measured variability and, therefore, the light curve. Both depend only on the change from measurement to measurement.
Certainly, the sequence magnitude affects the “Trueness” of the measured magnitudes – how close they are to the “True” value. That is clear from the rows of the table which calculate Target V mags from the same two differential observations by adding sequence magnitudes having different offsets from the true magnitude. However, for a given comp (or ensemble of comps) that difference is a constant, systematic offset, not a randomly changing value, and therefore the changes in magnitude between observations remain constant. Of course, the differential magnitude uncertainties include the uncertainty of the comp as well as the target observation used to calculate the differential magnitude.
In ensemble photometry, the offsets of comp star sequence magnitudes from their “True” values tend to increase the empirically determined uncertainty of the target star measurement due to the scatter these offsets introduce in target magnitudes calculated for individual comps around the mean value. However, the uncertainties provided for sequence magnitudes are still not expressly included in the uncertainty calculation. The effect of the offsets from True values is empirically included by the least squares fitting process.
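A small simulation sketch shows how these per-comp offsets surface as empirical scatter. The offsets and magnitudes below are assumed for illustration, and the instrumental differences are taken as noise-free so that the offsets are the only error source in play:

```python
import statistics

# Hypothetical ensemble: "True" comp magnitudes and the (unknown)
# systematic offsets of their sequence magnitudes from those values.
true_comps = [12.10, 12.55, 13.02, 12.80]
offsets    = [+0.03, -0.02, +0.05, -0.04]
seq_comps  = [t + o for t, o in zip(true_comps, offsets)]

true_target = 11.80
# Ideal instrumental differences (target - comp), noise-free for clarity:
inst_diffs = [true_target - t for t in true_comps]

# Per-comp target estimates computed with the *sequence* magnitudes.
# Each estimate equals true_target plus that comp's offset, so the
# offsets reappear directly as scatter among the estimates.
estimates = [d + s for d, s in zip(inst_diffs, seq_comps)]

mean_est = statistics.mean(estimates)    # ensemble result
scatter  = statistics.stdev(estimates)   # empirically determined uncertainty
```

The published sequence uncertainties never enter the calculation; the offsets show up empirically in `scatter`, which is the sense in which the fitting process "includes" them.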
[1] Even if the uncertainty in the comp value is primarily due to variability, the difference is still systematic unless the brightness of the star varies randomly; but it is then time dependent, which you want to avoid. That is one of the reasons for not selecting stars as comps if they have larger uncertainties compared to other stars of similar magnitude in the source survey data.
Brad Walter, WBY
In reading my previous post I thought I should make a small clarification. When I wrote
"In ensemble photometry, the offsets of comp star sequence magnitudes from their “True” values tend to increase the empirically determined uncertainty of the target star measurement due to the scatter these offsets introduce in target magnitudes calculated for individual comps around the mean value."
I meant only that uncertainty is increased relative to what it would be if there were no offsets in comparison star sequence magnitudes from their True values. I was not implying that it necessarily makes the uncertainty worse than that obtained using a single comp star. Ensembles often reduce empirically determined uncertainty (as opposed to uncertainty determined analytically from the CCD error equation) relative to single-comp-star measurements.
Uncertainty estimates of magnitudes measured using a single comp star that are determined analytically from the CCD error equation typically underestimate uncertainty because they omit potential sources of uncertainty that are included in empirically determined uncertainty estimates.
Brad Walter, WBY
In the scientific astronomy realm, one should develop an error budget and define the sources and types of errors for the observing program one plans to execute.
I am observing long-period Cepheids to get times of maxima, so the accuracy of the catalog is of secondary importance. I do want good SNRs when observing, to get good precision in the magnitudes. For the comparison star I just want consistency (just the constant number, please). If I need the highest accuracy, to understand some direct physical process, then the most work needs to be done both observationally and in the final reduction. Then I want the star's exo-atmospheric flux over time to be well known, constant, and with a minimized error. Data reduction must also account for airmass, extinction, color, and filter effects. It gets complicated quickly, with many sources of error combining; go to paragraph 1.
The AAVSO data submission formats have enough information to show which comp was used, via the AUID and chart sequence ID, along with your photometry magnitude and error estimate. Future analysts, if required, can account for the catalog error contribution as needed and as known at the time. I would encourage all observers to check and recheck your data for errors before submitting, and if you find errors post-submission, go back, delete the records in error, and upload corrected new records.
A good foundation in, and understanding of, basic statistics is worth its weight (mass) in gold. Grant Foster's Understanding Statistics: Basic Theory and Practice is a good one for understanding central tendency and spread, and his Analyzing Light Curves: A Practical Guide is also good for light curve analysis. Both are still available via lulu.com. Brian D. Warner's A Practical Guide to Lightcurve Photometry and Analysis, 2nd edition, is also good and still available. Btw, understanding the difference between accuracy and precision is worth its weight (mass) in platinum! Go to paragraph 1.
Jim DeYoung (DEY)
Differential Photometry Equation
V = (v-c) + C
V = Magnitude of Star
v = instrumental magnitude of the variable star (related to the photon count, or ADUs)
c = instrumental magnitude of comparison star
C = the published value of the comparison star, a constant
We can all readily agree that the v-c measurements are affected by a number of factors and it is important to the process that we account for these uncertainties.
V1,V2,V3,V4, V5, etc are the subsequent measurements of the same target.
The obvious objective of the study of variable stars is to measure the changes in their light that occur over time; therefore, we are really interested in the changes that occur between V1, V2, V3, V4, V5, etc., through Vx.
The precision of these measurements is very much dependent upon v-c and all the associated noise sources.
However, as I stated in my original post, the uncertainty associated with C (not to be confused with c) is irrelevant to the differential measurement process; C is in effect a constant.
As Brad pointed out, the “truthfulness” of V is affected by C’s original uncertainty but the goal of measuring V changes is not affected.
Do not misunderstand my intentions. I am very much in agreement that ideally we want V to be reasonably precise with our established Magnitude system of flux measurement.
I only wanted to make the point that, for purposes of accomplishing our goal of measuring changes in a target's magnitude, the original uncertainty of C is not really important to that goal (while all the factors affecting the precision of v-c are most important).
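Under the usual assumption of independent Gaussian errors, standard propagation makes this split explicit: the differential measurement carries the instrumental uncertainties, while only the absolute magnitude inherits the uncertainty of C. The values below are assumed purely for illustration:

```python
import math

# Illustrative, assumed uncertainty values (magnitudes):
sigma_v = 0.010   # uncertainty of target instrumental magnitude v
sigma_c = 0.008   # uncertainty of comp instrumental magnitude c
sigma_C = 0.050   # published uncertainty of the comp sequence magnitude C

# Uncertainty of the differential measurement (v - c),
# combining independent errors in quadrature:
sigma_diff = math.sqrt(sigma_v**2 + sigma_c**2)

# Uncertainty of the absolute magnitude V = (v - c) + C:
sigma_abs = math.sqrt(sigma_v**2 + sigma_c**2 + sigma_C**2)

# sigma_C inflates sigma_abs, but it cancels out of any *change*
# V2 - V1 measured against the same comp, where only the
# instrumental terms contribute anew at each epoch.
```

The design choice here mirrors the thread's argument: treat C's error as a constant offset for variability work, and fold sigma_C in only when the absolute scale matters.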
Per Ardua Ad Astra,
If the purpose of your observation is to accurately determine the apparent brightness or luminosity of a target in one or more filter bands, then bias introduced by error in determining the magnitudes of your reference stars does have an effect. However, for this kind of photometry you would normally reference every night's observations to multiple recognized standard stars spanning a range of colors (most likely Landolt standard fields if you are doing UBVRI photometry), bracketing the airmass of your target field multiple times during the night. The bias of concern is that introduced by the standard stars, to which your observations are ultimately referenced, not the comp stars in the field of view.
Further, your error propagation equation now has many additional terms beyond those associated with your raw magnitude observations, including those associated with first- and second-order extinction coefficients and transformation coefficients. The combined effect of these uncertainties, which are associated with your observations, will normally dwarf any bias resulting from error in determining standard magnitudes of standard stars. For example, of the 545 stars in Landolt 2009 that have error estimates for V and all color indices, 400 have V standard error ≤ 0.005, and 85 more lie between 0.005 and 0.01. Errors in magnitudes of standard stars have an effect, but it is most likely not significant, particularly when referenced to ensembles of standard stars, and we have no way of determining what bias may remain except in retrospect, when, or if, more and better observations of the standard stars become available.
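As a rough illustration of that budget (all term sizes assumed, not from any actual reduction), the standard-star term contributes almost nothing to the quadrature total:

```python
import math

# Illustrative uncertainty budget, in magnitudes (assumed for scale only):
terms = {
    "raw instrumental mags":         0.012,
    "first-order extinction coeff":  0.010,
    "second-order extinction coeff": 0.005,
    "transformation coefficient":    0.008,
    "standard-star magnitude":       0.005,  # Landolt-class accuracy
}

# Combined uncertainty (root sum of squares, assuming independence):
total = math.sqrt(sum(s**2 for s in terms.values()))

# The same total with the standard-star term removed, to show how
# little that term moves the result:
without_std = math.sqrt(sum(s**2 for k, s in terms.items()
                            if k != "standard-star magnitude"))
```

With these assumed numbers, dropping the standard-star term changes the combined uncertainty by well under a millimagnitude, which is the sense in which the observational terms "dwarf" the standard-star bias.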
Photometry that does not reference all observations in the field to recognized standard stars is simple differential photometry, appropriate for measuring variability; it is concerned with changes in magnitude (even when measured with high precision) rather than very accurate determination of apparent brightness or luminosity in various filter bands.
The preparatory work involved in making accurate apparent brightness or luminosity measurements, and combining those measurements from different observers, can be extensive, as detailed in papers by Munari and others concerning the ANS collaboration and their paper “BVRI lightcurves of supernovae SN 2011fe in M101, SN 2012aw in M95, and SN 2012cg in NGC 4424” and references in that paper. This goes far beyond the simple differential photometry that was the subject of these posts.
Brad Walter, WBY
I believe the thrust of the "uncertainty estimates in ensemble photometry" topic was ABSOLUTE, not DIFFERENTIAL, photometry? Certainly, if a single observer, using the exact same equipment and software and comp stars, merely runs a "time series" on a variable, the results will be very precise indeed, regardless of the comp star chosen.
The problem arises when you plot all these different observers' data in the LCG: with their different equipment, data reductions, and comp stars (or even the same observer switching to a different set of comp stars), you get quite a scattered spread of points, not much different from a bunch of visual observers!
This, I believe, was the original issue: how to solve the problem of all the different "true" magnitudes of a variable being reported by all the different observers, with grossly underestimated error budgets based on 1/SNR, mostly ignoring (or in innocent ignorance of) systematics in the comp stars as well as other effects.
Now, we may suppose that ignoring comp star uncertainties in purely differential photometry may be excusable under a very, very limited set of conditions: especially where the same comp stars are used throughout, and where the magnitude-domain data are not the point of one's measurements (perhaps the time domain is, as for a subset of exoplanets and maybe fast cataclysmics). The problem is that the AID then delivers "MAG" and its "ERROR", but they are never marked as merely relative mags and errors. Perhaps every single current observer understands this relative/absolute, systematic/random error context (all evidence to the contrary), but can we guarantee that ALL future users, for generations to come, will know that "Mag" doesn't mean magnitude and "Error" doesn't mean real uncertainty?
In the case of long-term, multi-observer, absolute photometry, by contrast, target mag uncertainties are the observers' only signal (to each other and to future data users) of the degree of confidence in data that will get automatically merged. In such a domain, absolute magnitudes do matter, uncertainties in the absolute mag scale do matter, and so ignoring comp star uncertainties (however systematic they are or aren't) would make for very poor data reporting at best.
I am a very new observer and this question has nagged me for so long that I have never submitted any data.
It seems to me from reading the posts above that there are two perspectives.
1) if you are interested in amplitude and period of a variable, the uncertainty of the comps does not matter.
2) If you are interested in absolute magnitudes, they do matter.
Is that a fair summary?
From my perspective your summary is a good one and pretty much spot on for a topic that can be quite confusing to observers.
This Topic should never prevent anyone from submitting data... go for it!
Ad Astra & Good Observing,
I cannot agree that it is quite a fair summary. Comp star uncertainties do matter more often than that, for a variety of reasons:
- Without comp uncertainties to compare, how will we know which comp stars to rely on, or even which to include? Many comp stars have no uncertainty given (unfortunately tagged with zero uncertainty in chart tables). Do we exclude those, include them all, or use the average uncertainty of the other comp stars? Or average the uncertainties of comp stars from the same catalog source? The answer matters.
- Without comp uncertainties, how will we arrive at weights for accurate ensemble photometry (weighted average)? Shall we simply apply equal weight to all comp stars, even when we know that differing uncertainties dictate weighting them differently?
- Whether or not one is "interested" in absolute magnitudes turns out to be a decoy criterion. Even if a given observer is only interested in a Long Period Variable's amplitude and period, it remains true that anyone's mag data may well get merged with others' mag data to make the light curves presented by AAVSO to the world. This merging is outside your control, and it assumes that constituent magnitudes exist on an absolute mag scale, without exception, regardless of one's interests.
- Since (due to merging) absolute magnitudes always matter for LPVs, we should remember that when only one comp star is used, the uncertainty of one's target magnitude can never be smaller than that comp star's uncertainty.
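On the weighting point above: inverse-variance weighting is the standard way to turn comp uncertainties into ensemble weights. A minimal sketch, with all estimates and sigmas assumed for illustration:

```python
# Hypothetical per-comp target estimates and the corresponding
# comp uncertainties (assumed values, for illustration only):
estimates   = [11.83, 11.78, 11.85, 11.76]
comp_sigmas = [0.02,  0.05,  0.03,  0.10]

# Inverse-variance weights: less certain comps get less say.
weights = [1.0 / s**2 for s in comp_sigmas]

weighted_mean = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Uncertainty of the weighted mean, under the usual assumption
# of independent errors:
sigma_mean = (1.0 / sum(weights)) ** 0.5
```

Note that equal weighting would pull the result toward the noisiest comps; with uncertainties available, the 0.10-sigma comp contributes 25 times less weight than the 0.02-sigma one, which is exactly the behavior the bullet above asks for.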
And then there's the whole question of what "Error" is presumed to mean in our submitted data and in AID. This is in a woeful state: it's not even "error", it's uncertainty, and it's displayed to the whole world right there next to the absolute magnitude and clearly refers to it. And comp star uncertainties very often dominate that displayed uncertainty.
But until we're sure what AAVSO and AID mean by "Error" (or until observers can separately submit absolute vs relative mag data), I get that it's open season for interpretation, for us and for everyone who ever tries to use our LPV data, in perpetuity. Clear and reasonable definition(s) of AID "Error" would reduce most threads like this one to elementary stats that I'm guessing we could all adopt pretty readily.
But absent that, each is left to make such decisions for one's own data submissions, hoping for clarity someday.
From which it follows--and I warmly agree--that no one should withhold data because of such ambiguities.
I completely agree that comp star uncertainties really do matter in many important ways. You mention using them to select comp stars and using them to choose weights for averaging.
And your point of what happens when one tries to merge data with that taken by others is well made.
So, is this a fairer summary?
1) Comp star uncertainties should be taken into account when selecting comp stars
2) Comp star uncertainties should be taken into account for the purposes of ensemble weighting
3) If you, as a single observer, are going to determine the amplitude and period of a variable using only your own data, it is not necessary to include comp star uncertainties in the estimate of the target uncertainty
4) If you wish to measure absolute magnitudes, comp star uncertainties should be included in your uncertainty estimate for the target.
5) To maximize the utility of submitted data, comp star uncertainties should be included in your uncertainty estimate for the target.
I like that a lot.
Good summary Peter.
The original question kept getting ducked by the experts. The question was: should anything be done with the error listed for comps? The answers seemed to go off track, listing all the sources of error except the error of the comp.
An apt comparison is to ask: is it possible to measure a voltage to 1% with a meter that is accurate to 3%, in a single measurement? The answer, of course, is no. So my magnitude estimate can have no better absolute accuracy than the error given for the comp.
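As a quick numerical check of the meter analogy, assuming the reading scatter and the calibration error are independent and combine in quadrature:

```python
import math

# Error sources for the hypothetical voltmeter, in percent:
precision   = 1.0   # repeatability of a single reading
calibration = 3.0   # systematic accuracy of the meter itself

# Best achievable absolute accuracy of one measurement:
absolute = math.sqrt(precision**2 + calibration**2)

# 'absolute' can never drop below 'calibration', no matter how
# small the reading scatter gets; the calibration term is a floor.
```

The same floor applies to a single-comp magnitude: the target's absolute accuracy is bounded below by the comp's uncertainty.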
What to do? Ignore the comp's error and call it precision... then we get the tight pattern just outside the bull's-eye; or is it the other way around?
And in my opinion, Eric is absolutely correct, and he makes a case for better comp uncertainties. That will be a huge task now that we have the planet hunters on board and need uncertainties of 0.0005 mag.
I suppose the uncertainties should be improved in the standard fields to make us transformers less uncertain. A big job for the fellow from LSU, or maybe a space telescope.
So that there are no misunderstandings regarding my remarks on uncertainties, I want to ensure that observers understand that, at the time a sequence is created or updated, the Sequence Team endeavors to use the best applicable survey then available. Comps are carefully chosen with the lowest uncertainties among the various options, being mindful of the need to keep color choices (B-V) within a reasonably close range of the target's, as the survey permits, within the smallest FOV that the data will permit, while also avoiding close doubles.
I can assure you that this is not a trivial task, and sometimes we have few good options available for a specific FOV. While not strictly relevant, as an FYI, some of the more difficult/challenging ones can consume close to three hours of time. Each field of view is also unique and should not be used to create generalizations regarding the sequence process, or to judge a specific survey.
Most of the existing problematic sequences that come to our attention typically involve brighter targets whose sequences were created some years back.
If an observer is dissatisfied in any manner with a sequence, they are encouraged to file a CHET. CHETs are generally responded to in a reasonably timely manner, although if someone files a couple of dozen CHETs at once, the responses will certainly be slower. Not all CHETs can be resolved, simply because the data needed to do so may be lacking; it may or may not become available at some future time. CHETs are kept active until resolved. If you file a CHET, I would encourage you to check back for remarks that may have been attached; if the CHET is resolvable, the filer will be notified automatically.
Is the sequence team's process documented somewhere that we can refer to? I mostly observe unobserved stars (or asteroids) that do not have any AAVSO-determined comp stars in the field and have developed my own elaborate technique (starting from APASS data) to choose and measure comp star ensembles for my own purposes. It would be good to know how the sequence team does it and compare that to the general procedures given in the standard references.
I am attaching a PDF of the Sequence Team's Sequence Selection and Revision Guidelines document.
Thank you for your interest.