Acceptable Error Range for Submissions?

Affiliation
American Association of Variable Star Observers (AAVSO)
Sat, 03/28/2015 - 00:54

Hello! I wanted to check to see what folks feel is an acceptable error range for submitting data.

    During last night's run, clouds moved in after midnight. A number of the frames were less than optimal for many targets.  When I reduced the data, the resultant error ranges were higher than what I typically get.

    I eliminated any measurement with an error range over 0.25 mags. Is this acceptable, or should I tighten up the acceptable error range for mags to be submitted? Thank you and best regards.

 

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
acceptable error range

Hi Mike,

The usual answer is that, if your error estimates are good, any positive detection with appropriate error attached is a scientific measurement and can be submitted.  However, there are caveats with that.  For example, an error estimate of 0.25mag means you had signal/noise of about 4.  This is a barely detectable object in an image; you have almost equal probability of a random noise fluctuation looking like a star.  If the object is easy to see and measure, then your errors shouldn't be in the 0.25mag range.
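For reference, the standard conversion between signal-to-noise ratio and magnitude error is sigma ~ 2.5/(SNR ln 10) ~ 1.0857/SNR. A quick sketch in Python (the function names are just for illustration):

    import math

    def mag_err_from_snr(snr):
        """Approximate 1-sigma magnitude error implied by a signal-to-noise ratio."""
        return 2.5 / math.log(10) / snr   # ~= 1.0857 / snr

    def snr_from_mag_err(err):
        """Invert the relation: the SNR implied by a quoted magnitude error."""
        return 2.5 / math.log(10) / err

    print(snr_from_mag_err(0.25))   # ~4.3: the barely detectable case described above
    print(mag_err_from_snr(100))    # ~0.011 mag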

One reason why they might be so large is that under cloudy conditions, part of the image suffers more obscuration than another part, and so the comparison star measures can have enormous offsets.  I will do photometry through light cirrus, when you might lose a few tenths of a magnitude and the clouds are pretty uniform.  Through anything thicker, I just throw the data away because I can't derive a good error estimate.

There are occasions when you have to make a measurement through lousy conditions - poor seeing, twilight, clouds, high airmass, full moon or whatever.  No one else is observing the target on that night and researchers are depending on you getting some estimate, no matter how poor.  However, such occasions are rare.  Observe within your limits; be willing to "write off" a night now and then.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Hello! Thank you for your

Hello! Thank you for your guidance.

    One thing I'm curious about. Can conditions exist in which the target might have a good SNR, yet still have a high error range?

    For example, can patchy cloud cover change the magnitude of some comps in an image more than others? If this were possible, when using ensemble photometry, would the error range be high even though the SNR might be considered good?

    My gut feeling is that the FOV for images is too small (I run my 8 inch LX200 at 1300mm focal length) so that patchy cloud cover should not affect comps differently in different parts of the image, but I thought I would check. Best regards.

 

Mike

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
Acceptable errors

Hi Mike,

These days I don't do much observing but use other people's.  Recently I've been using some of your measures of two very interesting stars - BH Crucis and R Centauri - mainly the B and V, but the R and I are interesting as well.  I wonder if you could drop me an email directly (astroman@paradise.net.nz) about some aspects of these.

Hi Lew - good to see you're still in there. My photometer failed and the experts are puzzled as to why.  But daylight saving here finishes next week, so that is a reason for getting back into action.

Regards, Stan

 

Affiliation
American Association of Variable Star Observers (AAVSO)
photometric error

Hi Mike,

There are many ways in which you can get a high uncertainty; a few are listed below:

- the target star may be underexposed with low signal/noise.  Not much you can do about this except expose longer.

- one or more of the comparison stars may be underexposed with low signal/noise.  Try to use bright comparison stars, even if they are brighter (but unsaturated) than the target.

- the comparison star standard magnitudes may be in error.  For example, one star is 0.1mag too bright and one star is 0.1mag too faint.  When forming the differential/standardized magnitudes in an ensemble, the result shows the discrepancy (a toy illustration appears after this list).  This sometimes happens when the stars are calibrated from different surveys.  Another possible cause is if you use a big aperture that includes a nearby companion to a comparison star, but the survey did not include that companion.

- the atmosphere interferes.  Non-uniform clouds, for example, or taking measures at very high airmass and not properly accounting for the differential airmass across a field.

- the telescope interferes.  If you have optical aberrations like coma, and some of your comps are close to the center and others near the edge of the field, you can get different fractions of flux measured with a fixed aperture.

- your processing interferes.  You might have an incorrect flat due to scattered light (related to the previous problem), or introduce a gradient in your image by some means.

In other words, there are lots of ways to get bad photometry, or to get good photometry but a poor estimate of the uncertainty.  You need to look carefully at the process and eliminate as many of them as possible.  Not observing on cloudy nights is a good start, at least until you get comfortable with how your system performs.
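As a toy numerical illustration of the comp star calibration case above (hypothetical magnitudes; a sketch, not a recipe), two comps carrying opposite 0.1 mag catalogue errors produce 0.1 mag of apparent scatter even when the measurements themselves are perfect:

    import statistics

    true_target = 12.000
    catalog = {"comp1": 11.5, "comp2": 12.5}       # published sequence magnitudes
    offsets = {"comp1": +0.10, "comp2": -0.10}     # hidden calibration errors

    estimates = []
    for name, cat_mag in catalog.items():
        true_comp = cat_mag + offsets[name]        # what the sky actually delivers
        diff = true_target - true_comp             # a perfectly measured difference
        estimates.append(cat_mag + diff)           # standardized target estimate

    print(estimates)                      # ~[11.9, 12.1]: the discrepancy appears
    print(statistics.pstdev(estimates))   # ~0.1 mag of "scatter" from calibration alone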

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
High errors on measurements

Hello Mike,

One thing that Arne didn't mention (probably because it is so obvious) is that tracking errors can reduce the S/N, as someone who has been plagued by drive problems (like me) knows. So can spillover from a bright star very close to the star you are measuring, whereby you may get a good S/N but the data is bad.

There are times when a high error must be tolerated, like a case when the phenomenon you are looking for would get obscured by a longer exposure time. As an example take ES Cet. This 17th magnitude interacting binary white dwarf has a variation frequency of 139 cycles/day. That's a period a bit over 10 minutes, so five minute exposures are out of the question if your purpose is to see if it has changed. You can see the signal in data taken with a 14 inch 'scope with 50 second exposures. So when you need to take short exposures and can live with the scatter, that's OK. For most AAVSO work, I use 0.10 mag error (from the SNR) as a guideline.
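The arithmetic behind those exposure times, as a quick check:

    # 139 cycles/day, the figure quoted above for ES Cet
    period_min = 24 * 60 / 139
    print(period_min)        # ~10.4 minutes, i.e. "a bit over 10 minutes"
    print(5 / period_min)    # a 5-minute exposure smears ~0.48 of a cycle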

Affiliation
American Association of Variable Star Observers (AAVSO)
Visual estimation gives some advantages

[quote=Lew Cook]

For most AAVSO work, I use 0.10 mag error (from the SNR) as a guideline.

[/quote]

Yes, while it is possible to do very high precision and accuracy photometry, it is quite a challenge! I think the bulk of the data we have in the AID is more along these 0.10 error ranges. In fact, with a little care and experience, a good visual observer can achieve 0.05 to 0.10 on a routine basis!

I mention this again just as a reminder that visual observing is "smart" observing, and in a way, CCD observing tends to be "dumb" or "automata" type (cf. the generally poor performance of AI systems in technology) :(

The human eye/brain combination can learn to be very careful and account for many of the sources of error that Arne mentions, and can do so very quickly and reliably. Take clouds for example. A typical CCD system will have quite high errors if the cloud density varies significantly over the FOV, since the measured flux of the target and comps will be directly affected, in a random way, and there is no obvious, known or workable method to compensate for it. But the visual observer can see the changes in the relative brightness of the target and comp, in real time, as the clouds vary over the field, and can take "mental averages" to largely eliminate this source of error.

I notice many of these possible sources of error on a regular basis when I operate the BSM-Hamren system at my home. When I take images of a particular object, try to make a visual estimate of the object directly from the CCD image, and then image again a minute later, I notice some differences in the estimate. Even if there are no clouds, this is likely due to slight shifts of the stars on the CCD frame between images, affecting different pixels ("the flat field issue"). The flats should compensate for a lot of that, but I don't think it is perfect. There are also issues with FWHM and pixel size, focus, tracking, and random noise if S/N is low, all of which can affect the apparent brightness of a star on a CCD frame. So, if these errors are noticeable visually between frames, they will appear to the photometry software too, and it has to be a pretty good algorithm to adjust properly for all these things!

So, under poor observing conditions, I would say that visual observing may well be more accurate than CCD, unless the CCD observer takes extraordinary measures to compensate (if that's even possible).

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Beating a dead horse

[quote=lmk]

 a good visual observer can achieve 0.05 to 0.10 on a routine basis !

... Take clouds for example. A typical CCD system will have quite high errors if the cloud density varies significantly over the FOV, ... [/quote]

Mike Linnolt is quite correct: a good visual observer is routinely good to +/- 0.1 and exceptional people are good to 0.05. But I wouldn't trust them to see a 0.2 mag variation in a star of magnitude 17 in a ten minute period over and over and over. Luckily, the small field of view in a CCD image in a 'scope big enough to get to mag 17 in a minute isn't going to have differential cloud cover from one part of the field to the next except on rare occasions.

Anytime I have more than about 0.7 mag reduction in my star signal due to clouds, I quit observing.

Affiliation
American Association of Variable Star Observers (AAVSO)
Good comp stars are the key

[quote=Lew Cook]

But I wouldn't trust them to see a 0.2 mag variation in a star of magnitude 17 in a ten minute period over and over and over. 

[/quote]

This situation may not be well suited to visual observing, mainly because a very large aperture instrument is needed, probably 24+ inches, and good dark skies. A 0.2 mag difference is very easily and accurately measured visually, IF one has a good set of equally colored comp stars, say 0.50 mag apart, bracketing the variable. In such a case, it would be very easy to tell whether the variable were closer in brightness to one or the other, or right in between (+/-0.25 from either comp). And so 0.05 mag accuracy results, voila!

I have frequently lamented the fact that there are quite a few 24+ inch Dobs out there in the world, but hardly one is used for VSO; usually they are relegated to counting and logging faint fuzzies only :(

Mike

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Visual vs CCD Precision

I have followed this discussion with some interest and decided to inject some data into the midst of opinion. I have been teaching the VStar course and as a result have been looking at a lot of light curves and phase plots showing both CCD and Visual data. I picked one from a recent forum post because it was handy, but it is relatively typical of the data I have looked at.

These plots are Eta Aql data for JD 2450000 through 2456650. It is hard to tell whether there is much difference in precision between individual CCD V observations and individual visual observations looking at the light curve. However, the difference in the empirical precision becomes obvious when you look at the phase plot. There is much less scatter in the CCD data regardless of the size of the error bars.  I should point out that I used the period given in VSX for this DCEP star, not the one based on the CCD light curve. I didn't want to use the period determined from one data set or the other since that might bias the result.

Also some of the largest CCD error bars are from HQA transformed data and I expect that is because the values are from all-sky photometry and include additional error terms. Most of the CCD observations, but not all, are transformed and may also include additional error terms that are not included in instrumental magnitudes, depending on how the observer reports error (TA, for example, includes the comp star error which includes any systematic error of comp star magnitudes).

Please do not think I am disparaging the value of visual observations in any way. Even though the visual data points have a lot of spread, look at the 0.05 period bin means of just the visual observations, shown in blue with 95% confidence interval (~2 sigma) error bars (CCD error bars are shown with the reported 1 sigma uncertainties). The visual means have excellent precision and are in very close agreement with the CCD V phase plot. However, based on the data, I will dispute that, in general, individual Visual and CCD observations have similar precision in practice. Further, keep in mind that if the visual data is not dense, as it is in this case, the error of the visual means will be much larger because of the increased scatter.

This is a pattern I have seen repeatedly in the data used in the course and in analyses of other light curves I have done with VStar.

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Fundamental difference between visual and CCD

[quote=WBY]

It is hard to distinguish whether there is much difference in precision between individual CCD V observations and individual visual observations looking at the light curve. However the difference in the empirical precision becomes obvious when you look at the phase plot. There is much less scatter in the CCD data regardless of the size of the error bars.

[/quote]

Hi Brad, this is a very useful comparison you did, thank you. One thing - do you really mean "accuracy" (the difference between measured values and true values) rather than "precision" (just the number of decimal points of significance that is shown)?

Regarding the phase plot: I mentioned before that visual observations are "smart" vs. CCD being "automata". While this has its advantages when conditions are poorer, there is one bothersome disadvantage of visual "smart" observing - bias. I think bias is particularly a problem with cyclical types of stars like LPVs or Cepheids, because the visual observer knows the star has a specific behavior and expects what the upcoming values should be, so bias tends to creep in more than for irregular types of variables. The eta Aql phase plot may be particularly bias-prone?

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Accuracy vs precision

I absolutely mean precision rather than accuracy! Precision is measured by the scatter of the data. It is not just the number of decimal places shown or even the submitted error value for individual observations. Who knows if the submitted uncertainty values are realistic? The values provided by many photometry programs are often very optimistic because they only include uncertainties from the CCD error formula and ignore other factors affecting precision.  Empirically, precision is determined by the standard deviation of the data. 

The most cursory inspection of the phase plot in my last post leaves no doubt that there is much less scatter to the CCD data. Pick any phase and look at the magnitude range of visual observations compared to CCD V mag. I did a quick, approximate calculation to estimate the relative precision of CCD V compared to visual data from info that VStar provides. That calculation shows the standard deviation of the Visual data (not the standard error of the means of the data) is about 3.3 times the standard deviation of the CCD V data. I was actually disappointed by that result. I expected the CCD data to have about 1/5th the error. However, on reflection there are relatively few CCD V observations, 267 (an average of about 14 per bin), compared to 4,337 visual observations (an average of 206 per bin). Therefore the results are skewed in favor of visual due to greater sampling error for the small number of CCD V samples per bin.

For reference, please see the attached two plots and spreadsheet. The plots show the 0.05 phase bin means of the Visual and CCD V phase plots. The spreadsheet contains the output from VStar giving the bin means and the standard error of each mean.  I added some quick, approximate calculations to compare the standard deviations of the Visual and CCD V data.
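For anyone who wants to repeat this kind of comparison outside VStar, here is a minimal sketch of the phase-binning step (the (jd, mag) observation tuples are hypothetical; bin width 0.05 as above):

    import math
    from collections import defaultdict

    def phase_bin_stats(observations, period, epoch, bin_width=0.05):
        """Per-bin mean, sample standard deviation, and count for (jd, mag) pairs."""
        bins = defaultdict(list)
        for jd, mag in observations:
            phase = ((jd - epoch) / period) % 1.0
            bins[int(phase / bin_width)].append(mag)
        stats = {}
        for b in sorted(bins):
            mags = bins[b]
            n = len(mags)
            mean = sum(mags) / n
            sd = math.sqrt(sum((m - mean) ** 2 for m in mags) / (n - 1)) if n > 1 else 0.0
            stats[b] = (mean, sd, n)   # sd measures precision; stderr of mean = sd / sqrt(n)
        return stats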

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Precision and its causes

[quote=WBY]

I absolutely mean precision rather than accuracy! Precision is measured by the scatter of the data. It is not just the number of decimal places shown or even the submitted error value for individual observations. Who knows if the submitted uncertainty values are realistic? The values provided by many photometry programs are often very optimistic because they only include uncertainties from the CCD error formula and ignore other factors affecting precision.  Empirically, precision is determined by the standard deviation of the data. 

The most cursory inspection of the phase plot in my last post leaves no doubt that there is much less scatter to the CCD data. 

[/quote]

Unfortunately the terms "accuracy" and "precision" have been confused in the past, and their common meanings have even been altered. There was an ISO standard that came out some 20 years ago which addressed that. Some fields like metrology even use "trueness" in the sense that "accuracy" has been used by others: the offset of the measured value from the actual value.

But, I think I understand what you are trying to say here. The scatter of the CCD vs. visual data is different, with the visual scatter being higher. That appears to be so; however, you cannot just apply the common std dev calculation without considering the underlying "reporting precision" of the two datasets. Practically all visual observers report to the nearest 0.1 (one decimal place) while CCD observers usually report 3 decimal places. The appearance of the visual data shows this 0.1 "step-like" effect very clearly. This adds an artificially high degree of scatter compared to the CCD values.

While there is a fundamental difference in how precisely a visual observer can measure compared to a CCD observer, one way to make the comparison of scatter between these two groups of observers a little more on par would be to set the "reporting precision" the same for both: round all the CCD measurements to the nearest 0.1, and do the calculations of std dev that way. The scatter of the CCD data would then increase and better represent the actual, rather than the reported, scatter of the sets of measurements.

Mike

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Precision and Its Causes

I am familiar with ISO 5725 parts 1 through 6. It uses the term accuracy to encompass both Trueness (lack of bias or systematic error, which traditionally was called accuracy) and precision (reproducibility quantified by the measurement of random error, uncertainty).

I am not confused about the terms accuracy and precision. In my post I used the term precision exactly correctly. Precision is defined as the reproducibility of the observation or the experiment. It is sometimes also explained as how well the result has been determined without reference to its agreement with the “true” value [1].  For example, in target shooting it is measured as the size of the bullet hole grouping, not how close the group is to the bull’s-eye. Accuracy as traditionally used (Trueness to ISO) refers to how close the result is to the true value, the bull’s-eye.

There are two ways to determine precision. One, which is often used when it is impossible or overly expensive to run an experiment multiple times or to make multiple measurements, is by analysis of the experimental method. (Repetition might involve destructive testing, for example; in the things I was involved in, that was $250,000 per item destroyed in addition to $50,000 - $150,000 per test, so your sample size is one or an exceedingly small number.) This method uses detailed examination of the experimental methods and means to determine the precision of the experiment as well as systematic errors. The problem with this method is that you only include the sources of uncertainty and systematic error that you identify. Further, your precision estimates may not be very accurate unless they are determined by running simple experiments multiple times, or by making multiple observations of these individual sources of uncertainty and computing variances (square of standard deviation) for each, which you combine using normal error propagation expressions. You also must determine whether these error sources are correlated, to know if you have to include the cross correlation terms in your error propagation.

The other method of determining the precision, or reproducibility, which is preferred when feasible, is to repeat the experiment or observation a number of times and measure the standard deviation (or variance) of the results. That gives a direct measurement of precision and that is what I did in my Eta Aquilae example.

Normally I don’t like to get into arguments in forums, but your contention that the comparison is biased against visual magnitudes because they are only reported to one decimal place vs. three for CCD observations is incorrect and not supported by the data.  Let’s try a little experiment. Suppose the standard deviation of the signal in a bin of 200 observations is 0.001 magnitude due to Poisson scintillation and whatever other noise is actually in the signal received by our detector (eye-brain, CCD, whatever). That is the best reproducibility we can have, because it is the standard deviation of the signal itself with no uncertainty due to the detector. If I took all of the values of perfect observations and rounded to the nearest 0.1, almost all of the observations would have the same value. In 200 observations the probability that I would have any observations 0.1 above or 0.1 below that value, or farther away than that, would be less than 0.000001.

Now suppose the reproducibility of measurement of my detector is not infinitely good; suppose it is only 0.05 magnitudes, as determined by measuring a known constant source with a sufficient number of photons gathered that the Poisson noise inherent in the light signal is several orders of magnitude smaller. The standard deviation that I calculated from those 200 observations would be 0.05 (to the nearest 0.01 magnitudes) if the resolution of my data recording was at the 0.01 level. If I only record observations to a resolution of 0.1 magnitudes but the reproducibility of my observations is still 0.05, then my data recording has a resolution of 2 sigma.  I would actually record all observations that were within +/- 0.149 of the average as being within +/- 0.1. Now I have more observations recorded as being inside the two sigma level than were actually there, and I underestimate the standard deviation, which means I overestimate the reproducibility of the observations. It produces the opposite effect from the one you propose.

Most people report to 0.1 magnitudes because that is the experimentally determined (experience is experimental evidence if recorded in an unbiased manner) limit of differentiation for visual observation. Based on this data, that is an optimistic conclusion. I think it may be correct for experienced observers under good conditions with normal eyesight, and well-placed comp stars that bracket the target within a couple of tenths range.  Exceptional observers who are confident in their ability to make finer distinctions to, say, 0.05 magnitudes are not prohibited from submitting those values. In fact, there are quite a number of 0.05 level observations and a few at the 0.025 magnitude level in this data. As demonstrated above, adding another decimal place to the reported values will not improve the standard deviation if the observers can’t actually differentiate to that level.

The standard deviation of the Visual data bears out the lower reproducibility (higher uncertainty) of visual observations, because it is twice the 0.1 reporting increment.  If visual observers really had the ability, in general, to provide better reproducibility, the data would be clustered more closely around some xx.x value in each bin (not necessarily the correct one), or 0.1 below it, or 0.1 above it. Then the standard deviation would end up being smaller than 0.1 magnitudes, not larger. In fact, the total range of data at any phase point is 0.8 magnitudes (+/- 0.4 magnitudes) or larger, which also demonstrates that the data is not reproducible at the 0.1 mag level (i.e. the standard deviation is greater than 0.1 mags), at least for this data.

Similarly, if CCD observers report to three decimal places when the reproducibility of their data is only to two, it won’t improve the standard deviation of repeated observations. Their inability to reproduce results with the level of consistency indicated by the extra decimal place is exposed by the standard deviation of the data, and it will show a larger uncertainty than indicated by the extra, unwarranted decimal place.  This is why there are standards (ISO and others) for significant figures used in reporting experimental results.

The discussion above shows why empirical determination of uncertainty by actually measuring the standard deviation (or variance, if you prefer the square of the standard deviation) of repeated experiments or observations is preferred. The direct measurement of the uncertainty of the results as quantified by the standard deviation finds all of the random error whether it was recognized and correctly estimated or not in the design of the experiment.

The determination of standard deviation is sample size dependent. Smaller sample sizes experience higher sampling error. For the smaller sample size of the CCD V observations I should have scaled the result by the t-distribution for 13 degrees of freedom. That would have increased the ratio of uncertainties to 3.4 from 3.3. Still not the 5x I thought it should be. Above sample sizes of 50 and certainly for sample sizes of 200 points per bin there is no significant difference between the normal distribution and a t distribution.

I am in no way arguing against the benefits of visual observing.  First and foremost it is fun. Second, as with zoo-science, the large mass of observations leads to means that have reasonably small standard errors, as shown by the blue dot-and-line series in the data I posted for Eta Aql. That makes it scientifically valuable. You can observe a lot more stars in a night than you can with a CCD and that accumulates data on many more stars than would be possible by CCD alone. For many, it allows significant scientific contribution without all of the fuss and bother of data reduction, transformation and analysis that accompanies CCD observing.  Finally, humans are smart detectors. They don’t fall into the trap of running data through an automated reduction and analysis pipeline without visually checking individual observations for validity.

Reproducibility of observations, however, is not one of the relative strengths of visual observing compared to CCD observing. What is surprising, and one of the reasons behind activities such as the XZ Cet CCD campaign last year, is that the CCD observations submitted to AAVSO don’t have better reproducibility (lower uncertainty) than they do.
 

Brad Walter, WBY

 

1. Philip R. Bevington and D. Keith Robinson, Data Reduction and Error Analysis for the Physical Sciences, 3rd ed., McGraw-Hill, 2003, p. 2.

Affiliation
American Association of Variable Star Observers (AAVSO)
Rounding does increase scatter

[quote=WBY]

Normally I don’t like to get into arguments in forums, but your contention that the comparison is biased against visual magnitudes because they are only reported to one decimal place vs. three for CCD observations is incorrect and not supported by the data.

[/quote]

Hi Brad, Ok Thank you so much for this detailed reply! In general, I agree with all, except for your point that rounding does not bias against visual.

The example you gave showing that rounding reduces std dev is not typical. Take for example an illustrative set of data made by CCD to 3 decimal places, and simultaneous visual estimates reported to the nearest 0.1, say of a star which steadily declines from mag 3.0 to 3.3:

CCD    [3.000, 3.049, 3.101, 3.148, 3.202, 3.251, 3.300]  STDEV = 0.1084

Visual  [3.0, 3.0, 3.1, 3.1, 3.2, 3.3, 3.3]  STDEV = 0.1272

Note that the CCD 3-digit precision allows the data to follow the trendline with very small spread, but once you round to the nearest tenth, the spread "artificially" increases; the std dev goes up about 17%. And that's not even taking the trend into account. If you calculated the deviation from the best linear fit, the correlation coefficient R^2, the rounding would make a huge difference to the std dev!

Similarly, if you took a more extreme case, where the actual values vary close to the middle of a 0.1 step, you get that large increase in std dev by rounding:

CCD [3.049, 3.051, 3.048, 3.052, 3.049, 3.051, 3.050]  STDEV = 0.0014

Visual  [3.0, 3.1, 3.0, 3.1, 3.0, 3.1, 3.1]  STDEV = 0.0535

So, rounding to fewer significant figures definitely increases the standard deviation/spread of the data in most cases, and is detrimental in itself to the analysis of visual data. I should also point out this is a problem with current visual charts, since comp stars are rounded to the nearest tenth; that further contributes to the "artificially" lower accuracy and precision of visual estimates by unintentionally introducing up to 0.05 magnitude of systematic error.
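The standard deviations quoted in both examples can be checked directly, e.g. with Python's statistics module (sample standard deviation, n-1 in the denominator):

    import statistics

    ccd    = [3.000, 3.049, 3.101, 3.148, 3.202, 3.251, 3.300]
    visual = [3.0, 3.0, 3.1, 3.1, 3.2, 3.3, 3.3]
    print(statistics.stdev(ccd), statistics.stdev(visual))     # ~0.1084 vs. ~0.1272

    ccd2    = [3.049, 3.051, 3.048, 3.052, 3.049, 3.051, 3.050]
    visual2 = [3.0, 3.1, 3.0, 3.1, 3.0, 3.1, 3.1]
    print(statistics.stdev(ccd2), statistics.stdev(visual2))   # ~0.0014 vs. ~0.0535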

Mike

 

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Artificial examples

Hi Mike,

[quote=lmk]
Take for example an illustrative set of data made by CCD to 3 decimal places and simultaneous visual reported to the nearest 0.1, say of a star which steadily declines from mag 3.0 to 3.3:

[/quote]

Calculating standard deviations on data showing a trend is usually only done after removing the trend.  So this is not a very good example.

[quote=lmk]

Similarly, if you took a more extreme case, where the actual value varies close to the middle of a 0.1 step, you get that large increase in std dev by rounding:

CCD [3.049, 3.051, 3.048, 3.052, 3.049, 3.051, 3.050]  STDEV = 0.0014

Visual  [3.0, 3.1, 3.0, 3.1, 3.0, 3.1, 3.1]  STDEV = 0.0535

[/quote]

Again this is not a very realistic example.  You have picked just the small range where your "Visual" data would get a larger standard deviation.  If the "true" value were 0.002 mag to 0.097 mag fainter, you would have:

CCD [3.051, 3.053, 3.050, 3.054, 3.051, 3.053, 3.052]  STDEV = 0.0014

Visual  [3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1]  STDEV = 0.0000

Your examples do not prove that fewer digits necessarily increase the standard deviation.  In the case of eta Aql it is obvious that the rounding errors do not influence the spread in the visual data very much.
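A small simulation makes the disagreement explicit: keep the same tiny real scatter and slide the true value across one 0.1 step, and the standard deviation of the rounded values swings between roughly 0.05 (mid-step, as in Mike's example) and 0 (near a step, as in Patrick's). The numbers are illustrative only:

    import math
    import statistics

    def round_half_up(v, step=0.1):
        # plain decimal rounding to the nearest step (avoids Python's banker's rounding)
        return math.floor(v / step + 0.5) * step

    scatter = [0.000, 0.002, -0.001, 0.003, 0.000, 0.002, 0.001]   # ~0.0014 mag spread
    for centre in (3.049, 3.080, 3.100):
        exact = [centre + d for d in scatter]
        rounded = [round_half_up(v) for v in exact]
        print(centre,
              round(statistics.stdev(exact), 4),     # always ~0.0014
              round(statistics.stdev(rounded), 4))   # ~0.05 mid-step, 0.0 otherwise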

Patrick

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Rounding causes many problems

[quote=wlp]

Calculating standard deviations on data showing a trend is usually only done after removing the trend.  So this is not a very good example.

[/quote]

Sure it is; I just left out the linear fit correlation coefficient for the two examples, which accounts for the trend. Doing so, for the CCD data R^2 = 1.000 and for the Visual R^2 = 0.941, so we go from an essentially perfect linear fit with no scatter to a poorer fit, simply by rounding to one digit.

[quote=wlp]

Again this is not a very realistic example.  You have picked just the small range where your "Visual" data would get a larger standard deviation.  If the "true" value were 0.002 mag to 0.097 mag fainter, you would have:

CCD [3.051, 3.053, 3.050, 3.054, 3.051, 3.053, 3.052]  STDEV = 0.0014

Visual  [3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1]  STDEV = 0.0000

Your examples do not prove that fewer digits necessarily increase the standard deviation.  In the case of eta Aql it is obvious that the rounding errors do not influence the spread in the visual data very much.

[/quote]

Right, I picked that to illustrate what can happen when you round and the true values tend to lie midway between steps. The scatter becomes quite large. In the other cases, where the true values cluster around the nearest 0.1, then of course all the rounded values will be the same, and the std dev will be zero.

In real life you cannot have a std dev of zero for actual observations, so that shows the rounding creates an unrealistic representation of the real phenomenon too.

The above are just examples. The proof comes from information theory: rounding ALWAYS adds a standard error in the approximate amount of (step size)/sqrt(12) (~0.029 for 0.1 step rounding) [1], so it is an unavoidable detriment that harms data analysis, unless you absolutely cannot avoid rounding for some reason.

BTW, Bennett was a colleague of Claude Shannon, the inventor of information theory. And, in case you are wondering what that has to do with observations of stars: it's quite the equivalent of A/D conversion. Just replace the analog signal with the actual brightness of the variable, and the digital n-bit representation with the measured value, to whatever digits of precision you like (1 decimal digit = log2(10) ~ 3.322 bits). The rounding error formula above applies equally, and has the same meaning in both cases.
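Bennett's (step size)/sqrt(12) figure is easy to reproduce numerically; a sketch with a uniform test signal:

    import math
    import random
    import statistics

    random.seed(1)
    step = 0.1
    signal = [random.uniform(3.0, 4.0) for _ in range(100_000)]
    residuals = [round(v / step) * step - v for v in signal]
    print(statistics.pstdev(residuals))   # ~0.0289
    print(step / math.sqrt(12))           # 0.0289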

Mike

1. W.R. Bennett, "Spectra of Quantized Signals," Bell System Technical Journal, Vol. 27, July 1948, pp. 446-471.

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Rounding error formula

[quote=lmk]
The proof comes from information theory, rounding ALWAYS adds a standard error in the approximate amount of (step size)/ sqrt(12), (~ 0.029 for 0.1 step rounding)
[/quote]

This formula is not applicable here, because it makes the assumption that bins are created in which the real variation is equal to the step size.  This is not true in the case of Eta Aql because the real variation inside the bins Brad has chosen is in most cases smaller than 0.1 mag.  "Perfect" observations inside such a bin rounded to 0.1 mag can have a standard deviation of zero.  Rounding may therefore decrease the total standard deviation inside the bins because it hides the real variations.  The formula above is a measure for the real error, not for the standard deviation of real observations.

The reason why the standard deviation of the visual data is only 3.3 times that of the CCD data, as Brad calculated, is that the star is not constant inside a bin, and this increases the standard deviation of both visual and CCD data by a similar amount.  For the CCD data this is a relatively substantial amount, but for the visual data it is not.

Patrick

Affiliation
American Association of Variable Star Observers (AAVSO)
Rounding error Formula

Thank you Patrick. That is exactly what I was describing. Not that the real error is less (it isn't), but calculating the standard deviation from the larger visual observation magnitude bins underestimates the real error of Visual observation compared to the smaller CCD V magnitude bins, and therefore biases the comparison in favor of visual means.

As I stated, I was trying to do a quick and dirty calculation and avoid either writing an R script or slogging through the spreadsheet manipulations to de-trend the data using the piece-wise straight-line interpolations between bin means. Going through this made me realize that I see no way to get residuals from such a model in VStar, and it would be a very handy tool. Now I am going to have to slog through the calculation to see if the result does get near the 5x ratio I expected. However, that must wait until after I finish grading exams, which I must return to right now.

Brad Walter, WBY

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
Acceptable Error Range

Hi Brad & Others,

This topic has generated a great amount of talk - and encouraged me to later download the AAVSO manual on making these observations.  It will be interesting to see how it matches what we do in the south.

As part of another exercise I've been trying to resolve a major problem in this field and was looking for BV measures of some northern stars.  But I couldn't find any amongst the brighter Miras.  OK, CCDs have trouble with bright stars, but I see V-only measures of quite a few, and one or two BVRIJH sets from an observer I know.

So my question is simple - can some of the people involved in this topic give me the names of some stars so I can check whether the error I'm examining is from a few observers or related to the software in use.

For what it's worth I think the only error estimates of value come from examining the spread in the values of one or two check stars over a season's observing.  I'm not clear if this information can be compiled from the database. I always delete the error estimate columns from any of my analyses and rely on the reputation of the observer.

When I changed from UBV photometry using PEP to BVRI using an ST6B I found it difficult to achieve the same degree of accuracy.  Instead of ~1% it seemed to be difficult to do much better than 3%.  Maybe the equipment and techniques are better now - this was around 2000 when I finally decided to concentrate on unfiltered work for the CBA for a few years.

But if someone can help with areas where I can find these measures - B and V - I'd be grateful.

Regards, Stan

Affiliation
American Association of Variable Star Observers (AAVSO)
Northern Multicolor Photometry

Stan

Are you only interested in Miras, or just any multicolor photometry? Are there any magnitude range limits, variability limits or color ranges in particular?

If all you are after is multicolor photometry, one star that leaps to mind is XZ Cet. There are 3 groups of quite dense multicolor photometry. It was the subject of a campaign focused on improving photometry. It isn't a northern star at -16:35 dec, but many people from the North observed it. See attached.

Another would be Beta Lyr. It has quite a bit of mostly 3-color photometry after JD 2453500, and particularly after 2455500.

Delta Cep after 2454500 has some reasonably dense multicolor coverage in BVR, but also some U and I thrown in.

Of course there is Eps Aur after JD 2445000. A note of caution: there are both Johnson and Cousins R & I bands.

Then there is always the (now infamous in this thread) Eta Aql, after about 2455000, in 4-color.

If redder stars are helpful there is Eta Gem, an SRA and EA star. It has some 4-color after JD 2445750.

mu. Cep (the dot is important, to separate it from MU Cep) is an SRC with quite a bit of BV from JD 2440000 to 2445000 and a substantial amount of 4-color after JD 2445750.

R Lyr has quite a bit of B and V after JD 2456400.

There was a T Tauri star multicolor campaign at the end of 2013 that might have generated some useful data.

Here's a suggestion: ask AAVSO for a list of northern stars that have both more than 100 B and more than 100 V observations after JD 2451000. If they can't do that kind of search, ask for the two lists separately. Then download VStar. It is an easy tool for accessing the AID, viewing the light curves, and doing many different kinds of analysis on them. By the way, the default drop-down download box in VStar is the source of most of the stars in the list above except XZ Cet and the T Tauri campaign. You will have to open the plot control icon and select additional bands besides Visual and CCD V for display; the default only shows those two bands, but it gets all of them for which there is data. You can then save the downloaded files if you want. Besides, the "maven of VStar" is in your hemisphere in South Australia, and there will be another AAVSO class in using VStar in July. It isn't hard to use, and in the meantime you can always get help on the VStar forum. You may already have your favorite analysis program, like Peranso, but if you don't, VStar is free and open source.

Hope this helps

Brad

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Correct way

[quote=wlp]

[quote=lmk]
The proof comes from information theory, rounding ALWAYS adds a standard error in the approximate amount of (step size)/ sqrt(12), (~ 0.029 for 0.1 step rounding)
[/quote]

This formula is not applicable here, because it makes the assumption that bins are created in which the real variation is equal to the step size.  This is not true in the case of Eta Aql because the real variation inside the bins Brad has chosen is in most cases smaller than 0.1 mag. 

[/quote]

OK Patrick, the actual assumption in the rounding error formula is that the bin size is substantially LESS than the real variation. Once the two are about equal, or the bin becomes larger than the variation, the formula will not be accurate. But that's not a really practical regime anyway: if your true variation is less than the bin size, it's not really useful to be making such measurements! E.g., nobody should do visual observations with a bin size of 0.1 on a star which varies by millimagnitudes. Meaningless.

The proper way to do this analysis on eta Aql is to treat the variable star as a continuously varying analog signal of approx. 1 magnitude, sampled to either ~4 bits for the visual case or ~10 bits for the CCD case. Exactly equivalent to A/D conversion. The same theory and formulas will then apply: e.g., ~24 dB SNR for the visual case, ~60 dB SNR for the CCD, ~0.029 standard error in the visual data, ~3x10^-4 standard error in the CCD. Since the amplitude of the variation is at least 10 times the bin size in the worst case (the visual), this is the proper and accurate way to analyze them.
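Taking the A/D analogy at face value (illustrative numbers only), the bit depths and quantization SNRs work out as follows:

    import math

    amplitude = 1.0             # ~1 mag full range, as for eta Aql
    for step in (0.1, 0.001):   # visual vs. CCD reporting resolution
        bits = math.log2(amplitude / step)
        snr_db = 6.02 * bits + 1.76      # standard quantization-SNR formula
        sigma_q = step / math.sqrt(12)   # rounding error
        print(f"step={step}: {bits:.2f} bits, ~{snr_db:.0f} dB, sigma_q={sigma_q:.5f}")
    # ~3.3 bits / ~22 dB and ~10 bits / ~62 dB: in the ballpark of the
    # ~4 bit / 24 dB and ~10 bit / 60 dB figures quoted above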

Mike

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Same and different things

[quote=lmk]

My first example, of observations following a trend line closely, then rounded so they do not follow so closely, is a more realistic model of actual vso observations.

[/quote]

Mike, that example is exactly the same as what Gary did: randomly choosing points between two extreme values.

[quote=lmk]

The actual assumption in the rounding error formula is that the bin size is substantially LESS than the real variation.

[/quote]

I was talking about the variation inside a bin, not the total variation.

Patrick

Affiliation
American Association of Variable Star Observers (AAVSO)
Excel model of rounding and truncation

Hello All

I decided that these sample sizes were too small and might be special cases, so I modeled this in Excel with 100 data points, i.e., observations.  I used the random number generator to generate 100 values between Vmin and Vmax.  These observations were to three digits, with a mean of mag 4, although the mean is not important.  Excel recalculates the random function each time you run the spreadsheet or recalculate.  So the base case I used was 3.5 mags to 4.5 mags, a typical representation of the light curves that I see and probably an upper bound to the scatter that one would get: Eta Aql, R Cru, etc.  Many others are the same.  This gave a mean of 3.963 with a Std Dev of 0.308 mags.

Then I rounded these inputs to 1 digit, like a visual observer would submit.  The mean was 3.961 with a Std Dev of 0.310 mags.  Pretty close to the 0.308.

Then I truncated the original 3-digit inputs to 1 digit.  The mean was 3.915 with a Std Dev of 0.303 mags.  The delta in means for truncation is what one should expect. Again pretty close to the 0.308.

So the means are only a check on the normalcy of this data, and have no other purpose.  However, the Std Devs of 0.308, 0.310, and 0.303 for the cases of 3-digit submission, 1-digit rounding, and 1-digit truncation show very small differences.

I also ran special cases where Vmin and Vmax were 3.99 and 4.01, and in this case with 1-digit rounding the std deviations were 0.005 for the 3-digit data and obviously 0.000 for the rounded data, as all data gets rounded to 4.0.  So in this case, the rounding reduces the std deviation.

I can try other cases and increase the sample size, but I think the result will be the same.  I am convinced that there are small differences due to the rounding or truncation, but that they are small compared to the data that we are using.  The random function was not corrected for its distribution.
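The same experiment is easy to rerun outside Excel; a sketch using Python's uniform generator:

    import random
    import statistics

    random.seed(0)
    obs = [random.uniform(3.5, 4.5) for _ in range(100)]   # "3-digit" observations
    rounded = [round(v, 1) for v in obs]                   # visual-style reporting
    truncated = [int(v * 10) / 10 for v in obs]            # truncation instead

    for label, data in (("3-digit", obs), ("rounded", rounded), ("truncated", truncated)):
        print(label, round(statistics.mean(data), 3), round(statistics.stdev(data), 3))
    # the stdevs all land near sqrt(1/12) ~= 0.289; truncation drags the mean ~0.05 low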

 

Gary

WGR

Affiliation
American Association of Variable Star Observers (AAVSO)
Not applicable example

[quote=WGR]

I decided that these sample sizes were too small and may be special cases, so I modeled this in Excel, with 100 data points, ie observations.  I used the random number generator to generate 100 values between Vmin and Vmax. 

[/quote]

Hi Gary, interesting test, but I don't think it applies to our cases of rounded observations. I'm not sure exactly how Excel generates its random numbers, but your mean and std dev are consistent with those from a uniform continuous distribution between two limits (a,b). Theoretically, the std dev is given by sqrt( (b-a)^2 / 12 ) = 0.289 for your example, which is fairly close to the 0.308 you got.

Now, given the "data" points here are just randomly generated uniformly within this range(?), rounding the values, on the average up or down a small amount, but with a net difference of zero, will just regenerate another similar uniform distribution, and the std dev would not be expected to change.

Regarding the Excel results, by a quick t-test, the difference between the calculated mean and the true mean is well within a 95% confidence range (p=0.382, df=98), but the difference between the calculated and expected std dev is 6.6%, which is borderline, so Excel may not be generating a really good uniform distribution? But this is nit-picking.

My first example, of observations following a trend line closely, then rounded so they do not follow so closely, is a more realistic model of actual VSO observations.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Correct Example

Hello Michael

You are correct, the random number generator in Excel is in fact a uniform distribution.  There is a Box-Muller correction that I have used for error propagation in another project.  I am not sure that this will make a big difference, but when I get a couple of days, I will make this correction and see what the effect is.  This should make model values near the mean occur more often and those near the limits occur less often.  I don't know; the uniform distribution sounds like a worst case compared to the normal distribution.  Let's see what the numbers say.
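For reference, the Box-Muller transform itself is only a couple of lines; a sketch (the 0.16 mag spread is a hypothetical stand-in):

    import math
    import random
    import statistics

    def box_muller(mu=0.0, sigma=1.0):
        """One normal deviate from two uniform deviates (basic Box-Muller)."""
        u1 = 1.0 - random.random()   # map [0, 1) to (0, 1] so log() never sees zero
        u2 = random.random()
        z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
        return mu + sigma * z

    obs = [box_muller(4.0, 0.16) for _ in range(100)]
    print(statistics.mean(obs), statistics.stdev(obs))   # ~4.0, ~0.16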

 

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Uniform vs Normal Distribution

Hello All

Mike questioned the validity of using a random variable with a uniform distribution (versus a normal distribution) to measure the difference between reporting the data to three decimal points versus one decimal point.  I agreed that this might make a difference.  I redid the analysis, using the Box-Muller method to generate a random variable stream with a normal distribution.

For the case of the values being between 3.5 and 4.5 mag, like the Eta Aql example, the mean value of 100 three-digit random observations was 4.007 mags.  The standard deviation for the three-digit case was 0.163 mags, which is more like what would be expected for a random variable with a normal distribution.

When this same data was rounded to one digit, the standard deviation was 0.164 mags.

When the 3-digit data was truncated rather than rounded, the standard deviation was 0.130 mags.  So it looks like one could support a 0.03 mag reduction in standard deviation by truncating the data versus the 3 digits.

While the above analysis uses 100 data points, the spreadsheet allowed these results to be recalculated. This was done for 30 trials, each of 100 data points.  These trials were essentially the same as the above results within 0.010 mags, three sigma.

So I cannot make a case for rounding or truncating the three-digit data to one digit affecting the mean or the standard deviation of light curves in the AAVSO by any more than 0.01 mags.

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
A more realistic example

[quote=WGR]

Mike questioned the validity of using a random variable with a uniform distribution (versus a normal distribution) to measure the difference between reporting the data to three decimal points versus one decimal point. I agreed that this might make a difference. I redid the analysis, using the Box-Muller method to generate a random variable stream with a normal distribution.

So I cannot make a case for rounding or truncating the three-digit data to one digit affecting the mean or the standard deviation of light curves in the AAVSO by any more than 0.01 mags.

[/quote]

Hi Gary, Thanks again for your analysis. I was a bit puzzled why you are coming up with essentially no change in std dev of your simulations, between the precise values, and the 0.1 rounded ones. Especially since theory would expect a noticeable additional error of 0.029 for 0.1 step rounding. So, I was thinking maybe this test is still not a realistic example for VSO observations. 

Because you are just creating a random set of data around a constant value. Is there any real "information" in this? So, to test my idea, I ran a similar set of simulations as you did in Excel, but instead of generating random values around a constant, I generated a linear trend from magnitude 3.5 to 4.5, just as in the lightcurve of eta Aql. Then, I added random variations about this trendline of max amplitude +/-0.05, to closely simulate medium or typical accuracy CCD observations. I did 20 points to keep things simple. I evaluated the overall error in the data by calculating the correlation coefficient (R^2) in the linear trend line in Excel.

I ran it many times, with different random values, and in EVERY case the R^2 drops from ~0.99 to ~0.98 between the exact values and the 0.1 rounded ones. This is consistent with the initial small example I gave earlier in this thread. Now, this is not a really big change, but we are using pretty small random errors anyway. The key point is, when analysing the error around trending data, the correlation does consistently become worse when you round to 0.1.

You might want to verify this result with your 100 points, but I think it is correct, since I ran it many times with consistent results. So, I think the 0.029 rounding error from information theory does apply to our cases here, provided we have "real information" on which we are doing our quantization.
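A sketch of that simulation (20 points, a linear fade from 3.5 to 4.5, +/-0.05 uniform noise, ordinary least-squares R^2):

    import math
    import random
    import statistics

    random.seed(3)
    n = 20
    xs = list(range(n))
    exact = [3.5 + x / (n - 1) + random.uniform(-0.05, 0.05) for x in xs]
    rounded = [math.floor(v * 10 + 0.5) / 10 for v in exact]   # nearest 0.1

    def r_squared(xs, ys):
        """R^2 of a straight-line least-squares fit."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        ss_res = sum((y - (my + sxy / sxx * (x - mx))) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - my) ** 2 for y in ys)
        return 1 - ss_res / ss_tot

    print(r_squared(xs, exact), r_squared(xs, rounded))   # ~0.99 vs. ~0.98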

Mike

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Persistence

Mike,

Since you persist...

[quote=lmk]

... theory would expect a noticeable additional error of 0.029 for 0.1 step rounding.

[/quote]

The paper you referred to does not say that.  It says the standard deviation of the real variation inside a box with height 0.1 mag is 0.029 (assuming the variation shows a linear trend).  The standard deviation of the rounded observations in that box is zero.  So rounding does not make the observations look bad.

[quote=lmk]

The key point is, when analysing the error around trending data, the correlation does consistently become worse, when you round to 0.1

[/quote]

Sure, the correlation of the real variations inside the box mentioned above is 1 and for the rounded observations it is 0.  But correlation is not standard deviation, and that's what we are talking about here.  I don't remember having read a paper where one tests whether the observations of a variable star are following a linear trend. And when the variation is large enough the difference would be small anyway, as your test shows.

Patrick

Affiliation
American Association of Variable Star Observers (AAVSO)
CCD vs Visual Precision

OK guys, this is my last post on this topic. If after this someone is still not convinced that CCD observations are much more precise than Visual observations, I have nothing more to say other than: run a bunch of data yourself and prove this conclusion wrong. Don't claim it is wrong; don't hypothesize. Go get a bunch of data, work through the calculations, and show the rest of us.

The response is very long and there are several supporting spreadsheets and images from VStar, so I will only include the bottom-line conclusion in the body of this post. The full analysis is in the attachments.

Conclusion:

So, after all of this I come to the same conclusion from the data that I did after my quick, approximate analysis: CCD observations have much higher precision than visual observations. Now, however, I conclude that my original estimate of about 3x underestimated the relative advantage of CCD precision over visual.

Brad Walter, WBY

Sorry I missed one important spreadsheet with the revised comparison of CCD vs Visual Precision of a 0.05 phase bin of Eta Aql. It is now attached. 

Affiliation
American Association of Variable Star Observers (AAVSO)
Comp error impact on submissions

When using VPHOT and only one comp star, the error estimate created in the VPHOT AAVSO submission report only reflects the SNR error, not the comp star error.  But if the file is transformed by TA, it adds the comp star error, which often swamps the SNR error.  Which is the appropriate way to report?

Gordon

Affiliation
American Association of Variable Star Observers (AAVSO)
Errors combine

[quote=mgw]

When using VPHOT and only one comp star, the error estimate created in the VPHOT AAVSO submission report only reflects the SNR error, not the comp star error.  But if the file is transformed by TA, it adds the comp star error, which often swamps the SNR error.  Which is the appropriate way to report?

[/quote]

Well, from statistics, errors combine in quadrature (the square root of the sum of their squares), PROVIDED (1) both errors are purely random and (2) the errors are independent of each other. It's safe to say the SNR error of your instrument and the comp star error are independent, and SNR error is random by definition, so that just leaves: is the comp star error purely random, or does it have some systematic component? Then you can determine whether combining them in quadrature is correct or not.

However, if one "swamps" the other, regardless of whether it is random, systematic or a mixture, the combined error will be essentially the same as the larger error anyway.
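In code form (the error values are hypothetical):

    import math

    def combine_in_quadrature(*errors):
        """Combine independent, purely random 1-sigma errors."""
        return math.sqrt(sum(e * e for e in errors))

    snr_err = 0.005       # hypothetical high-SNR measurement error
    comp_seq_err = 0.05   # hypothetical comp star sequence error
    print(combine_in_quadrature(snr_err, comp_seq_err))   # ~0.050: the larger term dominates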

Mike

 

Affiliation
American Association of Variable Star Observers (AAVSO)
WBY

Gordon, 

Something I want to be sure I understand. When you say the comp star error, I think you are referring to the error of the sequence magnitude of the comp star, not the observer's measurement uncertainty for the comp star. The observer's measurement uncertainty of the comp star should be included in the uncertainty the observer supplies to TA for his standardized magnitude measurement.  I am pretty sure I understood what you meant, but I want to be sure.

Thanks. 

Brad

Affiliation
American Association of Variable Star Observers (AAVSO)
my meaning of comp star error in earlier note

Brad,

Yes, you're correct.  I should have been clearer.  In my earlier message I meant the comp star sequence error (i.e., as specified in the photometry table).  I also agree the observer's comp star measurement error should be included.

I think the problem is we probably need both error estimates - i.e. with and without the sequence error.  When measuring for fine detail - like looking for a transit or small (<0.05) trends - the most important factor is getting high SNR (e.g., SNR > 200).  Somehow our reports need to show this (along with the comp star id).  If we add in the comp star sequence error it may hide the fact that we had high SNR measurements.

But if one is asking what the best true measurement of the magnitude is, the error estimate for the comp star sequence needs to be included.  Just use the simple example of a "perfect" measurement - it would still be off by the comp star error.

And the problem gets complicated by history - what is in the AAVSO database today?  I'm sure it's a mix.  I've used both AIP4WIN and VPHOT for 8 years with one comp star; both leave out the error from the comp star sequence.  If one does ensemble photometry I'm not clear what either tool does.  And I don't know what IRAF or MaxIm compute and put in their submission report.

The best solution might be to report both errors, but that would require a potentially nightmarish change to submission formats, etc.

Not easy...

Gordon

Affiliation
American Association of Variable Star Observers (AAVSO)
Different error estimates - request for professional input...

One of the current campaigns highlights the problem of the alternative error measurements.  Alert Notice 510 requests observations of the symbiotic nova ASAS J174600-2321.3, transformed in B, V, and I.  Clearly the scientists are looking for measurements as accurate as possible - probably with errors < 0.02.  But the minimum error in the comp star list for the I filter is 0.086, and most are > 0.12.  And most V and B comp star errors are greater than 0.03 and 0.06, respectively.

Observations for this campaign need high-SNR images and transformation to get the last 0.01 or 0.02 of improvement in precision.  But if we include comp star sequence errors of 0.03 - 0.09, the scientists won't know the quality of the data they are working with.

But, as I said in my previous note, if measuring for absolute accuracy, one needs to know the error in the comp star sequence.  Which keeps leading me back to the potential need for both.

I'd be interested to hear from the professional astronomers - how do you incorporate the AAVSO error estimates in your analyses?  Are we chasing a problem you really don't care about?  

Gordon

Affiliation
American Association of Variable Star Observers (AAVSO)
Comp Star Error

Gordon,

Thanks for the reply. I thought I understood but sometimes understanding depends more on the preconceptions of the reader than the intentions of the writer. 

I agree that the issue of which error to report is complex, but I contend that unless you are going to report both all the time from now on for transformed stars (which could be confusing), it is critical to make a decision and stick with it. That doesn't mean flip a coin and go forward, but I bet you could get all of the key people in a room, turn off the phones, and reach a decision in a day, particularly if they have a list of potential alternatives in advance to investigate and think about.

Personally, I am continuing to use TA and will add the sequence magnitude error in the notes field, so users don't have to look up every single sequence table I use if they want to pull out the sequence magnitude error.  Looking up a couple of thousand individual tables sounds terribly manual and time-consuming; I start to groan just thinking about it.

Another idea, worth what you pay for it, is to add the comp star sequence magnitude and its uncertainty from the AID as fields in the records of transformed observations. That way removal can be automated. This is probably one of those things that is easy to write and monumental to implement.

Brad

Affiliation
American Association of Variable Star Observers (AAVSO)
comp star error

Hi Gordon,

As Mike says, errors/uncertainties add in quadrature, if they are random.  The problem is that the comp star error, as given in the photometry table with VSP, should often be considered a systematic error.  All of your measures in a time series, for example, use an identical value for the comp star standardized magnitude, so any true error in it remains constant.  In fact, as the photometry in VSP gets improved, a researcher can go back and reprocess your submitted observation to reflect the better photometry, but can't change the submitted error value, so it becomes less and less relevant to the actual observation.

Therefore, the main sources of uncertainty for a single comp star are:

- the photometric uncertainty of measuring the target (mostly, but not exclusively, Poisson)

- the photometric uncertainty of measuring the comp star.  Note that forming a differential measurement (V-C) actually includes both the photometric uncertainty of the target and the comp.

- the random uncertainty in the standardized magnitude for the comp star

- the systematic error in the standardized magnitude for the comp star, which should be pretty much the same for all stars in the field if the photometry comes from a single catalog.

If you are doing an ensemble, the random uncertainty in the standardized magnitudes for the comp stars adds in quadrature automatically, but the systematic error (offset from the true standard system) remains.  Since the catalogs don't separate out the random vs. the systematic errors, it is hard to add the standardized magnitude errors into the solution in any reasonable fashion.
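To see the effect, here is a toy Monte Carlo sketch (all numbers invented; this is not anything VSP or VPHOT actually computes):

import math, random

random.seed(1)
true_offset = 0.03    # systematic catalog offset shared by all comps (invented)
sigma_rand = 0.02     # random error of each comp's standardized magnitude (invented)

for n_comps in (1, 4, 16):
    trials = []
    for _ in range(20000):
        # Zero-point error of an ensemble: mean of the comps' individual errors
        errs = [true_offset + random.gauss(0.0, sigma_rand) for _ in range(n_comps)]
        trials.append(sum(errs) / n_comps)
    rms = math.sqrt(sum(t * t for t in trials) / len(trials))
    print(n_comps, round(rms, 4))

# The printed RMS approaches the 0.03 systematic floor as N grows; only the
# random part averages down (as 1/sqrt(N)).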

So what would I recommend?  For TA, since it is primarily set up to do one comp star, I'd not include the comp star standardized magnitude error.  I think a researcher is going to be far more interested in the measurement error, and in many cases, everyone will use the same comp star anyway.  For time series, this will be a more accurate report.  For ensemble, right now the systematic offset will be in the few hundredths range, and in the future will continue to be refined, and the random uncertainties are already included in the averaging process of the ensemble.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Error computation in TA

TA takes the conservative approach, combining errors in quadrature, with one simplifying assumption: that the comp star's instrumental error is correlated with the target's and therefore can be set to 0 and ignored.

But the transform equation still leaves you with lots of terms, each with its own error. Here is the transform equation for V:

Vs = vs + (Vc-vc) + Tv_bv * ((Bs-Vs)-(Bc-Vc))

- The errors on Vc and Bc are those quoted for the comp star in the photometry table.

- vs carries the target's instrumental error and vc the comp star's instrumental error. Note that the error for vc is taken as zero, per our simplifying assumption.

- Vs and Bs are the transformed magnitudes we are computing. Note that this equation is solved simultaneously with the equations for the other filters.

-  And don't forget we have an error term for Tv_bv.

So, writing rQ for the error in a term Q, and s for the color term (Ys-Zs)-(Yc-Zc) (i.e. (Bs-Vs)-(Bc-Vc) in the V example above), the code ends up looking like:

rXs = sqrt( pow(rxs,2) + pow(rXc,2) + pow(rxc,2)      // target instr., comp sequence, comp instr. (zero)
          + pow(Tx_yz*s,2) * ( pow(rTx_yz/Tx_yz,2)    // coefficient error term
          + (pow(rYs,2)+pow(rZs,2)+pow(rYc,2)+pow(rZc,2)) / pow(s,2) ) );   // color error term
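For anyone who wants to experiment with the propagation, here is a minimal runnable restatement in Python (my sketch of the expression above, not TA's actual source; the sample numbers are invented):

import math

def transform_error(r_xs, r_Xc, r_xc, T, r_T, s, r_Ys, r_Zs, r_Yc, r_Zc):
    # Propagate errors in quadrature through Xs = xs + (Xc - xc) + T*s,
    # where s is the standard color term (Ys-Zs)-(Yc-Zc) and r_Q is the error in Q.
    color_var = r_Ys**2 + r_Zs**2 + r_Yc**2 + r_Zc**2        # squared error of s
    ts_var = (T * s)**2 * ((r_T / T)**2 + color_var / s**2)  # squared error of T*s
    return math.sqrt(r_xs**2 + r_Xc**2 + r_xc**2 + ts_var)

# Invented numbers: target instr. error 0.010, comp sequence error 0.030,
# comp instr. error 0 (TA's simplifying assumption), T = 0.05 +/- 0.01, s = 0.4.
print(transform_error(0.010, 0.030, 0.0, 0.05, 0.01, 0.4,
                      0.010, 0.010, 0.030, 0.030))   # ~0.032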

I'd be happy to entertain a different computation for the resulting transform error, but I need a theory behind it.

The transform process is about creating an Accurate result at the cost of some Precision. Precise but inaccurate results are of less interest; hence the AAVSO campaign to have everyone transform their data.

George

TA author

Affiliation
American Association of Variable Star Observers (AAVSO)
TA Error calculation

George, my apologies; I was not sufficiently specific when I referred to the comp star error. I meant the error of the comp star's sequence magnitude. I have been submitting the TA error because it is the total error of my measurement relative to the standard system.  However, given Arne's comments, would it make sense to give a second error term excluding the sequence magnitude error? Of course, you can always get the sequence magnitude error from the sequence table. Given that, the total error, and the transformation expression, plus knowing whether the sequence magnitude error was included as random independent error or in some other fashion (or just the error propagation expression), you can back out the sequence magnitude error.
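If the sequence error went in as an independent quadrature term, backing it out is one line of arithmetic; a minimal sketch (numbers invented for illustration):

import math

def remove_quadrature_term(total_error, sequence_error):
    # Back an independent, quadrature-combined term out of a total error.
    return math.sqrt(total_error**2 - sequence_error**2)

# e.g. a reported total of 0.032 with a 0.030 sequence error leaves ~0.011
print(remove_quadrature_term(0.032, 0.030))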

I strongly oppose the idea of different observers submitting uncertainties calculated using different methods for transformed data. There needs to be consistency of method: probably one method for single-comp-star differential photometry, and others, as compatible as possible, for ensemble and all-sky.  If not, then there is no way for someone using the data to back anything out or add anything in, because there is no way of knowing what the starting point is.  Before TA, I used only my raw instrumental magnitude uncertainties for target and comp plus the transformation coefficient error, and did not include the error of the sequence magnitude. Now I am using the error calculated by TA and thought that was the new AAVSO method.  I don't care which it is, but it should be one or the other. There is already enough difference in how people calculate the instrumental errors that feed into the error propagation expression. From the discussion in this topic, I have the impression that the door is now open to "different strokes for different folks," and that will lead to the mess all of this hard work was supposed to eliminate. We need a clear edict from the AAVSO - "DO IT THIS WAY" - for AAVSO submissions. If I am wrong, please tell me how, so that I can stop worrying about this. Sorry if I sound dogmatic, but I think this is important to the usefulness of AAVSO data.

George, I am curious about your statement that the uncertainties of the target and comp instrumental magnitudes are correlated. I assume you mean the uncertainties of the standardized instrumental magnitudes, since those are the input to TA. That makes perfect sense to me because, like Prego spaghetti sauce, "it's in there" already, either from the output of your photometry program or derived empirically from the standard deviation of multiple measurements.  Do I understand correctly? Otherwise, I would expect the bulk of the uncertainty of the raw instrumental magnitudes (-2.5*LOG(photons/EXPTIME)) of the individual stars to be uncorrelated, since they are different sources and Poisson noise is random.

Sorry, I hate the term "instrumental magnitude" because it can mean different things to different people in different contexts. I am never sure what someone is referring to without clarifying adjectives.

George, thanks in advance for the clarification, and thanks for TA.

Brad Walter

 

Affiliation
American Association of Variable Star Observers (AAVSO)
TA error calculation

Walter,

I agree that we need a clear policy from the AAVSO on how to report errors for transformed observations. I'm setting up a meeting with Matt and Arne to sort this out. TA will then implement the policy as soon as possible.

Cheers,

George

 

Affiliation
American Association of Variable Star Observers (AAVSO)
TA Error Calculation

George, 

Thank you very much for taking the initiative to thrash this out. This is important: TA promises to standardize the data we submit and make it even more useful to the scientific community than it is today.  I would hate to see that opportunity partially wasted by encouraging "different strokes for different folks," particularly after all the effort you have made to create a really good product. I can see both points of view, but I think it is critical to make a decision one way or the other and stick to it.

Perhaps Stella should be involved in this discussion, since she is now Director. It would be unfortunate to establish a policy only to have it change in the near future. This is just a suggestion, and suggestions are worth what you pay for them, if you are lucky.

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Thank you all for comments. I

Thank you all for the comments. I appreciate them as I try to become familiar with the process of CCD variable star observing and with obtaining the most accurate and precise data my system is capable of. Best regards.

 

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
A bit OT here (additional chart precision needed)

I think we got a bit off the main thrust of this thread with the minutiae of statistics, and I am sure I am partly to blame...

Thus, I would like to concentrate on the main issue at hand - identifying errors in reporting, and correcting them.

Mainly, the issue of comp star errors. First of all, with the completion of APASS over most of the sky, we now have an excellent set of comp stars for most objects. This is probably the greatest improvement in the history of the AAVSO - an order of magnitude or better improvement in accuracy! While it may still be insufficient for certain high-precision photometry, I think for the majority of CCD photometry the current accuracy of comp stars is very good.

Unfortunately, the improvement has not been as complete for visual observers, the principal reason being that magnitudes are still given to only one decimal place on the chart face. Yes, the full precision is available in the photometry table associated with each chart, but in reality I think 99% of visual observers just use the values rounded to the nearest 0.1 that are printed on the chart. Earlier in this thread I pointed out that rounding to the nearest 0.1 introduces an additional 0.029 magnitude error from the true value.  So most visual estimates have this additional error term embedded as an unavoidable consequence of comp star rounding.
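For reference, the 0.029 figure is the RMS of a rounding error treated as uniformly distributed over +/-0.05 mag, i.e. 0.1/sqrt(12); a quick check in Python:

import math, random

random.seed(0)

# Analytic RMS of a uniform error over [-0.05, +0.05]: width / sqrt(12)
print(0.1 / math.sqrt(12))          # ~0.0289

# Monte Carlo check: round many magnitudes to 0.1 and measure the RMS deviation
mags = [random.uniform(10, 15) for _ in range(100000)]
errs = [round(m, 1) - m for m in mags]
print(math.sqrt(sum(e * e for e in errs) / len(errs)))  # ~0.029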

Though this rounding error is not that large for visual work, and is near the smallest resolution that visual observers can detect, it intrinsically exists and adds some additional error to visual estimates. And it is fully avoidable by a very simple change: print the magnitudes to two decimals on the chart face!

I would like to strongly recommend that the chart team adopt this policy from this point forward, especially since the change is so trivial - likely one line of code in VSP!

Mike

 

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
No change needed

[quote=lmk]

Earlier in this thread I pointed out that rounding to the nearest 0.1 introduces an additional 0.029 magnitude error from the true value.

[/quote]

Mike, I don't want to dwell on this further, but part of the discussion indicated that your claim was unfounded.  So I don't think you should ask for a modification of VSP based on it.

Patrick

Hi all! It is curious to

Hi all!

It would be curious to learn the full error of the brightness determination for stars with all-sky cameras. There is a fuller formula for calculating the error (PDF file) in which the instrumental error and the absolute error are taken into account. Calculated with this formula, the result is +/-0.3 mag. Yet an observer (BISA) with 40 years of observing experience gets an error of +/-0.07 for an average over 5-15 frames. How can there be such an error?

All the best, Ivan