Topic is now error issues rather than "Unable to load images"

Affiliation
American Association of Variable Star Observers (AAVSO)
Fri, 01/01/2016 - 21:03

After successfully loading a series of images earlier today, it seems that I cannot load any additional images: the "select images to upload" button does not turn black when I choose the telescope from the list. I have tried both the wizard and the quick upload, with no success. Is there a temporary problem with VPHOT? By the way, I am glad that VPHOT is currently much quicker than in the past at displaying uploaded images.

Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
VPhot Upload OK

Gianluca:

I just uploaded one of my images with the quick upload and it was in my image list by the time I opened the list.

Not sure why you had this issue. Have you tried again?

YES, the queue is faster!!  ;-))

Ken

Affiliation
American Association of Variable Star Observers (AAVSO)
VPHOT vs AIP

Hi Ken

I tried to upload a couple hundred FITS files and got just over 30.

So I operated on the ~30 and resubmitted. VPHOT produced more scatter and a slightly brighter magnitude.

The VPHOT data in the plot below is the set with error bars; the AIP data has no error bars. Is there a possible reason the scatter should be more pronounced in VPHOT?

TRE

Affiliation
American Association of Variable Star Observers (AAVSO)
Scatter

Hello Gianluca

Was the aperture exactly the same, along with the sky rings, in both software packages? That will cause a difference in scatter, particularly with smaller apertures. What was the FWHM of the apertures? How different were the scatters?

Gary

WGR

Affiliation
American Association of Variable Star Observers (AAVSO)
VPhot vs AIP

Hi Ray:

I opened your two sets of data in VStar, as shown in the screenshot you attached.

1. With respect to the precision (error bars), I made a couple of observations. You indicated that you reduced one set of data with AIP, using a single comp, whereas you ran the VPhot set with an ensemble.

AIP apparently calculated no error terms since I do not see any error bars and no error shows in parentheses after the calculated magnitude in the details box. VPhot calculates an error term and this error is reported in parentheses after the calculated magnitude. The error for the VPhot magnitude was (0.003) for the particular data point I looked at. This is very good precision (may even be unrealistically low!). Again, no error is reported for the AIP magnitude. Therefore, the error bars are larger for the VPhot data, but certainly more realistic than 0!

2. The second question relates to the accuracy (bias) between the two sets of data. Since they were not reduced with the same comps, there is little reason to expect the VPhot and AIP magnitudes to agree exactly. Hopefully they would not be too different, and in fact they only differ by a few hundredths of a magnitude. I would propose that that is about as good as you can expect if you do not use the same comps. Remember that the error reported for most APASS comps is about 0.02 mag.

So, I think you should run the data set with AIP and VPhot using the same comps and check. Can you force AIP to give you errors? BTW, you probably do not want to add replicate sets of data to the AID. Best to delete the duplicates you have already submitted.

Questions, comments?

Ken

Affiliation
American Association of Variable Star Observers (AAVSO)
AIP and VPHOT

Thanks Ken

After I squinted at it a bit longer, I came to much the same conclusions, but I thought the VPHOT version had a bit more scatter. Again, a choice of comps. The S/N was around 300, so maybe a 0.003 error is not out of the question for each point. The scatter I attribute to choosing more comps that may have been further from the center than the AIP comp. Do more twinkling comps make more scatter?

Ray

Affiliation
American Association of Variable Star Observers (AAVSO)
VPhot Error

Ray:

Using only Poisson count for calculation of precision (1/SNR) is not very inclusive. Your real precision will not be that good. There are other sources of error.

In an ensemble, VPhot uses the estimated target magnitude from each and every comp in your sequence, and then calculates the mean and standard deviation of the target magnitude. This standard deviation and the Poisson error are combined to yield a realistic total error.

Ken

Affiliation
American Association of Variable Star Observers (AAVSO)
Error issues

[quote=MZK]

Using only Poisson count for calculation of precision (1/SNR) is not very inclusive. Your real precision will not be that good. There are other sources of error.

[/quote]

I wonder why everyone uses "Poisson" for the type of randomness here, because at the typical SNR used for most objects we measure (i.e., SNR of 300 = 90,000 quanta!) it is indistinguishable from a Gaussian (normal) distribution. Maybe in the old PEP days, with only a few "counts" coming in, there was a reason to use a Poisson distribution (where a few discrete counts might differ appreciably from the expected number), but the usage seems outdated.

And certainly, as Ken mentioned, the calculated error from 1/SNR, especially when SNR is so large, is just a "meaningless" absolute lower limit on the error, based purely on statistics. This is far below the true error caused by the many steps in the measurement process, which are due to both random and systematic effects. (The latter is probably dominant.)
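To put a number on that lower limit: the standard relation between SNR and magnitude error is err = 2.5*log10(1 + 1/SNR), the same relation VPhot quotes later in this thread. A quick sketch:

```python
import math

def snr_floor(snr):
    # Magnitude error from photon statistics alone (the "1/SNR floor")
    return 2.5 * math.log10(1.0 + 1.0 / snr)

for snr in (10, 50, 300):
    print(f"SNR {snr:4d} -> {snr_floor(snr):.4f} mag")
# SNR   10 -> 0.1035 mag
# SNR   50 -> 0.0215 mag
# SNR  300 -> 0.0036 mag
```

So at SNR 300 the statistical floor is a few thousandths of a magnitude, which is exactly why a reported 0.003 should raise an eyebrow: everything else in the measurement chain sits on top of it.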

[quote=MZK]

In an ensemble, VPhot uses the estimated target magnitude from each and every comp in your sequence, and then calculates the mean and standard deviation of the target magnitude. This standard deviation and the Poisson error are combined to yield a realistic total error.

[/quote]

This "Poisson error" is a fundamental mathematical property of any measurement and needn't be combined with the actual errors in physical measurement (in most cases it is vanishingly small). It should suffice to just report the SNR, counts, or something similar for the measurement.

Has this ensemble technique for calculating the error of the target been validated as the best? It seems like you would just propagate the errors of the comp stars in quadrature. Maybe it would be better to do linear interpolations between "nearest pairs" (one brighter, one fainter than the target) of the set of comp stars?
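The quadrature-propagation alternative could be sketched roughly as follows. This is only an illustration of the idea, not anything VPhot actually does, and all catalog errors and SNR errors below are made-up numbers:

```python
import math

def propagated_ensemble_err(comp_cat_errs, comp_snr_errs, target_snr_err):
    # Error of each per-comp target estimate: catalog error and SNR error in quadrature
    per_comp = [math.hypot(c, s) for c, s in zip(comp_cat_errs, comp_snr_errs)]
    n = len(per_comp)
    # Error of the unweighted mean of the n estimates
    mean_err = math.sqrt(sum(e * e for e in per_comp)) / n
    # Add the target's own SNR error in quadrature
    return math.hypot(mean_err, target_snr_err)

# Three comps with ~0.02-0.03 mag catalog errors, all stars well exposed
print(round(propagated_ensemble_err([0.02, 0.03, 0.02],
                                    [0.004, 0.006, 0.005],
                                    0.004), 4))  # 0.0146
```

Note that with 0.02-0.03 mag catalog errors the propagated result lands near 0.015 mag, nowhere near 0.003, which echoes the question raised later in this thread about how the comp errors get folded in.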

Just some thoughts here.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Ken and Mike

Well, I am sort of embarrassed to say that, after four stats classes 40 years ago, I don't remember the details. But I recall that Poisson's version was a counting approximation to the normal distribution, and normal works well when you have lots of data.

The thing that always bugs me is that the stated errors on the comps do not seem to be folded in. So if the smallest comp error is 0.05, how the heck can VPHOT compute an error of 0.003 for the observation?

Early on, I sometimes used the larger of the comp error or 1/SN as my observational error.

Another thing bugs me: given the capabilities of photometry, why would you use standards from a crowded cluster for all scopes with differing FOVs? It doesn't seem sensible to pick out stars from a noisy background in a large FOV and then use such crappy data to fit a straight line so you can obtain system transform information for an 80mm f/5 scope. It seems like the transform would be bogus. I suppose all the stars and background stars in a cluster are the same color and there is no dust to make new ones.

One of these days, I will get out the old mathematical statistics tome and figure out the real errors. Just too busy/lazy to do it so far. I imagine some bright-eyed grad student has written an example error calculation for stars in ApJ or somewhere. But then I would be too lazy to check whether the author and referees got it right.

Such a hard business. . .

Ray

Affiliation
American Association of Variable Star Observers (AAVSO)
Error Frustration

Hi Ray:

I hear your frustration. BTW, I'm not a statistician by any stretch of the imagination either.

I am more disturbed by the lack of reported errors in the AIP magnitudes that you submitted (if that is true?). Yes, it is also a little surprising (but not impossible) that the reported VPhot error was 0.003, even using the conservative error calculation. Keep in mind that VPhot is reporting a random error (essentially noise) rather than a measure of accuracy (bias from the "true" APASS comp magnitudes). Even that is not quite true, since both get included somewhat via the SD (do not quote me on this!). I just use my chemical analysis experience (I am a chemist) to convince myself that measurement errors with typical detectors are more like 1-10%, not 0.1-1%, for any measurement! Of course, we really need to talk flux rather than magnitude. The Kepler project is a clear exception, but my scope is not worth that much and my skies are a bit poorer!

Your comment about transformation brings up a whole new issue! My only conclusion is that the coefficients correct for a systematic error (bias) that is separate from random error (noise), so I think transformation is still valid. I just get disturbed by people who report precision as small as 0.001 mag; I do not believe that is a realistic representation of how reliable a measurement is.

Ken

Affiliation
American Association of Variable Star Observers (AAVSO)
calibration

Hi Ken

Good to talk to a chemist. In 2008 I seldom provided error estimates. It's a tough thing to do, and a wrong error bar is worse than none. I always felt a bit guilty about providing an estimated rather than a computed error bar. I figured that the data were about 10 times better than I could do visually, and I didn't know how to generate an error for visuals either. My magnitude variations were often less than the published errors for the comps. How could a person put an error bar on a measurement that was not at least as large as the error bar on the comp? I could see that some folks were providing error bars, and there were discussions as to how to compute them (S/N?), but folks at that time were not providing consensus guidance. I don't recall a CCD observing manual in 2008, but the error bars that I sometimes provided were sort of based on page 48 of the current manual. So a user could just look at the variation and eyeball an error bar. Works pretty well.

Discussions like this one from 2014, https://www.aavso.org/exposre-time-sn-and-mean-v-band-magnitude, have still not been given an answer. As I get closer to taking images again, I'll spring for Arne's $200 CCD course, hoping the subject gets proper treatment there.

Thanks to VPHOT, I can now at least let AAVSO generate error bars that the AAVSO seems to like, even if I don't know how they are generated. I think the 0.003 error provided by VPHOT may be the result of a calculation like that on page 49 of the current CCD manual, plus the S/N of 350 that VPHOT calculated. I don't know that early amateur CCDers fully calibrated their cameras (I didn't; I was just doing differential photometry). So I don't know if I believe anyone's error bars from the pre-ensemble era, unless they were doing a time series to arrive at an error, or the pixels were dominated by star photons so that S/N was easy. Is there a link with the math explaining how VPHOT handles S/N and uncertainty arithmetic?

I am eager to start transforming too. I will likely go through the motions with the software provided by the AAVSO, then do a sanity check like I did on my old 2008 data with VPHOT. It will be a while before the CCD is back up and running, checked out, and calibrated, so it is still a pipe dream. I also need to understand the current SOPs for "good" photometry and do software checkout again. I had to take it one step at a time in 2006; same deal a decade later. Getting it all together will be even tougher, given that the popular OSs are not happy unless they are married to the home cloud (MS or AAPL). We need a new workhorse OS that pays attention to the job at hand rather than spending 80% of the processor time communicating with the mother ship. Then all the software vendors need to create stuff that works on it. Scientific Linux?

Hope VPHOT gets even better before I need it. Make it upload 500 images at a time for starters.

Ray

P.S. As to 1% measurement errors, I suppose it depends on the instrument and the method. In a recent experiment, the kT noise conspired with the mechanical resonance of the optics to make attometer motions at the surface that swamped our background. And yes, we can count single photons, statistically. It is sort of now you see it, now you don't. Little quantum devils. So 1% is huge for some measurements, while 50% is normal for others.

Affiliation
American Association of Variable Star Observers (AAVSO)
Mess of errors

[quote=TRE]

I could see that some folks were providing error bars, and there were discussions as to how to compute them (S/N?), but folks at that time were not providing consensus guidance. I don't recall a CCD observing manual in 2008, but the error bars that I sometimes provided were sort of based on page 48 of the current manual. So a user could just look at the variation and eyeball an error bar. Works pretty well.

Discussions like this one from 2014, https://www.aavso.org/exposre-time-sn-and-mean-v-band-magnitude, have still not been given an answer.

P.S. As to 1% measurement errors, I suppose it depends on the instrument and the method. In a recent experiment, the kT noise conspired with the mechanical resonance of the optics to make attometer motions at the surface that swamped our background. And yes, we can count single photons, statistically. It is sort of now you see it, now you don't. Little quantum devils. So 1% is huge for some measurements, while 50% is normal for others.

[/quote]

Some further comments/observations on the error issue:

1. Providing error bars without precisely defining how they are calculated is next to meaningless, and it can even do more harm than good! Someone looking at an "undefined" error bar in data could jump to the conclusion that the range of the bar accurately reflects the possible range of the measurement, when in fact it is frequently derived in some ad hoc fashion.

2. Providing just a 1/SNR error bar grossly underestimates the true error, except in the cases of very low SNR. But you really cannot rely much on such "poor" data anyway, since the Poisson error in these very low signal cases means you really don't know whether the measurement reflects the underlying physics or just statistical quantum "noise" (particularly for a single low-SNR measure).

3. A good visual observer is likely able to provide much more meaningful and realistic error bars (though very few actually do so!). The human ability to intelligently estimate error when observing really exceeds the automated software approaches. A good visual observer, using good and properly spaced comp stars, should get 0.05 magnitude accuracy, which is not much different from what the inter-observer variations come out to be for all CCD observers. I know this has been controversial; some claim it is "unfair" to compare expert visual observers with average CCD observers, but on the other hand, it is much easier to become an expert visual observer than a CCD one!

I surely don't want to reignite the old visual vs. CCD battle; I just want to point out the major difficulties of using automated data collection and software methods alone to calculate meaningful error bars.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
error calculations

2. Providing just a 1/SNR error bar grossly underestimates the true error, except in the cases of very low SNR.

This should actually say "except in the cases of very high SNR."  Perhaps this was just a typo.

As to why AIP didn't produce error estimates in the data at the beginning of this topic, perhaps AIP was not provided with the information needed to calculate the SNR (gain, read noise, etc.).
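For reference, the SNR here comes from the standard "CCD equation" (as in Howell's Handbook of CCD Astronomy and the AAVSO CCD manual), which is why gain and read noise are required inputs. A minimal sketch, working in electrons, with purely illustrative numbers:

```python
import math

def ccd_snr(star_counts, n_pix, sky_per_pix, dark_per_pix, read_noise):
    # CCD equation: signal over the quadrature sum of shot noise from the star,
    # plus per-pixel sky, dark, and read-noise contributions over the aperture
    noise = math.sqrt(
        star_counts + n_pix * (sky_per_pix + dark_per_pix + read_noise ** 2)
    )
    return star_counts / noise

# 90,000 e- star in a 50-pixel aperture, modest sky/dark, 10 e- read noise
print(round(ccd_snr(90_000, 50, 100, 5, 10)))  # 284
```

Without the gain (to convert ADU to electrons) and the read noise, none of the noise terms can be evaluated, so a program left unconfigured simply cannot produce an SNR-based error.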

Phil

Affiliation
American Association of Variable Star Observers (AAVSO)
Error Progress

Hello Mike

While it is true that observers submitting observations with Maxim get 1/SNR errors, and apparently AIP observations are submitted with no error bars, two of our other submission programs account for the errors in a more realistic way. VPhot and TA calculate a combined error including the errors of the measured comp stars (except for VPhot in the single-comp case). Geir and George have done a marvelous job of making sure that the errors are combined properly, with Arne's blessing.

As a further refinement, an observer can, in VPhot, look at the sequence errors from the database and choose those comp stars with the lowest errors and high SNR. Just adding more stars to an ensemble does not guarantee a better result. Because our comps vary in magnitude, their sources are different, and the ones with the highest SNRs in our images are not necessarily the ones with the lowest errors. This is an enhancement I have only discovered in the past week or so.

So I believe that reducing the data in VPhot, choosing the ensemble based on the errors, and transforming with TA gives the most accurate result with the most realistic precision, and provides a proper error estimate for the LCG.

The same result can be had in Maxim, but it's much more cumbersome and manual to access the comp star errors. I believe that TA will then add the errors of the transformation coefficients as well as the comp star errors. (George, am I correct about this?)

I should also give a shout-out to Ken Menzies, who maintains our Q&A forum on VPhot as well as teaching the CHOICE course for it (it's excellent; I am taking it now).

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
How VPhot calculates uncertainty

Is there a link with the math explaining how VPHOT handles S/N and uncertainty arithmetic?

Ray,

Below is Geir's answer to this question from another topic in the VPhot forum. (It all makes sense once you decipher the notation.)

The error estimate is calculated in two different ways depending on whether one or more comp stars are used, which probably explains what you observe.

In the case of a single comp star, an error estimate is calculated for both the target star and the comp star, based on the star's signal-to-noise ratio, as

Err(SNR) = 2.5 * Math.Log10(1 + 1 / SNR)

SNR is calculated using the "CCD equation" involving ADU, gain, etc.; see the AAVSO CCD observing manual. Err(SNR) is displayed in the target star estimate table on the single image report page. The final error estimate is then the quadrature combination of Err(SNR) for the target and comp star:

Err = Sqrt( Err(SNR)t*Err(SNR)t + Err(SNR)c*Err(SNR)c )

In the case of multiple comp stars, it is preferred that the standard deviation of the comp stars' target estimates be used. The Single Image Photometry report's table of comparison stars has a field called "Target estimate". The standard deviation of the mean of all the values in that field is computed; this is listed as Std in the target estimate table on the same page. Now the final error estimate is taken as

Err = sqrt( std*std + Err(SNR)*Err(SNR) )

where Err(SNR) is as described above. This should be a conservative estimate.

The advantage of using the std is that it covers all kinds of errors, for instance errors in the sequence. If the comp stars' magnitudes are inaccurate, perhaps an average from many catalogs of various quality, you might get a bad result even with bright stars with high SNR.
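As a sanity check, Geir's two formulas can be sketched in code. This is only an illustrative sketch, not VPhot's actual source: the SNR values and per-comp target estimates below are made-up numbers, and I am assuming the Err(SNR) in the multi-comp formula refers to the target star's own SNR error.

```python
import math
from statistics import stdev

def err_snr(snr):
    # Err(SNR) = 2.5 * log10(1 + 1/SNR), the per-star error from photon statistics
    return 2.5 * math.log10(1.0 + 1.0 / snr)

def single_comp_err(snr_target, snr_comp):
    # Single comp star: target and comp SNR errors combined in quadrature
    return math.hypot(err_snr(snr_target), err_snr(snr_comp))

def ensemble_err(snr_target, target_estimates):
    # Multiple comp stars: std dev of the per-comp "Target estimate" values,
    # combined in quadrature with the target's SNR error
    return math.hypot(stdev(target_estimates), err_snr(snr_target))

# Made-up example: target at SNR 300, comp at SNR 250,
# then an ensemble of three comps giving slightly different estimates
print(round(single_comp_err(300, 250), 4))               # 0.0056
print(round(ensemble_err(300, [12.34, 12.35, 12.36]), 4))  # 0.0106
```

Note how, in the ensemble case, a mere 0.01 mag spread among the comp estimates already dominates the SNR term, which is what makes the ensemble figure the more realistic one.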

Phil


 

Affiliation
American Association of Variable Star Observers (AAVSO)
Too low SNR error

[quote=spp]

In the case of multiple comp stars, it is preferred that the standard deviation of the comp stars' target estimates be used. The Single Image Photometry report's table of comparison stars has a field called "Target estimate". The standard deviation of the mean of all the values in that field is computed; this is listed as Std in the target estimate table on the same page. Now the final error estimate is taken as

Err = sqrt( std*std + Err(SNR)*Err(SNR) )

where Err(SNR) is as described above. This should be a conservative estimate.

[/quote]

If you look at the attached LCG plot for a typical variable (SS Cyg), you see that in most cases simultaneous measurements fall OUTSIDE each other's error bars. If the bars were realistic, this would be statistically next to impossible, and it illustrates the problem of reporting primarily the SNR error.

In the VPHOT equation, if the "std" of the comp stars isn't available, or comes out as zero, the total Err then reduces to just the SNR error. Such reported errors for typical well-exposed CCD frames are much too small, because only the SNR is used.

Obviously, the other errors exceed the SNR error, and the LCG plot shows what then happens.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
V and Vis?

Hello Mike

Comparing V and Vis is not a good idea. They require a transform to get Vis to V, and as far as I know, no visual observer is doing that (remember the Stanton paper). Vis and V should never overlap. There is no way of knowing from this plot what software was used or how many comps were used. I hope that CCD observers will start transforming their data and get away from SNR, other than as an indicator of how good the photons on the chip could be, without a bunch of other errors contaminating them.

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Comparing CCD

[quote=WGR]

Comparing V and Vis is not a good idea.  They require a transform to get Vis to V, and as far as I know, no visual observer is doing that.  (Remember the Stanton Paper).  Vis and V should never overlap.  There is no way of knowing from this plot what software was used or how many comps were used.  I hope that CCD observers will start transforming their data and get away from SNR, other than as an indicator of how good the photons on the chip could be, without a bunch of other errors contaminating them. 

[/quote]

Gary, I wanted to illustrate that the error bars of simultaneous CCD measures don't overlap, rather than differences between V and Vis. If the error bars truly represented the actual error of measurement, simultaneous bars would fail to overlap only about 2 * 0.15^2 = 5% of the time, for typical +/- 1 std dev bars. In reality, MOST bars fail to overlap, proving that the bars are not realistic.
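As a sanity check on that rate, here is a small Monte Carlo sketch, under the assumption of purely Gaussian, independent errors (the sigma and trial count below are arbitrary). Since non-overlap of two +/-1 sigma bars only requires the two values to differ by more than 2 sigma, the expected rate comes out nearer 16% than 5%; either way, it is far below "most bars".

```python
import random

# Two independent measurements of the same true magnitude, each with Gaussian
# error sigma and reported +/-1-sigma bars. The bars [x1-s, x1+s] and
# [x2-s, x2+s] fail to overlap iff |x1 - x2| > 2*s.
random.seed(1)
sigma, trials = 0.01, 200_000
misses = sum(
    abs(random.gauss(0.0, sigma) - random.gauss(0.0, sigma)) > 2.0 * sigma
    for _ in range(trials)
)
rate = misses / trials
print(f"expected non-overlap rate for honest 1-sigma bars: {rate:.3f}")  # ~0.157
```

Analytically, x1 - x2 is Gaussian with standard deviation sigma*sqrt(2), so the non-overlap probability is 2*(1 - Phi(sqrt(2))), about 16%.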

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Overlap of error bars

Hello Mike

Remember, these error bars are 1 sigma, not 3 sigma. If I look at the logical pairs, they overlap at the 2.5, 2.0, 2.0, 3+, 0.5, 2.5, 0.5, and 4.0 std dev levels. I agree that it should be better, but 6 of the 8 pairs overlap within the 3 sigma level. This is from 3 different observers, and it looks like the data are not transformed.

Gary