SNR in DC photoelectric photometry

Photometers like the SSP-3 and SSP-5 do not count actual photons.  The "counts" they report are merely proportional to the number of photons received.  A single star measurement, or "deflection," consists of a series of short integrations.  As I understand it, the signal-to-noise of one deflection is estimated from the quotient of the background-subtracted mean and the standard deviation of the integrations: SNR = mean/stddev.  The program star and the comparison star are sampled separately, with two comp deflections typically bracketing the program star deflection (the comp deflections being averaged in some way).  Furthermore, multiple sets of alternating program and comp star measurements are typically made.
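
In rough code terms, I picture that per-deflection estimate like this (the numbers are made up, and this is only my sketch, not SSP or WebPEP code):

    import statistics

    # Hypothetical raw integrations (instrument counts) for one deflection
    # of a star, and a separately measured mean sky background level.
    integrations = [10512.0, 10498.0, 10547.0, 10520.0]   # made-up values
    sky_level = 310.0                                      # made-up background

    # Background-subtracted integrations for this one deflection.
    net = [c - sky_level for c in integrations]

    # SNR of the deflection as described: mean over standard deviation.
    snr = statistics.mean(net) / statistics.stdev(net)
    print("deflection SNR ~ %.0f" % snr)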

My question is this:  How is an SNR, or accuracy, estimated from these combined data?  The WebPEP program computes a standard error value for the averaged series of individual program/comp reductions, but that seems to have no bearing on the inherent accuracy of those measurements.

Tom

DC S/N

Hi Tom,

Can you explain what you mean by the standard error having no bearing on the inherent accuracy of the measurements?

DC is different from photon counting, as you are measuring the weak current generated by the flux from a star, usually after amplification with a selectable-gain amplifier.  Usually the major component of the uncertainty is the centering of the star, as the diode in the SSP-3 has non-uniform sensitivity across its surface, and the star may not be identically centered in the aperture for each visit with the SSP-5.  That is why we have the cookbook pattern of a set of moves and recenters that get averaged together in WebPEP.

Arne

Reported uncertainties

Tom,

The uncertainties reported by PEPObs are measures of the intrinsic scatter of the observation, which includes all sources of noise plus additional scatter caused by sky conditions or other issues.  They do not include additional sources of error in the processing pipeline, such as the observer's epsilon(V), which we've discussed separately before.  This is how the PEP reductions have always been done (so they are consistent throughout the historical record), and it is the normal way of computing uncertainties for any instrumental data, not just PEP, where multiple measures are combined into a single data point.

Matthew

Reframe the question

Let me try asking this way.  As I understand it, for a photon-counting system to achieve an accuracy of x, it needs to make a measurement with an SNR of 1/x.  E.g., for 0.01 mag accuracy, I need an SNR of 100.  I am trying to understand how to characterize the quality of the deflections needed in a DC system to achieve an accuracy of x.
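
For reference, the conversion I have in mind between SNR and magnitude error is roughly this (a Python sketch of the standard small-error approximation, not WebPEP code):

    import math

    def mag_error_from_snr(snr):
        # For small errors, sigma_mag ~ 2.5 / ln(10) / SNR ~ 1.0857 / SNR,
        # which is close to the 1/SNR rule of thumb above.
        return 2.5 / math.log(10) / snr

    print(mag_error_from_snr(100))   # about 0.011 mag for SNR = 100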

The standard error reported by WebPEP does not take into account the SNR of the multiple integrations within the deflections, so I don't see how that error can reflect accuracy, at least in the sense above.

DC accuracy

WebPEP does account for the scatter in the 3 separate average deflections. So it reports a smaller standard error when the standard deviation of the three average deflections is lower. Each of the 3 deflections is calculated as the average of multiple integrations (usually three or four), and is a sample mean of the underlying population. We use the three sample means to estimate the underlying population mean. The standard deviation of our estimate of the population mean (the standard error) is the standard deviation of the 3 estimates divided by the square root of 3.
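
A minimal sketch of that calculation (the three values are made up, not WebPEP output):

    import math
    import statistics

    # Made-up per-deflection differential magnitudes (variable minus comp),
    # each already the average of its own few short integrations.
    deflection_mags = [1.234, 1.241, 1.237]

    best_estimate = statistics.mean(deflection_mags)
    # Standard error of the mean: scatter of the three sample means divided
    # by the square root of the number of deflections.
    std_error = statistics.stdev(deflection_mags) / math.sqrt(len(deflection_mags))

    print("delta mag = %.3f +/- %.3f" % (best_estimate, std_error))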

This method assumes the sample means are all drawn from independent identically distributed random variables. This is not strictly true as adjacent variable deflections are ratioed with the bracketing comp star measures, and adjacent (in time) variable delta magnitudes have one of their two comp measures in common. So they are not strictly independent. In practice this effect is small, although it could be significant if there is a large scatter in comp star measures.

To summarize, I believe WebPEP calculates the standard error correctly.

Under the assumptions stated above, the results would not be different if we used one 30-second integration instead of three 10-second integrations to estimate the mean of each deflection.  In one case we let the hardware do the averaging, and in the other we do the averaging ourselves.

I am currently hiking the Appalachian Trail and just happen to be in town to resupply.  I would be interested in any additional comments when I reach my next resupply town in a week or so.

I maintain that the scatter in the individual measures is not the critical factor.  It is the variation in the means of the individual deflections that is important.  This variation is due to multiple causes, such as variations in detector sensitivity over its surface, gain drift, or thermal noise in the electronics.

Jim

Re-Reframe the Question :)

Let me step further back, with apologies for the confusing way I have approached this problem.

The WebPEP program estimates magnitudes from three nominally independent sequences of measurement, each sequence consisting of one deflection of the variable, V, bracketed by two deflections of the comparison, C1 and C2.  C1 and C2 are averaged in some way, giving C, and a single differential magnitude is computed as -2.5*log(V/C) [I here ignore the step of subtracting background deflections].
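
In other words, something like this, where the simple mean of C1 and C2 stands in for whatever averaging WebPEP actually uses (the numbers are made up):

    import math

    def delta_mag(v, c1, c2):
        # One variable deflection bracketed by two comp deflections; the
        # background subtraction is ignored, as noted above, and the simple
        # mean of C1 and C2 is only my assumption for the averaging step.
        c = (c1 + c2) / 2.0
        return -2.5 * math.log10(v / c)

    # Made-up net deflections in arbitrary units.
    print(delta_mag(v=8250.0, c1=10480.0, c2=10510.0))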

The three differential measurements are then averaged to give a final differential magnitude, and the standard error of the amalgam, which I understand to be the uncertainty of said magnitude, is computed.  My question is this: how can the uncertainty of the amalgam be computed without reference to the uncertainty of its parts?

Tom