A better way to do accurate photometry

Affiliation
American Association of Variable Star Observers (AAVSO)
Tue, 12/30/2014 - 20:16

Given the issues with inter-observer discrepancies in CCD photometry, as well as the obvious difficulties observers face with the complexities of transformations, linearity, etc. evident in the postings, I would again like to suggest that CCD photometrists employ a method similar to the one visual observers use: choose closely spaced comp stars, slightly brighter and fainter than the object, and use a linear interpolation calculation to obtain the magnitude.

In principle, such an interpolation would be far simpler and inherently more accurate than the usual technique, where the comp and check stars may differ from the target by several magnitudes, and where transformations, their accuracy, and detector linearity all enter into reducing the raw observations. Mathematically, it is easy to see that interpolating between two closely spaced points is far more accurate than extrapolating, where the "lever arm" effect of a slope error is magnified into much larger absolute errors the farther you move from the fixed, known point.
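To put rough numbers on that "lever arm" (a minimal sketch in Python; the 0.02 slope error and the spacings are made-up values):

[code]
# Sketch: how a small error in the fitted slope propagates to the target.
# With interpolation the target sits between close comps, so the "lever
# arm" (distance from the anchor comp) stays small; with extrapolation
# it grows, and so does the error.
slope_error = 0.02   # hypothetical fractional error in the slope

for lever_arm in (0.2, 0.5, 2.0, 4.0):   # target-to-comp distance in mag
    print(f"lever arm {lever_arm:.1f} mag -> added error "
          f"{slope_error * lever_arm:.3f} mag")
# 0.2 mag -> 0.004 mag (closely bracketed comps)
# 4.0 mag -> 0.080 mag (comp several magnitudes away)
[/code]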

One argument I have heard against this interpolation approach is that existing reduction software does not really function in this manner. Well, VPhot is our "in-house" app, which we could easily have modified to do reductions in this fashion. If Geir K. could add this functionality as an option, users could employ either the classical technique or this new interpolation method, and see what works better in practice!

Mike LMK

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Interpolation

Hello Michael

Your suggestion is an excellent one and is used in instrumentation circles as a basic technique.  I have tried this technique in the past, and I always found that, if the comp stars are good, my measurement of the variable using classic techniques agrees with the interpolation very well--much better than the systematic errors that show up between observers.  I have not done this for many years, but suspect that it would not change things. 

If it did, then changing VPHOT would be easy, but then everyone would have to use it.  Not sure that's a good thing.  Eventually the other software would catch up.  Also, remember that our customers, i.e. the professional astronomers, use the same technique that is currently employed in their observations.  I suspect that the mountain-top observatories vs. the near-sea-level conditions that most observers are required to use contribute to these effects in a major way. 

Clear Skies

WGR

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
Better Way to Accurate Photometry

Hi WGR,

I read your note, which suggests that sea level observing creates problems.  At Auckland we had little trouble in matching other sites in UBV, and U is one area which Johnson suggested could never be transformed adequately at sea level.  At Waiharara I'm still at near sea level - 30 metres high and 100 metres from the harbour - and had no problems before and after I did CCD photometry.

I've read many of the suggested CCD techniques and believe that the problem is in these, rather than in inherently inaccurate original data.  There seem to be many ways of transforming data, allowing for extinction and so on.  We all have to learn and UNDERSTAND what we're doing and why.  To become a scratch golfer I had to spend years getting things right, and this applies to photometry.  Buying a CCD and some filters and hanging all of this on a telescope doesn't make you an astronomer.

I'm presently trying to analyse some colour photometry by AAVSO members of an interesting Mira star and note that two different observers have B-V colours which diverge by 0.05 magnitudes.  One would expect a slight colour change as the period changes - by 10% - but which of two beautifully parallel sets of B and V measures is correct?

I wasn't sure what Mike was driving at, but for visual measures of Cepheids we're trying to have a comparison spacing of about 0.1 magnitudes over the range of 1.0 to 1.5 magnitudes.  But I really think the problem is that too many observers think that just selecting suitable software and plugging in the data will give good results.  Not so - you must understand how to do things correctly and the things that can go wrong.  This was a necessary requisite of photometry in the 1970s and later but seems to be ignored.

I've just received back my old UBV photometer and maybe will observe again - but I expect to spend quite a while relearning how to be a good observer.

Regards.

Stan

Affiliation
American Association of Variable Star Observers (AAVSO)
Errors

Hello Stan

Thanks for your comments.  I have a question for you.

You wrote: "At Auckland we had little trouble in matching other sites in UBV and U is one area which Johnson suggested could never be transformed adequately at sea level.  At Waiharara I'm still at near sea level - 30 metres high and 100 metres from the harbour and had no problems before and after I did CCD photometry."

Can you quantify your agreement with other observers in magnitude of the objects and the precision of the agreement?

I have been at this for 24 years now, and have spent lots of energy tracking down sources of error.  I welcome any suggestion to improve the accuracy of our results.  I do see night-to-night variation with the same scope/equipment with no changes.  Technique is twilight flats.  

My evidence of it being the sea level location and the attendant high thin clouds is anecdotal.  I don't know how to quantify the sky.  On JD 2457019 I had a nice time series going on EE Cep on what started as a clear night, and then halfway through, things started to deteriorate.  The precision deteriorated by 10x, even for differential photometry.  I have had many nights like this.  

Also, look at EE Cep from JD 2456850 to present.  You will see quite a variation in my scatter (WGR), all taken with professional equipment from 3 different sites.  Two sites are at sea level and one at 600 meters.  The 600 meter site is almost always better, but not always.  We are in the East of the US, and we get all the schmutz from all across the country.  Right after a front moves through, it's usually pretty good for a day.  Then it deteriorates until the next front moves through, usually a couple of days later.  This is clearly evident in the exposure times that are allowable to avoid saturation.  I have seen 2x differences across the sites and nights.  

I have used Landolt SA110-503 as a target to check the whole process.  I can get within about .003 to .014 mags of the absolute published value.  Maxim 5.16 and 6.07.  Aperture 2-3 FWHM.  40,000 ADUs typical.  

Suggestions?

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
A Better Way to Accurate Photometry

Greetings,

Firstly, to whom am I talking?  The observer code doesn't help me much, as at present I'm not an observer, but I use the database material a lot.  I also made an error yesterday in quoting a 0.05 difference - it was actually 0.5.  If you contact me off-line I'll give you the name of the star, to avoid embarrassing anyone.  I'm a chartered accountant, not a professional astronomer, and most of my analyses are in the colour photometry field, not TSP.

I'll sort out some material to back up the claims, but mostly it's fairly old and is PEP, not CCD.  When I was trying to do colour CCD photometry from 1996 to 2006 I found it hard to match the quality of PEP, so I took the lazy way out and did unfiltered TSP for the CBA.  CVs in the 1970s and 1980s were a major interest and it was good to return to the field.  The first CCD was Joe Patterson's anyway.  Many of our Cepheid measures are on the database under the name APOG - Auckland Photoelectric Observers' Group.  There are other stars there as well - BH Crucis, eta Carinae, a few Miras - but many of our early measures were destroyed in a clean-out of the Auckland Observatory records by well-meaning people before the computer age came into being in a big way.  I'll provide some examples for your information.

In many cases accuracy isn't all that important - for Mira stars 3% colour photometry is good enough.  But why do these discrepancies arise?  I never did see an evaluation of Arne's test project with the star in Cetus - did this appear anywhere?

We also do photometry a little differently in the south in that standard stars are available in the E and F regions at 45S and 75S with the LMC and SMC thrown in as extras.  Thus it's easy to determine good values for secondary standards in fields of interest.  This takes a little work but removes most of the extinction problems.

Regards, Stan

Affiliation
American Association of Variable Star Observers (AAVSO)
Better Way to accurate Photometry

I agree with Stan. Picking more comps may help, but perhaps not. It depends on what causes your inaccuracy or lack of precision. From my experience, conversations with others, and numerous posts, the biggest cause of inaccuracy seems to be flat fielding. Multiple comps won't help that problem, particularly if the additional ones are farther away from the target. Other issues are FWHM variations across the field of view and among images due to seeing, focus variations and field curvature; varying sky conditions across the FOV; aperture selection; transformations; target-comp color differences (second order extinction); and, to a lesser extent for CCD photometry (in reasonably small fields at airmass below 2.0), first order extinction; for short exposures, shutter latency and scintillation; and finally zero point error resulting from the choice of comp star. These are just a few of the most common things that affect the results. There are others that can be devastating but less common. For example: the ghost image of the bright star you used to synch your telescope to the sky and adjust focus immediately before slewing to your target star - which is also centered in the field, right where the ghost image is. I learned that one the hard way. 

Since flat fielding is probably the worst culprit, probably the most important thing is to choose a comp close to your target, unless you are absolutely sure that your flats are really flat. I think the best way to test that is to use that particular set of master flats to reduce a star over a matrix of positions in your field of view, and average the results of doing it several times, to minimize Poisson noise and to make sure that the results aren't being affected by some atmospheric variation across the field. Even if one set of master flats is flat, that doesn't necessarily mean the next one will be, even if you use a light box, EL panel or dome screen - and certainly not if you use sky flats. 
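A minimal sketch of that grid test, assuming the per-position measurements have already been reduced to magnitudes (the positions and values below are made up):

[code]
# Sketch of the grid test: the same star measured at a matrix of detector
# positions, several times per position. Input format is hypothetical:
# mags[(col, row)] = [m1, m2, ...], repeated measurements at one position.
from statistics import mean, stdev

def flatness_check(mags):
    """Compare position-to-position spread against per-position noise.
    A spread well above the noise suggests the flats aren't really flat."""
    means = {pos: mean(vals) for pos, vals in mags.items()}
    spread = max(means.values()) - min(means.values())
    noise = mean(stdev(vals) for vals in mags.values())
    return spread, noise

mags = {(0, 0): [12.012, 12.008, 12.010],
        (1, 0): [12.031, 12.029, 12.035],   # ~0.02 mag high: flat problem?
        (0, 1): [12.009, 12.011, 12.007]}
spread, noise = flatness_check(mags)
print(f"spread {spread:.3f} mag vs per-position noise {noise:.3f} mag")
[/code]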

Some of the issues can be resolved by averaging the results from several images or stacking (and checking each visually) and reporting the average. Others can be minimized by checking the FWHM of the target and each comp in each image, measuring with a bunch of different apertures, and making sure the aperture is sufficiently large that the standardized mag has stabilized (or is sufficiently above the knee of the individual star raw-mag growth curves, with the mag axis inverted). You may have to measure at a bunch of different apertures to get close to the "best" for each image, and "best" is somewhat subjective. It may not be the lowest noise, depending on your FWHM variations across the image. If you are at 2.0 or higher airmass and you are trying to do high precision, you may have to take extinction into account depending on the size of your FOV and the relative location of target and comps and their relative colors. 
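As one way to automate the growth-curve check just described (a sketch only; the radii, magnitudes and the 0.01 mag tolerance are hypothetical):

[code]
# Sketch of the growth-curve check: measure the star through a series of
# aperture radii and find where the magnitude stops changing.
def stable_aperture(radii, mags, tolerance=0.01):
    """Return the first radius past which growing the aperture changes
    the magnitude by less than `tolerance` (i.e., past the knee)."""
    for r, m, m_next in zip(radii, mags, mags[1:]):
        if abs(m_next - m) < tolerance:
            return r
    return radii[-1]   # never stabilized: try a larger aperture series

radii = [3, 4, 5, 6, 7, 8, 9]                                 # pixels
mags = [12.31, 12.18, 12.11, 12.08, 12.075, 12.073, 12.072]   # hypothetical
print(stable_aperture(radii, mags))   # -> 6
[/code]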

So there is a lot more to do than just pick comp and check stars and select an aperture diameter at least 2.0 x the average FWHM of the target star in your series of images - and that is before you apply transformations. 

Using multiple comps will give you an estimate of zero point error, but I would never use just two comp stars. I probably wouldn't bother unless I had 5, maybe 4. 

Brad Walter.  

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
Better Way to accurate Photometry

Hi Brad,

That's a good summary, but I'd like to ask a few questions.  So far I haven't been able to provide the background requested in an earlier post, but I'll dig out the hard copy records and see what variations we were getting between the comparison and check stars.  In PEP it's such a slow process - comparatively - that the comp, check and variable are all that is measured.  I've also managed to get a few stars with measures by different observers, but they were often years apart and the reduction methods were different.  One observer just published V_E to indicate the measures were unreduced V.

PEP was simple aperture photometry and I hope that is what most amateurs are still doing.  But some aspects of this puzzle me.  We tended to use large apertures - normally about 30" at sea level sites (and, for what it's worth, Auckland seeing was normally more stable than at the two high level observatories at around 1300 metres).  Yet most CCD apertures used are smaller than this.  Then Michael Bessell at one stage recommended defocussed images for CCD work.  This can create problems in crowded fields, but these are simple to overcome.  And it's now the method to take a sky annulus around the star - a no-no for PEP.  So are some of the accepted CCD techniques incorrect?  There might also be something in the high mountain/sea level comment, but perhaps the message is that different techniques are needed?

I've been helping Mark Blackford get observers in Variable Stars South to use DSLR photometry.  OK, the B-V is compressed by a factor of 2, but the results seem very good - around 2% accuracy when ensemble photometry is used.  This goes back about 4 years now.  In this context I wonder how observers are managing to measure bright stars accurately with CCD systems.  And the bright stars are, of course, the ones with the greatest historical background.

Your comment about shutter speeds interested me.  I struck this problem early on in trying to get 2 second images with an ST6.  I saw some results recently from an ST8MXE or whatever where the V images were 2 seconds, B were 4.  Have the shutters been upgraded so that different exposure times now provide acceptable data?

That's enough for now - but I'm curious about the accepted methods used in CCD photometry.  How reliable are they?

Regards, Stan

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Who is this

Hello Stan

WGR is Gary Walker.  Always interested in reducing the errors.  

My only experience is with BVRI CCD Photometry.  I have never done PEP, so PEP comparisons will not help me evaluate what I see on the LCG.  

How many clear nights (usable or photometric) do you get from your location in Auckland?

 

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Better Way to Accurate Photometry

First, aperture and sky annulus. I have no idea how you would create an annulus in PEP photometry, and it would actually be in the light path if you did. In CCD photometry, sometimes called synthetic aperture photometry, the aperture and annulus are in the software and are overlaid on the image. They aren't real apertures or real annuli in the light path in front of the detector, so they don't produce diffraction.  They are strictly software measurement tools. See attached image. Given the diffraction through a small physical aperture, I guess it makes sense that you would use a larger aperture when it is in the light path. 

I would hazard a guess that there are more amateurs doing CCD photometry at low altitudes than high. I am at 130 meters above sea level, not very high. Most of the people in this part of Texas are imaging at no more than about 300 meters above sea level. Even the Austin Astronomical Society site near the top of the Texas Hill Country is only at 335 meters. The dark sky site of the Houston Astronomical Society is lower than my site. I think most of the differences you mentioned come from the differences between real apertures used in PEP photometry and synthetic apertures used in CCD photometry. Another thing to keep in mind is that synthetic apertures are normally specified by radius, but in PEP you normally specify diameter. I recall reading somewhere that Arlo Landolt used a 14 arcsec diameter aperture for his standard star measurements. That would be a 7 arcsec radius, which for many common CCD/telescope combinations would be somewhere between a 5 and 12 pixel radius. A rule of thumb for a starting choice of CCD measurement aperture is RADIUS = 2 x FWHM. So if you have 3 arcsec seeing and a 1 arcsec per pixel plate scale, a starting place for selecting the measurement aperture would be 6 pixels. You have to empirically determine what aperture is best for the specific conditions in every image, but the rule of thumb is a good starting place. 
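That rule-of-thumb arithmetic, as a trivial helper (a sketch; the seeing and plate scale are just the example values from the paragraph above):

[code]
# The rule-of-thumb arithmetic from the paragraph above.
def starting_aperture_px(seeing_fwhm_arcsec, plate_scale_arcsec_per_px):
    """Starting measurement-aperture RADIUS in pixels: 2 x FWHM."""
    return 2 * seeing_fwhm_arcsec / plate_scale_arcsec_per_px

print(starting_aperture_px(3.0, 1.0))   # -> 6.0 pixels, as in the text
[/code]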

Many astronomical cameras use iris-type shutters, so they open and close radially. They take several milliseconds, perhaps even 10 or 20, to open and close fully. Therefore they can cause measurable vignetting in exposures shorter than about 10 seconds.  They are very different from the kind of shutters used in normal daytime photography cameras, which are designed to give an even exposure for exposures of a millisecond or less. SBIG cameras do not use iris-type shutters. They use a high speed rotating disk with an opening, which avoids vignetting. It opens and closes in the same direction of rotation and therefore avoids most of the uneven exposure issues. Therefore exposures of a second are possible without significant shutter latency effects. Of course, for short exposures you still have scintillation to worry about, but that is easily mitigated by averaging measurements from, or stacking, several images. 

For my set up with 9 micron pixels (SBIG STXL 6303) 0.75 arcsec per pixel and 2 to 3 arcsec seeing I commonly end up using apertures ranging between 6 and 12 pixels. 

In the attached image, don't worry that portions of some measurement circles appear to be off the image. As you might have concluded from the sliders, the image is actually much larger than the display window, so all of the stars being measured are nowhere near the edge of the image. The missing portions of the measurement bullseyes aren't visible in the view window, but they are visible to the software and to me if I move the window around. If I zoomed the view out further, everything would be visible, but I wouldn't have been able to detect image defects as easily (like cosmic ray hits or hot or dead pixels that weren't corrected by darks). 

Of course in PEP photometry you don't have to worry about flat fielding, because you only have a single pixel detector. As you can imagine, good flat fielding is critical in CCD photometry due to pixel-to-pixel detector variations, vignetting by your telescope light path (constant, not shutter latency), and artifacts (e.g. dust donuts) in the light path that affect regions within an image. I hope this answers most of your questions. 

Brad Walter

Affiliation
None
What?

First up, Happy New Year to all AAVSOers, hope 2015 is rewarding and enjoyable to all!

I'd just like to make a few points here.  If ever there was a better demonstration of what is wrong with amateur CCD photometry than this thread, I'd like to see it.  What started out as a relatively simple suggestion by Mike Linnolt has spiralled out of control - if I were to show this to anyone who had an inkling to take up CCD photometry, they'd be heading for the door!  It's almost as if the raison d'être of some is to add new layers of complexity to the already complex reconciliation of data from a thousand different telescopes, cameras, filters, lenses, exposure times, local sky conditions and observers.

Mike has averred on a number of occasions that, disregarding outliers, the scatter in visual observations is not that dissimilar to the scatter in CCD photometry.  I know I've seen error bars in CCD light curves that are mutually exclusive, meaning that either one observation (or both!) is completely wrong or that the measurers don't know what an error is.  Certainly, with consistent and painstaking methodology, individual photometrists can produce absolutely amazing results - e.g. exoplanet transits etc.  But in general CCD photometry, is it reasonable to expect that data from all observers can actually be accurately reconciled simply by applying a vast range of corrections, through software or other means?  Or put another way, will the (apparent) increasing complexity of the data acquisition and transformations, and the drive for "perfection", simply kill CCD photometry as an appealing pathway in amateur astronomy and render it the lonely preserve of the few with the skills and interest to undertake it?

Just my two bob's worth.

Cheers -

Rob Kaufman (KBJ)

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Better Way to Accurate photometry

Rob,

I understand your point. However, from my experience and the experience of many others I have talked to, Mike's suggestion doesn't address the major causes of the scatter in CCD data. It simply increases the complexity of calculation and reporting. You are doing ensemble photometry, though only with two stars. Even if all of the significant sources of error have been included and the magnitude reported as an observation is based on a sample of three observations, there is still a 39.9% probability that the "true value" is outside of the error bars (1 sigma for Student's t distribution for a sample size of 3). So two observations separated by more than their error bars often means that one or both of the error estimates are too small, but not always. 

It is quite common that the error reported for a CCD observation is only the random error estimate generated by the photometry program. The total error is actually much larger, and many CCD observers starting out don't base their error on the standard deviation empirically determined from multiple observations; sometimes it is impractical to do so.  If you compare the error in sequence magnitudes to the reported error in many submitted untransformed observations, you will see that often the reported error for the submitted observation is less than the error given for the sequence magnitude. That could be correct, but given the number of observations and careful technique that go into determining the sequence magnitudes, it is much more likely that the error reported for the untransformed submission is underestimated - systematic error, primarily due to equipment and technique, has been omitted. The omitted errors can be consistent across many nights' observations, or they can change between nights or even between sets of observations within a night in ways that are difficult or impossible for others to correct.  

Understanding the causes of those systematic errors and learning how to correct for them is the way to improve the precision and accuracy of observations. That doesn't mean that one has to do all of this "stuff" from the start. Learning about all of the potential causes of error and mitigation methods is a journey. If someone is interested in making scientific observations, he or she is likely to be interested in the journey.  If the journey isn't of interest, then one doesn't embark, and the quality of the resulting data will probably remain unimproved. 

Brad Walter

Affiliation
American Association of Variable Star Observers (AAVSO)
CCD is very complex

[quote=KBJ]

If ever there was a better demonstration of what is wrong with amateur CCD photometry than this thread, I'd like to see it.  What started out as a relatively simple suggestion by Mike Linnolt has spiralled out of control - if I were to show this to anyone who had an inkling to take up CCD photometry, they'd be heading for the door!  

[/quote]

A great point, Rob! I am Mike Linnolt (LMK) and have been 99% a visual observer for the past 15 years. Though I operate the BSM-Hamren scope here in Hawaii, I do not really get involved in the data reduction side of the process. I use the scope for imaging of interesting objects, or visual estimates from the raw CCD images (usually "fainter thans"). Now, this is not because I am a-technical, far from it. I am a molecular biologist by profession, and do plenty of instrumental measurements in the lab - real-time PCR, for example!

But, I am much more interested in direct visual observing, because it avoids the majority of the complex issues mentioned in this thread. I prefer to enjoy my observing, to produce "instant" results and report them right away, rather than get caught up in the gory details of accurate data reduction, stuck VPHOT queues, etc.

Of course, this is just a personal preference. I can tell from the forum posts that there are a number of CCD observers that really enjoy the details of photometry, and producing really good results, and more power to them!!

But, as Rob mentions, do we really want to present such complexity to new, budding observers? I could agree that a large number of people who might have some interest in contributing to citizen science would feel too intimidated by these discussions to ever get started. The "drinking from the firehose" problem. Unfortunately, there is no easy way to do accurate CCD photometry! If you start people out real simple, gross mistakes can easily occur, corrupting the AID!

Maybe we need to create two AIDs, one for experienced observers who have mastered the techniques, and another for beginners. Let them enter the steep learning curve, but still contribute data right away. As long as we make perfectly clear that the two AIDs have different error bars, I think it would be OK.

Or, as I have always promoted - get newcomers into visual observing first. It is far, far easier, and people can contribute right away with a minimal learning curve. Given the plethora of advanced "goto" scopes on the market nowadays, finding objects has become a lot easier. I know that has been a problem for many newbies in the past. Maybe a basic manual introducing the "how to" of visual observing with common goto scopes, placed at the top of the webpage, would be a big help here!

Mike LMK

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Mike It is good that you are

Mike

It is good that you are interested in visual observing, as it is good that I and others are interested in CCD observing.  If you are going to help us improve CCD observing, it would be good to learn more about the subject.  I would recommend that you start with "The Asynchronous Polar V1432 Aquilae and its Path Back to Synchronism" by David Boyd and a host of co-authors.  It was presented at the 2014 SAS meeting.

Using CBA data, David took 75,849 observations (most unfiltered) from 23 observers, made over a 15 year period, and produced a very acceptable light curve as well as making detailed and credible findings about the issue in question.  You can find further examples of how the data is used by looking up any of Joe Patterson's many papers.

In order to critique the data it is good to understand how the data is being used.

Much of the discussion in this thread has involved detail that a beginning CCD observer simply doesn't have to worry about.  An entry level CCD, a modest telescope and a V-filter and you are in business.  CCD'ers are a friendly lot and are willing to offer help...frequently more than you need.  Mentors are also available through the AAVSO mentoring program. 

Frankly, if I had been required to enter variable star observing through the visual program, I would still be a double star observer.  It just isn't my thing.  If someone wants to start with visual observing, great.  But I would never recommend that all observers start as visual observers.

I probably missed a perfect chance to keep my mouth shut, but there you are.

Jim Jones

Affiliation
American Association of Variable Star Observers (AAVSO)
ANS paper

Gary:

I attach what I think is the paper Arne refers to.  My biggest question is how to interpret the "error budget" graphs on page 20 - i.e., what quantitative characterization would be given to these graphs to convey the error distribution?

Affiliation
American Association of Variable Star Observers (AAVSO)
Applying basic principles

[quote=jji]

Much of the discussion in this thread has involved detail that a beginning CCD observer simply doesn't have to worry about.  An entry level CCD, a modest telescope and a V-filter and you are in business.

[/quote]

In business to corrupt the database? Jim, I think the details here are pretty necessary for ALL CCD observers, to be able to understand the myriad of factors already mentioned that contribute to errors in data gathering and reduction. How else are you going to know if your data is good? I agree with previous comments that a lot of the data problems arise because entry-level observers use OTS hardware and software, press the button and submit! For sure, sometimes or many times no serious issues interfere, and decent data pops out. But we can see many cases where this simple approach fails as well.

I have been involved with these ongoing discussions for years now. Yes, I am primarily a visual observer, but I know the basic process of CCD, and I know quite a bit about data analysis, error, statistics, etc. I teach college courses on the latter subjects. I want to do what I can to help improve the quality of our data, so if you permit me, I would like to continue offering suggestions to contemplate.

One solution is implicit in the quote above. Simply place a large number of basic CCD systems in the hands of a large number of people. Then the "law of large numbers", assuming most of the resulting errors of ignorance turn out to be random, will guarantee a converging error bar. This has certainly been shown to work with the long-term visual LPV observations. But this is an inefficient way to do it if you have to buy things: the error will go down only as the square root of the number of systems deployed (halving the scatter takes four times as many systems) - maybe too expensive?

Mike LMK

 

 

 

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
A better way to do accurate photometry

Greetings, Rob,

Your comments were very apt and I agree.  But I'm still left with the thought that too many photometrists are trusting too much to 'black box' solutions and don't really understand what they're doing.  There was the guy who was quoting magnitudes to 4 decimal places with what appeared to be about 5% accuracy compared to the rest.  How to overcome this problem is a difficult question.

And my query about the 0.5 magnitude divergence in B with one star was real - I mentioned this to Arne in the hope that he could diplomatically resolve the problem, but no doubt the change at the top has been time-consuming.  On the other hand, Arne's comment about precision and accuracy was interesting and very true.  And for most of the stars covered by amateurs, 2-3% accuracy is good enough.  I'll try to follow up the paper mentioned by Arne out of interest.

But let's not drive observers away by making photometry look too complex as this discussion has tended to do.  So my apologies for whatever I've done to make it look complicated.  And to follow up Mike's comment - I've persuaded one of our better observers to look at a large amplitude Cepheid and the results he's produced so far leave me quite startled.

Regards, Stan

 

Affiliation
American Association of Variable Star Observers (AAVSO)
A better way to do accurate photometry

This discussion has inspired me to attempt an interpolation process, but I have a question regarding submitting the interpolated data to WebObs.

I use AIP4WIN to reduce my data using 1 comp and a check star. For interpolation, I could do two reduction runs using two different comp stars (the same check star for both), which would result in two data logs. I could then submit an average of the results. But this appears to create some issues for submitting the data to AAVSO.

I could first create a spreadsheet into which I insert the data from both AIP data logs, and the results would show the averaged TS mag for the target. This would be especially helpful in time series reduction. But I do not see how I would show the comp star information. Is there a way to indicate in AAVSO format that the target mags are averaged results using the two comps? Or would using multiple comps in an ensemble in one reduction run achieve the same results?

Keith Graham

Affiliation
American Association of Variable Star Observers (AAVSO)
Interpolation

Hello Keith

I don't think either of the suggested methods is interpolation.  There is really no precedent here that I know of.  I would have to derive the equations to make sure that the averaging is not equivalent.

What I would do is measure the instrumental magnitude of the target and of two selected comp stars that bracket the variable's brightness as closely as possible.  Then I would plot the instrumental mags vs. the sequence values for the two comps.  Now calculate the slope of a BFSL (best-fit straight line) connecting these two comps.  Then, using the instrumental mag of the variable, you should be able to calculate what the standard value would be.  
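For what it's worth, that calculation is only a few lines; a minimal sketch with made-up magnitudes (for a perfectly linear CCD the slope comes out near 1.0):

[code]
# Sketch of the two-comp interpolation described above: fit the straight
# line through (instrumental, standard) for the two comps and evaluate
# it at the variable's instrumental magnitude. Values are hypothetical.
def interpolated_estimate(inst_var, inst_c1, std_c1, inst_c2, std_c2):
    slope = (std_c2 - std_c1) / (inst_c2 - inst_c1)  # ~1.0 for a linear CCD
    return std_c1 + slope * (inst_var - inst_c1)

# Comps bracketing the variable in brightness:
print(interpolated_estimate(-7.30, -7.13, 11.45, -7.56, 11.02))  # -> 11.28
[/code]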

Gary Walker

WGR

 

PS:  If you would like, send me the 3 instrumental mags and the 2 comp standard values, and I will try to calculate the variable estimate using interpolation.

Affiliation
American Association of Variable Star Observers (AAVSO)
accurate photometry

I think we are getting pretty far removed from Mike Linnolt's original suggestion of improving CCD photometry.

It is pretty safe to say that the average CCD observer has very good precision, but pretty poor accuracy.  What this means is that the uncertainty from point-to-point in, say, a time series, is excellent.  That is why so many observers are able to detect an exoplanet transit (millimagnitude depths), or obtain superhump periods of cataclysmic variables, where the peak-to-peak amplitude may only be a few hundredths of a magnitude.  Compare one observer to another observer for the same object and the same night, and you might see far larger separation between the mean levels of the two time series - the "accuracy" part.

Mike suggests a linear interpolation between two comparison stars as one method of improving the accuracy.  It is always a good idea to bracket observations - spatially, in brightness, in color.  You remove some systematics this way, or at least understand the level at which they influence the results.  However, CCD sensors are inherently linear (as opposed to the human eye), and so magnitude interpolation is not the major source of offsets between observers.  I don't see any reason to make this a requirement in data submission.

CCD observations can be accurate - look at any of the papers by Ulisse Munari and his ANS group of amateurs, where measures from different observers never differ by more than 0.01-0.02 mag.  Or even look at the light curve of V1357 Cyg, which shows agreement between several CCD observers with far better accuracy than the visual observers for the same target.

As Brad says, the goal is to strive towards better and better accuracy, and I think we are making significant steps in that direction.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
papers by Ulisse Munari and his ANS group

Hello Arne

I have looked at this paper before.  I would like to look at it again, but I can't find it via Google or the AAVSO.  Could you add a couple of key words or a reference to help find it?  I probably have it downloaded on the MMO computer and not on my laptop.

 

Thanks

Gary Walker, WGR

Affiliation
Vereniging Voor Sterrenkunde, Werkgroep Veranderlijke Sterren (Belgium) (VVS)
Photometric filter issues

Hi,

If not yet mentioned, another issue is the use of photometric filters.

Different manufacturers, although all providing V filters, show quite different final magnitudes when used from the same location. Different CCD chips also play a role. I have been observing with Astrodon V filters and nevertheless see about a 0.2 mag difference from another observer, observing the same variable from an observatory just 20 m away from mine.

He is using a different brand of V filter.

Happy 2015,

Josch (HMB) 
 

Affiliation
American Association of Variable Star Observers (AAVSO)
interpolation

Hi Gary,

Thanks for your information. Of course you are correct when you indicate that averaging is not true interpolation. I goofed on that one. You got me thinking back to my visual observing days and, as Mike pointed out in the first post of this thread, how interpolation is utilized in that process.

But this now has me thinking even deeper. As I mentioned, I use AIP for reduction using one comp and a check star. I always try to use a comp star that is close both in magnitude and position to the target star. When I check my results in both QL and LCG, I have to say that I am generally well within the results of others. If I see I am outside of that range, I will go back and check my results. In those cases I usually find I selected the wrong star in my image as the comp star.  Now, that said, I would think that most observers use either a similar process for their reduction or prefer using ensembles. Apparently an interpolation method would result in greater accuracy. I think we all agree that we want the most accurate results possible. If interpolation would achieve this goal, then I would think its usage would need to be employed universally if we are going to achieve a reduction in light curve scatter. If only a few observers use interpolation, then their more accurate results would be buried within the light curve of all the other observations made using the more traditional methods in current reduction software.

So this leads to other questions. Users of AAVSO data need to know uncertainties, and this information is included when data is submitted. However, as has been pointed out, these uncertainties are sometimes questionable. For example, my software will sometimes show an uncertainty of .005 - not very realistic. But this is what is presented, and I would think a user of this data would take it into consideration. So, how reliable are the uncertainty figures? Knowing this, could the user create an interpolation curve from the data that is currently submitted using the traditional methods mentioned above?  If so, would that interpolation curve match one created using only submitted interpolation data?  Perhaps this would be an interesting project, if it has not already been done.

There is no question in my mind that we need to be submitting accurate data. But is the data we currently submit accurate enough to meet the needs of those who use it? If not, would submitting interpolated data be the answer to meeting those needs?  If interpolation is the answer, then there should be a movement to use interpolation universally, and perhaps this might be the next big step in the advancement of reduction software.

 

Keith

Affiliation
American Association of Variable Star Observers (AAVSO)
Interpolation

Hello Keith

Good to hear from you.  I agree with most of what you said.  You also wrote, "If interpolation is the answer, then there should be a movement to use interpolation universally."  Yes - if it is the answer.  This is the first step that we need to evaluate.  

With the current method, we make 2 measurements, the comp and the target, and use the linearity of the CCD to calculate the estimate of the variable.  (We also measure a check star, but that does not affect the errors of the estimate.)

Fast forward to interpolation.  We need to make 3 measurements here.  The two measurements of the comps will have some errors.  These affect the BFSL connecting the two.  The target will also have an error, and this will affect the result of the interpolation.  

So we should do a statistically significant study before we conclude that interpolation is the answer.  I can think of cases where interpolation would be worse than the current method.  With interpolation via the visual method, I would think that one would want the comps to be as close as possible to the target in magnitude (correct me if I am wrong there).  In the case of the CCD, for comparable errors, we want the two comps spaced at the maximum spread that gives the lowest error in the slope of the BFSL between them, and this will be a lot larger than with visual.  Of course, if the spread is too large, one of the comps becomes faint and the errors increase.  So interpolation is not a silver bullet.
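To put rough numbers on that spacing trade-off, under the simplifying assumption that both comps carry the same measurement error sigma (values hypothetical):

[code]
# Rough sketch of the spacing trade-off: for a line through two comps,
# each measured with error sigma, the slope error goes roughly as
# sigma * sqrt(2) / spacing, so widely spaced comps pin down the slope
# better - until the fainter comp's own sigma starts to grow.
import math

sigma = 0.01   # per-comp measurement error, mag (hypothetical)
for spacing in (0.1, 0.5, 1.0, 2.0):   # comp separation in mag
    slope_err = sigma * math.sqrt(2) / spacing
    print(f"spacing {spacing:.1f} mag -> slope error {slope_err:.3f}")
[/code]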

Then there is the complication that instead of using 2 comps to determine the BFSL, you can use an ensemble of any number, and the complications are magnified even more.  

In addition, before we solve the problem, let's understand where the biggest errors come from.  If they are systematics, which appears to be Arne's opinion and I share it, then interpolation is only going to give you a slightly more precise answer with the same old systematic error that swamps any improvement.  

Gary

WGR

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Interpolation

Hi Gary,

 

Your comments reinforced my concerns after reading and re-reading the posts on this thread. Prior to reading these posts, I had not even considered interpolation as a viable alternative to the current reduction methods. I found it interesting that there are inherent errors with interpolation just as there are with traditional reduction methods (I guess that should be no surprise). So now the question arises as to how those errors would affect the final estimate compared with errors using traditional methods. It appears to me from your comments that more research needs to be done to determine if interpolation is indeed the answer to more accurate estimates. It certainly is worth investigating, but I can imagine that this will be no small task.  So the jury is still out until we can establish the benefits of interpolation. Perhaps this is why we do not yet see it as an option in the software.

 

I think this thread is an excellent eye opener to encourage the investigation of new techniques in establishing greater accuracy of our estimates.

 

Keith

 

PS. On a side note not meant to be advanced here (there is another forum for this), I see you are getting a taste of spectroscopy. I am so glad AAVSO is now getting involved with this field, and I am especially happy to see so many AAVSOers jumping on the band wagon.  I now have an Lhires III and an Alpy 600 on two different scopes and have been doing both hi-res and low-res spectroscopy for about 4 years now, mostly on Be and symbiotic stars.  I use a third scope for photometry. My ultimate goal is to do simultaneous photometry/spectroscopy on various targets. A small warning - as you get more involved with your SA200, you will no doubt want to further your interest and get into slit spectroscopy. This is REALLY fun stuff and very rewarding, but it can get expensive. The SA100 and SA200 are great ways to get started at minimal cost, to see if you want to pursue this further and possibly move up to slit spectroscopy sometime in the future.

 

Cheers,

 

Keith

Affiliation
American Association of Variable Star Observers (AAVSO)
Interpolation vs. Ensemble

Gary and Keith, 

The primary reason for using multiple comps is not interpolation; CCDs are very linear. Yes, it is to improve precision and accuracy, but it covers a range of issues having to do with spatial and color variations and estimation of zero point error. There is a provision in WebObs for using multiple comps. It is called ensemble photometry. I recommend, however, more than two comp stars - a minimum of 4 or 5. Otherwise, stick with a single comp star. The instructions for ensemble photometry in WebObs can be found in item No. 3 at

http://www.aavso.org/aavso-extended-file-format

However, the instructions don't tell you how to enter comp information. The best way is to list your comps in the comment field of each observation. You are limited to 100 characters (that includes punctuation and spaces). Comment info is downloaded with the observations. Header information is saved and can be retrieved, but isn't downloaded when you do a normal interrogation of the database. So you have space to make an entry such as "Comps- <followed by 7 AUIDs separated by spaces>." AUIDs are better in case there are duplicate labels. If there are no duplicate labels then you can use the labels, which allows you to enter more of them.
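For illustration only, an ensemble submission line might look like the following - every value (observer code, star, AUIDs, chart, numbers) is made up, so check the format page above for the authoritative field list:

[code]
#TYPE=Extended
#OBSCODE=XXX
#SOFTWARE=<your package>
#DELIM=,
#DATE=JD
#OBSTYPE=CCD
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
SS CYG,2457019.6543,11.235,0.008,V,NO,STD,ENSEMBLE,na,000-BBD-123,12.345,1.23,na,X15962DM,Comps- 000-BBC-456 000-BBC-789 000-BBD-012
[/code]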

If you are using more comps than fit into the comment field, Arne advised me that you can add them as an additional line (or lines) to the header, but then a user of the data has to request the header info along with the observations. It’s more of a pain and you have to know to ask AAVSO for it.  

Some programs can automatically generate ensemble results if you simply enter the information for more than one comp.  I haven't used AIP4WIN in more than 10 years, so I don't remember if it does.

An additional benefit of ensemble photometry, beyond potentially increasing accuracy and precision, is that you can get a zero point error. Because the target magnitude obtained using each comp is slightly different, the software can do a least squares fit and add that zero point error in quadrature with the CCD equation measurement uncertainty (roughly the Poisson uncertainty). In my experience the zero point errors are typically larger than the Poisson error of an individual measurement. The zero point error comes from many sources: anything that causes differences across the field of view, including variations in FWHM and flat fielding. It also includes error in the comp star magnitude values.
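A minimal sketch of that bookkeeping - a simple unweighted version of what the software does with its least squares fit; all values are made up:

[code]
# Sketch: target magnitude from each ensemble comp, scatter of those
# values as the zero point error, combined in quadrature with the CCD
# equation (roughly Poisson) error. All input values are made up.
import math
from statistics import mean, stdev

inst_target = -6.872
comps = [(-7.13, 11.452), (-6.41, 12.161),    # (instrumental, standard)
         (-7.56, 11.031), (-6.95, 11.625)]

per_comp = [std_c + (inst_target - inst_c) for inst_c, std_c in comps]
target_mag = mean(per_comp)
zero_point_err = stdev(per_comp)
poisson_err = 0.006                     # from the CCD equation, say
total_err = math.sqrt(zero_point_err**2 + poisson_err**2)
print(f"{target_mag:.3f} +/- {total_err:.3f}")   # -> 11.708 +/- 0.011
[/code]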

As far as averaging instrumental magnitudes vs. fluxes, consider the following:

An instrumental magnitude measurement is

m = -2.5*log10((1 ± e)*TrueFlux/EXPTIME)
  = -2.5*log10(1 ± e) - 2.5*log10(TrueFlux/EXPTIME), where

e is the error in your measurement of the flux;
TrueFlux is the value of the flux you would get from an infinite unbiased sample;
EXPTIME is the exposure duration.

In looking at the effect of averaging fluxes vs. averaging magnitudes, only the error terms contribute to the difference between the two methods, because the differences of the averages of the "true" terms net to zero in the subtraction. If I measure a star's flux twice and get a 10% difference between the measurements, that amounts to a difference of 0.114 magnitudes, comparing downward from the brighter to the dimmer*. If I average the two flux measurements and compute the magnitude of the average, it is 0.0557 mags dimmer than the brighter measurement; but if I compute the two magnitudes separately and then average, the result is 0.0572 mags dimmer than the brighter measurement. The difference between averaging magnitudes vs. averaging fluxes is only -0.0015 magnitudes*, hardly significant for measurements that vary by 0.11 magnitudes. The error in computing averages directly from magnitudes decreases as the difference in fluxes decreases, because lim as x → 1 of log(x) = 0, so the error term vanishes for a sufficiently small difference from 1.

If you do the same exercise for 11 equally spaced flux differences, from 0 for the brightest to -10% for the dimmest, the difference between averaging magnitudes vs. averaging fluxes is -0.0006 magnitudes*.

A 10% difference in flux ratio when measuring the same star using different comps in the same image would be an extremely large difference, ~0.1 magnitude. You would expect the measurements to vary by perhaps a few hundredths of a magnitude at most. However, even if you did have a spread this large, the resulting error from averaging magnitudes instead of fluxes is negligible compared to your standard deviation.  See attached spreadsheet.  
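The two numbers above are easy to verify (a quick sketch; the 10% flux split is the example from the text):

[code]
# Quick check of the flux-vs-magnitude averaging numbers quoted above.
import math

def mag(flux):
    return -2.5 * math.log10(flux)

f_bright, f_dim = 1.00, 0.90            # a 10% difference in flux
mag_of_mean_flux = mag((f_bright + f_dim) / 2)     # 0.0557
mean_of_mags = (mag(f_bright) + mag(f_dim)) / 2    # 0.0572
print(round(mag_of_mean_flux - mean_of_mags, 4))   # -> -0.0015
[/code]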

The bottom line is that averaging fluxes is certainly the mathematically correct method. However, for the kind of accuracy we are going to achieve, even down to the millimag level, the difference is not significant. Further, since the error is systematic, it has even less effect on precision.

Something else I normally do is measure every comp in the field of view. Even if I am doing single comp photometry, having the additional check stars in the image can give indications of systematic errors due to position or color. When I upgraded from my old ST7 to a large format camera, doing this allowed me to quickly detect a radial systematic error that was undetectable in the small image but significant in the larger one. It turned out there were two causes: radially symmetric distortion and a radial gradient in my flat field. Also, if you have larger variation in your check star than you can explain from random error, this gives you a way to determine if the variation is being caused by low level variability of the comp star.

Brad Walter

--------------------
*If you do the computation relative to the dimmer/dimmest measurement the differences are slightly smaller. See the attached spreadsheet.

 

Affiliation
Royal Astronomical Society of New Zealand, Variable Star Section (RASNZ-VSS)
A better way to do accurate photometry

Hi Keith,

The only reason for using interpolation is if the CCD is non-linear.  Hopefully most observers are working in the linear regions of their detectors.  I wonder, however, at some of the techniques, equipment and software in use.  In the PEP era it was simple - you built a photometer, learnt how to use it and how to reduce the measures.  There was no OTS equipment - the only choices were in filter combinations and whether to use a 1P21 pm tube or the more expensive and more sensitive EMI pm tubes.

At the end you mention the needs of those who use it.  Traditionally the AAVSO has made TSP measures of large amplitude LPV stars, where the present 2-3% CCD measures are adequate and better than visual.  Observers are now moving into other areas like EBs, Cepheids and a variety of lower amplitude stars.  But TSP is only concerned with periods, amplitudes and shapes of LCs, so really only well calibrated V is useful.  Most Miras are adequately covered by the visual observers in any case.  Even the very few evolutionary colour changes in Miras can be followed with this accuracy.  So what is all this CCD photometry for?

In my early days of CCD I was assured by various respected professional observers that some degree of defocussing was better than highly focussed images.  But the convention, driven by equipment manufacturers, is to get the sharpest images possible.  Then the software apertures used are much smaller than the old PEP apertures.  Perhaps this is driven by the need for observers to measure the faintest stars possible - at V = 18 the separations are rather small.  But if you're measuring a star of that magnitude, any star within four magnitudes of it will cause poor results - and the same applies at all levels.  But if you work at 4 magnitudes above the limit, the problem stars will be visible.

I often wonder how good the software for measuring the sky annulus is.  Or the software itself - why do different packages produce a range of magnitudes?  Tom Richards asked this question in a Variable Stars South Newsletter article several years ago.  Quite clearly the stated uncertainties are too low - but this is a selling point!  And in this thread I noticed there's now new software online to work on transformations.  Another black box solution that few of us understand.

One solution would be to have more short term projects sponsored by the group.  Observers could become part of these - the good thing would be that results would be compared systematically and frequently - this should make for better quality overall.  Without a challenge of this type there appears to be no incentive for anyone to improve the standard of their photometry.  No matter how bad it is, it still ranks as a measure.  So may I ask a concluding question of the observers - why are you making all these CCD measures?

Regards, Stan

Affiliation
American Association of Variable Star Observers (AAVSO)
A better way to do accurate photometry

[quote=Stan Walker]

I often wonder how good the software for measuring the sky annulus is.  Or the software itself - why do different packages produce a range of magnitudes?  Tom Richards asked this question in a Variable Stars South Newsletter article several years ago.  Quite clearly the stated uncertainties are too low - but this is a selling point!  And in this thread I noticed there's now new software online to work on transformations.  Another black box solution that few of us understand.

[/quote]

Stan, while I'm not even trying to answer your philosophical questions, I'd comment just a bit about the software. There are several widely used photometry algorithms available as source code (DAOPHOT 1 & 2, e.g. in IRAF and ESO-MIDAS, the aperture photometry routines in the IDL ASTRO package, SExtractor, etc.). Most probably several of the codes are published as journal papers, too. One could make a short overview of those questions - how exactly stellar sources are measured... That would be quite interesting. Of course there are a number of proprietary codes as well (e.g. MaxIm DL), where it is not clear (at least I personally do not have any clue) what methods are used.

Best wishes and happy new year,

Tõnis

Affiliation
American Association of Variable Star Observers (AAVSO)
How it is Done

I use a program called Mira Pro by Michael Newberry, one of the developers of IRAF. The software manual for Mira Pro has the most complete descriptions of the algorithms of any software package I have seen. It might be possible to download the manual without charge. 

Brad Walter, WBY