Is AAVSO "too old-fashioned" concerning modern photometry technology?

akjalam2
Is AAVSO "too old-fashioned" concerning modern photometry technology?


I dare to take up this subject because I want to see an automated VPHOT option.

Modern CCD technology gives a lot of data. A lot.

Right now we spend a lot of time just clicking around in VPHOT, when all the raw data needed for automation is already there.

The only thing you should have to do is get your sequences in order and press "GO", and your uploaded images would be processed and reported.

We should be working together to make it possible.

I wrote some IRAF/PyRAF scripts myself just to learn how to do it, but I feel this should be available for everyone to use.
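For readers curious what such a script involves, here is a minimal sketch of the core step — aperture photometry with annulus sky subtraction — run on a synthetic frame. This is only an illustration using NumPy, not the poster's actual PyRAF scripts, and the function names and the zero point of 25.0 are made up for the example:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=5, r_in=8, r_out=12):
    """Sum counts in a circular aperture around (x0, y0), subtracting
    the median sky level estimated from a surrounding annulus."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    sky = np.median(image[(r >= r_in) & (r < r_out)])
    ap_mask = r < r_ap
    return image[ap_mask].sum() - sky * ap_mask.sum()

def instrumental_mag(counts, zero_point=25.0):
    """Convert net counts to an instrumental magnitude."""
    return zero_point - 2.5 * np.log10(counts)

# Synthetic frame: flat sky of 100 counts plus one Gaussian star.
img = np.full((64, 64), 100.0)
yy, xx = np.indices(img.shape)
img += 5000.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))

counts = aperture_photometry(img, 32, 32)
mag = instrumental_mag(counts)
```

A real pipeline would add star detection, FITS I/O, and transformation to a standard system, but the measurement loop itself is this small.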

Not just for some.

So how do we do it? :)


MZK
VPhot Automation

Over time, I'm sure more procedures will be automated, BUT I strongly caution that an all-in-one process from data upload to analysis to reporting, WITHOUT any human interaction (visual checking/confirmation of data), could lead to a "garbage in / garbage out" phenomenon!

Currently, a VPhot time series analysis is semi-automated in terms of magnitude generation (yes, it takes 3 clicks) but reporting requires the observer to look at the light curve and verify the data before submitting it to AID. I think this is the most reliable way to conduct accurate variable star photometry.

A few forum members continue to note problems with the quality of data, both visual and CCD!


rmu
Human filter is needed

Automation is certainly possible, but I think it can be applied only to variable stars whose behaviour you know well, and the data must still be checked once the program produces it. Photometry is often a very delicate task, and in some conditions the process is too difficult to automate. There is a risk of wrong data and false alarms.

But I welcome automation of photometry, if it is applied to routine observations, with a person keeping watch behind it.

akjalam2
Is it so?

I'm not so sure that the "human filter" is better than "computer filters".

I do understand that you would get garbage in crowded fields, but we already have BSM and a lot of automated robotic telescopes that depend, night after night, on sophisticated software.

How about Kepler? I don't think that any human interaction was used there...

If you dig into IRAF/PyRAF you would be surprised how accurate they are.

I ran some tests on objects using both my PyRAF scripts and VPHOT, and the results were identical.

So why shouldn't AAVSO be at the front line of automated photometry?

The quality of CCD and visual observations, I guess, reflects what AAVSO is.

An amateur organisation.

We will always have beginners as well as semi-pros.

I'm fine with that.

// Pierre



MZK

Two observations:

Your note about AAVSO having both beginners and semi-pros is part of my concern. Beginners will assume that whatever the "automated system" generates must be good data. I think such members learn more if they have to put more effort and visual confirmation into their results. They will learn how to generate good data, not just expect/assume it.

You're right that semi-pros generally get good data with automated systems. This is the case of excellent observation/procedure yielding excellent results. Still, I would never submit or publish my data without checking the result somehow. Meridian flips sometimes show problems with flats. AAVSO ensemble comps sometimes yield one or two inaccurate results that I have to remove by hand to improve precision and accuracy. Of course, such problems can be alleviated by "good" automation systems! Not impossible, but difficult to ensure? I suspect you are a bit more optimistic than I am. BUT, we can hope. I, like you, certainly do. Nothing is lost in trying.
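As an aside, the by-hand removal of a discrepant ensemble comp described above can be partially automated as a k-sigma clip on the per-comp zero points. The sketch below is a hypothetical helper, not VPhot's actual algorithm, and the star values are invented:

```python
import numpy as np

def ensemble_zero_point(inst_mags, catalog_mags, k=2.0):
    """Zero point from an ensemble of comparison stars, with one
    pass of k-sigma clipping to drop discrepant comps."""
    zp = np.asarray(catalog_mags) - np.asarray(inst_mags)
    keep = np.abs(zp - np.median(zp)) <= k * np.std(zp)
    return zp[keep].mean(), keep

# Five comps; the last one is 0.5 mag off (e.g. a bad flat region).
inst = np.array([-5.2, -4.8, -6.1, -5.5, -5.0])
cat  = np.array([11.8, 12.2, 10.9, 11.5, 12.5])
zp, keep = ensemble_zero_point(inst, cat)
```

A single clipping pass like this catches gross outliers; as the posts above note, subtler problems (flats after a meridian flip, crowding) still benefit from a human look at the light curve.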

With respect to Kepler, perhaps one of our professional members could tell us, but I bet that initial data from the system underwent "significant" review by many professionals before the system was allowed to generate reported/published data. We should all be so lucky to observe from "ideal skies" with sub-millimag precision and superbly characterized optics.


Kepler transit photometry still involves human inspection

If you read this paper, which describes the procedures followed by the Kepler science team to reduce and analyze data to find planetary transits:


Planetary Candidates Observed by Kepler, III: Analysis of the First 16 Months of Data


you will see that in addition to a large number of automated calculations, the team relies upon human inspection of the data for the best candidates to weed out false positives.  On page 19 of the paper, in section 4.3, "Promotion to planetary candidate", one can find this:


Analysis is based on a blend of both quantitative metrics and manual inspection. Both the promotion from TCEs to KOIs and the promotion of KOIs to planet candidates has a human element that not only increases the reliability of the catalog but also reduces the number of false negatives that are discarded


I've been part of several scientific projects which were centered on automated data analysis (Leuschner/Lick Observatory Supernova Search, Sloan Digital Sky Survey, The Amateur Sky Survey), and in each case, humans were part of the loop.  We're very good at recognizing bogosity.

HQA
VPHOT automation

We've left VPHOT semi-automated because it forces you to look at your data.  Look at the number of discrepant datapoints for Nova Del 2013 that have been submitted to the AAVSO.  By inspecting your data before submission, you can prevent a large number of mistakes.

That said, the survey telescopes (like AAVSOnet, APASS, etc.) do have full automated pipelines to process and extract the maximal amount of information from each image.  A typical BSM night, for example, will measure over 100,000 stars. Those pipelines are not perfect, but ON AVERAGE, do a pretty good job of photometry.  We are in the process of developing a more consistent, generic pipeline that could be used by other observers.  Some observers, for example, have asked for a way to extract the photometric data and upload the starlists to VPHOT, rather than having to upload the images.  That might be a way of combining automation with human inspection.


wel
The issue of automation

The issue of automation versus human monitoring is an interesting and challenging one. I think a key thing to realize is that the "quality assurance" aspect tends to be evolutionary. When I was involved with SuperMACHO, where a key benefit was near-real-time alerts after every night's new photometry, it was interesting to watch the progressive elimination of false positives. In the early days, there might be a hundred objects flagged every night as worthy of attention. As the patterns of artefacts and false positives became obvious, they succumbed one by one to new software filters. By the end of the project only about a dozen objects per night caused alerts, and most of those were worthy of further attention.

A great fraction of the issue of automation is the degree to which an erroneous result will cause havoc. If one is measuring pulsating variables with slow modulations of any sort, there is little danger of one bad point influencing the science. On the other hand, if your interest is in rare, transient events your exposure to false positives is much greater. In either case, software filters can be designed to adapt to the realities of real images and real image analysis but the transition period to (almost) full automation will be longer for less well-characterized events.
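The progressive filter-building described above might be sketched as a chain of veto predicates, with new filters appended as each artefact pattern is recognized. This is a hypothetical structure for illustration, not the SuperMACHO code, and the candidate fields (`snr`, `n_detections`) and thresholds are invented:

```python
def make_alert_filter(filters):
    """Build an alert test: a candidate raises an alert only if it
    survives every veto filter in the chain."""
    def passes(candidate):
        return all(f(candidate) for f in filters)
    return passes

# Two example vetoes; more are appended as artefacts are understood.
filters = [
    lambda c: c["snr"] >= 5.0,         # reject low-significance blips
    lambda c: c["n_detections"] >= 2,  # require two consecutive frames
]
alert = make_alert_filter(filters)

candidates = [
    {"snr": 8.0, "n_detections": 3},   # plausible transient
    {"snr": 12.0, "n_detections": 1},  # cosmic-ray-like single hit
    {"snr": 3.2, "n_detections": 4},   # noise
]
survivors = [c for c in candidates if alert(c)]
```

Each new veto shrinks the nightly alert list without touching the underlying photometry, which is why the transition toward full automation can be gradual.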

I think it is also important to recognize that visual inspection/interaction is by no means infallible. It is usually just superior to the initial plan for software filtering!



AAVSO 49 Bay State Rd. Cambridge, MA 02138 617-354-0484