Project LAVA is off to a good start with the posts on the Visual Observing forum's topic: "Observing CK Ori, a non-variable star on the LPV Program".
We have started to discuss a standard procedure so that we can easily compare and reproduce each other's analyses. That would include documenting DCDFT parameters (period/frequency range and resolution, or the standard scan).
Matthew's initial suggestion for a method was:
1. Fourier analyse a given light curve using all of the data.
2. Assess whether any meaningful signals other than one year and one month are present.
3. Repeat that analysis using data only from individual prolific observers (people who covered the star well for several years or decades), one observer at a time.
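For anyone who wants to experiment outside VStar, the core of steps 1 and 3 can be sketched with a least-squares sinusoid fit at each trial frequency, which is the same idea DCDFT is built on. This is only a rough stand-in for VStar's DCDFT, run here on synthetic data with an arbitrary trial-frequency grid:

```python
import numpy as np

def ls_power(t, y, freq):
    """Fraction of variance explained by a sinusoid at trial frequency
    `freq` (a least-squares fit, the idea underlying DCDFT)."""
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Synthetic "light curve": uneven sampling, one real 250 d signal plus noise.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 3000, 600))
y = 0.3 * np.sin(2 * np.pi * t / 250.0) + rng.normal(0, 0.1, t.size)

freqs = np.linspace(1 / 1000, 1 / 50, 2000)   # trial frequencies (cycles/day)
powers = np.array([ls_power(t, y, f) for f in freqs])
best_period = 1 / freqs[np.argmax(powers)]
print(f"best period: {best_period:.1f} d")     # ~250 d for this fake signal
```

Step 3 is then just a matter of calling the same function on the subset of `t`, `y` belonging to one observer code at a time.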
I have some questions:
Does anyone want to add anything to the approach proposed by Matthew?
Is there a role for ANOVA or self-correlation? ANOVA will give small p-values even when the only signal present is yearly/monthly. Creating a model and pre-whitening the data with it should help by removing such signals and their aliases, so that we can determine whether anything other than noise remains.
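Pre-whitening itself is straightforward to sketch: fit sinusoids at the known sampling periods (the yearly and lunar values below are the obvious candidates) and subtract them, then re-examine the residuals. Again this is a toy NumPy sketch on synthetic data, not VStar's implementation:

```python
import numpy as np

def prewhiten(t, y, periods):
    """Fit and subtract sinusoids at the given periods (e.g. yearly and
    monthly sampling artefacts), returning the residuals."""
    cols = [np.ones_like(t)]
    for p in periods:
        cols += [np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2000, 500))
# Fake light curve: a yearly sampling artefact plus noise, no real signal.
y = 0.2 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.05, t.size)

resid = prewhiten(t, y, [365.25, 29.53])
print(y.std(), resid.std())   # residual scatter drops to roughly the noise level
```

If ANOVA (or a second periodogram) on `resid` shows nothing convincing, that supports the "non-variable" interpretation for that star.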
Obviously, simply looking at the data, filtering by observer, and looking for regularity is important, as Sebastian showed in one of his posts. The mean-time-between-observations plugin can help here as well, allowing you to quickly pick out observations that are ~365 days apart, and so on. VStar has some easy ways of creating filters, e.g. by observer code or date/magnitude range.
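The mean-gap idea is simple enough to show in a few lines of plain Python: group observations by observer code, sort each observer's Julian Dates, and average the gaps. The rows and observer codes below are made up for illustration:

```python
from collections import defaultdict

# Toy rows of (observer_code, JD); the codes here are hypothetical.
rows = [("ABC", 2450001.0), ("ABC", 2450366.0), ("ABC", 2450731.0),
        ("XYZ", 2450003.0), ("XYZ", 2450010.0), ("XYZ", 2450017.0)]

by_observer = defaultdict(list)
for code, jd in rows:
    by_observer[code].append(jd)

for code, jds in by_observer.items():
    jds.sort()
    gaps = [b - a for a, b in zip(jds, jds[1:])]
    mean_gap = sum(gaps) / len(gaps)
    print(code, round(mean_gap, 1))
# "ABC" averages ~365 d between observations: a red flag for yearly aliasing.
```

An observer whose mean gap sits near 365 days can contribute a strong yearly alias even if every individual estimate is sound.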
Taking Matthew's suggestion literally, "all data" means observations in whatever bands are available. Should the focus be on visual observations or, where available, on other bands such as V? Sebastian cautioned about trusting visual observations of HR 7923 (HD 197249), for example. For that star, there are no other bands to analyse in the AID (other than a couple of isolated observations in another band), so perhaps nothing further can be done in such cases?
Should we use the Data Analysis forum (when created) for discussion about the project and to collect links to resources (e.g. Percy's paper, the list of stars to be analysed)? Or should we make use of a Google Group/Docs (e.g. a spreadsheet: https://docs.google.com/spreadsheet) to organise our analyses?
Should we break up the list into chunks of a few at a time for analysis? Pete has analysed several and Doug and Sebastian have commented on these. I posted a simple analysis of SY Mus and have started to look at a couple more in the background. How do we want to proceed from this point re: breaking the work up?
Should we have one person analyse a group of objects and then someone else repeat the analysis for that same group, or should we instead just have one or more people comment upon it in order to generate discussion?
I tend to think that using a uniform "workbook" approach to documenting our analyses will help us to communicate and reproduce each other's work. I liked Pete's Word document and his thorough approach, but I wonder whether something simpler for each object analysed is what we need initially as the basis for discussion of the type we have seen between Pete, Doug, and Sebastian.
One possibility is something like this:
Visual: 566, V: 36
ANOVA: Visual, p-value: < 0.000001
DCDFT Visual (std scan): 182.14340424, 359.15600836
Pre-whitening with 182.14340424, 359.15600836
ANOVA: Residuals, p-value: ...
ANOVA suggests that the presence of any remaining signal is questionable once these periods have been removed.
where the band counts and ANOVA information are taken from the File -> Info dialog, and the periods are from the DCDFT top hits.
Obviously, days-per-bin affects ANOVA results etc.
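To see why days-per-bin matters, note that binning determines how many group means the ANOVA compares. A toy sketch (synthetic magnitudes, arbitrary bin widths, not VStar's binning code) shows how the binned series shrinks as the bin width grows:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1000, 400))
y = rng.normal(10.0, 0.2, t.size)    # fake magnitudes, pure noise

def bin_means(t, y, days_per_bin):
    """Mean magnitude in each fixed-width time bin (empty bins skipped)."""
    edges = np.arange(t.min(), t.max() + days_per_bin, days_per_bin)
    idx = np.digitize(t, edges) - 1
    return np.array([y[idx == i].mean()
                     for i in range(len(edges) - 1) if np.any(idx == i)])

for dpb in (10, 50, 200):
    print(dpb, len(bin_means(t, y, dpb)))
```

Fewer, wider bins mean fewer groups with more points each, so the resulting p-value can shift noticeably; whatever days-per-bin we settle on should be recorded in the workbook alongside the other parameters.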
Output like the above for all data and per long-standing observer could be created.
For objects that are obviously variable, like eta Aql, not much analysis will be required. For others, something like the above will just be the starting point for discussion (as we've seen), requiring a richer document with plots, discussion etc. This is also the reason I'm asking whether we want to use the Data Analysis forum (to come) or something else like Google Groups, for this.
Perhaps those who want to pursue the LAVA list can start by choosing a couple of objects each, one that we know is variable and one that we don't, taking an approach like this initially and iterating over it with these objects until we're satisfied with it.
This topic (thread) will be moved to a Data Analysis forum which will be created in the near future.
Anyway, these are the things I've been wondering about so far.