percyjoh

I am not a visual observer, but I do time-series analysis of AAVSO visual observations.  If you have been reading the papers that my students and I contribute to the JAAVSO, then you will know that, very often, we find periods of 365.25 days in the data, with amplitudes of up to 0.1 magnitude.  We also find periods of one synodic month (29.53 days) in many stars.  We attribute the one-year periods to the "well-known" Ceraski effect, discovered over a century ago.  To quote Gunther and Schweitzer on the AFOEV site: "when two stars of equal brightness are aligned so that the line-of-stars is perpendicular to the line-of-eyes, the observer may see the 'upper' star brighter than the 'lower' one".
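(As a rough illustration of how small a bias this takes to detect: the sketch below, which is my own hypothetical example and not the actual analysis pipeline, generates synthetic nightly "visual estimates" of a constant star with a 0.05-magnitude annual bias buried under 0.2-magnitude observer scatter, then recovers the one-year period with a simple least-squares sine fit over trial periods.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: one estimate per clear night over ~10 years, a constant
# 8.0-mag star plus a 0.05-mag annual bias (a Ceraski-type effect) and
# 0.2-mag random observer scatter.
t = np.sort(rng.choice(np.arange(3650), size=1200, replace=False)).astype(float)
mag = 8.0 + 0.05 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.2, t.size)

# Least-squares sine fit at each trial period (a basic periodogram).
periods = np.linspace(300.0, 450.0, 1501)
power = np.empty_like(periods)
for i, p in enumerate(periods):
    # Design matrix: constant term + sine + cosine at the trial frequency.
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / p),
                         np.cos(2 * np.pi * t / p)])
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    power[i] = np.hypot(coef[1], coef[2])  # fitted semi-amplitude at period p

best = periods[np.argmax(power)]
print(f"recovered period: {best:.1f} d, semi-amplitude: {power.max():.3f} mag")
```

With ~1200 nights, the formal uncertainty on the fitted amplitude is roughly 0.2 × sqrt(2/1200) ≈ 0.008 mag, so even a few-hundredths-of-a-magnitude annual bias stands out clearly, which is why these spurious periods keep turning up.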

So I'm guessing that, when observers observe a star like a Mira once every clear night, they *tend* to observe it in one part of the sky (such as the east) at one time of year, and *tend* to observe it in another part of the sky (such as the west) a few months later.  And this results, on average, in a different orientation of the line-of-stars relative to the line-of-eyes.

And, as the moon moves around the sky each month, the observer *may* tend to observe the star in one part of the sky at one part of the monthly cycle, and in another part of the sky later in the monthly cycle (especially if the star is near the zodiac) -- resulting in a tendency for the line-of-stars to change relative to the line-of-eyes.

I'm sure it's a bit more complex than this.  Each variable has more than one comp star, and observers may not use the same comp star as the brightness of the variable changes.  And the situation for variables near the zodiac may differ from that for stars near the ecliptic poles.

Does this make sense?  I only need an explanation for an effect of a few hundredths of a magnitude.

John Percy