photometry during non-optimum conditions

Affiliation
American Association of Variable Star Observers (AAVSO)
Fri, 01/14/2022 - 20:19

Regarding photometry during non-optimum conditions, should observations made during wildfire smoke-filled skies be considered reliable and useful photometry?

This past summer, as in most summers, mid-summer wildfire smoke covered much of the continent on many days and nights, as illustrated by one example of a smoke-aerosol chart from NOAA-HMS (Hazard Mapping System) for July 21, 2021 (image in the blog linked below, from https://www.ospo.noaa.gov/Products/land/hms.html). Yet many photometrists submitted observations, including me, although I deleted my PEP observation of Alp Cyg made during the night of July 21. Its light curve is attached, showing PEP observations with July 21 in the middle, and the sinusoidal curve is clearly disturbed around that date. I do photoelectric photometry with a general operational maximum error of 0.01 magnitude. Although my observational error (the calculated standard deviation) was well within this maximum acceptable error, and probably most or all of the observations in the light curve also have individual errors smaller than 0.01 magnitude, the light curve shows examples of observations from nearly the same time differing by much more than 0.01 magnitude. On that particular date I noticed a smoky appearance to the daytime sky but apparently clearing (and cloudless) conditions during the evening, and guessed that it might be safe to do photometry on one of my targets (in this case, Alp Cyg in the PEP program).

More generally, that's just one example of a smoky night for most North American observers. You could pick almost any star and find photometry observations during nights with significant wildfire smoke somewhere over the continent. In the light curve for SS CYG, also centered around July 21, it looks like the V-band observations often have more scatter than the visual observations. As a visual observer, I'm quite comfortable judging when my confidence in my interpolated magnitude estimate is sufficient to submit the observation, but as a photometrist, I'm probably supposed to know better than to submit questionable observations, at least when I can see the sky. It's easy to imagine that many observations were made automatically when someone's remote observatory detected a cloudless night and the scope began executing a list of photometry targets, oblivious to high cloud, smoke aloft, aurora, contrails, etc.

So, my question is: are any of the photometry observations (CCD, PEP, DSLR) made through smoky skies accurate (as in close to reality), or merely precise individually but not reliable? Should variable star photometrists be more conscientious about the data they submit to the AAVSO database?

We can’t post images to the forum and so I have posted a few supporting images to a blog at https://franksphotometrywildfiresmoke.blogspot.com/ (darn spammers).

Frank

Affiliation
American Association of Variable Star Observers (AAVSO)
good question

I wondered the same thing.

The smoke had to have had some small effect.

I did suspend transforms of data taken during smoky conditions.

I suppose one could go back and do photometry on some of the constant stars in images taken before, during, and after the smoke.

I did take a cursory look at some of the stars that many observers took data on: some of the LPVs, a densely covered nova, etc. I didn't notice anything horrendous.

I'll add one thing here: I like to average as much as is reasonable given the rate of change of magnitude. It easily cuts the scatter by 50 to 70%, depending on SNR.

Averaging hurts the total observation count a lot, but helps the data quality even more, especially for long time series of dim stars. But after you clock a couple thousand observations for the record, you end up going back and deleting or averaging those couple thousand because they are mostly scatter.

You need to be a little careful with exposure times and averaging when looking at exoplanet ingresses/egresses, Delta Scuti stars, and other fast phenomena, but it works well for a field full of dim binaries, LPVs, and Cepheids.
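
As a rough illustration of that kind of averaging, here is a minimal Python sketch (the (JD, mag) tuple layout, the bin width, and the function names are illustrative assumptions, not from any AAVSO tool); the standard error of each bin mean falls roughly as 1/sqrt(N), consistent with a 50-70% scatter reduction for modest N:

    import math

    def bin_observations(obs, bin_days=0.02):
        """Average (JD, mag) observations, assumed sorted by JD, into bins of width bin_days."""
        binned, current = [], [obs[0]]
        for jd, mag in obs[1:]:
            if jd - current[0][0] <= bin_days:
                current.append((jd, mag))
            else:
                binned.append(_collapse(current))
                current = [(jd, mag)]
        binned.append(_collapse(current))
        return binned

    def _collapse(group):
        """Return (mean JD, mean mag, standard error of the mean, N) for one bin."""
        n = len(group)
        mean_jd = sum(jd for jd, _ in group) / n
        mean_mag = sum(m for _, m in group) / n
        sem = 0.0
        if n > 1:
            var = sum((m - mean_mag) ** 2 for _, m in group) / (n - 1)
            sem = math.sqrt(var / n)   # scatter of the mean falls roughly as 1/sqrt(N)
        return (mean_jd, mean_mag, sem, n)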

What worries me most is long-term accuracy, and I test myself with things like NU UMa in VStar. Something is a bit off there. It's all my doing, but I haven't figured it out yet. It could be another variable comp.

Ray

Affiliation
American Association of Variable Star Observers (AAVSO)
Of Course!

Frank:

You asked "Should variable star photometrists be more conscientious about the data they submit to the AAVSO database?"

I think the answer is yes, of course, all photometrists should be conscientious about their data! Look at the scatter in AAVSO light curves and you can certainly agree that many observers do not put as much effort into their analyses as they should. Ideally, but unrealistically, all the data points in a light curve reported as a standard magnitude should fall on top of each other. However, all measurements (photometric or otherwise) have some inherent imprecision (random error) and bias (systematic error, which limits accuracy). In each case, the smaller the better.

As you have stated, our sky quality is never perfect and often downright awful! Atmospheric depth and the presence of pollutants (particulates and others) affect extinction, which changes both apparent star brightness and color. If photometrists do not check and correct for these conditions, target brightness may vary by tenths of magnitudes.
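
(For background, the textbook correction is m0 = m - k'X - k''(B-V)X, where X is the airmass, k' the first-order and k'' the second-order, color-dependent, extinction coefficient; smoke and haze mostly inflate k' but also redden the light, which is why both brightness and color are affected.)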

Differential photometry reduces the impact of these conditions, especially in a small field of view and at low airmass, but does not remove it entirely. Since brightness and color are affected differently, a good quality-assurance procedure is to measure a check star that is "identical" to your target star in both of these parameters. If you calculate a check star magnitude that matches its known standard magnitude, you can generally assume (hope) that your target magnitude is accurate. IOW, you have done an excellent job of correcting for systematic errors (bias) due to imperfect sky quality. IMHO, if you regularly achieve precision and accuracy of 0.01-0.03 mag, you are doing a very good / reliable / acceptable job. If you achieve precision and accuracy of 0.1 mag, you should try harder. If you achieve precision and accuracy of 0.005 mag, you are in the top few percent who work really hard to gather excellent data!
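
As a concrete, if simplistic, illustration of that check-star test, here is a small Python sketch; the 0.02 mag tolerance and the example numbers are placeholders chosen to sit inside the 0.01-0.03 mag range above:

    def check_star_ok(measured_check_mag, standard_check_mag, tolerance=0.02):
        """Return (ok, residual): ok is True when the reduced check-star magnitude
        agrees with its catalog standard value to within `tolerance` mag."""
        residual = measured_check_mag - standard_check_mag
        return abs(residual) <= tolerance, residual

    # Hypothetical example: check star reduced to V = 11.537 against a standard of 11.512
    ok, resid = check_star_ok(11.537, 11.512)
    print(f"check-star residual = {resid:+.3f} mag, acceptable: {ok}")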

And you are correct about generating and submitting data from a 'black box' photometry tool: if an observer does not check their images and the calculated precision and accuracy, they should not be submitting the data. BTW, none of these comments should dissuade observers from collecting their data, but rather push them to improve by learning and using "best practices". Statistics (means and best fits) do an amazing job of providing a "true" magnitude from less-than-perfect data from lots of individuals!

Ken


Affiliation
American Association of Variable Star Observers (AAVSO)
non-optimum conditions

I observed with a PEP system from Indiana, and CCD systems in Flagstaff, under non-optimal conditions.  PEP was hardest, because the comp/check stars were not in the same exact region of sky and were measured at a different time than the target.  That said, the standard sequence of several comp/target measurements, bracketed by check star measurements, gave a pretty good indication of the quality of the night.  Since I was at the telescope, I could make a judgement call as to whether clouds interfered, and haze/smoke tended to be pretty uniform.  I avoided doing all-sky work on marginal nights.

For CCD work, if the night was marginal, I always took sets of measurements (for a "snapshot" visit to a field) and not just one set of BVRI.  That way, I could obtain independent measures of the target, average them and look at the standard deviation to check the quality.  If one target magnitude was highly deviant from the others, I could discard it on marginal nights.  Since I was at the telescope, I also included copious notes in the observing log as to sky conditions.  We got lots of smoke in Flagstaff in the summer from prescribed burns.  Again, I didn't do all-sky work or extinction calculations when conditions were poor.  I also didn't observe when the smoke was thick, because the stars were extincted, and there was often ash in the air (not good for the mirror!).  Differential photometry in images is amazingly robust.  In the CCD school videos, I show one time series where the instrumental magnitudes varied by 2 magnitudes, yet the differential results were +/- 0.02.  For time series, I often use the (K-C) error as the error for the variable, since the variable might be flickering and contributions to the uncertainty from sky conditions may be buried.
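
A rough Python sketch of that per-visit procedure (the 3-sigma clip and the function names are illustrative choices, not the exact recipe described above):

    import statistics

    def combine_visit(mags, clip_sigma=3.0):
        """Average repeated measures of one target in a single visit,
        dropping any value more than clip_sigma standard deviations from the mean."""
        mean = statistics.mean(mags)
        kept = mags
        if len(mags) > 2:
            sd = statistics.stdev(mags)
            kept = [m for m in mags if sd == 0 or abs(m - mean) <= clip_sigma * sd]
        return statistics.mean(kept), (statistics.stdev(kept) if len(kept) > 1 else 0.0)

    def kc_error(check_mags, comp_mags):
        """Scatter of (check - comp), usable as the error estimate for the variable
        when the variable itself may be flickering."""
        diffs = [k - c for k, c in zip(check_mags, comp_mags)]
        return statistics.stdev(diffs) if len(diffs) > 1 else 0.0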

For automated systems like AAVSOnet, determining whether a data point is good or bad gets harder.  You can use metrology information, such as a weather station, a Boltwood or equivalent cloud sensor or an all-sky camera movie.  There are a couple of sites that give cloud motion information over the past few hours, and I check that every morning.  You can also be pro-active and take 3-5 sets of images per field, or carefully watch the check star on a time series.  As Ken mentions, wide field systems tend to show more error than narrow-field systems when conditions are marginal.  You can get patchy clouds that obscure a comp and not the target, or vice versa.  This was another reason that I usually used ensemble photometry with comp stars spatially surrounding the target, as I could see if conditions were patchy.
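
And a bare-bones sketch of unweighted ensemble differential photometry in the same spirit (real reductions add weighting and transformation; names and data layout are assumptions):

    import statistics

    def ensemble_mag(target_instr, comps):
        """comps: list of (comp instrumental mag, comp standard mag) pairs.
        Returns (ensemble target magnitude, scatter among the per-comp estimates);
        a large scatter is itself a warning of patchy clouds or smoke."""
        estimates = [target_instr - ci + cs for ci, cs in comps]
        spread = statistics.stdev(estimates) if len(estimates) > 1 else 0.0
        return statistics.mean(estimates), spread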

It is the observer's responsibility to provide the best photometry along with any ancillary information that might help the researcher in their analysis.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
What FOV Do Non-Optimum Conditions Affect Results?

Just curious: at what FOV do non-optimum conditions, such as patchy clouds or smoke, begin to affect photometry? I presume that at a narrow enough FOV, all stars would be equally affected. I realize that this may vary by night and conditions, and all frames should be examined. However, is there a FOV at which patchy smoke/clouds become more of a problem? Best regards.

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
What FOV Do Non-Optimum Conditions Affect Results?

     Mike didn't get a direct response to this, so I'll give it a try.  The advice I was given long ago (by Ian Thompson at Las Campanas Observatory) was that for differential photometry you needed something like 3 minutes exposure or longer to smooth out _thin_cirrus_ (less than a couple tenths of a magnitude of added extinction) across the usual telescopic CCD field (say 20').  My experience using the Lowell 0.7-m robotic telescope (16'x16' field) was that this advice seemed about right to achieve internal rms scatter < 0.01 mag.  Thus I often work on asteroids with 5-minute exposures with some cirrus, and the results are fine.

     At much shorter exposures (< 60s) the difference in extinction/obscuration across the small field leads to noisy or just plain wrong results.  Perhaps if one took 'many' such images, the averages would come out OK.  More recently I have been using our 1.1-m telescope on the same targets (mainly T Tauri stars).  The telescope is much more efficient, such that exposures are ~5x shorter than with the 0.7-m (which had a bad turned edge on its primary, making the images quite soft).  So now exposures are often well under a minute, and I really need photometric conditions (or nearly so) for the differential on-chip photometry to come out with high internal precision.  I generally take three to five short exposures at each visit for many of the fields, if only to beat down the scintillation noise.  Obviously with a much wider FOV this criterion is going to be less favorable, so more and/or longer exposures would be required to smooth things out.

     The smoke from worldwide wildfires has been significant in the US West in recent years, often obscuring the sky altogether.  When things are merely murky, the structure of the smoke is far broader than cirrus; it is also moving more slowly.  So I have got away with the short exposures without problems.  The obvious test is to try it on some field you're familiar with on several of the poor nights, and look at the scatter in the differential magnitudes of multiple pairs of comp stars as compared to good nights.
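
That test is easy to script; a minimal Python sketch, with the data layout and names as assumptions:

    import itertools, statistics

    def pairwise_diff_scatter(comps):
        """comps: dict mapping comp-star label -> list of magnitudes (one per image).
        Returns the standard deviation of (A - B) for every pair of comp stars;
        on a good night these should sit at the few-millimag level, while a smoky
        or cirrus-affected night shows them inflating well beyond that."""
        results = {}
        for (name_a, mags_a), (name_b, mags_b) in itertools.combinations(comps.items(), 2):
            diffs = [a - b for a, b in zip(mags_a, mags_b)]
            results[(name_a, name_b)] = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
        return results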

     I presume many folks in less-dry climates have to deal with mid- and low-level clouds that are more-or-less opaque.  That's a time to leave the telescope closed --- unless you are really desperate, and don't mind throwing out 3/4 of the data post-facto.

\Brian

Affiliation
American Association of Variable Star Observers (AAVSO)
Clouds, Smoke, Extinction and FOV

Hello! Thank you for your note.

    I had not thought about exposure time as a means to smooth out cloud artifacts. Thank you for that guidance.

    Just curious: how do you check your frames for thin clouds that might cause smaller errors? Significant clouds are easy to spot on an image. However, I would think that thinner, higher clouds that affect photometric accuracy could be much less visible in a visual frame check. Might this show up as an increase in photometric uncertainty even though the image itself appears uniform to the naked eye? Thanks

Mike

Affiliation
American Association of Variable Star Observers (AAVSO)
Clouds, Smoke, Extinction and FOV

     Yes, exactly.  It is easy to make internal checks of your data.  Some of the T Tauri fields I work on encompass several variables, or stars I am simply curious about.  One of them, for instance, includes a slightly-active field star that is hardly variable.  So even though the T Tauris can be 'active accretors' and bounce up/down unpredictably by several tenths of a magnitude each night (even during a night), that peripheral star shouldn't vary by more than ~0.01 mag night-to-night.  Another field happens to have a slow semiregular variable in it (Elias 3-9, not in VSX).  So if the data from a particular night do not follow the smooth trend from previous nights, that's a sign that the data are poor.  It should be straightforward in any particular field to measure some random star(s), along with however many formal comp stars one uses, as a control on the set-up; over a series of nights their light curves should be dead flat.
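
A tiny Python sketch of that control-star check (the 0.01 mag tolerance echoes the figure above; everything else is an illustrative assumption):

    import statistics

    def flag_bad_nights(nightly_means, tolerance=0.01):
        """nightly_means: dict mapping night label -> mean magnitude of a nominally
        constant control star.  Returns the nights whose control-star mean deviates
        from the overall median by more than `tolerance` mag."""
        overall = statistics.median(nightly_means.values())
        return [night for night, mag in nightly_means.items()
                if abs(mag - overall) > tolerance]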

     Since I am at the telescope looking at every image, I can generally tell from the sky visually, and from the count-rates on specific stars on the CCD, whether things are likely to be workable.  Luckily in Flagstaff, as Arne can attest, it is usually clear or it's cloudy, with ambiguous nights a minority.  Having things like an all-sky camera or a tiny security camera that is sensitive in the far-red can also let you see how the sky looks.  Weather satellite images are lots better than they used to be, too.  So use whatever tools you have available.

     Because of the increase in forest fires (mostly prescribed burns locally, but region-wide wildfires, too), we've now installed particle counters at our three telescope sites that give some measure of the low-level aerosols.  This allows us to set a formal numerical limit for closing the telescopes that is better defined than "smells like we're on the downwind side of the campfire, probably a good time to close up (get out the marshmallows?)".  As I type this, the widget at the 1.1-m telescope reads zero (this is the EPA reckoning of particles of 2.5 microns and smaller).  But last summer, when we had some serious wildfires that obscured the sky apocalyptically for a couple of weeks, it ran up to over 400 in those arbitrary units.  Folks were wearing masks for covid _and_ the smoke!
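
Purely as an illustration of such a numerical rule (the thresholds below are placeholders, not whatever limits are actually in use at these sites), a close/reopen decision with a little hysteresis might look like:

    def dome_action(pm25, currently_open, close_above=150.0, reopen_below=75.0):
        """Decide whether to close or reopen based on a PM2.5-style aerosol reading.
        The gap between the two thresholds keeps the dome from cycling on noise."""
        if currently_open and pm25 >= close_above:
            return "close"
        if not currently_open and pm25 <= reopen_below:
            return "open"
        return "no change"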

\Brian