If you have a back-illuminated CCD that is not a deep-depletion type, is there anything that can be done in image processing to mitigate etaloning effects in bands such as Rc and Ic that extend beyond 700 nm?
Brad Walter, WBY
Etaloning (or fringing) is caused by atmospheric emission lines in the red. These narrow emission lines act like a monochromatic light source, and the thin sensing layer of the CCD acts like an etalon, setting up interference fringes.
Note that it is primarily due to the atmosphere, and it is a relatively low-amplitude effect. It looks horrible because the eye easily picks out such patterns across the field of view. Because it is due to emission lines, it really doesn't affect the star signal - just your ability to measure the sky (and to extract faint stars).
To get rid of it, take lots of dithered images of as blank an area of sky as possible (basically, away from the galactic plane and away from bright stars). KPNO used to keep a list of blank fields; we ought to resurrect that sometime. Then median-combine the dithered frames to remove the stars - that leaves you with sky, fringing included, since it is a fixed pattern on your sensor. Then all you have to do when observing is determine the mean sky background, scale your "master sky" to match, and subtract.
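For what it's worth, the combine-scale-subtract arithmetic above can be sketched in a few lines of Python with NumPy. This is a toy illustration of the approach, not anyone's actual pipeline; the scaling here simply matches median sky levels:

```python
import numpy as np

def make_master_sky(dithered_frames):
    """Median-combine dithered blank-sky frames pixel by pixel.
    The stars move between dithers, so the per-pixel median rejects
    them, leaving sky plus the fixed fringe pattern."""
    return np.median(np.stack(dithered_frames), axis=0)

def subtract_fringes(science, master_sky):
    """Scale the master sky to the science frame's sky level, then subtract."""
    scale = np.median(science) / np.median(master_sky)
    return science - scale * master_sky
```

Because the fringe amplitude scales with the emission-line sky brightness, matching the overall sky level also (approximately) matches the fringe amplitude.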
I am likely missing something, but if you subtract a mean sky background from a fringe pattern, are you not left with a fringe pattern? Unless the subtraction produces zero ADU in all pixels?
Since etaloning is an interference pattern caused by the interaction of atmospheric emission lines with internal reflection in the chip, the median of the dithered images of an "empty portion of sky" contains the interference pattern from those long-wavelength sky emission lines. When you subtract it, you subtract the interference pattern, or at least a close approximation.
There are deep-depletion versions of back-illuminated chips that greatly reduce etaloning: the silicon layer is thicker and absorbs more of the long-wavelength light, reducing the amount of internal reflection. However, there is a big price difference - in the range of $10,000 for the cameras my club is looking at - between the standard back-illuminated and deep-depletion versions.
Brad Walter, WBY
All of our back-illuminated cameras here at Tartu Observatory have pretty bad fringing patterns. While it is usually possible to select a "nice playground" for the target and comparison stars - regions without a severe fringing pattern - in general fringing introduces pretty bad scatter when measuring fainter stars.
I found a very good fringing-removal description for Palomar 5 m telescope data (http://www.ifa.hawaii.edu/~rgal/science/lfcred/lfc_red.html#fringe), though it is written for IRAF (which I fortunately use on a daily basis ;-) ). Based on that text, I wrote a more fringing-removal-focused text for my students (still based on IRAF); I'll attach it here, too. Maybe you will find it useful.
A quick outline of the process, which is very similar to what Arne described:
- preprocessing all data frames, flats, and blank sky frames (if they exist). NB! Blank sky frames must be flat-fielded, too!
- processing in IRAF is easy using file lists, creating those lists for all required processing steps
- creating masks for the stars on the blank sky frames OR on numerous scientific target frames (typically dithered or covering different sky areas; the more frames you have, the better the SNR of your isolated fringe pattern)
- combining the blank sky frames (or many target frames, where the positions of the stars change from frame to frame) into a "superflat", taking the per-frame masks into account
- smoothing that superflat with a median filter to remove high-frequency patterns (fringing is often a pretty smooth pattern)
- subtracting the median superflat from the superflat gives you a more or less pure fringe pattern, deviating around 0. In my case it is typically ±30 ADU.
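For anyone outside IRAF, here is one reading of the superflat steps sketched in Python (NumPy plus SciPy's median filter). The star masks are assumed to be encoded as NaNs, and the 64-pixel smoothing scale is an illustrative guess on my part, not a recommendation from the original procedure:

```python
import numpy as np
from scipy.ndimage import median_filter

def isolate_fringe_pattern(masked_sky_frames, smooth_size=64):
    """masked_sky_frames: flat-fielded blank-sky (or target) frames with
    star pixels set to NaN. Returns a fringe pattern deviating around 0.

    nanmedian ignores the masked pixels, so each pixel only needs to be
    unmasked in some of the frames. The median-filter scale (an assumed
    64 px here) must be larger than the fringe spacing, so the smoothed
    image keeps only the large-scale background."""
    superflat = np.nanmedian(np.stack(masked_sky_frames), axis=0)
    background = median_filter(superflat, size=smooth_size)
    return superflat - background
```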
Scaling just the "superflat" (basically what Arne described as the result of median-combining the blank sky frames) can be problematic - the sky level (in addition to the fringes) can be significant.
To apply the fringe pattern to object/target frames:
- preprocess object/target frames
- create masks for all of those frames (!)
- scale the fringe frame according to the fringe-pattern intensity in each target frame, and subtract it
All of this can be done with a few IRAF commands. Since the fringing pattern is caused by a more or less stable (wavelength-wise!) natural phenomenon, one can combine many target frames from multiple nights into a single fringe pattern. This fringe pattern is essentially an instrumental signature.
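The scaling step can be done in several ways; one simple option (an assumption on my part, not necessarily what the IRAF tasks do) is a least-squares fit of the fringe amplitude over the unmasked sky pixels:

```python
import numpy as np

def defringe(target, fringe, mask=None):
    """Subtract a scaled fringe pattern from a preprocessed target frame.
    fringe: the (roughly zero-mean) fringe pattern isolated earlier.
    mask: boolean array, True where pixels are usable sky (stars excluded)."""
    if mask is None:
        mask = np.ones(target.shape, dtype=bool)
    t = target[mask] - np.median(target[mask])  # remove the sky level
    f = fringe[mask]
    amplitude = np.dot(f, t) / np.dot(f, f)     # least-squares fringe amplitude
    return target - amplitude * fringe
```

The least-squares fit has the nice property that star residuals or noise that are uncorrelated with the fringe pattern average out of the amplitude estimate.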
I also attached two files, one from before and one from after the fringing correction.
As a side note, a similar process can be used to create an illumination correction - e.g. to take into account scattered light in the telescope and the photometric instrumentation.
Thanks for the detailed process. I have a question about the median-filtered superflat. Over what scale do you apply the median filter - 3x3, bigger - or do you mean you take the median of the entire image? If the goal is simply to remove the average background, then using the median of the whole image would make sense. If the scale of the median filter varies depending on the characteristics of the superflat, what is the criterion for selecting the filter scale?
Brad Walter, WBY