How To Archive Spectra: Wavelength Calibration

How To Archive Spectra: Wavelength Calibration


I'd like to start a discussion about best practices for archiving spectra.  A number of separate groups have taken different approaches.  I'm interested in why that is, and in getting a discussion going about whether there is any one standard we can settle on that works best for the widest range of applications.  To kick it off, I can think of three areas where practices vary widely:  wavelength calibration, flat-fielding, and 1D vs 2D spectra.

I think it might work best to take these one forum thread at a time.  Let's start with wavelength calibration.  If anyone is eager to start a discussion about one of the other subtopics, be my guest, but I suggest it might work best to do that in separate threads.

Let's acknowledge that spectra are in many ways more complicated than photometry.  There is not really even a professionally agreed-upon standard, and that makes pro-am collaboration and crowd-sourced spectroscopy harder.

As a professional who does spectroscopy, I know that the most useful way to archive spectra can depend on what you want to do with them.  If you are identifying a transient, you might not even need a wavelength calibration: you know what a Type II vs a Type Ia supernova spectrum looks like as long as you generally know whether it is optical, IR, or UV.  But if you want to do radial velocity work, or catalog how an H-alpha emission profile evolves over time, then you want a good wavelength calibration.  A good relative (pixel-to-pixel) wavelength calibration is also needed to measure equivalent widths.
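A quick numerical sketch of that last point: an equivalent width is just the integral of the fractional line depth over the wavelength scale, so a stretched or compressed wavelength solution changes the answer directly.  The wavelength grid and Gaussian line below are invented for illustration:

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """Equivalent width, in the units of `wavelength`:
    integral of (1 - F/Fc) across the line."""
    return np.trapz(1.0 - flux / continuum, wavelength)

# Synthetic absorption line on a flat continuum (illustrative values only).
wl = np.linspace(6540.0, 6590.0, 500)                    # Angstroms
continuum = np.ones_like(wl)
line = 0.5 * np.exp(-0.5 * ((wl - 6563.0) / 2.0) ** 2)   # Gaussian absorption
flux = continuum - line

ew = equivalent_width(wl, flux, continuum)
# For a Gaussian of depth d and width sigma, EW = d * sigma * sqrt(2*pi),
# so here ew is about 2.51 Angstroms.
```

Note that the result is only as good as the wavelength scale: if the pixel-to-wavelength mapping is wrong, the integral is computed over the wrong baseline.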

We should also consider that there are different ways to wavelength calibrate spectra.  An end user doing radial velocity work will probably use an artificial wavelength standard such as an arc lamp for good relative calibration, and then on top of that observe radial velocity standards to improve the absolute calibration.  Someone doing abundance analysis, on the other hand, is satisfied with a good relative wavelength scale from the arc lamps and is not as concerned about a very well defined absolute scale.

We would all wavelength calibrate every time, to the highest precision possible, but we don't, because it takes extra steps and can be hard with some setups.  As a result, we do it if it is easy or if we really need it, and otherwise we skip it.  Different archives have approached this in different ways.  Here are the approaches I can think of:

  1. The forget-about-it approach - ignore wavelength calibration altogether.  The positive is that it is a uniform policy for all spectra that heavily favors those submitting data.  The drawback is that the data isn't really useful to people who need wavelength calibration, which can be really challenging for the end user.
  2. The let-us-know-if-you've-done-it approach - the archive accepts either type of data (wavelength calibrated or not), and it is up to the submitter to flag their data appropriately.  This is a very inclusive approach that makes it easy for people to submit data, but it is a problem if they flag their data one way and it is actually the other.  The drawback is that there is no uniform standard, which can frustrate end users and make the archive harder to mine.
  3. The you-MUST-wavelength-calibrate approach - wavelength calibration is required.  The negative is that this discourages people from submitting to the archive, because it requires more work from those submitting spectra.  The positive is that it raises the quality of the data by imposing a standard everyone has to meet.  End users love this because the spectra are ready to use with less interpretation.
  4. Wavecal data stored separately but linked to the observations - this is related to option 2.  Wavelength calibrations are often done with arc lamps or by observing separate standards.  In this model the calibrations are uploaded separately, for end users to apply or ignore as they see fit.  This has the advantages of #2 for the observers, and it adds a little more work for end users than #3.  But it puts a larger burden on the archive, which is now storing in some cases twice as much data and has to reliably link the observations to the appropriate calibrations, or it becomes harder for end users to navigate.

I'll say that option 4 is roughly what I do for my own data.  But as archive end users we all love #3, where the data falls in your lap with no additional work to do.  And the observer in each of us has a guilty affinity for the idea of just dumping our data with little additional effort on our part.

In each of these we see a balancing (or a conflict, depending on how you look at it) of the expectations and requirements of end users versus the observers providing spectra.  Many of you on this forum have experience running archives, or as an end user, an observer, or both.  What do you think is the best balance and practice for a spectral archive when it comes to wavelength calibration?  What have you personally found to be the best compromise for an archive to make?  I'd also be interested in hearing why.

Robin Leadbeater
wavelength calibration of spectra

Personally, if an observer is not able to wavelength calibrate their spectra and give at least an approximate measure of the uncertainty, I would question whether they are ready to submit spectra to a pro-am database; they should be encouraged and helped to achieve this minimum standard.

This is done with the pro-am Be star database BeSS, for example: since almost all spectra are taken at H alpha, the telluric lines provide an immediate check on the quality of the wavelength calibration, and any new contributors are moderated and coached appropriately to get their calibration techniques up to scratch.

I don't see that this is in principle any different from photometrists submitting data to the AAVSO database.  One assumes they are competent, and if any problems arise, the professional user would typically follow up with the contributor directly to confirm the exact procedures used.

As an example, contributors to ARAS forum pro-am projects generally follow detailed, rigorous data reduction procedures which mirror those at professional observatories and have stood up to scrutiny in refereed journals.




Robin Leadbeater
limitations of option 4

I submit fully wavelength calibrated data, normally not heliocentric corrected, as this can be done later from the observatory coordinates and times in the FITS header.  (Data should of course only be submitted to databases as FITS files with all relevant information in the header.)
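As a side note, once the heliocentric velocity has been computed from the header's coordinates and time (with standard reduction or planetarium software), applying it is just a Doppler rescaling of the wavelength axis.  A minimal sketch, with an arbitrary placeholder velocity rather than a real ephemeris computation:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def apply_velocity_shift(wavelength, v_km_s):
    """Shift a wavelength array by a radial velocity (non-relativistic
    approximation).  Positive v shifts the scale redward."""
    return np.asarray(wavelength) * (1.0 + v_km_s / C_KM_S)

wl = np.linspace(6500.0, 6600.0, 5)   # Angstroms
v_helio = 25.0                        # km/s, placeholder value only
wl_corrected = apply_velocity_shift(wl, v_helio)
# At H-alpha (6563 A), 25 km/s corresponds to a shift of roughly 0.55 A.
```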

I also keep all the original images on file in case of questions, essentially meeting the conditions of 4.

I am not convinced in general, though, that following 4 offers much advantage, as the calibration errors I see don't usually arise during the calibration step combining lamp and star spectra, provided good software and standard procedures are used.  Errors are more likely to arise from unreliable lamp spectra: for example, instability in the spectrograph producing shifts between star and lamp spectra, or poor design of the lamp illumination biasing the line positions.  Avoiding these is part of the observer's skill, and they are difficult to identify in the aggregated archived images unless all the individual star and lamp exposures (typically taken before and after each exposure) are also stored.


Robin Leadbeater
good wavelength calibration procedure

Of course the starting point for wavelength calibration is to have a good procedure in place.  Examples of bad procedure can be found for the Alpy 600 spectrograph on the Orion project website,

where it is suggested:

1) That 2 points are sufficient to perform a wavelength calibration.  Since, in common with most spectrographs, the wavelength calibration of this spectrograph is non-linear, this will lead to unnecessarily high errors.

2) That a satisfactory non-linear calibration can be performed on a spectrograph with coverage down to 360nm using neon lines between 588 and 700nm.  Since all the lines are at the red end of the spectrograph range, a non-linear fit is likely to generate significant calibration errors when extrapolated to the blue end of the spectrum, and indeed this problem has been seen in practice by those who have attempted this technique.  (The website suggests the calibration "appears to be excellent" but offers no estimate of the error, particularly for the blue end of the spectrum.)
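The extrapolation problem is easy to demonstrate numerically.  The dispersion relation below is an invented cubic, not that of any real Alpy 600, but the behaviour is generic: a quadratic fitted only to red-end lines goes badly wrong when extrapolated to the blue, while the same fit to lines spread across the range stays close:

```python
import numpy as np

def true_wavelength(p):
    """Invented non-linear dispersion (nm vs pixel), for illustration only."""
    return 360.0 + 0.30 * p - 2.0e-5 * p**2 + 6.0e-9 * p**3

# Calibration lines clustered at the red end (~600-700 nm), as with a neon lamp.
red_pix = np.array([850.0, 950.0, 1050.0, 1100.0, 1150.0, 1190.0])
fit_red = np.polyfit(red_pix, true_wavelength(red_pix), 2)

# The same number of lines spread across the whole detector.
spread_pix = np.linspace(0.0, 1190.0, 6)
fit_spread = np.polyfit(spread_pix, true_wavelength(spread_pix), 2)

# Error of each quadratic solution at the blue end (pixel 0, ~360 nm).
err_red = abs(np.polyval(fit_red, 0.0) - true_wavelength(0.0))
err_spread = abs(np.polyval(fit_spread, 0.0) - true_wavelength(0.0))
# err_red comes out at several nm; err_spread stays well under a nm.
```

The same fit that looks "excellent" over the red lines themselves can be off by several nm once extrapolated 200+ nm bluewards, which is exactly the failure mode described above.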

Fortunately we can visit the spectrograph designer's website, where robust wavelength calibration procedures are described for this spectrograph, including the case where the matching NeAr calibration lamp is not available.




I'm going to step in here


I'm going to step in here and say that, Robin, you make good points about applications that require high precision wavelength calibration.  There are applications for which it is absolutely necessary.  But extending your use of photometry as an example, there are many different levels of observers in the AAVSO.  It is designed that way, to cultivate talent and give beginners a home.  The archive (the AAVSO International Database) is built to accommodate all those levels.  It is not perfect.  But the price of a perfect system can be a loss of opportunity and talent.

I guess I should have started with the idea of building the equivalent of the IAD for spectroscopy.  Can it be done?  I think that is an open question.

Again with photometry as an example, the IAD is successful because it serves professionals.  You are absolutely right that the success of any spectroscopy database relies on winning the approval of professionals.  But there again, the professional world of spectroscopy is less black and white.  If professionals could agree on a universal format for archiving spectra, this discussion would be a lot easier.  As a pro, I need to know as many ways of dealing with spectra as there are spectrographs.  Yes, IRAF has a spectra package, but not many pros use it, because if you are a regular consumer of spectra you quickly develop your own home-brewed software that gets the job done for what you want to do.  I will concede that this is inefficient (and I'll spare you the joke about herding cats), but it is a reality.  Even among professionals, a universal set of standards and formats incites heated discussions.

As an example, I was part of a project with the Hubble Space Telescope where the standard wavelength calibrations were an issue.  They were not done with the program.  That did not stop the data from going into the HST archive!  We did the calibration later using sharp emission lines in the object and checked it with interstellar lines.  The point is that even pros do some wacky things when it comes to data that is sitting in an archive somewhere with no notation on it.  I get your point: that kind of thing is not a good idea for beginners.  But even professional archives are not very strict (or even uniform) about wavelength calibration.

At the risk of offending you I want to take exception to what you said about the Orion project.  We agree that the wavelength calibration they are enforcing is not sufficient for some applications.  However, it is sufficient for a survey of variability.  Will what they find be definitive?  Maybe not.  But there is a long tradition in astronomy of starting the survey broad, learning from what you find, and refining your requirements to address what you find as you go forward.

I have personally done only two-point wavelength calibration on spectra I have published in major journals.  The instrument was well understood and the application did not require higher precision.  I see it as part of the end user's responsibility to understand the limits and act appropriately.  There is also a burden on the observer to communicate what was done.

Your point of view is valid and very focused on the concerns of the end user.  As end users, we'd all love to have everything to a very high precision.  But we lose something if we pass all the responsibility to the observers.  I'd offer that sometimes good enough is good enough.  For the growth of spectroscopy as a crowd-sourced endeavor, I think we need to provide a place for observers at the entry level, because sometimes any data is better than no data at all.

AAVSO Spectra and Calibration

I'm happy to see the serious discussion of spectroscopic data management get underway.  Having taken many thousands of spectra myself with no place to archive them, I look forward to the day when the AAVSO will provide a permanent accessible archive for this kind of data.

That being said, I think it is very important that we distinguish between the mission of the AAVSO and the operation of pro-am collaborations such as the Be star project or the CBA.  Those projects, run by and for the benefit of professional astronomers focused on a specific interest, can set any standard they wish for amateur collaboration.  The data collected in such projects will be quickly transformed into papers.  After a few years most of these projects will come to an end.

The AAVSO, on the other hand, is not a science project.  It is a permanent (we all hope) institution in the international astronomy community.  Its mission is to enhance our knowledge of variable stars by collecting and providing access to observations over long periods of time, largely provided by citizen scientists.  Obviously no one wants "wrong" information to be preserved and distributed.  On the other hand, we want as many people as possible to be able to contribute, using every kind of instrument available today and 50 years in the future.  The test of acceptability of spectral data ought to be whether it is potentially useful for some kind of scientific purpose.  

For example, consider a spectrum of a 4th mag star taken with a 100 lpm grating in a filter wheel, processed with RSpec.  R~50?  Maybe.  Pretty crude data by current standards, like a visual magnitude of a Mira with a 0.2 mag error estimate.  Should the AAVSO provide a way to keep it?  Well, let's imagine that we had a time series of such simple spectra, taken every month or so, of Delta Sco starting in the 70's.  It's not hard to imagine that this data would be of great interest and genuine publication value when Delta Sco suddenly went into B[e] emission in the late 90's.

My point is that we should be as open as possible, so as to get as much information into the archive and as many participants providing it.  This means that we provide a way to archive original image frames as well as fully reduced FITS tables.  Thirty years from now, if a grad student is digging into the history of a star, I am quite certain that he or she would really want to be able to reanalyze images from the '10s with 2045 software rather than be forced to depend upon reductions made by someone no longer active who used software that can no longer be run.

Storing data costs little today and will cost much less a decade from now.  We don't have to limit ourselves to one neat file of fully corrected and reduced data vetted to publication standards.  If we do so, we will end up with very little data on very few stars.

As a minimum, I would want the observer to provide two dark-subtracted images, one of the target and one of a spectral standard star, taken on the same night with the same instrument.  This would be accompanied by a fairly extensive set of parametric information on the location, sky conditions, sensor, etc., which can be entered through a web form.  This package is sufficient for calibration by some future investigator.  Data of this form can only be archived and retrieved, but that's a hugely important capability that we do not have today.

Fully reduced data (with or without images) is clearly better.  We should encourage observers to learn how to provide it as their skills grow.  But we should not, and have no need to, limit ourselves to a particular standard of perfection as a price of entry, any more than we should require 20-inch telescopes and 17K-resolution spectrographs as minimum data sources.

Those are my thoughts anyway.

Gary Cole

Starphysics Observatory, Reno.




Save The Images (Gary's Idea)

I like this idea, Gary.  This reminds me of plate archives and how useful it is to be able to go back to the glass and work from there.

The only concern (as it is with plate archives) is storage space.  But I think you make a relevant point that bits and bytes are relatively cheap right now.  That is definitely worth pricing out.

TCB168
This is interesting

This is interesting.

I submit data to BeSS as 1D FITS files.  They are checked for consistency and quality before being admitted to the database.  They are all wavelength calibrated and instrument response corrected.  I can then easily compare my spectra to previous ones, as can any researcher.  I don't see how the raw 2D images could be easily compared.  It would mean downloading large numbers of files, processing them, and then comparing them.  This would be very time consuming, with little chance of finding something.  Not dissimilar to uploading thousands of galaxy images and letting someone else search through them for supernovae.
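For what it's worth, part of what makes calibrated 1D FITS files so easy to compare is that the wavelength scale travels in a handful of standard FITS WCS header keywords.  A minimal sketch of reconstructing the axis (a plain dict stands in for the header so the example runs without a FITS library; the keyword values are invented):

```python
import numpy as np

# A plain dict stands in for a FITS header here; the keywords themselves
# (NAXIS1, CRPIX1, CRVAL1, CDELT1) are standard FITS WCS for a linear axis.
header = {
    "NAXIS1": 2048,      # number of pixels
    "CRPIX1": 1.0,       # reference pixel (FITS pixels are 1-indexed)
    "CRVAL1": 3800.0,    # wavelength at the reference pixel (Angstroms)
    "CDELT1": 1.5,       # dispersion (Angstroms per pixel)
}

def wavelength_axis(hdr):
    """Linear wavelength scale from FITS WCS keywords."""
    pix = np.arange(hdr["NAXIS1"]) + 1.0   # 1-indexed pixel numbers
    return hdr["CRVAL1"] + (pix - hdr["CRPIX1"]) * hdr["CDELT1"]

wl = wavelength_axis(header)
# wl[0] == 3800.0 and wl[-1] == 3800.0 + 2047 * 1.5 == 6870.5
```

Any two spectra carrying these keywords can be put on a common wavelength grid and overplotted immediately, which is exactly what makes comparison with earlier epochs cheap.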

Even if the image files and the calibration files were stored as well, these would not be of much use in the future without data about what the calibration lamp lines are.  If a researcher in the future wanted to check them, he/she would have to guess what the calibration data and the observing conditions were.  This info could be added to the database as well, but now you have another file that has to be kept with the image files.  It starts to become very unwieldy.

Calibrated 1D FITS files are relatively easy to store and study.  If a researcher wanted to see the original images, they could approach the person that took them.  This is the same with photometry images: we don't store them in a central database, just the photometry measurements.

It is not too hard to calibrate spectra.  It would be interesting to find out whether the calibration process is actually stopping anyone from deciding to participate in spectroscopy.  Maybe a survey is needed.



I'm glad to see some active

I'm glad to see some active discussion taking place on the role of spectroscopy within the AAVSO.

I assume this means that, subject to agreeing on the parameters, AAVSO spectroscopists will/could be allocated data collection time on suitable telescope systems.  That would be great news.

I think we must accept the basic differences between traditional variable star observing and the data collected with spectra.  The useful data is contained within the spectrum itself, so this needs to be available.  A search criterion for users, besides the obvious target name, would be the R value of the instrument used.  This would give a first indication of whether the data could be used for their purpose.

There are two basic tools used by amateur spectroscopists: lo-res gratings (either as objective gratings or in the converging beam of a telescope), by far the most popular "entry point" for the novice, or the more complex (and expensive) slit spectroscope.

Gratings are very cost effective, giving good "bang per buck".  They can quickly obtain a collection of R<250 spectral images, and by applying a basic wavelength calibration (either two-point, or one point plus a known dispersion) present a usable spectral profile of the target object.  There are many freeware processing packages which can provide usable calibrations.
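The two-point calibration mentioned here amounts to solving for a dispersion and a zero point from two identified lines.  A minimal sketch (the pixel positions below are invented for illustration); note it is exact only if the instrument's dispersion really is linear, which is the limitation debated elsewhere in this thread:

```python
import numpy as np

def two_point_calibration(p1, w1, p2, w2):
    """Linear pixel-to-wavelength mapping from two identified lines.
    Returns (dispersion, zero_point) so that w = zero_point + dispersion * p."""
    dispersion = (w2 - w1) / (p2 - p1)
    zero_point = w1 - dispersion * p1
    return dispersion, zero_point

# Illustrative values: H-beta and H-alpha identified at two pixel positions.
disp, zero = two_point_calibration(412.0, 4861.3, 1120.0, 6562.8)

pixels = np.arange(0, 1500)
wl = zero + disp * pixels   # linear wavelength scale for the whole frame;
                            # adequate only when the dispersion is linear
```

The one-point variant is the same formula with the dispersion taken as a known instrument constant instead of being solved for.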

I think therefore when we talk about beginners to spectroscopy it's a very safe bet that we're talking about lo-res gratings.

Slit spectroscopy at R=3000 to 30000 is a completely different ball game.  The instruments are not cheap and they are a "one trick pony": they can't take images of the sun, moon, planets, DSOs, etc.  Producing a spectrum is all they do, but they do it well.  Amateurs who have made the investment of time and money to set up and use a slit spectroscope are almost committed to preparing their data in as professional a manner as possible.  Why spend thousands of dollars and NOT do a proper reduction???

The BeSS database has been mentioned and flagged as a "Pro-Am" database which is (my words) unsuitable for Joe Public and his spectra. I strongly disagree!

It's not the BeSS database we should be looking at - it's the methodology.  The contents required in the FITS header tell any subsequent investigator all he/she would need to know to determine if the data is "fit for purpose" - check it out.  The content of the header is basic stuff, but for spectroscopy it is critical.

As an example, spectral data collected by various amateurs (using gratings and slit spectroscopes) on V1369 Cen has been uploaded to the ARAS Nova page - no issues/no drama/no problems.  The header information is there on every submission.

As a minimum, any amateur collecting spectral data will want to carry out a wavelength calibration.  (I honestly can't think of any example which has not been calibrated.  Sure, we can debate the accuracy, but at R=50 - so what?)

I've already been heavily criticised for encouraging amateurs to venture into spectroscopy (!!) - "They need a physics and/or astrophysics degree to even start understanding the subject."

Our role is data collection. I accept we are all "under qualified" but that should not stop or hinder our own development and quest for understanding. I refuse to accept that amateurs can't contribute in spectroscopy. We've been doing it for many years.

Sorry for the length of the reply, but as you may guess this is important to me. It's my passion.





Slit vs No Slit

My very zen reply to Ken's comments about slit vs no slit is that on some level there is no such thing as "no slit" or slitless, because when you think about it your "slit" with the objective prism or grating is shaped like the zero order image.  That can make the analysis interesting.

Following Gary's idea, if we had the image saved, it becomes a workable problem, because the grating produces a zero order image that records the shape of your "slit" for deconvolving with the spectrum.
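The idea can be sketched as a convolution: to first order, the slitless spectrum is the true spectrum convolved with the source profile along the dispersion direction, which is what the zero-order image records.  A toy 1D example with an unresolved emission line and an invented Gaussian profile:

```python
import numpy as np

# Toy model: an unresolved emission line (a delta function in pixels).
true_spec = np.zeros(200)
true_spec[100] = 1.0

# The zero-order image of a point source: an invented Gaussian profile,
# normalized so the convolution conserves flux.
x = np.arange(-10, 11)
profile = np.exp(-0.5 * (x / 3.0) ** 2)
profile /= profile.sum()

# The observed slitless spectrum is the line smeared by the source profile.
observed = np.convolve(true_spec, profile, mode="same")
# The line stays centered at pixel 100 but is broadened and lowered, with
# total flux conserved.  Deconvolving with the recorded zero-order profile
# could sharpen it back - provided the image was archived.
```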

The more I think about it, the better I like the idea of archiving the images.

Robin Leadbeater
calibration procedure


> At the risk of offending you I want to take exception to what you said about the Orion project.  We agree that the wavelength calibration they are enforcing is not sufficient for some applications.  However, it is sufficient for a survey of variability.  Will what they find be definitive?  Maybe not.  But there is a long tradition in astronomy of starting the survey broad, learning from what you find, and refining your requirements to address what you find as you go forward.


No offense taken, of course.  I think in that case a clearer statement of the project's requirements in terms of accuracy and precision would be useful, so that contributors can judge the capabilities of their equipment and their performance against them.  A more honest statement of the accuracy and limitations of the proposed methods would also be useful and would avoid confusion, particularly among beginners.

Getting back to the subject of databases, this is an important issue for a general database where it is not always clear to what use the data will be put.  In this case one has to be doubly careful to do the best one can to maximise the value.  To date almost all pro-am spectroscopy work has been for specific projects where the requirements can be (but admittedly are not always) clearly defined.

There is also a long history in science of building on what others have achieved.  How about adding a link to Buil's calibration methods to the site, so that those who want to investigate alternative (more accurate) methods of wavelength calibration can do so?

The calibration technique using Balmer lines in a reference star is actually a simpler way to calibrate the Alpy and other similar low resolution spectrographs than the proposed home-made neon calibrator approach, and it gives a better calibration.  The reference star can also be used to correct for instrument response and, provided it is taken at a similar altitude, even approximately correct for atmospheric extinction.  It would be ideally suited to this project and will already be familiar to those who are upgrading from simple Star Analyser grating spectroscopy.



Round up so far


One idea/view emerging in this discussion is that in order for an archive to be successful it must give professionals what they want, and what follows from that is that there need to be really high standards for what goes in.  I only partly agree with that.

One model for "sections" that has worked very well for the AAVSO is to have associated professionals who provide campaigns and ideas for projects, set standards, and give feedback along the way.  Along those lines, I agree that professionals have an important active role to play in non-professional spectroscopy.  It is good to want to earn their attention and respect.  But IMHO there will always be pros that look down their nose at non-pros, no matter what.  My own concern is that a strict policy and high standards may not even satisfy those people, while at the same time having the unfortunate side effect of discouraging a lot of novice observers.

As an end user, I would love it if everyone were an expert observer.  But honestly, some of the most satisfying collaborations I have had with citizen astronomers were ones where working together made both of us better.  This is why I think that if the AAVSO wants to get into the spectral archive business, we should pick standards that are forgiving.  It is better IMHO to build a system where the price of making a mistake as an observer is low but the rewards for doing a great job are satisfying.  I think there is a lot to gain in encouraging novices, as opposed to a system that weeds them out or presents a high hurdle to get over to be accepted.  Some would rather pay the price of having a few bad seeds in the mix than throw out any wheat with the chaff.

From this perspective one philosophy isn't better than the other.  Which one you pick depends on what you decide you value most.  I see room for more than one philosophy operating out there, not competing, but serving different audiences and purposes.  I actually think that the stricter weed-out approach would benefit from an environment where it co-existed with a more nurturing approach, in the way that a farm team raises the level of talent available to a big-league team.  I don't worry about the pros and all the best players deciding to work exclusively with the big leagues.  There will be those who decide, again and again, to play with the "farm team" regardless of their level, because that fits their personality better.  Some of us measure our satisfaction by a few memorable wins rather than a perfect record.

A suitable archive FITS file

A suitable archive FITS file with ALL the data and information you'd need would only take up 25k or so.

Storing images would require an additional linked database giving some basic information on the acquisition methods etc.  These could be large image files (!) and would need a complex "search" methodology.

Why not just agree to store FITS files of the 1D profiles of the spectra?

This doesn't present any major barriers to the novice, and it would indicate both to them and to subsequent users that some thought has gone into acquiring the spectrum.


By way of example.

What would you rather find when you went searching in the archive - an image like this:

novacen1148_d_c.jpg (I can't load the original 250k cropped FITS file to the message)

or a 1D profile like this:

NovaCen_5100A_280114.png (the FITS file would be 25k and hold all the identification info in the header)

Christian Buil
Spectra archive (ARAS, BeSS experience)

A typical example of an archive for a current campaign (SN 2014J):

(ARAS actual archive status - experimental form)

Another "huge" recent campaign, on Nova Delphini 2013:


(First, efficient amateur spectroscopy is now a reality.  Second, note the variety of setups used during this campaign, from slit to slitless spectrographs and up to échelle models, but with very consistent results and high quality (with the contribution of a professional, Steve Shore, for some diagnoses).)


Note also the uniform use of FITS syntax from the BeSS experience, understood and implemented in many software packages.


From my personal experience, I can affirm that the learning curve is extremely fast for amateur spectroscopists.  Of course, sometimes you help observers at first, but after 2 or 3 e-mail exchanges or discussions on a forum, everything is in place.  Intercomparison of results is also crucial for moving fast.

Note the importance of forums:


The key to success is the use of clear observation and processing protocols, well validated over time and with experience.


It is not always useful to re-invent the wheel.  There are now well established processing protocols, well suited to our instruments and understood by many amateurs.  Amateur spectroscopy is (presently) a mature discipline!  Of course, improvement is also an ongoing topic.

For me, slit spectrographs are easier to use and give the most reliable results quickly.  Today, such a spectrograph for medium resolution can cost less than a good CCD camera.  But slitless instruments are also valuable and can give good results for low resolution observations (see the work of Robin, for example) - though paradoxically I think they are more difficult to operate (a personal opinion).


Last, I have 20 years of experience in spectrography, and since that time I have archived the raw data (2D images).  I have lots of hard drive space.  But I rarely (if ever) look at these images, because ... I took care during spectral extraction and calibration!  These old data are not easy to exploit (software evolution, huge volume, ...).  For me, one night of spectrographic observation presently represents typically 1 GB of raw data!  Archiving the raw data in the context of a multi-user database is totally impractical and unnecessary.


I note with great satisfaction the increasing interest of AAVSO members in spectroscopy.  The possibility of an AAVSO spectroscopy archive is good news.  Clearly, it opens new pro-am collaborations and educational projects.


Christian Buil



I agree that it would be

I agree that it would be useful for the spectra to be "indexed" by some images that could be quickly displayed to thumb through them.  The Eta Car archive has preview images of its spectra in a 2D form.  Those are automatically generated by an indexing algorithm when new data is added to the archive.  Because of the nature of those spectra (there is information in 2D, moving in the cross-dispersion direction) there is no preview plot of brightness vs wavelength.  But I agree that would be useful for an archive that contained spectra of point sources.

Olivier Thizy
John, ARASBeAm, a BeSS


ARASBeAm, a BeSS database front-end, does this by creating thumbnails from the database spectral profiles.  For example, the last spectra uploaded:


I forgot to mention that BeSS is VO-compatible, which allows the use of some tools to search, download, and compare spectra.  For example, the VisualSpec freeware can search through the BeSS VO database and has some tools to create thumbnails as well...



Olivier Thizy

I think these discussion also

I think this discussion also answers the question of 1D vs 2D files......

That only leaves flat field corrections.

I'd be interested to hear (maybe in a new thread) from any spectroscope users, with comments on their current methods of flat fielding....

I'll start the new thread.

Olivier Thizy
Experience from BeSS & ARAS




There were talks with the AAVSO a long time ago on the subject of a spectra database; I would be glad if it could progress somehow. But I would also hope that it doesn't start from scratch and reinvent the wheel.


ARAS is an informal group on amateur spectroscopy, but it has already made some progress which could (should?) be used for any AAVSO spectra database. We are all available to help the AAVSO upon request, so it can benefit from our now long-term experience.



The BeSS database, developed by professional astronomers at Paris Observatory jointly with amateurs, initiated the use of FITS files for spectral profiles.


BeSS :


I believe a lot of people would agree that this is mandatory nowadays, but ten years ago the format used in the amateur community was a simple text file without a header. Most spectroscopy software today creates FITS files compatible with the « BeSS format », which can be found there :
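For readers less familiar with the convention: a 1D FITS spectral profile of this kind typically encodes its wavelength scale with the standard linear FITS WCS keywords (NAXIS1, CRVAL1, CDELT1, CRPIX1). Here is a minimal sketch of reconstructing the wavelength axis from those keywords — the header values below are made up for illustration, not taken from a real BeSS file:

```python
# Sketch: recover the wavelength scale of a 1D spectrum from the
# standard linear FITS WCS keywords. FITS pixel indices are 1-based:
#   lambda(p) = CRVAL1 + (p - CRPIX1) * CDELT1

def wavelength_axis(naxis1, crval1, cdelt1, crpix1=1.0):
    """Wavelength (in the header's units, typically Angstroms) for each
    pixel of a 1D spectrum with a linear dispersion solution."""
    return [crval1 + (p - crpix1) * cdelt1 for p in range(1, naxis1 + 1)]

# Illustrative header: 2000-pixel spectrum starting at 6400 A, 0.1 A/pixel.
wave = wavelength_axis(naxis1=2000, crval1=6400.0, cdelt1=0.1)
print(wave[0], wave[-1])  # 6400.0 6599.9
```

Non-linear dispersion solutions need more than these keywords, which is one reason the BeSS approach asks the observer to deliver an already-calibrated profile.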



One of the main principles of BeSS is that all instrument-specific effects should be reduced.


This includes pre-processing (bias, dark and flat), geometrical correction (tilt, smile, slant...), and profile extraction using, if possible, optimized extraction algorithms.


This also includes wavelength calibration to the best (minimal) RMS residual – a multi-line calibration with lines well spread over the spectral domain is preferred to a simple two-line calibration for a non-linear instrument, for example.
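As a rough illustration of that point (the arc line positions below are synthetic, not real measurements), here is a sketch comparing a multi-line polynomial dispersion fit with a two-line linear calibration on a slightly non-linear instrument:

```python
import numpy as np

# Synthetic arc lines from a mildly non-linear (quadratic) dispersion.
pix = np.array([100.0, 500.0, 900.0, 1300.0, 1700.0])     # line centroids (px)
lam = np.array([4050.1, 4252.5, 4458.1, 4666.9, 4878.9])  # lab wavelengths (A)

# Multi-line fit: low-order polynomial over well-spread lines.
coeffs = np.polyfit(pix, lam, deg=2)
rms_poly = np.sqrt(np.mean((np.polyval(coeffs, pix) - lam) ** 2))

# Two-line "calibration": straight line through the first and last line only.
slope = (lam[-1] - lam[0]) / (pix[-1] - pix[0])
lam_lin = lam[0] + slope * (pix - pix[0])
rms_lin = np.sqrt(np.mean((lam_lin - lam) ** 2))

print(f"polynomial RMS residual: {rms_poly:.3f} A")  # essentially zero here
print(f"two-line RMS residual:   {rms_lin:.3f} A")   # several Angstroms
```

The two-line fit passes exactly through its two anchor lines but leaves residuals of several Angstroms in between, which is exactly the error a multi-line fit with a minimal RMS residual avoids.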


Last but not least, this includes instrumental response correction, as this can only be done by the observer using his own observational data.


BUT the BeSS file format recommends not applying data reduction steps that could be done later on :

-telluric absorption line removal ; we assume that better techniques could be implemented in the future, so it is better not to do it for BeSS spectra

-continuum normalization : this is a very tricky operation to perform (fitting the continuum) which depends on the object, and it is something that should be left to the person using the spectrum for later analysis, so that the same method can be used on all spectra

-heliocentric radial velocity correction ; this can be done at any later stage with potentially better algorithms.

-interstellar absorption correction
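To illustrate why a correction like the heliocentric one can safely be deferred: once the analyst has computed the correction velocity (with whatever ephemeris code they prefer), applying it is just a rescaling of the archived wavelength axis. A minimal non-relativistic sketch, with an illustrative velocity value and a sign convention that depends on how the analyst defines v:

```python
# Sketch of a deferred heliocentric correction: the archived spectrum is
# left untouched; the analyst rescales the wavelength axis at read time.
# v_helio_km_s below (25 km/s) is illustrative, not a real observation.

C_KM_S = 299_792.458  # speed of light, km/s

def heliocentric_shift(wavelengths, v_helio_km_s):
    """Rescale observed wavelengths by the non-relativistic Doppler
    factor (1 + v/c). The sign of v follows whatever convention the
    analyst's ephemeris code uses."""
    factor = 1.0 + v_helio_km_s / C_KM_S
    return [w * factor for w in wavelengths]

obs = [6560.0, 6562.8, 6565.0]          # observed wavelengths (Angstrom)
corr = heliocentric_shift(obs, 25.0)    # hypothetical +25 km/s correction
# Each wavelength moves by roughly 0.55 A near H-alpha.
```

Since this is a one-line transformation, there is no loss in leaving it out of the archived file; redoing it later with a better ephemeris is trivial.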


In summary : reduce your data for every defect that is specific to your instrument, but no more. Of course, more can be done for your own analysis, but for a spectra database there is no need to do it now when it could be done later with better tools and/or a more consistent method on spectra from different sources.



Keep in mind that BeSS is not only a database. ARASBeAm is a front-end developed around BeSS that allows observations to be driven toward the most interesting targets. This is very beneficial to ensure, for example, good coverage of the Be stars. It is also a great tool for observers, who can easily find targets for their equipment – be it a low- or high-resolution spectrograph.



BeSS also has a group of administrators who validate every spectrum submitted. Not only does this ensure the good quality of the database content, it also helps all newcomers reach a minimum level of quality in their observations.

I am sure you will find people who benefited from this support among the more than 50 amateurs who have published spectra in BeSS...



The big issue with BeSS is that it is for Be stars only (well, Herbig Ae/Be too). We are working on a more general spectra database, and the current campaigns on the ARAS website are actually a preliminary step toward that database:


ARAS web site :

ARAS database :

Nova Del 2013 page :

Nova Del 2013 database :

Nova Cen 2013 :

SN2014J page :

SN2014J database :


The current campaigns allow the most critical data to be archived, and the example of the Nova Del 2013 spectra is exactly what we would like to expand to numerous other variable stars.


We have a prototype of a more automated/efficient database that should help manage more spectra more efficiently. The ARAS goal for the spectra database is to simplify the process – to « Keep It Simple » – and to support either campaign-specific observations or more general spectra, over a broad list of objects.



I only hope that the AAVSO initiative will be coherent with ARAS – but I'm confident that it will be. Again, the ARAS folks are available to help the AAVSO in any way they can.



Olivier Thizy

AAVSO 49 Bay State Rd. Cambridge, MA 02138 617-354-0484