Improving Upload and Plate Solve Processing Time?

Affiliation
American Association of Variable Star Observers (AAVSO)
Thu, 03/05/2015 - 20:23

I notice that there are times when VPhot is cranking through hundreds of images in the queue, and looking closer a few days ago, I noted that fully 50% of the images were 30 MB or larger. As a VPhot newbie, I'm not sure how bad the backlog gets, but it occurs to me that we could take a bit of time at our end to save time and processing speed for everyone else, and ultimately get our results faster.

1. If we compress the files, they upload a lot faster.

2. For some fields, I think shooting a sub-frame could reduce the amount of data to be uploaded, plate solved and stored.

3. A combination of 1 and 2 above could cut upload and processing times dramatically -- in my case nearly 90% -- and free up additional storage space as well.

Example:  My unbinned full-frame images are 32 MB.  When zipped, they are 14.8 MB, an improvement of over 50%.  For some variables, the entire frame is not needed, provided adequate comp and check stars are near the target and you don't need the full frame to identify the target with certainty.  I was able to shoot a series of U Gem using a sub-frame in MaxIm DL, reducing the uncompressed file size to about 7.6 MB.  Using a combination of the sub-frame image size and compression, the file sizes being uploaded were reduced by 89%.
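For anyone who wants to try the same thing on frames already sitting on disk, here is a rough Python sketch of both steps combined: cropping with astropy's Cutout2D and then zipping. The file names, crop size, and cutout position are placeholders for illustration; shooting the sub-frame at the camera, as I did in MaxIm DL, is still the better route since it saves camera download time too.

    import os
    import zipfile

    from astropy.io import fits
    from astropy.nddata import Cutout2D

    FULL_FRAME = "ugem_full.fits"    # hypothetical 32 MB unbinned frame
    SUB_FRAME = "ugem_sub.fits"
    CROP_SIZE = (1024, 1024)         # (ny, nx) pixels; enough for target + comps

    # Crop the full frame to a sub-frame. Here the cutout is centered on
    # the middle of the frame; in practice you would center it on the
    # target so the comp and check stars stay in the field.
    with fits.open(FULL_FRAME) as hdul:
        data = hdul[0].data
        ny, nx = data.shape
        cutout = Cutout2D(data, position=(nx // 2, ny // 2), size=CROP_SIZE)
        # Any plate-solution keywords in the copied header are stale after
        # the crop, but VPhot plate solves the upload anyway.
        fits.PrimaryHDU(cutout.data, header=hdul[0].header.copy()).writeto(
            SUB_FRAME, overwrite=True)

    # Zip the sub-frame and report the overall size reduction.
    with zipfile.ZipFile(SUB_FRAME + ".zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(SUB_FRAME)

    full = os.path.getsize(FULL_FRAME)
    final = os.path.getsize(SUB_FRAME + ".zip")
    print(f"{full} -> {final} bytes ({100 * (1 - final / full):.0f}% smaller)")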

Makes me wonder if a file compression routine could be added to the upload process to automatically compress all files waiting to be uploaded, and whether the use of sub-frames could be actively encouraged whenever appropriate?

Clear skies,

Brad Vietje, VBPA


Affiliation
American Association of Variable Star Observers (AAVSO)
File Compression Etc.

Hi Brad:

I happened to just take a look at the queue. Most files in the queue at that time were already compressed (zip, bz2). Apparently many people or systems (e.g., AAVSONet, iTelescope) compress before upload already. It certainly speeds up the upload. If your files are in a single folder, you could certainly compress them before upload. Yes, it is another step, but no more than the calibration step you conduct on your images before upload. An automated routine might be nice, if we could get a programmer to develop it?
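As a starting point, here is a minimal Python sketch of that kind of routine, zipping every FITS file in a folder before upload. The folder name is just a placeholder, and this is not part of VPhot itself.

    import zipfile
    from pathlib import Path

    UPLOAD_DIR = Path("to_upload")   # hypothetical folder of calibrated images

    for fits_file in sorted(UPLOAD_DIR.iterdir()):
        if fits_file.suffix.lower() not in (".fit", ".fits"):
            continue                 # skip non-FITS files (and any .zip output)
        zip_path = fits_file.parent / (fits_file.name + ".zip")
        if zip_path.exists():
            continue                 # already compressed on an earlier run
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            # arcname keeps the archive flat (no folder path inside the zip)
            zf.write(fits_file, arcname=fits_file.name)
        print(f"{fits_file.name}: {fits_file.stat().st_size:,} -> "
              f"{zip_path.stat().st_size:,} bytes")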

More importantly, in terms of speeding up VPhot image reduction, the zipped file has to be decompressed before it can be reduced, so the file is back to its original size at that point, and I think the queue would move at its current speed. I think that is the rate-limiting step, since the upload occurs in the ether before the image gets to the queue.

In terms of web storage space, zipped files help keep storage costs down. Important!  I don't think storage space is currently a problem; it was increased in the past thanks to member donations. Still something to keep in mind, though, along with processor speed.

Shooting sub-frames has been mentioned before, but for a variety of reasons I have not seen many members try it. It's an extra step, and for small CCD fields it's not an option. Of course, those with big cameras should take on the responsibility to try this. Sooner or later, the growing data volume will cause the powers that be to reconsider charging for VPhot use? I personally think about a 30' x 30' FOV is the sweet spot for most photometry. Bigger FOVs are wasted for most photometry, IMHO. But bigger almost always wins out today! And bigger scopes usually mean smaller FOVs.

Oh well, a continuing battle. Everyone needs to do their part!

Ken

Affiliation
American Association of Variable Star Observers (AAVSO)
Compression and Sub-Frames

Thank you, Ken.

It didn't occur to me that plate solving would still run at the same rate, but of course it would.

My FOV is 43 arcmin square, so if I'm shooting a longer time series I can use a sub-frame for many, if not most, targets.  When taking these robotically, however, that may not be an option.

Clear skies,

Brad Vietje