One year of remote imaging

It is just over a year since I installed my telescope rig at a remote hosting observatory in Spain. The journey to go remote has been documented in four earlier blog posts. Part 1 was all about goals, site selection, software and hardware. Part 2 described the design and remote command and control. In Part 3 the rig was assembled and tested. Part 4 described transport to and installation at the site, calibration and first light. In this final part, I’d like to share some of my experiences after one year of use. What went well, what would I have done differently, and how has it changed my approach to astrophotography?

Remote Imaging

When I drove home from the remote site after installation just over a year ago, I remember that my biggest concerns were all ‘what if…’ scenarios. I knew the system was well designed and should be able to produce nice images. But what if a device stopped responding, what if the mount wrestled itself through an unforeseen cable snag, what if screws came loose somewhere, what if power was lost, what if internet was lost, etc., etc. During the design phase I had made sure I would have redundant on/off switching capability for all devices. It means I can check performance on many levels, and in practice I’ve found that super valuable. Not only for ‘reset’ purposes, but also to simply understand what’s going on. The Shelly IP switch gives me detailed insight into the power usage of the different power circuits, and over time you learn the power consumption of an idle system vs. an active system. I’ve got two virtual desktop applications running: Splashtop (provided by the site) and NoMachine. Perhaps once or twice I could not connect with one of the two and used the other as a backup. The issue was always (Windows) updates, so a restart would probably have solved it, but the convenience of going in via the other route gives great peace of mind. I find NoMachine to have superior image quality on a desktop, while on a tablet Splashtop is a bit more convenient.
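As an aside, those power readings do not have to stay locked inside the Shelly app. Below is a minimal sketch of how a reading could be polled over HTTP, assuming a Gen 2 style Shelly device at a placeholder IP address; the endpoint and field names differ between Shelly generations, so treat it as illustrative rather than a drop-in script.

```python
import json
import urllib.request

# Illustrative only: poll a Shelly IP switch (Gen 2 style RPC API) for the
# instantaneous power draw of one output channel. The IP address and channel
# id are placeholders; older Shelly generations use a different /status endpoint.
SHELLY_IP = "192.168.1.50"

def read_power(channel: int = 0) -> float:
    url = f"http://{SHELLY_IP}/rpc/Switch.GetStatus?id={channel}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        status = json.load(resp)
    return status["apower"]  # active power in watts

if __name__ == "__main__":
    print(f"Channel 0 is drawing {read_power(0):.1f} W")
```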

So far, none of the systems has broken down. Just once there was a connection issue with the Planewave Delta-T heater controller. Not a real surprise, as that piece of hardware is an absolute joke. In the startup sequence I need to isolate it from the other devices as well, otherwise all sorts of issues appear. But it is vendor lock-in, so there is little you can do about it. The connection issue was solved by plugging the device into another USB port, a USB-2 port rather than a USB-3 port. Other than that, everything has been working without hiccups.

 

Monitoring power usage on your phone helps a lot in understanding what is going on.

The UPS is monitored and sends an email when power is interrupted and the system has been shut down.

There is a lot of redundancy built into the command and control, giving peace of mind.

 

The scariest moments are the power outages. They’re mostly scary because you don’t know what’s happening: internet connectivity drops as well, so there is no way to reach any of the systems. Only when the power comes back on can you backtrack how the shutdown procedure went. The UPS is set up to keep the system running for about 10 minutes, just in case power comes back, and then start a controlled shutdown, including parking the mount, switching off all devices and closing the software. The UPS is web-monitored, so if it drops out I get an automatic email. In the last year, two major power outages happened: one during the big storm in October last year, and the other just weeks ago with the national power outage across Spain. Otherwise power outages are rare; the site upgraded the whole power system last year. When power comes back, an email from the UPS monitor is the first ‘sign of life’. The system does not come back to life automatically; I need to switch it back on manually via the remote connection. That is on purpose: I prefer to check that all is safe before powering up again.
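For illustration, the decision logic boils down to something like the sketch below. This is emphatically not the actual UPS software running at the site; the callables passed in (battery check, mount park, power-off, notification) are hypothetical placeholders for whatever the real monitoring stack provides.

```python
import time

def on_power_loss(on_battery, park_mount, power_off_all, notify, hold_minutes=10):
    """Sketch of the controlled-shutdown logic described above (placeholders only)."""
    deadline = time.time() + hold_minutes * 60
    while time.time() < deadline:
        if not on_battery():      # mains power returned within the hold window
            notify("Power restored, no shutdown needed")
            return
        time.sleep(30)            # re-check every 30 seconds
    notify("UPS: battery hold expired, starting controlled shutdown")
    park_mount()                  # park the mount first...
    power_off_all()               # ...then switch off devices and close the software
```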

The internet connection at the site has sufficient bandwidth for smooth operation. The virtual desktop sessions are usually smooth, and working with the software on site is very doable. Images are uploaded to the cloud throughout the night, and in the morning all my files have been downloaded to my desktop at home.
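The exact transfer mechanism will depend on the cloud service used, but as a rough sketch, a morning pull of last night’s frames could look like the snippet below. It assumes an rclone remote named ‘observatory’ and placeholder paths, which are my own illustrative choices rather than a description of the actual setup.

```python
import subprocess

# Hedged sketch: pull last night's frames from a cloud remote using rclone.
# "observatory:lights" and the local path are placeholders.
subprocess.run(
    [
        "rclone", "copy",
        "observatory:lights",     # cloud folder the rig uploads to overnight
        "/home/astro/incoming",   # local landing folder on the desktop
        "--max-age", "24h",       # only transfer files newer than 24 hours
        "--progress",
    ],
    check=True,
)
```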

I have not needed the on-site staff much. From time to time we have a bit of contact, mostly to see how things are going. Communication is excellent. The few times I have asked for help with something related to the telescope, it was always handled quickly and professionally.


The Telescope

Before I moved the telescope to the remote site, I had a few sessions with it in the backyard, but not many. So the telescope itself was a bit of an unknown to me at the start of this adventure. One of the main questions I had going into this was collimation. Would it hold well, and if not, how could it best be corrected? The on-site staff is very familiar with my type of telescope, so this would probably not be an issue, but still. To assess collimation, I use the SkyWave AI-based wavefront sensing technology from Innovations Foresight. It gives a quantitative answer to the question ‘how good is my collimation?’. At installation I was surprised at how good the values came out, realising that the telescope had just travelled almost 2500 km on the back seat of a car. After about a year I decided to assess the collimation again. See below for the results. The scope turned out to be in perfect collimation, even after a year of slewing around at very cold and very hot temperatures.

 

With a collimation score of 9.6/10 from the SkyWave software, it appears that after one year of use the collimation is still in perfect condition.

A 3D model of a defocused star, showing a very even distribution of the PSF model, indicative of good collimation.

 

There’s one thing that I’m still not 100% confident about, and that is the maximum resolution. In reality I never get better resolution than 2” and often end up somewhere between 2-3”, occasionally even higher. For the biggest part this resolution depends on the seeing conditions of course, and they will not always be optimal. But one would expect that over the timespan of a year there would be the occasional dip below 2”, and so far I have not seen one. One thing that I did notice on one night of imaging was that one target had significantly better HFD values than another, despite similar weather and altitude conditions. As I was trying to explain the difference, it occurred to me that between the two objects, the telescope would be in quite a different position relative to the control cabinet. The control cabinet is full of equipment that heats up during the night and has ventilation holes in all directions to dissipate that heat. Especially on colder nights, this could mean that I create my own bad seeing because of the cabinet! In discussion with the site, we decided that the most effective way to address this was to block all ventilation holes that would let air out below any potential telescope position. Ever since this change was made, HFD values have been consistently lower, so this was definitely the right thing to do. It would probably also make sense now to put the shroud back on. I had taken it off because it makes the telescope more sensitive to wind, but by keeping the column of air in front of the main mirror calmer, the shroud may further reduce the effect of any turbulence still coming from the control cabinet. As a longer-term solution, perhaps some kind of fan should be installed in or on the cabinet to make sure warm air is actively blown away from the telescope.
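For what it’s worth, this kind of hypothesis is easy to sanity-check numerically by comparing the per-frame HFD statistics of the two targets from the same night. The snippet below assumes the HFD values have been exported to a simple CSV with ‘target’ and ‘hfd’ columns; that export format is my assumption, not something produced out of the box.

```python
import pandas as pd

# Assumed format: one row per sub-frame with columns "target" and "hfd" (arcsec),
# e.g. copied out of the nightly report or session log.
log = pd.read_csv("night_log.csv")
stats = log.groupby("target")["hfd"].agg(["count", "median", "mean", "std"])
print(stats.round(2))
```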

Throughout the year, three mount models have been built, with very small differences between them. This indicates that the mount holds its position nicely and there are no signs of slippage, bending or flexing. Voyager has an option for precision pointing, but even without using it, the pointing accuracy of the mount is pretty good, and within a stack of images taken on many different nights there is only the smallest amount of stacking artefacts visible around the edges. A new set of flats is taken every couple of months. These do change, as dust bunnies appear to move around on the filters. Such differences are mostly visible in the broadband filters, and more specifically in the luminance filter. The narrowband filters show very little difference between flats taken on different occasions, so going forward the plan is to repeat the narrowband flats less often than the broadband flats. So far, flats have not been taken per rotation angle. Most images are taken with a rotation of 0°, and the images that weren’t did not have major problems with flats. This is something to keep monitoring, as it might change when more diverse rotation angles are used.
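One way to decide how often the flats really need refreshing is to compare two master flats directly. The sketch below divides a new master flat by an older one, each normalised to its median, and reports how much the field has changed; the file names are placeholders.

```python
import numpy as np
from astropy.io import fits

# Placeholder file names: two luminance master flats taken a few months apart.
old = fits.getdata("masterflat_L_old.fits").astype(float)
new = fits.getdata("masterflat_L_new.fits").astype(float)

# Normalise each flat to its own median, then look at the relative change.
ratio = (new / np.median(new)) / (old / np.median(old))
change = np.abs(ratio - 1.0)

print(f"largest local change: {change.max():.2%}")
print(f"pixels changed by more than 1%: {(change > 0.01).mean():.2%}")
```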

Voyager Advanced

A review of one year of use of my remote telescope would not be complete without a small review of the software used to automate the imaging: Voyager Advanced. When the decision was made to go remote, it was clear that I had to change my control software. I have been a long-time user of KStars/Ekos and the INDI platform. And as much as I like that software, and still use it for backyard astrophotography, for a remote setting it just did not feel like the right choice. The alternatives I looked at were NINA and Voyager. Both are Windows-based and use the ASCOM framework, so either would involve quite a learning curve for me. NINA has a very complete set of functionality and a large and enthusiastic user base. And since it’s free, it is hard to beat from a cost/benefit perspective. Voyager has been around for a long time and is praised for its stability. It’s not cheap, but its user base is enthusiastic and often uses it in remote settings. As similar as these software packages are, there is a subtle difference, which becomes obvious when opening their respective websites. NINA describes itself as an ‘astrophotography imaging suite’, while Voyager is labelled ‘astrophotography automation software’. This is an important difference, as one objective was the ability to run the rig 24/7 without intervention, just feeding it new targets from time to time. The Target Manager of Voyager Advanced does exactly that. NINA has a comparable option, and I compared both in numerous YouTube videos. The implementation within Voyager Advanced seemed at the time to be more mature, so I went with Voyager.

So, a year later, how did it go?

Positives

The learning curve was actually smaller than expected. Of course, I already had years of experience in astrophotography, so I knew what I was looking for, and most of that was pretty easy to find. The UI has some uniqueness to it, but once you understand how it works, it is actually quite intuitive. The Wiki holds a wealth of information and answers most questions. There are several series of introduction videos available, which are also very helpful. The software has seen several expansions over time, and that shows. Fundamental hardware control is first defined in a ‘profile’. But when you create a ‘Sequence’, almost all of the hardware control can be adjusted in the sequence as well. And if you define a ‘Target’ in the Target Manager, you have to define a ‘Base Sequence’, which is the same as a regular sequence but only partly applied. And on top of that, the Target Manager allows you to override some of the parameters in the sequence. If you were to build the software from scratch, you would probably do this a bit differently.

A key element of the automation is a scripting language called DragScript. Perhaps the use of the word ‘scripting’ is commercially not the best choice, as it might put people off for the wrong reason. It has little to do with scripting and is much more of a sequencer: a very easy to use, super powerful set of actions that you drag into a sequence of events. And it can use all available information, including the safety monitor (roof open/close), weather info, sunset/sunrise, etc. So it is not too hard to define a sequence that does exactly what you want it to do. The hardest bit is to imagine all possible scenarios that can go wrong (‘events’) and how you want the system to respond to them. What is nice is that the system can send you emails or text messages at any point you like, so that you get an idea of what’s going on without going into the system itself.
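DragScript itself is a visual block editor rather than text, so there is nothing to paste here verbatim, but the overall night flow I build with it boils down to something like the pseudocode below. Every method on the hypothetical session object is a placeholder to illustrate the logic, not Voyager’s API.

```python
def run_night(session):
    """Rough sketch of the event flow a DragScript encodes (placeholder API)."""
    session.wait_until("astronomical_dusk")
    session.power_up_and_connect()        # switches, mount, camera, focuser
    while not session.is_dawn():
        if not session.safety_ok():       # roof closed, clouds, wind, rain
            session.suspend()             # park and wait it out
            session.notify("Session suspended: unsafe conditions")
            continue
        session.run_robotarget()          # let RoboTarget pick and image targets
    session.shutdown()                    # park mount, warm camera, power down
    session.notify("Night finished, report sent by email")
```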

 

Selecting and framing targets could not be easier. All targets are presented as actual images, making positioning and rotating the FoV very precise.

Position and rotation info of targets is stored in the RoboClip database, which feeds into the imaging software.

The absolute top feature of the software is the Target Manager. Once you’ve defined some standard templates for how you typically image certain targets, adding new ones can literally be done with a few clicks. I have templates for narrowband and broadband imaging, as well as for imaging under full moon conditions. The template defines aspects such as how to manage the hardware (base sequence), exposures and constraints (e.g. dates, altitude, SQM, hour angle, moon distance Lorentzian, etc.). The moon distance Lorentzian is a restriction on imaging with the moon present, combining % illumination and angular distance, very smart. The object coordinates are fed in through RoboClip, an internal database of targets that you select. The nice thing is that RoboClip can be accessed through the web interface. So from any computer with an internet connection, you just search for your target (Aladin database), and it shows up as an actual usable image, with an overlay of the FoV of the selected imaging profile. You can then hand-position the target in the FoV, including the desired rotation. Click save and you can use the target in the Target Manager. In the script you just tell it to run the Target Manager, which invokes ‘RoboTarget’, a little agent in Voyager that determines which targets are going to be imaged when, based on their ephemeris, constraints, priority, etc. It keeps a tally of progress and stops imaging a target when the required exposure for that target has been reached. Really, the Target Manager alone would be more than enough reason to select this software; it is absolutely magnificent!
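I don’t know the exact formula Voyager uses internally for the moon distance Lorentzian, but the general idea of a Lorentzian moon-avoidance rule, as used in several schedulers, is that the required angular separation from the moon peaks at full moon and falls off smoothly as the phase moves away from full. A hedged sketch with purely illustrative parameter values:

```python
def min_moon_separation(days_from_full, peak_sep_deg=120.0, half_width_days=7.0):
    """Illustrative Lorentzian moon-avoidance curve (not Voyager's exact formula).

    The required separation is largest at full moon (days_from_full = 0) and
    drops off as the moon waxes/wanes away from full; % illumination could
    equally be used as the phase proxy.
    """
    return peak_sep_deg / (1.0 + (days_from_full / half_width_days) ** 2)

# Example: ~120 deg required at full moon, only ~60 deg a week before/after.
print(min_moon_separation(0), min_moon_separation(7))
```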

At the end of a session the system can create a PDF report with tons of relevant information on targets, images, HFD, number of stars, etc. With that report you can get a very quick overview of what happened: which targets were imaged, why a run might have been aborted, what the quality of the frames is, etc.

 

In the Target Manager all imaging criteria are defined, including filters, exposures, constraints and lots more.

RoboTarget in action. The software selects which target is photographed when and keeps track of progress.

At the end of the session a detailed report is automatically generated.

Areas for improvement

As with any software system, there are always areas for improvement. First of all, the web interface. Apart from its use in combination with the Target Manager, which is absolutely magnificent, the rest is completely useless. When you log in, you start with a completely blank overview: no previous captures, no previous focus runs, no time course of temperatures, weather conditions, etc. And turning it on at the start does not help, as after a few minutes of inactivity you are logged out, and the next time you log in you start with a completely blank overview again. The interface looks great and can easily be used on tablets and phones. But for this to work, it somehow needs to show you the session history when you log in. Oh, and the interface works over a non-secured connection. Really, in this day and age?

As mentioned at the start, the system is designed for automation. In most cases that is great. But it also means things can happen without you seeing them, or being able to control them. The focus routines, for example. There are two options. One slews to a nearby star, focuses on that star while applying an earlier recorded focus curve, and moves back to the target. The second option makes a simple V-curve based on the HFD values of all detectable stars in the image. The focus point in option one is a mathematical position based on a single out-of-focus star; in option two it is the lowest point of the V-curve. The first option claims machine learning and getting better over time. But the problem is that you have no way to see whether the method worked well. I have had a number of occasions where clearly a wrong focus point was chosen, and it is impossible to tell what caused the error. Could it be a temporary seeing or cloudiness issue? Was it some kind of user error? A backlash problem? There is nowhere to check what happened. So I now use the standard whole-field V-curve, of which I can at least see the last curve and make some judgement as to whether it was accurate.
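The whole-field V-curve is also easy to sanity-check yourself if you note down the measured points: fit a straight line to each branch of the V and take their intersection. A minimal sketch, assuming the focuser positions and HFD values are available as plain arrays:

```python
import numpy as np

def v_curve_best_focus(positions, hfd):
    """Estimate best focus by intersecting straight-line fits to both V branches."""
    positions = np.asarray(positions, dtype=float)
    hfd = np.asarray(hfd, dtype=float)
    i_min = int(np.argmin(hfd))                                       # sample closest to focus
    ml, bl = np.polyfit(positions[: i_min + 1], hfd[: i_min + 1], 1)  # left branch
    mr, br = np.polyfit(positions[i_min:], hfd[i_min:], 1)            # right branch
    return (br - bl) / (ml - mr)                                      # x where the lines cross

# Example with made-up numbers (positions in focuser steps, HFD in pixels):
print(v_curve_best_focus([1000, 1100, 1200, 1300, 1400], [6.1, 4.0, 2.2, 4.1, 6.3]))
```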
The earlier mentioned PDF report is fantastic, and fully automatic. But it also lacks any type of customisation. I would love to have a column with focuser positions, so that I can see when a focus run has taken place and whether the outcome is in any way suspicious. I was naive enough to ask the developer for this as a future feature request; he quoted me a custom development price, which I politely declined. If the system goes through a suspend moment during the night, continuation of the session starts a new report. So on a restless night, you may end up with five different reports, each showing just a few images, or no images at all. It would be great to have the report based on a time interval (a full night, for example). But the report is automatically generated with the information the developer deemed important, and cannot be changed.
Another small annoyance is the creation of darks. For flats there is a specially designed sequence available, but for darks there isn’t, and it requires a script… Sure, DragScript is not difficult, but having to use it just to shoot a few darks?

Finally, the user forum is quite unlike other user forums. The developer of the software is actively involved and always responds very quickly. This is a good thing. But the responses can come across as protective, probably not helped by the language barrier either. I’m used to forums where users bring up their problems and other users help them out. But here problems are quickly considered ‘support questions’, which are not allowed on the forum. One would assume this is because software support is something you can purchase as a separate item. And constructive suggestions for improvements in functionality can easily be considered criticism and are (partially) removed. I once made a very reasonable comment with some suggestions, notably in answer to a request for improvement suggestions for the web interface. The developer did not agree with some of it and deleted half of my message (!). I then decided to delete the whole message. Ever since, I have not used the forum anymore.

The software is sold as a perpetual license, but if you want support and updates, you have to pay a small annual fee. So far I have done that, but I wonder if it’s worth the money. So far there have not been any substantial updates relevant to my situation. Support is quick, but I’m not sure how much I need it. To my last question, how to handle the fact that macOS Safari has doubled down on non-secure websites, the answer was to use another browser, because the web dashboard is non-secure…

Would I recommend it?

That is a difficult question to answer. I guess it depends on what you want from a system, and perhaps a bit on what kind of person you are. The software is rock solid and can probably do 99% of anything anyone wants to do in astrophotography. It certainly meets my needs for well over 99%. After a while you will have found the right profiles, sequences, templates, etc., and using it is a great experience, primarily because it just works. So if you’re the kind of person that just wants the software to do its thing, Voyager Advanced is a great option. If you’re looking for constant updates, a wealth of plugins to explore and a thriving community on how to use them and how to solve problems, Voyager Advanced may not be your best bet. For myself, I have to admit that I enjoy trying out new things, exploring capabilities, and discussing on and learning from forums. But I know that that often comes with unpredictable issues, additional learning curves and a lot more time investment. So for me Voyager Advanced fits the bill, and I have no intention of switching any time soon.
One thing to mention, of course, is the price. The things that make Voyager so great are all part of the Voyager Advanced version, which is significantly more expensive than the base version. The base version of Voyager is not that exciting, and I would not recommend it over free software such as NINA. The Target Manager is what makes Voyager really stand out in a remote setting, and that is part of the Advanced version, together with other goodies such as the PDF report, the moon avoidance Lorentzian, custom horizons, etc. If you add up the costs for the Advanced version, and perhaps one or two licenses for Viking to control switches, the total sum is not insignificant. Whether that is worth it is of course a personal choice.

 

From hunting to collecting

Back to the imaging again. One of the major differences when using a remote hosting site is the massively increased amount of data collected. How much more? In the statistic below, the data collected per quarter is shown over the years, expressed as hours of exposure. It turns out that the remote site has resulted in a 10-fold increase in the amount of data collected! So really significant. And to be honest, this is not even because the skies in Spain are always clear. In fact, the number of clear nights, especially in the last few months, is much lower than I expected. But because the system is fully automated, it will make use of each opportunity to shoot some frames. And because of the Target Manager, there is always something to shoot in any part of the night. Only a full night of full moon is a bit challenging, but otherwise all periods are usable to some extent. Star clusters are great targets that can be imaged well under full moon conditions, and narrowband images can often be shot under near-full moon conditions. The moon distance Lorentzian in Voyager Advanced is very helpful here.

 
 

The number of hours of exposure per quarter over the years. Since starting to use the remote site, the exposure time has increased 10-fold.
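For anyone wanting to produce the same statistic: it is only a few lines once you have a list of sub-exposures with timestamps and exposure times. The sketch below assumes a CSV export with ‘date_obs’ and ‘exptime’ columns, which is an assumption about how the data is pulled out of the image archive.

```python
import pandas as pd

# Assumed input: one row per sub-exposure, with an ISO timestamp in "date_obs"
# and the exposure length in seconds in "exptime".
frames = pd.read_csv("subframes.csv", parse_dates=["date_obs"])
frames["quarter"] = frames["date_obs"].dt.to_period("Q")
hours_per_quarter = frames.groupby("quarter")["exptime"].sum() / 3600.0
print(hours_per_quarter.round(1))
```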

 

With so much data coming in, the next question is how to keep up with the processing. First of all, with so much more exposure time, the exposure per object was increased from about 10h to about 20h. For some faint nebulous targets it may be a bit more, for simpler targets a bit less. So the 10-fold increase in exposure does not equal a 10-fold increase in the number of objects. But still, the processing challenge is real. Luckily, in today’s world there are more and more practical scripts and AI-based tools that make processing a deep sky image a lot easier than in the past. And when you process this much, you also develop a bit of a standard workflow. So with better tools and a standardised workflow, I have been able to largely keep up. I’m sitting on a small backlog at the moment, but that is because the weather in recent months has been pretty good here at home, so more backyard data has been collected as well, and collecting data from the backyard, because of the lack of automation, just takes a lot more time.

But my attitude towards imaging has changed dramatically. In the past I was hunting the occasional clear night and the one or two objects I was able to shoot during a couple of clear nights. Objects were selected purely on the basis of the most optimal visibility at that time. This hunting has now changed into collecting images. An obvious goal is to image all objects from the Messier catalogue. The remote site has now made that a realistic prospect. Currently I have shot and processed 54 out of the 110 objects, with several more in the pipeline. I love galaxies, and have always been drawn towards the Arp catalogue of peculiar galaxies, but never really felt it a realistic goal to image a decent number of them. Now, with the long focal length telescope setup and fast data collection, I’m able to make a real start on that catalogue. Out of the 338 objects, I’ve captured 11 so far. That’s still a small number, and it’s not realistic to get all 338, but it is nice to at least capture the nicest and biggest ones on the list and build up a reasonable collection.

 

What about the Backyard?

With the remote hosting site being such a success, is imaging in the backyard still nice to do? In fact, yes it is. The one thing you lose by going remote is actually working and tinkering with your equipment, something I’ve always enjoyed doing. So having some setups that you can bring out from time to time is very rewarding, even if the data collected is less than from the remote site. Both my TOA-130 and FSQ-106 setups have seen small updates in the last year and are now in good shape to be used interchangeably. They can both be used for targets that are not very suitable for the remote site; think of wider-field nebulae. Those still form great targets to hit with more wide-field setups in the backyard. And because the hunting pressure is gone, the overall experience is even more enjoyable.

I’ve also taken the opportunity to expand into astrophotography territories other than deep sky imaging. I’ve been experimenting a bit with imaging the moon and have recently completed my solar setup. It is a lot of fun to just sit next to your scope, scroll across the moon’s surface and be amazed by all the craters, mountains and structure on our closest celestial object.

 

Conclusion

If you’ve made it this far, you will probably realise that going to the remote site was a complete success. It was a fun project to prepare for. I have put lots of hours into the design of a long focal length rig that runs completely automatically on a 24/7 basis. The upfront time spent on design has probably been well worth it, as everything worked more or less as expected. The one thing I was always very keen to have was cameras. I have two installed, but use them not very often. It still gives peace of mind to see that everything is looking good, but all the other automated mechanisms for remote imaging appear to be working well, often negating the need for visual confirmation. With a 10-fold increase in imaging time, this is probably the best investment I have ever made in my astrophotography adventure. Yes, there is a monthly cost associated with it, and not everyone may be able or willing to pay that. But if you can, and astrophotography is your passion, going remote is definitely worth considering.

With so many more images coming in, I realised that my website was no longer well suited to the information I wanted to share. The site started with a lot of emphasis on equipment, setup, workflows, etc. With so many images coming in now, I decided to put the images more at the centre of attention. And there are now many more ways to explore them, based on your own preferences.

Next

Website update: Image Details