Moving to a Remote Hosting site - Part 2: Command and Control

 
 

After six years of imaging from my backyard, the decision was made to move one of the rigs to a remote hosting site. At the moment, that is still a bit of a journey into the unknown, and in this series of blog posts you can follow along on that journey. Part 1 described the goals, the site selection and general considerations around software and hardware. This Part 2 focuses on the design and the tools used to control the rig, and on how to do that remotely.

 

Remote Access

In any remote setup, there will be a local PC of some sort controlling the rig. To control that PC, one needs some form of remote access. In many cases, this is achieved through a screen emulation option, such as Google Remote Desktop or similar alternatives. The site offers Splashtop as the default option, which will probably be installed when the rig is installed on-site. In the meantime I am using my go-to solution, NoMachine. This Luxembourg-based company is solely focused on remote access software and provides a very fast and easy-to-use solution, and it's free.

But a screen emulation solution does not allow setting up a connection directly to the IP address of the local PC. Direct access can be important for SSH or FTP connections for file transfer. And as mentioned in Part 1, the Voyager software that will control the rig comes in a server/client configuration. One client is a web dashboard that allows pretty complete control of the rig from any internet browser. Another client is the RoboTarget manager, which enables defining a database of targets for fully robotic imaging. Both these clients require an IP address to connect to.

ZeroTier virtual network

Direct access to a remote IP address can be set up via port forwarding from the site's network, or via a VPN. But not every site is keen on offering these options, as they require additional maintenance and support, and could create security risks for the network as a whole. Another solution is ZeroTier, software that creates a virtualised network. The application is installed on each device that will be part of the ZeroTier virtual network. In your own account you create a network ID, and devices are authorised into that network using both the network ID and a device-specific ID. Once set up, you can connect the devices you want to use to the network. If only one device is connected, there is no network, but as soon as two devices are connected, there is a network consisting of those two devices. More devices can be added as needed. Communication within the network is fully encrypted to ensure maximum security. Each device in a ZeroTier network has its own user-defined IP address, which can be used in, for example, the Voyager clients or FTP software. It sounds a lot more complicated than it is: setting it up turned out to be quite straightforward, and ease of use has been very good so far.

A ZeroTier network is set up with the devices that you specify, based on a network ID known only to you. Each device can be given a name/description and can be assigned an IP address within a certain range (in the example above 172.22.0.x). In your network dashboard you need to manually authorise each device before it can be used. After that, you can turn membership on/off on the device itself. Connected devices, wherever in the world they are, will show up in your network as if they were part of your local LAN. All communication on the network is fully encrypted.
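As a quick illustration, the sketch below queries the local ZeroTier client to check that a machine has joined the network and picked up its managed IP address. It assumes the ZeroTier One service is installed and that zerotier-cli is available on the PATH (it may need elevated rights); the network ID shown is a placeholder, not a real one.

```python
# Minimal sketch: ask the local ZeroTier client whether this machine has joined
# the virtual network and which managed IP it was assigned.
# Assumes the ZeroTier One service is running and 'zerotier-cli' is on the PATH;
# the network ID below is a placeholder, not a real one.
import json
import subprocess

NETWORK_ID = "1234567890abcdef"  # placeholder 16-character ZeroTier network ID

def zt(*args: str) -> str:
    """Run a zerotier-cli command with JSON output and return its stdout."""
    return subprocess.run(
        ["zerotier-cli", "-j", *args], capture_output=True, text=True, check=True
    ).stdout

# List the networks this node has joined and print the details for ours.
for net in json.loads(zt("listnetworks")):
    if NETWORK_ID in (net.get("id"), net.get("nwid")):
        print("status:", net.get("status"))               # e.g. 'OK' once authorised
        print("assigned IPs:", net.get("assignedAddresses"))
```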

 

The most important part of remote access is control of the equipment. But it always helps to have a visual on what is happening remotely. Therefore, cameras are typically installed to monitor mount movement, the local environment, etc. There are many options for this. A simple webcam monitored via the screen emulation software would already do the job. A more advanced option would be an IP camera with specialised software to monitor it remotely. I have opted to stay within the Ubiquiti UniFi ecosystem that is used at home. From an earlier iteration I still had a Cloud Key to control the cameras. Two G5 Bullet cameras with high sensitivity in low-light conditions will be used to monitor the rig from two different angles. The UniFi Protect software offers easy access to both cameras, and switching between the cameras at home and the cameras at the site is now just one click of a button.

 

Remote Switching

Key to any remote operation is the ability to switch things on and off remotely. Typically this is done through an IP switch. Often this comes in the form of a Power Distribution Unit (PDU) with several outlets and an integrated network connection, either via WiFi or over an Ethernet cable. Via that connection, the individual outlets can be turned on and off. Alternatively, one could use home automation devices based on, for example, the Zigbee, Z-Wave or Matter protocols. These are essentially local wireless networks, often controlled via WiFi.

I opted for a solution that is something of a combination of these two options: a product from the Shelly home automation system. Shelly offers a series of products in its Professional line that can all be controlled via Ethernet. One of these is the Shelly Pro 4PM, which is essentially an IP switch. The device offers a lot of value for money: four 16 A rated switchable outputs, an on-device display with manual control, power metering, protection against overpower/overvoltage, and connection via either Ethernet cable or WiFi. And all of this is packed in a very compact device that is DIN-rail mountable. The switch can be controlled via a mobile app through the proprietary Shelly cloud and as such offers the ease of use of a typical home automation system.
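Besides the app, the Pro 4PM can also be driven directly over its local HTTP RPC interface, which is handy for scripting. Below is a minimal sketch, assuming the device is reachable at a placeholder IP address on the network and that authentication is not enabled; Switch.Set and Switch.GetStatus are part of Shelly's Gen2 RPC API.

```python
# Minimal sketch: toggle one of the four outputs of the Shelly Pro 4PM over its
# local HTTP RPC interface. Assumes the device is reachable at the placeholder
# IP below and that authentication is not enabled on the device.
import requests

SHELLY_IP = "172.22.0.10"  # placeholder address of the Shelly Pro 4PM

def set_output(channel: int, on: bool) -> dict:
    """Switch output 0-3 on or off and return the device's JSON reply."""
    r = requests.get(
        f"http://{SHELLY_IP}/rpc/Switch.Set",
        params={"id": channel, "on": str(on).lower()},
        timeout=5,
    )
    r.raise_for_status()
    return r.json()

def output_status(channel: int) -> dict:
    """Read back the state and power metering for one output."""
    r = requests.get(
        f"http://{SHELLY_IP}/rpc/Switch.GetStatus", params={"id": channel}, timeout=5
    )
    r.raise_for_status()
    return r.json()

# Example: power up the circuit on channel 0 and print its measured power.
print(set_output(0, True))
print(output_status(0).get("apower"), "W")
```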

While the Shelly Pro 4PM switches the main 12 V and 24 V DC circuits, there are many more devices on and around the rig that need to be controlled. The Pegasus Ultimate Powerbox Advance (UPBA) that was already in use is a great tool for controlling anything on the OTA, such as the camera, focuser, fans, etc. But for full control, the UPBA alone is not enough, so a Lunatico Dragonfly is added to the setup. The Dragonfly is a device often found in remote observatories and includes 8 relay switches and 8 sensor inputs. With the Dragonfly, the UPBA is not strictly necessary: the eight relays of the Dragonfly would be enough for power control, and the PC has 8 USB ports, also more than enough. But eliminating the UPBA would mean a total of 7 cables running up to the OTA, instead of the 3 in the current configuration, so keeping the UPBA makes cable management much simpler.

The three devices that enable switching all components remotely: the Shelly Pro 4PM (left) controlling the PC, the Dragonfly and the 24 and 12 VDC power circuits; the Dragonfly (middle) controlling all devices 'on the ground'; and the Ultimate Powerbox V2 (right) controlling all devices 'on the scope'.

 

Remote PC

In a backyard observatory, the PC on the rig is typically a small headless computer. Historically I have used Fitlet computers from Compulab to run my setups, initially the Fitlet2 and later the Fitlet3. These are robust industrial PCs, capable of withstanding the elements, yet with modest, energy-efficient processors. For the remote site, the demands on computing power will be higher. First of all, it will be a Windows-based setup, which is more demanding than the Linux that runs on the Fitlets. But more software will also be running simultaneously, and I wanted to keep open the option of running some image processing on-site. A regular desktop-style computer would be an obvious choice. But I've noticed how my Mac mini showed lag when temperatures dropped below freezing. And as John Hayes mentioned in his presentation on remote imaging, the three key success factors are reliability, reliability and reliability. So I decided to go for an industrial PC again, and found a great solution in the same Compulab family in the form of the Tensor I-22. This is a computer that can be fully configured to individual needs. I opted for a model based on the 11th-generation Intel Core i5-1145G7E processor with 32 GB RAM and Windows 11 Pro pre-installed. Two 1 TB NVMe drives, 8 USB ports and an extended operating temperature range of -20 to 70 ºC completed the configuration. At 20 x 20 cm, this PC is substantially larger than the Fitlets, but much smaller than a regular desktop and small enough to stack somewhere in a control cabinet.

Coming from a Mac/Linux configuration, I had not realised the amount of installation work required for a Windows/ASCOM-based system. When using KStars/Ekos, installing a single software package covers 80-90% of the needs: all drivers, simulators, automation, planetarium, plate solving, etc. come in one package. When setting up the Windows PC using ASCOM standards, I found myself browsing through a seemingly endless range of manufacturers' websites, GitHub repositories and software developers' sites, trying to find the right ASCOM drivers and software, all for the right 64-bit Windows version. In total I installed 15 software packages to get the basics in place, and that is excluding 'front-end' software such as Voyager, MountWizzard and PixInsight, or any software specifically needed to enable remote operation.
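One small sanity check after all those installs is to ask the ASCOM Platform which drivers it has actually registered, per device type. The sketch below is meant to be run on the Windows PC itself and assumes the ASCOM Platform (with its COM Utilities) and the pywin32 package are installed.

```python
# Minimal sketch: list the ASCOM drivers registered on this Windows PC, grouped
# by device type. Assumes the ASCOM Platform and pywin32 are installed and uses
# the Platform's documented Profile COM object.
import win32com.client

profile = win32com.client.Dispatch("ASCOM.Utilities.Profile")

for device_type in ("Telescope", "Camera", "Focuser", "Rotator", "Switch", "Dome"):
    print(f"{device_type}:")
    for driver in profile.RegisteredDevices(device_type):
        # Each entry carries the driver ProgID and a human-readable name.
        print(f"  {driver.Key}  ({driver.Value})")
```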

 

Remote Infrastructure

With remote access to a PC at the site of the telescope, and remotely controlled switches to turn things on and off, the most important ingredients are in place. But there are still important choices to make in the overall configuration of components and infrastructure. This was a bit of a puzzle, but as of this writing, the plan is to put it together in the following way.

The Shelly Pro 4PM switches four different power supply units. Two are DIN-rail mounted 12 VDC power supplies, one for the PC and one for the Dragonfly. The PC is configured to switch on as soon as it receives power. The two other power supply units come from the existing backyard setup and provide 24 VDC and 12 VDC respectively. Both are high-end regulated power supplies from Elektro Automatik that can provide 300 W each. That is far more than needed and they could have been replaced by smaller DIN-rail mounted units, but since I had these high-end supplies available, it would be a shame not to use them. The 24 V unit is only needed for the mount; all other equipment runs on 12 V.

One of the goals was to be able to turn the PC on and off independently of everything else, and in this layout that objective is met with a simple tap in an app on my phone. The Dragonfly can also be switched on and off independently. That is probably not needed, but during any sustained downtime it is nice to be able to switch it off.

The Dragonfly switches power to the mount, the telescope, the Delta-T mirror heater and the flat panel. The 10Micron mounts don't simply switch on when power is applied; they need a separate trigger (a 1-2 second contact closure is enough) to turn on. This trigger is applied through one of the relays in the Dragonfly. The Planewave Delta-T mirror heater is a weak link in the system. Mine arrived dead in the box when ordered new and had its board replaced. Many users have reported dead Delta-Ts, which suggests poorly designed electronics. One combination that appears particularly suspect is powering the Delta-T from the UPBA. Planewave advises giving the Delta-T its own power source, so in the planned setup it will have its own power line from the telescope to the cabinet, switched through the Dragonfly.
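As an illustration of that power-on trigger, the sketch below pulses a relay for about 1.5 seconds via the generic ASCOM Switch interface. The driver ProgID and relay number are placeholders for however the Dragonfly's ASCOM Switch driver is registered on the PC; in the final setup this pulse will be handled from Voyager/Viking rather than from a standalone script.

```python
# Minimal sketch of the 1-2 second contact closure that wakes up the 10Micron
# mount, done through the generic ASCOM Switch interface. The ProgID and relay
# number below are placeholders/assumptions, not the actual configuration.
import time
import win32com.client

DRIVER_PROGID = "ASCOM.Dragonfly.Switch"  # placeholder ProgID; check the installed driver
MOUNT_TRIGGER_RELAY = 2                   # hypothetical relay wired to the mount's trigger input

dragonfly = win32com.client.Dispatch(DRIVER_PROGID)
dragonfly.Connected = True
try:
    dragonfly.SetSwitch(MOUNT_TRIGGER_RELAY, True)   # close the contact
    time.sleep(1.5)                                  # a 1-2 s closure is enough
    dragonfly.SetSwitch(MOUNT_TRIGGER_RELAY, False)  # open it again
finally:
    dragonfly.Connected = False
```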

On the telescope will be the UPBA, providing USB and power connections for the camera, focuser and rotator. The fans of the CDK14 are also controlled via the UPBA. The UPBA will have its environmental sensor plugged in as well, although the primary environmental information will come in the form of Boltwood files from a Cloudwatcher that is made available by the site.

The Voyager software for interacting with switches and relays is called Viking. Each instance of Viking can control one device, and at most two Viking instances can be controlled from a single Voyager instance. So in my situation, there are two Viking instances running: one controlling the UPBA and one controlling the Dragonfly.

The wiring diagram with all individual components, as planned for the remote installation. The components in the 'Control Cabinet' will be 'pier-side', while the components in the top-right box will be placed on the OTA. Types of wiring (Ethernet, USB, power, etc.) are colour-coded.

UPS

A stable source of mains power is critical for reliable operation of the rig. The power supply at IC Astronomy appears to be stable, but it is still good practice to use an Uninterruptible Power Supply (UPS). Not only does it provide temporary battery backup in case of a power outage, it also protects sensitive telescope equipment against spikes from the regular power grid. I chose the SMTL750RMI2UC from APC, a 750 VA/600 W model from the Smart-UPS series. The Smart-UPS line allows remote monitoring of the status of the UPS via the internet, and with the PowerChute software installed on the remote PC, the behaviour of the different outlet groups of the UPS can be configured in great detail. The UPS is of the 'line-interactive' type, as opposed to the 'offline' type: in a line-interactive design the inverter is part of the output path and is always on, which means better protection against power surges and spikes, and faster switching to battery power when needed. The chosen model has a Lithium-Ion battery instead of the more common Lead-Acid battery. Lithium-Ion batteries are more compact, have a longer lifespan and work without degradation over a wider temperature range. At this point it is difficult to say how much power the system will use once it is in operation; a rough assessment suggests about 150 W. In that case, the 600 W provided by the UPS is more than enough to power the whole rig, and from the runtime graph it appears it can do so for about 25 minutes. Of course, if the power consumption turns out to be higher than 150 W, this period will be shorter. In any case, if power has not been restored after 10 minutes or so, the main goal is a graceful shutdown of the system, for which there would be ample time.
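To put a rough number on that, a back-of-the-envelope check using only the figures quoted above: about 25 minutes at an estimated 150 W implies roughly 62 Wh of usable battery energy, and runtime at other loads then scales roughly inversely with load. The sketch below ignores inverter losses and battery behaviour at higher discharge rates, so it is only an estimate.

```python
# Rough runtime estimate based only on the numbers quoted above:
# ~25 minutes at ~150 W implies about 150 * 25/60 ≈ 62 Wh of usable battery
# energy. Scaling inversely with load ignores inverter losses and battery
# behaviour at higher discharge rates, so treat the results as ballpark figures.
QUOTED_LOAD_W = 150
QUOTED_RUNTIME_MIN = 25
usable_wh = QUOTED_LOAD_W * QUOTED_RUNTIME_MIN / 60  # ≈ 62 Wh

for load_w in (100, 150, 200, 300):
    runtime_min = usable_wh / load_w * 60
    print(f"{load_w:>3} W -> ~{runtime_min:4.0f} min of estimated battery runtime")
```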

All equipment will be on the same outlet group of the UPS and receive battery power in case of a power failure. A graceful shutdown will be initiated when the battery level reaches a certain point, after which eventually all power will be shut off. But one relay will be connected to a second outlet group of the UPS and will switch off immediately when mains power is lost. This will be coupled to a sensor input on the Dragonfly, triggering an Emergency Exit in Voyager. In an Emergency Exit, the telescope is parked, the camera warmed up, the mount turned off and all relevant software properly shut down. So the whole rig will be shut down before the PC itself shuts down and the power is switched off.

Network switch

The last component to discuss here is the network switch. By design, all network connections will be wired. This means at least 8 Ethernet ports are needed for the equipment, on top of one port for the connection to the observatory network, so a 16-port switch was needed. Running a Ubiquiti UniFi system at home, a switch from the same system was the obvious choice for me: the USW Standard 16 PoE. Besides 8 regular Ethernet ports, it contains 8 PoE+ ports providing a total of 42 W of power. The G5 cameras are both connected and powered through these PoE+ connections. The Ethernet switch can be power cycled from the Cloud Key's Network interface. Alternatively, it can be power cycled via the UPS, but since all components are on the same outlet group, this requires the rest of the components to be switched off first.

 

Cabinet

The final piece of the puzzle is a physical place to put everything together. The main purposes are safety for anyone in the observatory, protection of the gear, and organisation of equipment and cabling for easy maintenance. There are many options that would probably all work well: plastic boxes, electrical cabinets, custom-made cases, all would do the job. For my system, a wall-mounted 19” server rack was chosen. Wall-mounted designs don't have to be hung, but can be placed on the floor, and they come in much lower heights than regular server racks, ranging anywhere from 20 cm to well over a metre. Heights are defined in Rack Units, a standard unit of 4.4 cm. A rack of 12U (53 cm) seems to be sufficient to hold all components, with room to wire and connect everything together. Wall-mounted racks can also be ordered in a somewhat shallower version of 45 cm deep, rather than the 60 cm or more typical for regular server racks. The rack finally chosen is the WF01-6412-10B from Lanberg, a 12U rack with a glass door, removable side panels and overall dimensions of 60 x 45 x 64.6 cm (WxDxH). There are a lot of ventilation holes in both the bottom and top of the cabinet. Hopefully that will be sufficient to keep the temperature under control; if not, a mechanical ventilator can be placed on top of the cabinet.

A 12U-high so-called 'wall-mount' 19” server rack will house all the components 'on the ground'. It has a glass door and removable side panels for easy access. Lots of ventilation holes at top and bottom should keep the temperature in the cabinet under control. If needed, mechanical ventilation can be added on top of the cabinet.

An initial layout of the 19” server rack cabinet. The switch and UPS come with their own rack-mounting brackets. The Shelly, the Dragonfly and some of the power supplies will be DIN-rail mounted, which works well in a 19” cabinet.

 

Conclusions and next steps

There are many ways to put together the command and control systems of a remote setup, and mine was definitely a nice puzzle to work out. The final design allows every single component to be turned on and off remotely, often with multiple layers of redundancy. Power supplies are generally over-dimensioned and components are selected with reliability in mind. All components have been installed, configured and tested individually. The next step is to put everything together and physically build it into the cabinet, with proper cabling and all. The automation routines in Voyager will also need to be programmed, using the embedded DragScript language.

Part 3 of this blog series will describe the assembly in more detail and hopefully also some parts of the Voyager setup. We will also have another, closer look at the telescope rig itself, which has received an important update with the arrival of the new Moravian C3-61000 Pro camera. At the end of Part 3, the whole system should be complete, tested and ready to be driven to Spain.
