After getting to grips with Voyager over the last two weeks, I really want to see if I can control my entire dual-scope setup. In case it’s useful to someone, or if anyone has any contributions, I will post my progress here. I’m sure I will also have one or two questions for Leo…
My astro setup is a side-by-side dual-scope rig on one mount. For this purpose I divide it into a ‘Master’ and a ‘Minion’ part:
- Master: Avalon Linear Mount, Lakeside focuser controlled by Pegasus Hub, Moravian G2-8300 camera with internal filter wheel, SX Lodestar2 Guide Camera, ASCOM Observing Conditions from Pegasus Hub
- Minion: Lakeside focuser controlled by Lakeside controller, Moravian G2-8300 camera with internal filter wheel
- Control both Master and Minion with Voyager
- Achieve some kind of synchronisation between the two instances
At the moment Voyager does not allow more than one instance to run at the same time, so it’s not possible to control both parts from the same Windows instance. To get around this limitation (without having to buy another PC) I have installed a Windows 10 virtual machine (via VMware Player) on my astro PC (an Intel Compute Stick). On this VM I installed Voyager alongside the ASCOM framework, as well as the drivers required for my focuser and camera.
Now I have two PCs connected over a network, both running Voyager. When I connect my astro gear I choose which USB devices connect to the VM and which remain on the host, and I’m pretty much ready to go. In indoor testing everything seems to work very well, and I am able to connect the dashboard to both instances from another computer on the same network. Sorry that I get excited about this, but that’s just fantastic!
So hopefully I am now in a position to fully control both parts of my setup independently. Indoor testing has shown that it can work, but I need to test in real conditions to be sure. I’ve had bad experiences with USB connections and virtual machines before, so this needs a long session under the stars to see if everything is stable.
The main instance of Voyager, running the Master part of the rig, remains pretty much unchanged; at this point it doesn’t need to know anything about the Minion. The Minion’s Voyager setup is very basic: only the camera, filter wheel and focuser are configured in the SetupForm.
One thing to note straight away: it doesn’t seem to be possible to run a sequence when no mount (or virtual mount) is selected (maybe Leo @Voyager could confirm this?). In any case, although it would be nice, it doesn’t really matter, as it seems perfectly fine to run a DragScript without a mount connected, which is really what I need.
Now (once I’ve written an appropriate DragScript) I think I am at a point where I can image with both parts of my dual setup at the same time. Hopefully I will be able to try this soon!
For me the main aim of achieving some synchronisation is to be able to dither. I’m honestly not sure if dithering will improve my images, but I want to try (I think I am slightly under-sampled and would like to give drizzle integration a go). All the other events that would ‘spoil’ a Minion acquisition (moving to a focus star, meridian flip etc.) don’t bother me too much. My usual setup for LRGB is to shoot RGB on the Master and L on the Minion, so if I get some spoilt L subs it’s no problem. I should also add that for most of my acquisitions all subs will have a similar length (on Master and Minion), so my approach will be geared towards that.
Now this is where it gets interesting. In an ideal world, of course, both Master and Minion are aware of each other and play along nicely. Although I believe this is in the long-term plans for Voyager, I really want to get something going now, even if it’s not perfect.
Before I can do anything I will need to create an app that ‘listens’ to what the Master is doing, a bit like the dashboard, so it knows when exposures are ongoing, how much time is left and whatever else the Master is up to. I’m planning to write this in C# using the Voyager Application Server (which has excellent documentation, by the way!). Once this is in place I will have a go at this in stages:
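To make the listener idea concrete, here is a minimal sketch (in Python for illustration; I’ll write the real thing in C#). I’m assuming the Application Server sends one JSON object per CR/LF-terminated line; the address is a placeholder, and any event/field names would need checking against the Application Server documentation:

```python
import json
import socket

# Placeholder address -- the real host/port come from the
# Voyager Application Server configuration on the Master PC.
MASTER_ADDR = ("192.168.1.10", 5950)

def parse_event(line: str) -> dict:
    """Each message from the server is assumed to be one JSON object."""
    return json.loads(line)

def events(addr=MASTER_ADDR):
    """Connect to the Application Server and yield events as dicts."""
    with socket.create_connection(addr) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                return          # server closed the connection
            buf += chunk
            # Split the byte stream on CR/LF into individual messages.
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                if line:
                    yield parse_event(line.decode("utf-8"))
```

The listener would consume this event stream and keep a little state: is the Master exposing, and how long is left.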
At this stage I would like to use a DragScript to control the acquisition, filter changes and focussing on the Minion Voyager instance. To achieve some form of sync I will add a function to my listener app which decides whether it is a good time to start a sub on the Minion. Initially I will base this on the acquisition state of the Master, i.e. if the Master is exposing it must be a good time to expose on the Minion. There is a ‘time remaining’ property transmitted from the server which can be used for this: if the sub I want to take on the Minion is shorter than the remaining exposure time on the Master, then all is well, go ahead. This obviously assumes that the Minion sub is shorter than the Master sub, so I am envisaging maybe 300s subs on the Master and 295s on the Minion, which should hopefully work.
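The decision itself is simple enough to sketch already (again Python for illustration; the `margin` parameter is my own addition, a little slack so the Minion never finishes after the Master):

```python
def good_time_to_start(master_exposing: bool,
                       master_time_remaining: float,
                       minion_sub_length: float,
                       margin: float = 2.0) -> bool:
    """Decide whether a Minion sub of the given length (seconds) fits
    inside the Master's current exposure, with a safety margin."""
    if not master_exposing:
        return False
    return minion_sub_length + margin <= master_time_remaining
```

With 300s Master subs and 295s Minion subs: a Minion sub started with 298s remaining on the Master is allowed (295 + 2 ≤ 298), while one with only 296s remaining has to wait for the next Master sub.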
The way this should hopefully work is that the DragScript can launch an external script and read back a variable. So I will create another very small ‘check’ app that the DragScript can launch; this check app will contact my listener app, ask if it’s a good time to start a sub, and then return either true or false to the DragScript. The DragScript (in a loop) will go round in circles until it’s a good time, then take the next sub. Easy…
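A sketch of that check app (Python for illustration; the listener’s port and the one-byte reply protocol are my own inventions, and exactly how the DragScript reads the result back, exit code or variable, needs checking against the documentation):

```python
import socket
import sys

# Hypothetical local port that my listener app serves requests on.
LISTENER_ADDR = ("127.0.0.1", 9999)

def interpret_reply(reply: bytes) -> bool:
    """The listener replies b'1' for 'go ahead' and b'0' for 'wait'."""
    return reply.strip() == b"1"

def ask_listener(addr=LISTENER_ADDR) -> bool:
    """Ask the listener whether it's a good time to start a Minion sub."""
    with socket.create_connection(addr, timeout=5) as sock:
        sock.sendall(b"GOOD_TIME?\n")
        return interpret_reply(sock.recv(16))

if __name__ == "__main__":
    # Exit code 0 = take the sub; 1 = the DragScript loops and asks again.
    sys.exit(0 if ask_listener() else 1)
```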
It should be possible to improve on this approach in two ways:
- Abort a sub on the Minion if it has been spoilt and restart as soon as conditions are good again. Thinking about it, the best single indicator of this for the Minion is whether the guiding software has stopped guiding or is not guiding correctly. So in this step I will try to get this info either (ideally) from Voyager or alternatively from the PHD API.
- This could potentially give more imaging time (compared to Step 1) but would need the ability to cancel an acquisition on the Minion (and the cooperation of the camera - from experience some cameras can be a bit temperamental when aborting an acquisition). This will probably need my app to interact directly with the Minion Voyager via the TCP protocol.
- Additionally, it would probably be beneficial to move away from a DragScript and fully control the Minion Voyager instance over TCP.
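For the abort logic, the state tracking could be as small as this (a sketch; the guiding updates would come from Voyager’s events or the PHD API, whichever turns out to be practical):

```python
class GuidingWatchdog:
    """Track guiding state on the Master and flag the current Minion
    sub as spoilt if guiding drops out while it is exposing."""

    def __init__(self):
        self.guiding_ok = False
        self.sub_spoilt = False

    def start_sub(self):
        # Called when the Minion starts a new exposure.
        self.sub_spoilt = False

    def on_guiding_update(self, guiding: bool):
        # Called for every guiding status update we receive.
        if self.guiding_ok and not guiding:
            self.sub_spoilt = True   # guiding just dropped out
        self.guiding_ok = guiding

    def should_abort(self) -> bool:
        # The Minion control loop polls this, cancels the spoilt sub
        # and restarts it once guiding is good again.
        return self.sub_spoilt
```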
By this point we are probably in 2021 and Leo has already implemented a much better internal solution… !
If not, then the next best thing would be to investigate whether it’s possible to interact with the Master and ask it to wait before carrying out certain actions (e.g. stopping tracking, moving the mount). If that worked, much better synchronisation would be possible.
Wow, that post turned out a lot longer than intended… I would welcome feedback if anyone has anything to add, or if you think the approach will not work for some reason or other!