[mosh-devel] Mosh on Kubernetes

Anders Kaseorg andersk at mit.edu
Sun Jun 24 01:36:03 EDT 2018


You may have a misunderstanding about how a Mosh session is set up.  The 
mosh script launches a mosh-server on the remote system via SSH; 
mosh-server picks a port number and a random encryption key, and writes 
them to stdout, where they go back over SSH to the mosh script; then the 
mosh script launches mosh-client passing the IP address, port number, and 
encryption key.  The newly launched mosh-client and mosh-server processes 
exchange UDP packets encrypted with the shared key; communication is 
successful if the packets can be decrypted.
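
For illustration, the exchange amounts to roughly this (the hostname, 
address, port, and key below are made up, and the real mosh script 
passes additional options to mosh-server):

    $ ssh remote.example.com -- mosh-server new
    MOSH CONNECT 60001 tuiyC7nsX0Y1sAlMjlZcQw
    ...
    $ MOSH_KEY=tuiyC7nsX0Y1sAlMjlZcQw mosh-client 192.0.2.10 60001

mosh-client takes the key via the MOSH_KEY environment variable rather 
than on the command line, so that it is not visible in the process 
list.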

There’s no separate “key checking” step to be disabled.  And it doesn’t 
make sense to “refuse more than 1 connection on the same port”, both 
because UDP is connectionless, and because a new mosh-server is launched 
on a new port for each Mosh session (it is not a daemon like sshd).

The easiest way to put Mosh servers behind a load balancer is with 
round-robin DNS, where a single hostname resolves to many addresses, or 
to different addresses for different clients and/or at different times.  
We’ve already gone out of our way to make the mosh script resolve the 
hostname only once and use the same address for the SSH connection and the 
UDP packets, because that’s needed for MIT’s athena.dialup.mit.edu pool.
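
As a minimal illustration (the name, TTL, and addresses are invented), 
a round-robin pool is nothing more than several A records on one name:

    mosh-pool.example.com.  300  IN  A  192.0.2.11
    mosh-pool.example.com.  300  IN  A  192.0.2.12
    mosh-pool.example.com.  300  IN  A  192.0.2.13

Each client SSHes to whichever address it happens to resolve, and its 
UDP packets go to that same backend.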

If that’s not an option and you really need all connections to go through 
a single load balancer address, you could try wrapping mosh-server in a 
script that passes different disjoint port ranges (-p) on different 
backends, and forwarding those ranges to the corresponding backends from 
the load balancer.
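
A minimal sketch of such a wrapper (the install paths and the range 
60001:60010 are illustrative choices, not anything Mosh prescribes):

    #!/bin/sh
    # Installed as "mosh-server" on this backend, with the real binary
    # assumed to live at /usr/local/libexec/mosh-server.  Each backend
    # gets its own disjoint UDP range, and the load balancer forwards
    # that range to this backend only.
    # The mosh client runs "mosh-server new [options]", so the verb and
    # options arrive in "$@"; appending -p works as long as no remote
    # command follows a "--" separator.
    exec /usr/local/libexec/mosh-server "$@" -p 60001:60010

With -p PORT:PORT2, mosh-server restricts itself to picking a UDP port 
inside that range.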

Unrelatedly, brow.sh doesn’t resolve with DNSSEC-enabled resolvers like 
1.1.1.1 or 8.8.8.8, seemingly due to some problem with the DS records set 
with the registrar: https://dnssec-debugger.verisignlabs.com/brow.sh.

Anders


