[mosh-devel] Mosh on Kubernetes

Thomas Buckley-Houston tom at tombh.co.uk
Sat Jul 7 23:32:36 EDT 2018


Thanks so much for this idea; I really think it's the one: simple and scalable.

I haven't tried it yet, but I'm pretty sure the mosh-server's "MOSH
CONNECT" message can be intercepted and rewritten in plain Bash. I'm
already in control of the SSH connection, as I'm using my own
`ForceCommand` script.
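
Something like this is what I have in mind; a rough sketch in Python
rather than Bash, just to show the shape of it, where
allocate_public_port() is a hypothetical helper that would ask the
proxy/NAT for a public port mapping:

    #!/usr/bin/env python3
    # Sketch of a ForceCommand-style wrapper: run mosh-server, catch its
    # "MOSH CONNECT <port> <key>" line, and rewrite the port before it
    # travels back over SSH. allocate_public_port() is hypothetical.
    import re
    import subprocess
    import sys

    def allocate_public_port(container_port: int) -> int:
        raise NotImplementedError  # register container_port with the proxy

    server = subprocess.Popen(["mosh-server", "new"],
                              stdout=subprocess.PIPE, text=True)
    for line in server.stdout:
        m = re.match(r"MOSH CONNECT (\d+) (\S+)", line)
        if m:
            public_port = allocate_public_port(int(m.group(1)))
            print(f"MOSH CONNECT {public_port} {m.group(2)}", flush=True)
        else:
            sys.stdout.write(line)  # pass everything else through unchanged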

Also, I can still combine this method with extra round-robin-balanced
IP addresses, giving me multiple sets of ~65,000 ports.

The only thing I don't understand is why the proxy has to rewrite the
container's source IP on outgoing UDP datagrams. Isn't the original
MOSH CONNECT IP:port the canonical reference?

On 1 July 2018 at 12:54, Keith Winstein <keithw at cs.stanford.edu> wrote:
> How about a semi-smart (but mostly Mosh-oblivious) server-side proxy/NAT
> that works like this:
>
> - The proxy service has one public IP address and like 65,000 available UDP
> ports.
> - The proxy service can itself be redundant with failover...
> - When a user wants to open a new Mosh connection, they Mosh to a single
> hostname (which resolves to the IP address of the proxy service).
> - Your code allocates the necessary container, etc., and also allocates a
> unique UDP port on the proxy.
> - Your code runs the new mosh-server process in the target container.
> - The proxy intercepts the mosh-server's "MOSH CONNECT <port> <key>"
> message, replacing the port number with the unique public-facing UDP port
> (and remembering the container's IP address and the original port number).
> - When the proxy receives an incoming UDP datagram destined to a particular
> UDP port, it forwards it to the appropriate container at its IP address and
> at the original port number. It *preserves* the source IP and port of the
> datagram when forwarding.
> - When the container wants to send an outgoing UDP datagram, it sends it
> normally (to whatever IP:port is associated with the client), except the
> containers are not directly connected to the Internet; they use the
> proxy/NAT as their next-hop router.
> - For the outgoing UDP datagram, the proxy/NAT rewrites the container's
> source IP:port to its own IP and the public port number.
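>
> To make that concrete: a minimal sketch (hypothetical Python; the
> addresses and ports are made up) of one way a proxy could set up the
> two rewrites described above for a single session, leaning on kernel
> NAT rules so it never has to touch individual datagrams itself:
>
>     # Hypothetical sketch: one DNAT rule + one SNAT rule per session,
>     # e.g. add_mapping(50001, "10.0.0.7", 60001).
>     import subprocess
>
>     PUBLIC_IP = "198.51.100.10"   # made-up proxy address
>
>     def add_mapping(public_port, container_ip, container_port):
>         # Inbound: client -> PUBLIC_IP:public_port is redirected to the
>         # container; the client's source IP:port is left untouched.
>         subprocess.run(["iptables", "-t", "nat", "-A", "PREROUTING",
>                         "-p", "udp", "-d", PUBLIC_IP,
>                         "--dport", str(public_port), "-j", "DNAT",
>                         "--to-destination",
>                         f"{container_ip}:{container_port}"], check=True)
>         # Outbound: datagrams the container sends to the client get
>         # their source rewritten back to PUBLIC_IP:public_port.
>         subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
>                         "-p", "udp", "-s", container_ip,
>                         "--sport", str(container_port), "-j", "SNAT",
>                         "--to-source", f"{PUBLIC_IP}:{public_port}"],
>                        check=True)
>
> (DNAT leaves the client's source IP:port alone on the way in, and SNAT
> puts the public IP:port back on the way out, matching the last two
> bullets above.)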
>
> I think this will allow you to serve like 65,000 separate mosh connections
> from a single public IP address...
>
> The added latency in forwarding a datagram is probably <1 ms, and you don't
> really have to change anything about Mosh itself or its internals.
>
> Unfortunately there are no unencrypted identifying marks to a Mosh
> connection, except the incrementing sequence numbers (which start at 0 for
> every connection).
>
> -Keith
>
> On Fri, Jun 29, 2018 at 12:17 AM, Thomas Buckley-Houston <tom at tombh.co.uk>
> wrote:
>>
>> Hey Keith, John, everyone,
>>
>> Yeah, this is looking like quite a big hurdle, especially your point,
>> Keith, about roaming IPs (which I'd forgotten); that's a central
>> feature of Mosh I don't want to lose.
>>
>> So the only two options seem to be exposing multiple IPs for round
>> robin (or other smart DNS routing) or writing a new Mosh proxy that
>> already has knowledge of the existing keys. Both seem like quite a
>> challenge. Round-robin DNS seems more approachable, and I can imagine
>> integrating it with the Google Cloud DNS API I'm already using, but I
>> wonder how expensive Google (or anyone else for that matter) will
>> make thousands of static IP addresses. Apart from my having to learn
>> Mosh internals, one difficulty that strikes me about a Mosh proxy is
>> that it might introduce a non-trivial delay to each arriving
>> datagram, though surely only ever on the order of a handful of
>> milliseconds, I suppose.
>>
>> Are there not any other identifying marks on a datagram? I don't know
>> much about low-level networking, but maybe something like a MAC
>> address, for example?
>>
>> Thanks,
>> Tom
>>
>> On 27 June 2018 at 04:50, Keith Winstein <keithw at cs.stanford.edu> wrote:
>> > Hi Thomas,
>> >
>> > Glad you could provoke a very interesting discussion! But I'm still
>> > confused -- how is "sticky IP-based routing" going to work after the
>> > client roams to a new IP address (or to a new UDP source port)? When
>> > your system sees an incoming UDP datagram from a previously unseen
>> > source IP:port, how does it know which mosh-server (on which server
>> > machine) to send it to?
>> >
>> > With off-the-shelf Mosh, you basically need a load-balancing strategy
>> > that allows a destination IP:port to uniquely identify a particular
>> > mosh-server. You can do this with multiple DNS A/AAAA records (where
>> > the client picks the winning one -- maybe you permute the list), or
>> > with a smart DNS server that serves *one* A or AAAA record to the
>> > client at the time of resolution (like a CDN would use).
>> >
>> > Instead of using the mosh wrapper script, you could have your users
>> > use some other scheme to figure out the IP:port of the server, but
>> > the point is that once you launch the mosh-client, it's going to keep
>> > sending datagrams to the IP:port of the mosh-server, and those
>> > datagrams need to get to the same mosh-server process even if the
>> > client roams to a different publicly-visible IP address or port.
>> >
>> > You could imagine writing a very smart mosh proxy that has the keys
>> > to all the sessions and can figure out (for an incoming datagram
>> > coming from an unknown source IP:port) which session it actually
>> > belongs to, and then makes a sticky mapping and routes it to the
>> > proper mosh-server. But I don't think anybody has actually done this
>> > yet and of course there's a challenge in making this
>> > reliable/replicated.
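>> >
>> > (For illustration only, the heart of such a proxy might look roughly
>> > like the sketch below; try_decrypt() is a hypothetical stand-in for
>> > checking a datagram against one session's key.)
>> >
>> >     sticky = {}    # (src_ip, src_port) -> session, built lazily
>> >     sessions = []  # one entry per mosh-server, each holding its key
>> >
>> >     def route(datagram, src):
>> >         if src in sticky:
>> >             return sticky[src]
>> >         for session in sessions:
>> >             if session.try_decrypt(datagram):  # hypothetical key check
>> >                 sticky[src] = session          # pin the new source
>> >                 return session
>> >         return None                            # not a known session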
>> >
>> > -Keith
>> >
>> > On Mon, Jun 25, 2018 at 3:10 AM, Thomas Buckley-Houston
>> > <tom at tombh.co.uk>
>> > wrote:
>> >>
>> >> Thanks so much for the clarification.
>> >>
>> >> > UDP is connectionless
>> >>
>> >> That's the key here. So I have no choice but to use sticky IP-based
>> >> routing. I don't think round-robin DNS is an option, because I hope
>> >> one day to be able to scale to thousands of servers.
>> >>
>> >> And thanks so much for the heads-up about my DNSSEC records. I've
>> >> sent a request for them to be deleted. I'd added them, and some
>> >> SSHFP records, to explore automatically passing the StrictHostKey
>> >> warning. But it's not entirely straightforward: even with correct
>> >> DNS records, the SSH user still has to have VerifyHostKeyDNS
>> >> enabled, which, as I understand it, most people don't. And on top
>> >> of that my DNS provider (DNSSimple) automatically rotates the keys
>> >> every 3 months, which means I have to manually send a request to my
>> >> registrars by email to update the DNSSEC records. Is it all worth
>> >> it, do you think?
>> >>
>> >> On 24 June 2018 at 13:36, Anders Kaseorg <andersk at mit.edu> wrote:
>> >> > You may have a misunderstanding about how a Mosh session is set
>> >> > up. The mosh script launches a mosh-server on the remote system
>> >> > via SSH; mosh-server picks a port number and a random encryption
>> >> > key, and writes them to stdout, where they go back over SSH to the
>> >> > mosh script; then the mosh script launches mosh-client passing the
>> >> > IP address, port number, and encryption key. The newly launched
>> >> > mosh-client and mosh-server processes exchange UDP packets
>> >> > encrypted with the shared key; communication is successful if the
>> >> > packets can be decrypted.
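>> >> >
>> >> > In outline (a simplified sketch of that sequence in Python; the
>> >> > real mosh wrapper does considerably more, and the hostname and
>> >> > resolved address here are placeholders):
>> >> >
>> >> >     import os, re, subprocess
>> >> >
>> >> >     host = "example.com"   # placeholder hostname
>> >> >     out = subprocess.run(["ssh", host, "mosh-server", "new"],
>> >> >                          capture_output=True, text=True).stdout
>> >> >     port, key = re.search(r"MOSH CONNECT (\d+) (\S+)", out).groups()
>> >> >
>> >> >     ip = "203.0.113.7"     # placeholder for the resolved address
>> >> >     # mosh-client reads the session key from the MOSH_KEY variable
>> >> >     subprocess.run(["mosh-client", ip, port],
>> >> >                    env={**os.environ, "MOSH_KEY": key})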
>> >> >
>> >> > There’s no separate “key checking” step to be disabled. And it
>> >> > doesn’t make sense to “refuse more than 1 connection on the same
>> >> > port”, both because UDP is connectionless, and because a new
>> >> > mosh-server is launched on a new port for each Mosh session (it is
>> >> > not a daemon like sshd).
>> >> >
>> >> > The easiest way to put Mosh servers behind a load balancer is with
>> >> > round-robin DNS where a single hostname resolves to many
>> >> > addresses, or to different addresses for different clients and/or
>> >> > at different times. We’ve already gone out of our way to make the
>> >> > mosh script resolve the hostname only once and use the same
>> >> > address for the SSH connection and the UDP packets, because that’s
>> >> > needed for MIT’s athena.dialup.mit.edu pool.
>> >> >
>> >> > If that’s not an option and you really need all connections to go
>> >> > through a single load balancer address, you could try wrapping
>> >> > mosh-server in a script that passes different disjoint port ranges
>> >> > (-p) on different backends, and forwarding those ranges to the
>> >> > corresponding backends from the load balancer.
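>> >> >
>> >> > For example, a sketch of such a wrapper (hypothetical Python;
>> >> > BACKEND_ID, the backend names, and the ranges are all made up, and
>> >> > each backend would get its own disjoint range):
>> >> >
>> >> >     import os, sys
>> >> >
>> >> >     RANGES = {"backend-1": "60000:60999",
>> >> >               "backend-2": "61000:61999"}
>> >> >
>> >> >     # Hand mosh-server this backend's range via -p, so the load
>> >> >     # balancer can forward that UDP range to this machine.
>> >> >     port_range = RANGES[os.environ["BACKEND_ID"]]
>> >> >     os.execvp("mosh-server",
>> >> >               ["mosh-server", "new", "-p", port_range] + sys.argv[1:])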
>> >> >
>> >> > Unrelatedly, brow.sh doesn’t resolve with DNSSEC-enabled resolvers
>> >> > like 1.1.1.1 or 8.8.8.8, seemingly due to some problem with the DS
>> >> > records set with the registrar:
>> >> > https://dnssec-debugger.verisignlabs.com/brow.sh.
>> >> >
>> >> > Anders
>> >>


