<div dir="ltr">How about a semi-smart (but mostly Mosh-oblivious) server-side proxy/NAT that works like this:<div><br></div><div>- The proxy service has one public IP address and roughly 65,000 available UDP ports.</div><div>- The proxy service can itself be redundant with failover...</div><div>- When a user wants to open a new Mosh connection, they Mosh to a single hostname (which resolves to the IP address of the proxy service).<br></div><div>- Your code allocates the necessary container, etc., and also allocates a unique UDP port on the proxy.</div><div><div class="gmail_extra">- Your code runs the new mosh-server process in the target container.</div><div class="gmail_extra">- The proxy intercepts the mosh-server's "MOSH CONNECT &lt;port&gt; &lt;key&gt;" message, replacing the port number with the unique public-facing UDP port (and remembering the container's IP address and the original port number).</div><div class="gmail_extra">- When the proxy receives an incoming UDP datagram destined for a particular UDP port, it forwards it to the appropriate container at its IP address and at the original port number. 
It *preserves* the source IP and port of the datagram when forwarding.</div><div class="gmail_extra">- When the container wants to send an outgoing UDP datagram, it sends it normally (to whatever IP:port is associated with the client), except that the containers are not directly connected to the Internet; they use the proxy/NAT as their next-hop router.</div><div class="gmail_extra">- For the outgoing UDP datagram, the proxy/NAT rewrites the container's source IP:port to its own IP and the public port number.</div><div class="gmail_extra"><br></div><div class="gmail_extra">I think this will allow you to serve roughly 65,000 separate mosh connections from a single public IP address...</div><div class="gmail_extra"><br></div><div class="gmail_extra">The added latency in forwarding a datagram is probably &lt;1 ms, and you don't really have to change anything about Mosh itself or its internals.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Unfortunately there are no unencrypted identifying marks on a Mosh connection, except the incrementing sequence numbers (which start at 0 for every connection).</div><div class="gmail_extra"><br></div><div class="gmail_extra">-Keith</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 29, 2018 at 12:17 AM, Thomas Buckley-Houston <span dir="ltr"><<a href="mailto:tom@tombh.co.uk" target="_blank">tom@tombh.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hey Keith, John, everyone,<br>
<br>
Yeah, this is looking like quite a big hurdle. Especially your point,<br>
Keith, about roaming IPs (which I'd forgotten); that's a central<br>
feature of Mosh I don't want to lose.<br>
<br>
So the only two options seem to be exposing multiple IPs for<br>
round-robin (or other smart DNS routing), or writing a new Mosh proxy<br>
that already has knowledge of the existing keys. Both seem like quite<br>
a challenge. Round-robin DNS seems more approachable, and I can<br>
imagine integrating it with the Google Cloud DNS API I'm already<br>
using, but I wonder how expensive thousands of static IP addresses<br>
would be from Google (or anyone, for that matter). Apart from me<br>
having to learn Mosh internals, one difficulty that strikes me about a<br>
Mosh proxy is that it might introduce a non-trivial delay to each<br>
datagram, though surely only on the order of a handful of<br>
milliseconds, I suppose.<br>
<br>
Are there any other identifying marks on a datagram? I don't know<br>
much about low-level networking, but is there something like a MAC<br>
address, for example?<br>
<br>
Thanks,<br>
Tom<br>
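Keith's proxy/NAT proposal above sidesteps this question by demultiplexing purely on the destination UDP port. A minimal sketch of that port-keyed mapping follows; the addresses and ports are hypothetical, and note that a plain userspace loop like this cannot preserve the client's source IP:port as the proposal requires (that would need something like Linux's IP_TRANSPARENT or kernel NAT rules):

```python
import socket

# Hypothetical session table: one public UDP port per Mosh session,
# mapping to (container IP, the mosh-server's original port).
SESSIONS = {
    61001: ("10.0.0.11", 60001),
    61002: ("10.0.0.12", 60001),
}

def route(public_port):
    """Look up which container/port a datagram on this public port belongs to."""
    return SESSIONS[public_port]

def forward_loop(public_port):
    """Forward datagrams arriving on one public port to its container.

    NOTE: sendto() here rewrites the source address to the proxy's own,
    so the mosh-server would lose sight of the client's roaming IP:port.
    The design in the thread needs source-preserving forwarding instead.
    """
    backend = route(public_port)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", public_port))
    while True:
        data, client = sock.recvfrom(65535)
        sock.sendto(data, backend)
```

Because each session owns a distinct public port, the table lookup alone identifies the session; no inspection of the (encrypted) payload is needed.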
<div class="m_6212588343687901249HOEnZb"><div class="m_6212588343687901249h5"><br>
On 27 June 2018 at 04:50, Keith Winstein <<a href="mailto:keithw@cs.stanford.edu" target="_blank">keithw@cs.stanford.edu</a>> wrote:<br>
> Hi Thomas,<br>
><br>
> Glad you could provoke a very interesting discussion! But I'm still confused<br>
> -- how is "sticky IP-based routing" going to work after the client roams to<br>
> a new IP address (or to a new UDP source port)? When your system sees an<br>
> incoming UDP datagram from a previously unseen source IP:port, how does it<br>
> know which mosh-server (on which server machine) to send it to?<br>
><br>
> With off-the-shelf Mosh, you basically need a load-balancing strategy that<br>
> allows a destination IP:port to uniquely identify a particular mosh-server.<br>
> You can do this with multiple DNS A/AAAA records (where the client picks the<br>
> winning one -- maybe you permute the list), or with a smart DNS server that<br>
> serves *one* A or AAAA record to the client at the time of resolution (like<br>
> a CDN would use).<br>
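The multiple-A/AAAA-record option can be illustrated with a short sketch of client-side selection (illustrative only; the real mosh wrapper resolves the hostname once and reuses that single address for both the SSH connection and the UDP session):

```python
import random
import socket

def pick_server(hostname, port=22):
    """Resolve every A/AAAA record for a pool hostname and pick one at
    random, i.e. client-side round-robin across the returned records."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    addrs = sorted({info[4][0] for info in infos})  # dedupe addresses
    return random.choice(addrs)
```

Whichever address the client picks, it must keep using it for the lifetime of the session, which is exactly why the destination IP:port can serve as the session identifier.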
><br>
> Instead of using the mosh wrapper script, you could have your users use some<br>
> other scheme to figure out the IP:port of the server, but the point is that<br>
> once you launch the mosh-client, it's going to keep sending datagrams to the<br>
> IP:port of the mosh-server, and those datagrams need to get to the same<br>
> mosh-server process even if the client roams to a different publicly-visible<br>
> IP address or port.<br>
><br>
> You could imagine writing a very smart mosh proxy that has the keys to all<br>
> the sessions and can figure out (for an incoming datagram coming from an<br>
> unknown source IP:port) which session it actually belongs to, and then makes<br>
> a sticky mapping and routes it to the proper mosh-server. But I don't think<br>
> anybody has actually done this yet and of course there's a challenge in<br>
> making this reliable/replicated.<br>
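The "very smart mosh proxy" would demultiplex by trial authentication: try each live session's key against an incoming datagram, and the key that authenticates identifies the session. A toy sketch of that idea, using stdlib HMAC as a stand-in for Mosh's actual AES-128-OCB authenticated encryption (the session table and framing here are entirely hypothetical):

```python
import hashlib
import hmac
import os

# Hypothetical session table: session id -> shared secret key.
SESSION_KEYS = {
    "sess-a": os.urandom(16),
    "sess-b": os.urandom(16),
}

def seal(key, payload):
    """Stand-in for authenticated encryption: prepend a MAC over the
    payload, keyed by the session key (real Mosh uses AES-128-OCB)."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def classify(datagram):
    """Try every session key; the one whose authentication check passes
    tells us which mosh-server the datagram belongs to."""
    tag, payload = datagram[:32], datagram[32:]
    for sid, key in SESSION_KEYS.items():
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        if hmac.compare_digest(tag, expected):
            return sid
    return None
```

Trial decryption is O(number of live sessions) per unknown-source datagram, which is one reason a sticky mapping would be cached after the first match.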
><br>
> -Keith<br>
><br>
> On Mon, Jun 25, 2018 at 3:10 AM, Thomas Buckley-Houston <<a href="mailto:tom@tombh.co.uk" target="_blank">tom@tombh.co.uk</a>><br>
> wrote:<br>
>><br>
>> Thanks so much for the clarification.<br>
>><br>
>> > UDP is connectionless<br>
>><br>
>> That's the key here. So I have no choice but to use sticky IP-based<br>
>> routing. Round-robin DNS isn't an option, I don't think, because I<br>
>> hope one day to scale to thousands of servers.<br>
>><br>
>> And thanks so much for the heads-up about my DNSSEC records. I've sent<br>
>> a request for them to be deleted. I'd added them and some SSHFP<br>
>> records to explore automatically passing the StrictHostKey warning,<br>
>> but it's not entirely straightforward. Even with correct DNS records,<br>
>> the SSH user still has to have VerifyHostKeyDNS enabled, which, as I<br>
>> understand it, most people don't. And on top of that my DNS provider<br>
>> (DNSimple) automatically rotates the keys every 3 months, which means<br>
>> I have to manually email my registrar a request to update the DNSSEC<br>
>> records. Is it all worth it, do you think?<br>
>><br>
>> On 24 June 2018 at 13:36, Anders Kaseorg <<a href="mailto:andersk@mit.edu" target="_blank">andersk@mit.edu</a>> wrote:<br>
>> > You may have a misunderstanding about how a Mosh session is set up. The<br>
>> > mosh script launches a mosh-server on the remote system via SSH;<br>
>> > mosh-server picks a port number and a random encryption key, and writes<br>
>> > them to stdout, where they go back over SSH to the mosh script; then the<br>
>> > mosh script launches mosh-client passing the IP address, port number,<br>
>> > and<br>
>> > encryption key. The newly launched mosh-client and mosh-server<br>
>> > processes<br>
>> > exchange UDP packets encrypted with the shared key; communication is<br>
>> > successful if the packets can be decrypted.<br>
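The stdout handshake Anders describes ends with the wrapper script scanning mosh-server's output for a "MOSH CONNECT" line. A rough sketch of that parsing step (the exact line format, including the 22-character base64 key, is worth double-checking against the real mosh script):

```python
import re

def parse_mosh_connect(server_output):
    """Extract the UDP port and session key from mosh-server's stdout.

    mosh-server prints a line of the form 'MOSH CONNECT <port> <key>';
    this mirrors what the mosh wrapper script scans for before it
    launches mosh-client with these values.
    """
    m = re.search(r"^MOSH CONNECT (\d+) ([A-Za-z0-9/+]{22})$",
                  server_output, re.MULTILINE)
    if not m:
        raise ValueError("no MOSH CONNECT line found")
    return int(m.group(1)), m.group(2)
```

This is the same line the proxy scheme earlier in the thread would intercept and rewrite, substituting the public-facing port for the one mosh-server chose.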
>> ><br>
>> > There’s no separate “key checking” step to be disabled. And it doesn’t<br>
>> > make sense to “refuse more than 1 connection on the same port”, both<br>
>> > because UDP is connectionless, and because a new mosh-server is launched<br>
>> > on a new port for each Mosh session (it is not a daemon like sshd).<br>
>> ><br>
>> > The easiest way to put Mosh servers behind a load balancer is with<br>
>> > round-robin DNS where a single hostname resolves to many addresses, or<br>
>> > to<br>
>> > different addresses for different clients and/or at different times.<br>
>> > We’ve already gone out of our way to make the mosh script resolve the<br>
>> > hostname only once and use the same address for the SSH connection and<br>
>> > the<br>
>> > UDP packets, because that’s needed for MIT’s <a href="http://athena.dialup.mit.edu" rel="noreferrer" target="_blank">athena.dialup.mit.edu</a> pool.<br>
>> ><br>
>> > If that’s not an option and you really need all connections to go<br>
>> > through<br>
>> > a single load balancer address, you could try wrapping mosh-server in a<br>
>> > script that passes different disjoint port ranges (-p) on different<br>
>> > backends, and forwarding those ranges to the corresponding backends from<br>
>> > the load balancer.<br>
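The disjoint-port-range idea can be made concrete with a tiny helper; the base port and range size below are made-up values, and the resulting LOW:HIGH pair would be handed to mosh-server via its -p option (which accepts a port range):

```python
def port_range(backend_index, base=60000, span=200):
    """Disjoint UDP port range for backend N, for passing to
    mosh-server as -p LOW:HIGH.

    Hypothetical layout: backend 0 gets 60000:60199, backend 1 gets
    60200:60399, and so on; the load balancer then forwards each
    range to the matching backend.
    """
    low = base + backend_index * span
    high = low + span - 1
    return low, high
```

A wrapper script on backend N would then invoke something like `mosh-server new -p {low}:{high}`, so every session's port unambiguously names its backend.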
>> ><br>
>> > Unrelatedly, brow.sh doesn’t resolve with DNSSEC-enabled resolvers like<br>
>> > 1.1.1.1 or 8.8.8.8, seemingly due to some problem with the DS records<br>
>> > set<br>
>> > with the registrar: <a href="https://dnssec-debugger.verisignlabs.com/brow.sh" rel="noreferrer" target="_blank">https://dnssec-debugger.verisignlabs.com/brow.sh</a>.<br>
>> ><br>
>> > Anders<br>
>><br>
>> _______________________________________________<br>
>> mosh-devel mailing list<br>
>> <a href="mailto:mosh-devel@mit.edu" target="_blank">mosh-devel@mit.edu</a><br>
>> <a href="http://mailman.mit.edu/mailman/listinfo/mosh-devel" rel="noreferrer" target="_blank">http://mailman.mit.edu/mailman/listinfo/mosh-devel</a><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div></div></div>