Kerberos Authentication question(s)

Michael B Allen ioplex at gmail.com
Fri Jun 26 13:05:38 EDT 2015


On Fri, Jun 26, 2015 at 1:06 AM, Amos Jeffries <squid3 at treenet.co.nz> wrote:
> * HTTP is designed to contain multiple TCP connection "hops". The common
> Internet scenario is 2-3 hops, maybe more. On a centrally controlled LAN
> or large enterprise multi-POP network there may be 2 hops.
>
> * HTTP itself is stateless. In particular the proxy hops from above may
> be coalescing / multiplexing messages from multiple clients onto any
> given server TCP connection. At least in situations where NTLM and
> Negotiate are absent, they will.
>
>
> In order to support the NTLM protocol with its TCP-level binding,
> proxies have to completely disable almost all HTTP functionality and
> act as if they were effectively SOCKS proxies. This is an extremely
> large performance decrease.
>
> Also there is a large latency increase due to the fact that the NTLM
> handshake, as implemented by popular software, requires two TCP
> connections to be set up and torn down for each proxy transited. With 2
> or more proxies in the chain this rarely succeeds, so we tell people
> NTLM does not work at all over the Internet.
>
> Kerberos helps with its simpler handshake, but the multi-hop problems
> still exist and prevent most HTTP multiplexing-related features from
> being used when performance is needed.
>
<snip>
>
> Efficiency there is relative to the HTTP multiplexing behaviour. If a
> single TCP connection is being used by N clients with small spikes in an
> otherwise low background of traffic, the TCP connection persistence can
> vastly reduce the overall response latency for all clients. It also
> frees up N-1 TCP sockets for use by other clients.
>
> Proxies and servers are often dealing simultaneously with 3-4 orders of
> magnitude more requests than a single client is sending. So reducing the
> TCP connection count massively raises server capacity and DoS
> tolerance. That tends to be why HTTP caching proxies and CDNs are used
> in the first place.
>
<snip>
>
>> I'm not sure what you mean by using RPCs, but bear in mind that any
>> kind of third-party service could NOT be based on HTTP (because that
>> would just be pushing the poop around without actually getting rid of
>> it). And a non-HTTP based third-party authentication service probably
>> would not play well on the Internet. So HTTP sites are still
>> processing plaintext passwords on the server, which is of course
>> ridiculously insecure ...
>>
>> I haven't really thought too much about this but I have to wonder if
>> it would be better to make HTTP optionally / partially stateful where
>> a client could generate a "Client-ID" every once in a while which
>> would just be a long random number and then require HTTP proxies and
>> load balancers and such to maintain an index of these IDs and then
>> *try* to route requests to the same downstream server. I think they
>> already pretty much have to do this for TLS
>
> No. For TLS the connections to client and to server are forced to be
> pinned together end-to-end and treated in a SOCKS-like way, the same as
> traffic with NTLM in it. The same TCP performance vs multiplexing
> problems result, and additionally the encrypted content in theory
> cannot be cached, so there is not even a chance for caching proxies to
> reduce the traffic load on the end-server. (The sad reality is it is
> just forcing TLS MITM to become popular.)
>
<snip>
>
> Speaking for the Squid HTTP caching proxy: we use the Negotiate ticket
> value as presented in the message WWW-auth header, comparing it to the
> previously delivered one to ensure the client is still sending the same
> auth.
>
> Due to some server implementations (including Squid itself, due to the
> above) assuming connection ties, we are still required to also pin the
> connections together as with NTLM. That still leaves us with an
> unfortunately high turnover in TCP sockets, but more HTTP features can
> be used reliably than with NTLM.
>
>
> PS. I am currently working on adding support to Squid for an HTTP
> scheme "Kerberos" that uses the bare non-SPNEGO/GSSAPI token value. If
> there are others interested in working out the details to get this
> going without the TCP-level pinning, I am interested in collaboration.

I don't think you should have to know anything about NTLM or
Kerberos or worry about any of that. You just need an exceptional
condition that implements "server stickiness".

If I were coding a proxy like you describe (and that is exactly the
type of thing I would probably be good at implementing), I would
optimize it for muxing as you describe at the top of your message
and ignore Negotiate and NTLM. But somewhere in the middle I would add
an exceptional condition like the following C-like pseudocode:

  /* Look for the client's sticky-routing hint and honour it when possible. */
  if (get_header(ctx, req, "Client-ID", &client_id) == 0 &&
      lookup_outbound_destination_by_client_id(ctx, client_id,
                                               &destination) == 0) {
      if (is_outbound_destination_fast(ctx, destination) == FALSE) {
          /* The remembered destination went slow or away; forget the
           * mapping and fall back to normal selection. */
          remove_outbound_destination_by_client_id(ctx, client_id);
          set_outbound_destination_in_the_usual_way(ctx, req);
      } else {
          set_outbound_destination(ctx, req, destination);
      }
  } else {
      /* No Client-ID or no mapping yet: route in the usual way. */
      set_outbound_destination_in_the_usual_way(ctx, req);
  }

So this is supposed to implement "server stickiness".
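
For completeness, the index behind
lookup_outbound_destination_by_client_id() could be something as simple
as the sketch below. This is purely illustrative (I dropped the ctx
argument and made the names and sizes up); a real proxy would want a
proper hash table, but the idea is just a map from Client-ID to the
last upstream used, with entries free to expire at any time:

  #include <string.h>
  #include <time.h>

  #define MAX_STICKY 1024
  #define STICKY_TTL 300  /* seconds before a mapping is considered stale */

  struct sticky_entry {
      char   client_id[33];   /* 32 hex chars + NUL */
      int    destination;     /* index of the upstream last used */
      time_t last_used;
  };

  static struct sticky_entry sticky_table[MAX_STICKY];

  /* Return 0 and fill *destination if a fresh mapping exists, else -1. */
  static int lookup_outbound_destination_by_client_id(const char *client_id,
                                                      int *destination)
  {
      time_t now = time(NULL);
      for (int i = 0; i < MAX_STICKY; i++) {
          struct sticky_entry *e = &sticky_table[i];
          if (e->client_id[0] != '\0' &&
              strcmp(e->client_id, client_id) == 0 &&
              now - e->last_used < STICKY_TTL) {
              e->last_used = now;   /* refresh on use */
              *destination = e->destination;
              return 0;
          }
      }
      return -1;  /* unknown or expired; caller routes in the usual way */
  }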

Then I would modify all of the browsers in the world to generate a
Client-ID header if they need stickiness for something like a
multi-request authentication.

Note that the stickiness is not a hard binding. A different
destination can be selected at any time. The client would just have to
rebuild whatever state is associated with the Client-ID. It just has
to be sticky enough for the client to get a reasonable amount of
stateful work done most of the time.

The Client-ID value would just be a big random number like
4043eeb322518921132308829a0e98af that would be re-generated once in a
while.
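
Just to make that concrete, here is a rough sketch of how a client
might produce such a value. Nothing here is specified anywhere, and a
real implementation would read from a proper CSPRNG (e.g. /dev/urandom)
rather than rand(); this just shows the shape of the thing:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Produce a 128-bit random Client-ID as 32 hex characters.
   * rand() only keeps the sketch self-contained; use a real CSPRNG. */
  static void generate_client_id(char out[33])
  {
      static const char hex[] = "0123456789abcdef";
      for (int i = 0; i < 32; i++)
          out[i] = hex[rand() % 16];
      out[32] = '\0';
  }

  int main(void)
  {
      char client_id[33];

      srand((unsigned)time(NULL));
      generate_client_id(client_id);
      /* Sent on each request as e.g. "Client-ID: <value>" and
       * re-generated by the client once in a while. */
      printf("Client-ID: %s\r\n", client_id);
      return 0;
  }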

This could be the toe-hold necessary to do something like a proper
stand-alone authentication over HTTP.

-- 
Michael B Allen
Java Active Directory Integration
http://www.ioplex.com/

