KDC query client performance

Nico nico103 at gmail.com
Tue Feb 15 14:39:46 EST 2011

On Sun, Feb 13, 2011 at 04:52:24PM -0500, ghudson at MIT.EDU wrote:
> 2. Speeding up the client retry loop, so that it doesn't take as long
> to time out when you're behind a firewall which black-holes port 88.
> Currently we wait one second per UDP address per pass (and per TCP
> address on the first pass), and also wait 2s/4s/8s/16s (or 30s in
> total) at the end of each pass.
> In order to be nice to KDC load, I think it's still prudent to wait
> one second per server address on the first pass.  After that we're
> mostly trying to be nice to the network, and networks have gotten much
> faster.  So I think once we reach the end of the first pass, we ought
> to speed everything up by a factor of ten--that is, wait only 100ms
> between UDP queries on the second and later passes, and wait
> 200ms/400ms/800ms/1600ms at the end of passes.
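To put rough numbers on the proposal above: with all KDCs black-holed, the worst-case stall drops by a large factor. A back-of-the-envelope sketch (assuming three UDP server addresses and four passes, and ignoring the extra TCP waits on the first pass; this is just arithmetic, not the actual libkrb5 retry loop):

```python
def worst_case(n_addrs, first_pass_wait, later_pass_wait, end_waits):
    """Total worst-case wait: per-address waits on each pass,
    plus the escalating wait at the end of each pass."""
    per_pass = [first_pass_wait] + [later_pass_wait] * (len(end_waits) - 1)
    return sum(n_addrs * w for w in per_pass) + sum(end_waits)

# Today's behavior: 1s per address every pass, 2/4/8/16s between passes.
current = worst_case(3, 1.0, 1.0, [2, 4, 8, 16])

# Proposed: unchanged first pass, then everything sped up 10x.
proposed = worst_case(3, 1.0, 0.1, [0.2, 0.4, 0.8, 1.6])

print(f"current {current:.1f}s, proposed {proposed:.1f}s")
# prints: current 42.0s, proposed 6.9s
```

So a firewalled client with three configured KDCs would give up in about 7 seconds rather than 42.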

It'd be nice if the client could do a lightweight ping of multiple TGSes
in parallel...

For example, send a well-formed TGS-REQ to the first KDC and deliberately
malformed KDC-REQs to two other KDCs -- the latter should cause each KDC
to respond with a KRB-ERROR without wasting any compute resources on
crypto.  For instance, the 'from' time in the malformed requests could be
very far in the past.  By the time the first request times out, the
client will also know whether any of the other KDCs are alive, and thus
not likely to time out.  If TGSes generally validate the PA-TGS before
validating the KDC-REQ-BODY, then either use an AS-REQ or find a way to
malform (if I may verbify the adjective) the PA-TGS so as to produce a
KRB-ERROR quickly.
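A minimal sketch of the parallel-probe idea, using plain UDP sockets and select(); the probe payload here is a placeholder, not a real ASN.1-encoded KDC-REQ, and the function name is made up for illustration:

```python
import select
import socket
import time

def probe_kdcs(addrs, payload, timeout=1.0):
    """Send one UDP probe to each (host, port) in parallel and collect
    the responders, sorted fastest-to-slowest by round-trip time.

    Returns a list of (rtt_seconds, (host, port)) for addresses that
    answered within `timeout`; black-holed addresses simply drop out.
    """
    socks = {}
    start = time.monotonic()
    for addr in addrs:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setblocking(False)
        s.sendto(payload, addr)
        socks[s] = addr

    alive = []
    deadline = start + timeout
    while socks:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        readable, _, _ = select.select(list(socks), [], [], remaining)
        for s in readable:
            try:
                s.recvfrom(4096)
            except OSError:
                pass  # e.g. ICMP port unreachable surfaced as an error
            else:
                alive.append((time.monotonic() - start, socks[s]))
            s.close()
            del socks[s]
    for s in socks:
        s.close()
    return sorted(alive)
```

The point is that all probes are in flight at once, so the total wait is bounded by the single timeout rather than by the number of configured KDCs.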

And, as Roland suggests, maybe there should be a client-local daemon
that pings KDCs in a similar fashion so as to maintain a locally cached
list of live KDCs (sorted by round-trip time, fastest to slowest).

> 3. Eliminate the second default UDP port (750) when parsing profile
> kdc entries.  When a KDC is inaccessible, this causes extra delays,
> and also extra DNS requests due to the way the code is structured.  We
> have always restricted the second default port to UDP over IPv4,
> likely because it was intended as a krb4 transition measure.
> Unfortunately, this change is likely to break a handful of deployments
> which happen to serve KDC requests only on port 750 and win because
> they only need it to work over IPv4 UDP (and don't have any Heimdal
> clients, or configure their Heimdal clients to use port 750
> explicitly).  I'm not sure if it's worth not breaking these
> environments at the cost of extra delays in more common cases.

I'm OK with this as long as people have enough warning about port 750
and/or there's a way to re-enable operation on port 750.
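Sites that really do serve only on port 750 can already pin it explicitly in the profile, since kdc entries accept a host:port form; realm and hostname below are placeholders:

```
[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com:750
    }
```

With that in place, dropping the second *default* port wouldn't affect them.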

