Determining the number of clients per KDC
Russ Allbery
eagle at eyrie.org
Mon Apr 16 14:27:06 EDT 2018
Sergei Gerasenko <gerases at gmail.com> writes:
> Since I don’t know too much about the KDC architecture, sorry for the
> dilettante questions.
Oh, no problem -- just be aware that they're being answered by someone who
hasn't run large-scale KDCs in about four years, so some of my information
is stale. :)
>> It's unfortunately been long enough since I've tested this on a system
>> running flat out that I don't remember what qps a KDC can do on modern
>> hardware, but I would expect it to at least be in the range of 100 qps.
> Is that per worker?
Oh, workers are new to me. So yes, that would be per-worker.
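To make that concrete with some back-of-the-envelope numbers (the worker
count and the per-client request rate here are my assumptions, not
measurements):

    4 workers x 100 qps/worker       ~ 400 qps sustained
    1 request per client per 300s    ~ 0.0033 qps/client
    400 / 0.0033                     ~ 120,000 clients

Real traffic is much burstier than that (top of the hour, mass reboots,
cron jobs), so you'd want a healthy safety margin, but it gives you a feel
for the order of magnitude.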
> Speaking of workers, does MIT Kerberos spawn workers as needed (sort of
> like apache) or is it capped by the `-w` argument? What’s a good number
> of workers to start with? 70? 500? 1000?
If you're doing default Kerberos, the networking is UDP, so it's not going
to spend a lot of time waiting for the network. I would expect that to be
CPU-bound, and therefore would tend towards one worker per core. If
you're doing a lot of TCP, that might increase the chances that you'll
wait for networking, and may benefit from more workers.
This is all just a wild-ass guess, though.
Given your setup, though, it would really surprise me if you saw any
performance issues.
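As a purely illustrative example, on an eight-core box that rule of thumb
would mean starting the MIT KDC with something like:

    krb5kdc -w 8

where -w tells krb5kdc how many worker processes to fork to handle
requests in parallel.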
> Ok, to clarify what you mean by the replica serving requests as well, do
> you mean:
> 1. Using a VIP that round-robins the requests to the primary and
> secondary KDC?
> 2. Or do you mean that half the clients use the master and the other
> half the slave?
> 3. Or do you mean that the client itself round-robins between them?
You can use SRV records and get option 3 by just listing both KDCs at equal
weight.  All Kerberos clients these days should support SRV records.
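That would look something like this in your zone (hostnames are
placeholders), with the same priority and weight for both KDCs:

    _kerberos._udp.EXAMPLE.COM.  IN SRV  0 0 88  kdc1.example.com.
    _kerberos._udp.EXAMPLE.COM.  IN SRV  0 0 88  kdc2.example.com.
    _kerberos._tcp.EXAMPLE.COM.  IN SRV  0 0 88  kdc1.example.com.
    _kerberos._tcp.EXAMPLE.COM.  IN SRV  0 0 88  kdc2.example.com.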
If you do have Kerberos clients that don't do SRV records for some reason,
it's pretty easy to do option 2 by just randomizing the order of the KDCs
in /etc/krb5.conf.
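For example (placeholder hostnames), half your clients would get:

    [realms]
        EXAMPLE.COM = {
            kdc = kdc1.example.com
            kdc = kdc2.example.com
        }

and the other half the same stanza with the kdc lines in the opposite
order; clients try the kdc entries in the order listed.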
Kerberos clients are very good about falling back to a second server.
You'll just see a slight delay that you might not even notice.
> I can only see 2 as a real option because *I think* once a TGT is
> requested, all TGS requests would need to go to the server that gave the
> TGT?
Nope, all KDCs share the same database and can answer all requests. From
a client perspective, all KDC traffic is completely interchangeable. The
only time it matters is when there's a write, since there's a propagation
delay and the replica will serve stale information for a short period of
time. For keytabs, this almost doesn't matter; for user credentials with
passwords, there's a way to configure the client to automatically retry
the master if an authentication fails against the replica, which can be
useful for authentication immediately after a password change if you're
not using incremental replication.
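If I remember right, that retry behavior is driven by the master_kdc
relation in the client's krb5.conf, so (placeholder hostnames) something
like:

    [realms]
        EXAMPLE.COM = {
            kdc = kdc1.example.com
            kdc = kdc2.example.com
            master_kdc = kdc1.example.com
        }

which tells the client which KDC to retry when a password-based
authentication fails against a replica.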
--
Russ Allbery (eagle at eyrie.org) <http://www.eyrie.org/~eagle/>