RFC: preauth benchmarking methodology

Nathaniel McCallum npmccallum at redhat.com
Mon Jun 6 16:45:41 EDT 2011


On Fri, 2011-06-03 at 15:40 -0400, Dmitri Pal wrote:
> On 06/03/2011 03:07 PM, Nathaniel McCallum wrote:
> > All the code referenced below comes from here:
> > https://github.com/npmccallum/krb5-anonsvn/tree/perftest/src/plugins/preauth/perftest
> >
> > As part of the FreeIPA project (http://freeipa.org) we are attempting to
> > add support for a variety of preauth mechanisms, such as yubikey, rsa,
> > and others.  One of the major concerns that has come up in our testing
> > is that while the current krb5 preauth mechanisms are quite quick to
> > verify, external services like yubikey may introduce multi-second
> > delays, which creates scalability problems due to the non-threaded,
> > synchronous main loop of krb5.
> >
> > Before we attempt to fix the problem however, we need to make sure that
> > we have a standardized testing suite to measure our progress.  This
> > suite should be reusable for krb5 in other ways as well.
> >
> > The basic idea is that we need to simulate reproducible delays in a
> > preauth plugin and measure the total responsiveness of the server when
> > these delays appear.  To this end I've created a preauth plugin called
> > 'perftest' which always approves the preauth after a delay.  The delay
> > is controlled by the name of the principal, where 1@REALM.COM would
> > delay for 1 millisecond and 3000@REALM.COM would delay for 3 seconds.
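
To make the delay convention concrete, the mapping is essentially the
following (a Python sketch for illustration only; the actual plugin is the
C code in the repo above, and the names here are made up):

    import time

    def perftest_delay(principal):
        # "1500@REALM.COM" -> sleep for 1500 milliseconds before
        # approving the preauth attempt.
        msec = int(principal.split('@', 1)[0])
        time.sleep(msec / 1000.0)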
> >
> > Then we measure the speeds of a set of kinits using the simulexec.py
> > script (in the repo above), which executes a set of commands with
> > given concurrency and repetition values.  The output of the script is
> > CSV rows with the following columns:
> >   1. Total size
> >   2. Total time (seconds)
> >   3. Parallelism
> >   4. Successes
> >   5. Failures
> >   6. Average time of successes (seconds)
> >   7. Standard deviation of the success times averaged in #6
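
To make the measurement side concrete, the driver loop is roughly the
following shape (a simplified sketch, not simulexec.py itself; the real
script also handles repetitions and writes out the CSV):

    import math
    import subprocess
    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_once(cmd):
        # Time one invocation, e.g. ["kinit", "1500@REALM.COM"].
        start = time.time()
        ok = subprocess.call(cmd) == 0
        return ok, time.time() - start

    def benchmark(cmd, total, parallelism):
        with ThreadPoolExecutor(max_workers=parallelism) as pool:
            t0 = time.time()
            results = list(pool.map(lambda _: run_once(cmd), range(total)))
            wall = time.time() - t0
        times = [t for ok, t in results if ok]
        mean = sum(times) / len(times) if times else 0.0
        var = sum((t - mean) ** 2 for t in times) / len(times) if times else 0.0
        # Columns: total, wall time, parallelism, successes, failures,
        # mean success time, stddev of the success times.
        return (total, wall, parallelism, len(times), total - len(times),
                mean, math.sqrt(var))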
> >
> > We will of course need to use a parallelism greater than the number of
> > worker processes in the KDC; otherwise the KDC never saturates and the
> > test tells us nothing about the blocking behavior.
> >
> > Does anyone have any further thoughts?
> 
> Is there a way to capture how many retries a client has attempted before
> a success, and what the client timeout is before it retries?
> That would be a very important piece of data, since the client retry
> timeout and the number of retries will probably have to be tuned
> depending on the preauth scheme the enterprise has selected. The goal
> should be not only to measure things but to create a foundation for
> formulating (and proving) the client tuning parameters (timeout and
> number of retries).

Good point, I'll look into this.
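
One rough idea for the retry count: run each kinit with KRB5_TRACE pointed
at a file and count the request sends in the trace as a proxy for
retransmissions.  Something like the sketch below, though the exact trace
wording varies between krb5 versions, so the pattern is only a guess; the
timeout values themselves would still need a separate answer.

    import os
    import re
    import subprocess
    import tempfile

    def kinit_with_send_count(principal):
        # Run kinit with KRB5_TRACE and count lines that look like KDC
        # request sends.  The trace message text is version-dependent,
        # so treat this regex as an approximation.
        with tempfile.NamedTemporaryFile(suffix='.trace', delete=False) as f:
            trace_path = f.name
        env = dict(os.environ, KRB5_TRACE=trace_path)
        ok = subprocess.call(['kinit', principal], env=env) == 0
        with open(trace_path) as trace:
            sends = sum(1 for line in trace if re.search(r'Sending.*request', line))
        os.unlink(trace_path)
        return ok, sends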



