Proposed modifications to replay cache to prevent false positives

Ken Raeburn raeburn at MIT.EDU
Tue May 20 18:33:13 EDT 2008

On May 20, 2008, at 16:16, Nicolas Williams wrote:
> B-trees won't help that much, and may possibly hurt.  Unless...

I think part of the intent in this project was to retain backwards  
compatibility with the existing implementation in terms of the on-disk  
format.  (E.g., mixing OS-vendor and MIT binaries without breaking  
things, when the MIT code has Jeff's shiny new replay cache and the OS- 
vendor code is based on a slightly older MIT release.)  I wouldn't see  
a problem with the introduction of a new rcache type that uses a  
different file format, though; it'd let you run the old code until you  
were sure all the implementations on the machine supported the new  
code, and then start using the new stuff.

> Will F. greatly improved the performance of the replay cache on  
> Solaris.

Not surprising; our code is horribly inefficient at doing the I/O, at
the very least.  (I think we've also got race conditions between
processes.)  And we could probably do something more clever with the
in-memory version than a fixed-size hash table with a lame hash
function.  It may also be worthwhile to distinguish and optimize two
different cases: the short-lived server that handles only one client
and then exits (e.g., telnetd), versus the long-lived server that
handles a large number of clients (e.g., slapd).
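To illustrate the sort of thing a smarter in-memory cache could look
like, here's a minimal sketch -- hypothetical names throughout, not
the MIT krb5 API -- of a chained hash table keyed on the client
principal plus authenticator timestamp, using FNV-1a rather than a
lame hash function:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical in-memory replay cache sketch; names are illustrative
 * and do not match the krb5 rcache interface. */

#define NBUCKETS 1024

struct rc_entry {
    char client[64];            /* client principal name */
    long ctime;                 /* authenticator timestamp (seconds) */
    int cusec;                  /* microseconds, for uniqueness */
    struct rc_entry *next;      /* chain within a bucket */
};

static struct rc_entry *buckets[NBUCKETS];

/* FNV-1a over the principal name, folding in the timestamp fields:
 * simple, fast, and much better distributed than a naive sum. */
static unsigned rc_hash(const char *s, long t, int u)
{
    unsigned h = 2166136261u;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 16777619u; }
    h ^= (unsigned)t; h *= 16777619u;
    h ^= (unsigned)u; h *= 16777619u;
    return h % NBUCKETS;
}

/* Return 1 if this authenticator was seen before (a replay);
 * otherwise record it and return 0. */
static int rc_store(const char *client, long ctime, int cusec)
{
    unsigned h = rc_hash(client, ctime, cusec);
    struct rc_entry *e;

    for (e = buckets[h]; e != NULL; e = e->next)
        if (e->ctime == ctime && e->cusec == cusec &&
            strcmp(e->client, client) == 0)
            return 1;           /* replay detected */

    e = malloc(sizeof *e);
    if (e == NULL)
        abort();                /* real code would return an error */
    snprintf(e->client, sizeof e->client, "%s", client);
    e->ctime = ctime;
    e->cusec = cusec;
    e->next = buckets[h];
    buckets[h] = e;
    return 0;
}
```

A real implementation would also expire entries older than the
clock-skew window and bound total memory use, but even this shape
handles both the one-client and many-client cases without a fixed
table filling up.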

> - provide an option to set the server-side skew to some number of
>   seconds, which in general should be estimated time to boot * 2

[libdefaults]->clockskew is used for this (and defaults to 300 seconds)
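For reference, the knob in question looks like this in krb5.conf (300
seconds shown here is just the default value):

```
[libdefaults]
        clockskew = 300
```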

> - provide an option to put the rcache on tmpfs or tmpfs-like
>   filesystems

setenv KRB5RCACHEDIR /path/to/tmpfs

The path prefix is always used, unfortunately; you can't just make the
pathname be part of the rcache name.  Nor can it be configured via the
profile.

> An rcache implementation as a daemon reached via IPC would be even
> better.

Yes, we've discussed it, but that's a bit more work. :)
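To make the daemon idea concrete, here's a rough sketch -- entirely
hypothetical, nothing like a finished design -- of the request/reply
shape such a daemon might use.  A socketpair stands in for the
daemon's listening socket, and a toy fixed-size "seen" list stands in
for real storage:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One fixed-size request: an opaque tag identifying the
 * authenticator (e.g., client principal + timestamp). */
struct rc_msg { char tag[64]; };

/* Daemon side: for each request, reply '1' if the tag was seen
 * before, '0' otherwise.  A real daemon would use a proper hash
 * table with expiry, not a tiny linear list. */
static void rc_daemon(int fd)
{
    char seen[16][64];
    int nseen = 0;
    struct rc_msg m;

    while (read(fd, &m, sizeof m) == (ssize_t)sizeof m) {
        int i, dup = 0;
        char reply;

        for (i = 0; i < nseen; i++)
            if (strcmp(seen[i], m.tag) == 0)
                dup = 1;
        if (!dup && nseen < 16)
            snprintf(seen[nseen++], sizeof seen[0], "%s", m.tag);
        reply = dup ? '1' : '0';
        write(fd, &reply, 1);
    }
}

/* Application-server side: ask the daemon about one tag.
 * Returns 1 on replay, 0 if the tag is new. */
static int rc_check(int fd, const char *tag)
{
    struct rc_msg m;
    char reply = '0';

    snprintf(m.tag, sizeof m.tag, "%s", tag);
    write(fd, &m, sizeof m);
    read(fd, &reply, 1);
    return reply == '1';
}
```

The server process would hold one descriptor open for its lifetime,
which is where the long-lived-server case wins: the open/lock/scan
cost of a file-based cache is paid once by the daemon rather than on
every request.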

