Multithreading the KDC server - a design

Ken Raeburn raeburn at MIT.EDU
Mon Apr 4 17:08:39 EDT 2005


On Apr 1, 2005, at 08:10, Rahul Srinivas wrote:
> Hi,
>     I have come up with a design for multithreading the MIT KDC server.
> The design is attached. I have listed the issues that need to be
> addressed while converting the single threaded KDC into a multithreaded
> one and have proposed solutions. Please comment.

Thanks for sending this around.  I have a few comments:

First, you're talking a lot about 1.3.5.  A bunch of changes went into 
library code in 1.4 to make it thread-safe, so 1.4 would be a much 
better starting point for the work.  Otherwise you wind up having to 
pull in a lot of the changes between 1.3.5 and 1.4 anyway, or redo a 
bunch of the work.

You don't indicate whether the thread interface you use would be an 
extension of what I've done so far (which may be a bit complicated to 
sort through at first glance, I admit), or directly using the POSIX 
interfaces.  The former would make it easier for somebody to someday 
port the KDC to Windows, I guess, but extensions would need to be made 
(e.g., thread creation and monitors are outside of its current scope), 
and the POSIX approach is likely to get done a lot faster.
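For concreteness, here is roughly what the raw-POSIX route looks like; this
is only a sketch, and the names (worker, requests_handled, etc.) are mine,
not from the proposal.  The shim covers the mutex half of this
(k5_mutex_lock and friends), but thread creation would still have to come
either from pthreads directly or from a new extension to the shim:

/* Minimal sketch of the raw-POSIX route: one worker thread plus a
 * mutex guarding shared state.  Illustrative only. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static long requests_handled;           /* shared state */

static void *worker(void *arg)
{
    (void)arg;
    /* ... pull a request off a queue, process it ... */
    pthread_mutex_lock(&state_lock);
    requests_handled++;
    pthread_mutex_unlock(&state_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    printf("handled %ld request(s)\n", requests_handled);
    return 0;
}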

2.4, 2.5: It's also possible in 1.4 for the replay cache and lookaside 
cache to be turned off.

3.1: Again, which thread interface?  The shim layer I wrote does not 
support recursive mutexes.

3.4: The variables set by the signal handlers need to be changed in a 
thread-safe way, and the code checking them should probably delay 
operations (at least the re-opening of the log file) until other 
threads are idle, or at least can't be logging anything.  (If the 
database access is doing something that would be bad to exit in the 
middle of, then perhaps exiting should be delayed too.  On the other 
hand, we don't want a process to hang around indefinitely, ignoring 
signals telling it to exit, just because some database query is 
hanging.)
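One common way to get there (a sketch only; it assumes main() blocks
SIGTERM and SIGHUP with pthread_sigmask() before creating any threads, and
the flag names are made up) is to do no real work in a handler at all and
instead let a dedicated thread collect signals with sigwait(), so the
"handling" runs in ordinary thread context and can take the same lock the
service threads hold while logging:

#include <pthread.h>
#include <signal.h>

static pthread_mutex_t kdc_lock = PTHREAD_MUTEX_INITIALIZER;
static int want_exit, want_log_reopen;   /* illustrative flags */

static void *signal_thread(void *arg)
{
    sigset_t set;
    int sig;

    (void)arg;
    sigemptyset(&set);
    sigaddset(&set, SIGTERM);
    sigaddset(&set, SIGHUP);
    while (!want_exit) {
        if (sigwait(&set, &sig) != 0)
            continue;
        /* Holding the KDC lock here means no service thread is in the
         * middle of logging (or anything else) when the flags change. */
        pthread_mutex_lock(&kdc_lock);
        if (sig == SIGHUP)
            want_log_reopen = 1;    /* main loop reopens the log when idle */
        else
            want_exit = 1;          /* main loop shuts down when safe */
        pthread_mutex_unlock(&kdc_lock);
    }
    return NULL;
}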

3.8: Where is gethostbyname called in the KDC?

4.2: Way too expensive.


On Apr 4, 2005, at 13:32, Morrison, Wayne wrote:
> I forwarded the proposed design to one of the OpenVMS threads experts 
> for analysis.  I pointed out that we'd prefer that the implementation 
> avoid forking if possible, and sent some of my concerns about the 
> proposal.  The following are his comments.  The majority of them are 
> not OpenVMS-specific, so (with his permission) I'm sharing them with 
> this mailing list.  Items in brackets are my additions for clarity, 
> and I've made a few minor changes for consistency.

Thanks, Wayne (and Tom)!  I've got a few comments of my own to add:

> I'll also mention that for some kinds of update operations (e.g., 
> incrementing a counter) it is often more efficient to do the update 
> (the read/add/write instructions) using a compiler-provided atomic 
> builtin, rather than use a mutex to protect the read, add, write 
> (which in source code might falsely appear atomic as "counter++" or 
> something). If such a counter is being incremented in isolation, using 
> an atomic builtin is probably optimal. If the counter is incremented 
> in concert with changes to other globals, then it would be more 
> efficient to take out the lock, update all the related variables 
> (including the counter with a normal counter++ expression), and 
> release the lock.

Yes, several architectures have such instructions or instruction 
sequences, but with the variety of systems and compilers we're dealing 
with, it's hard to do any of this in a clean fashion.  And as Sam says, 
as far as we've heard, the KDC's performance hasn't really been a 
bottleneck, so I wouldn't worry for now about using a full-blown mutex 
to protect a counter or bit vector or what have you.  We just need to 
get some parallelism in the database queries.
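For the record, the two styles being contrasted look roughly like this.
It's only a sketch with made-up names; __atomic_fetch_add is a GCC/Clang
extension rather than anything in our tree, and the two functions are
alternatives, not something you would mix on the same counter:

#include <pthread.h>

static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long as_requests;

/* Mutex-protected update: also the right choice when other related
 * globals have to change together with the counter. */
static void count_as_request_locked(void)
{
    pthread_mutex_lock(&stats_lock);
    as_requests++;             /* plain ++ is safe while the lock is held */
    pthread_mutex_unlock(&stats_lock);
}

/* Lock-free update via a compiler builtin (GCC/Clang __atomic extension);
 * only appropriate when the counter is updated in isolation. */
static void count_as_request_atomic(void)
{
    __atomic_fetch_add(&as_requests, 1, __ATOMIC_RELAXED);
}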

> 2) Section 3.1 says that there will be a single mutex used for making 
> the entire KDC code thread safe. [...]

Sam's already addressed this.  Making use of multiple processors would 
be nice, but we don't want to add too much extra work before we can 
make use of parallel database queries.

> 3) Section 3.1 -- why is the single mutex to be recursive? The fact 
> that there is proposed to be one single mutex does not lead, in my 
> mind, to the conclusion that this mutex be recursive. Forgive me for 
> saying so, but often the perceived need for a recursive mutex is a 
> byproduct of an incomplete design. There are times when they are a 
> great choice, and this may be one of them -- but the document does not 
> explain why. I think the choice of a recursive mutex needs 
> explanation.

I'm inclined to agree here.  Not so much from experience as from 
principles; there are probably cases where a recursive mutex makes 
sense, but as it can also encourage sloppiness or at least make missing 
unlock calls harder to spot, I think it needs justification.
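For reference, this is what asking for one looks like under POSIX (the
shim layer has no equivalent, so it would be pthreads-only); the lock name
is just illustrative.  The convenience is exactly the hazard: the same
thread can re-lock without hanging, which is also what makes an unbalanced
lock/unlock pair harder to notice.

#include <pthread.h>

static pthread_mutex_t kdc_lock;

/* Explicit initialization is required; PTHREAD_MUTEX_INITIALIZER only
 * gives you a default (non-recursive) mutex. */
static int init_kdc_lock(void)
{
    pthread_mutexattr_t attr;
    int ret;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    ret = pthread_mutex_init(&kdc_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return ret;
}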


> 5) Section 3.5 mentions that an alternative is to have separate 
> mutexes for each global variable. In general this is very attractive 
> to me! More locks (i.e., fine-grained locks) usually yields higher 
> concurrency and higher throughput.
>
> In the extreme, however, one would probably NOT want separate locks 
> for EACH global variable. If two or more global variables' values need 
> to be coordinated, then a common lock for that set of related globals 
> would be appropriate. E.g., queue A and counter A could be governed by 
> lock A, while queue B and counter B could be governed by lock B 
> (assuming that the two queues did not need to be synchronized).

Some of them appear to be recording statistics from the replay cache 
which are never reported.  So, they should be considered protected by 
the same mutex as the replay cache itself.  (Though perhaps we should 
either eliminate them or find a way to report them.)  It appears that 
kdc_current_rcname is not settable, so it could probably be dropped.  
Et cetera...

(Minor aside: For cases where a variable is to be deleted, getting that 
as a small, separate patch would be nice.  But whatever...)

> 8) Still in Section 3.6, in the algorithm for the default thread -- 
> does step 16 (unlock m) trigger any sort of notification for waiting 
> service threads? Default thread step 15 added work to "Q", so I 
> presume that step 16's unlock of m is intended to pro-actively notify 
> any service threads that were waiting for something to appear in Q. 
> Without active notification (e.g., use a pthread condition variable), 
> the service threads may not wake up at all, or at least in a timely 
> manner.

I assumed that was implied by describing 'm' as a monitor, but maybe 
not...
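Spelling it out probably doesn't hurt, though.  In pthreads terms, the
default thread has to signal a condition variable when it adds work to Q;
unlocking m by itself won't wake a service thread that is already blocked
waiting.  A sketch with made-up names (work_pending stands in for the real
queue):

#include <pthread.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;
static int work_pending;                /* stands in for a real queue */

/* Default thread: add work to Q (step 15), then signal before unlocking
 * (step 16) so a waiting service thread actually wakes up. */
static void enqueue_work(void)
{
    pthread_mutex_lock(&q_lock);
    work_pending++;
    pthread_cond_signal(&q_nonempty);
    pthread_mutex_unlock(&q_lock);
}

/* Service thread: block until something is in Q. */
static void dequeue_work(void)
{
    pthread_mutex_lock(&q_lock);
    while (work_pending == 0)
        pthread_cond_wait(&q_nonempty, &q_lock);
    work_pending--;
    pthread_mutex_unlock(&q_lock);
}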

Ken
