[krbdev.mit.edu #1671] no locking used when reading/writing replay cache?

Sam Hartman via RT rt-comment at krbdev.mit.edu
Wed Jul 16 14:33:54 EDT 2003


Date: Sat, 12 Jul 2003 17:56:51 -0400
From: Cesar Garcia <Cesar.Garcia at morganstanley.com>
To: kerberos at mit.edu
Subject: no file locking used when reading/writing replay cache?

short:

The library does not appear to use file locks when reading from or
writing to replay cache files.

long:

We are implementing GSS-API authentication via client- and server-side
security exits invoked by a vendor application. The application is
both multi-process and multi-threaded. We have applied various
patches to get this code running cleanly under Purify, and on both
the client and server sides we use a mutex to serialize the entire
sequence of GSS-API calls (within a single process only, of course).
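
To make that per-process serialization concrete, here is a rough
sketch. The wrapper and its name are made up for illustration and are
not our actual exit code:

#include <pthread.h>
#include <gssapi/gssapi.h>

/* Illustrative only: a single process-wide mutex serializes the
 * GSS-API call sequence within one process.  It cannot serialize
 * access across the multiple app-server processes that share the
 * same replay cache file. */
static pthread_mutex_t gss_lock = PTHREAD_MUTEX_INITIALIZER;

static OM_uint32
serialized_accept(OM_uint32 *minor, gss_ctx_id_t *ctx,
                  gss_cred_id_t cred, gss_buffer_t in_tok,
                  gss_buffer_t out_tok)
{
    OM_uint32 major;

    pthread_mutex_lock(&gss_lock);
    major = gss_accept_sec_context(minor, ctx, cred, in_tok,
                                   GSS_C_NO_CHANNEL_BINDINGS,
                                   NULL, NULL, out_tok,
                                   NULL, NULL, NULL);
    pthread_mutex_unlock(&gss_lock);
    return major;
}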

Under extremely high load (note that this involves multiple
app-server processes), we are getting SEGVs in our security exit.
Unfortunately, the vendor product catches SIGSEGV itself, so getting
a core file, stack trace, etc., will take some work.

In the meantime, I noticed that there is no file locking when
reading from or writing to the replay cache. Unfortunately, I also
don't have a copy of the replay cache file for us to examine. I wish
I had more to work with - I'm working with the application team to
get better data. However, even if this turns out not to be the cause
of the problem we saw, I thought the issue was worth raising.
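
For illustration, this is the kind of advisory locking I would have
expected around replay cache writes. It is only a sketch of what I
mean, not the MIT krb5 source; the function name and the whole-file
fcntl(2) lock are assumptions on my part:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: take an exclusive advisory lock on the replay cache fd
 * before appending an entry, so that concurrent processes cannot
 * interleave partial writes. */
static int
rcache_locked_write(int fd, const void *buf, size_t len)
{
    struct flock fl;
    ssize_t n;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;      /* exclusive (write) lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;             /* 0 = lock the whole file */

    if (fcntl(fd, F_SETLKW, &fl) < 0)   /* block until acquired */
        return -1;

    n = write(fd, buf, len);

    fl.l_type = F_UNLCK;                /* release the lock */
    (void)fcntl(fd, F_SETLK, &fl);
    return (n == (ssize_t)len) ? 0 : -1;
}

Readers would need similar locking around reads and expirations, not
just writes, for this to be safe.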

Any insight would be appreciated.

Thanks,
Cesar