More memory leaks (1.6.4-beta1 release)

Markus Moeller huaraz at moeller.plus.com
Thu Nov 12 16:43:47 EST 2009


Sorry Dan, but I changed systems in the meantime (to openSUSE 11.1) and get 
no leak (only still reachable memory: 672 bytes in 18 blocks, independent of 
the number of iterations).

For 1 auth request:

==15788== 29 bytes in 4 blocks are still reachable in loss record 1 of 5
==15788==    at 0x4027DDE: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==15788==    by 0x4037FEE: gss_indicate_mechs (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x404FD0E: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x4051147: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x403361A: gss_accept_sec_context (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x8049E13: main (squid_kerb_auth.c:500)
==15788==
==15788== 32 bytes in 1 blocks are still reachable in loss record 2 of 5
==15788==    at 0x4025E92: calloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==15788==    by 0x4037F59: gss_indicate_mechs (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x404FD0E: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x4051147: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x403361A: gss_accept_sec_context (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x8049E13: main (squid_kerb_auth.c:500)
==15788==
==15788== 128 bytes in 4 blocks are still reachable in loss record 3 of 5
==15788==    at 0x4027DDE: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==15788==    by 0x4036694: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40367A0: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x4036DD4: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40343BC: gss_acquire_cred (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x8049D7C: main (squid_kerb_auth.c:493)
==15788==
==15788== 131 bytes in 8 blocks are still reachable in loss record 4 of 5
==15788==    at 0x4027DDE: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==15788==    by 0x41CD45F: strdup (in /lib/libc-2.9.so)
==15788==    by 0x40366BC: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40367A0: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x4036DD4: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40343BC: gss_acquire_cred (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x8049D7C: main (squid_kerb_auth.c:493)
==15788==
==15788== 352 bytes in 1 blocks are still reachable in loss record 5 of 5
==15788==    at 0x4027DDE: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==15788==    by 0x41B6B3E: (within /lib/libc-2.9.so)
==15788==    by 0x41B6C0B: fopen (in /lib/libc-2.9.so)
==15788==    by 0x4140F67: (within /lib/libcom_err.so.2.1)
==15788==    by 0x414115D: add_error_table (in /lib/libcom_err.so.2.1)
==15788==    by 0x4031161: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40305A4: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x4036C5E: (within /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x40343BC: gss_acquire_cred (in /usr/lib/libgssapi_krb5.so.2.2)
==15788==    by 0x8049D7C: main (squid_kerb_auth.c:493)
==15788==
==15788== LEAK SUMMARY:
==15788==    definitely lost: 0 bytes in 0 blocks.
==15788==      possibly lost: 0 bytes in 0 blocks.
==15788==    still reachable: 672 bytes in 18 blocks.
==15788==         suppressed: 0 bytes in 0 blocks.
--15788--  memcheck: sanity checks: 4 cheap, 2 expensive
--15788--  memcheck: auxmaps: 0 auxmap entries (0k, 0M) in use
--15788--  memcheck: auxmaps_L1: 0 searches, 0 cmps, ratio 0:10
--15788--  memcheck: auxmaps_L2: 0 searches, 0 nodes
--15788--  memcheck: SMs: n_issued      = 19 (304k, 0M)
--15788--  memcheck: SMs: n_deissued    = 0 (0k, 0M)
--15788--  memcheck: SMs: max_noaccess  = 65535 (1048560k, 1023M)
--15788--  memcheck: SMs: max_undefined = 0 (0k, 0M)
--15788--  memcheck: SMs: max_defined   = 36 (576k, 0M)
--15788--  memcheck: SMs: max_non_DSM   = 19 (304k, 0M)
--15788--  memcheck: max sec V bit nodes:    0 (0k, 0M)
--15788--  memcheck: set_sec_vbits8 calls: 0 (new: 0, updates: 0)
--15788--  memcheck: max shadow mem size:   608k, 0M
--15788-- translate:            fast SP updates identified: 8,394 ( 89.1%)
--15788-- translate:   generic_known SP updates identified: 693 (  7.3%)
--15788-- translate: generic_unknown SP updates identified: 330 (  3.5%)
--15788--     tt/tc: 23,891 tt lookups requiring 25,400 probes
--15788--     tt/tc: 23,891 fast-cache updates, 3 flushes
--15788--  transtab: new        9,075 (209,982 -> 2,791,681; ratio 132:10) [0 scs]
--15788--  transtab: dumped     0 (0 -> ??)
--15788--  transtab: discarded  7 (189 -> ??)
--15788-- scheduler: 442,996 jumps (bb entries).
--15788-- scheduler: 4/19,118 major/minor sched events.
--15788--    sanity: 5 cheap, 2 expensive checks.
--15788--    exectx: 1,543 lists, 993 contexts (avg 0 per list)
--15788--    exectx: 2,464 searches, 2,046 full compares (830 per 1000)
--15788--    exectx: 26 cmp2, 3 cmp4, 0 cmpAll
--15788--  errormgr: 9 supplist searches, 360 comparisons during search
--15788--  errormgr: 4 errlist searches, 6 comparisons during search

and for 10000 requests:

==15790== LEAK SUMMARY:
==15790==    definitely lost: 0 bytes in 0 blocks.
==15790==      possibly lost: 0 bytes in 0 blocks.
==15790==    still reachable: 672 bytes in 18 blocks.
==15790==         suppressed: 0 bytes in 0 blocks.
--15790--  memcheck: sanity checks: 69361 cheap, 351 expensive
--15790--  memcheck: auxmaps: 0 auxmap entries (0k, 0M) in use
--15790--  memcheck: auxmaps_L1: 0 searches, 0 cmps, ratio 0:10
--15790--  memcheck: auxmaps_L2: 0 searches, 0 nodes
--15790--  memcheck: SMs: n_issued      = 456 (7296k, 7M)
--15790--  memcheck: SMs: n_deissued    = 0 (0k, 0M)
--15790--  memcheck: SMs: max_noaccess  = 65535 (1048560k, 1023M)
--15790--  memcheck: SMs: max_undefined = 0 (0k, 0M)
--15790--  memcheck: SMs: max_defined   = 36 (576k, 0M)
--15790--  memcheck: SMs: max_non_DSM   = 456 (7296k, 7M)
--15790--  memcheck: max sec V bit nodes:    0 (0k, 0M)
--15790--  memcheck: set_sec_vbits8 calls: 0 (new: 0, updates: 0)
--15790--  memcheck: max shadow mem size:   7600k, 7M
--15790-- translate:            fast SP updates identified: 8,574 ( 89.1%)
--15790-- translate:   generic_known SP updates identified: 711 (  7.3%)
--15790-- translate: generic_unknown SP updates identified: 336 (  3.4%)
--15790--     tt/tc: 13,910,651 tt lookups requiring 14,978,428 probes
--15790--     tt/tc: 13,910,651 fast-cache updates, 3 flushes
--15790--  transtab: new        9,240 (213,732 -> 2,840,293; ratio 132:10) [0 scs]
--15790--  transtab: dumped     0 (0 -> ??)
--15790--  transtab: discarded  7 (189 -> ??)
--15790-- scheduler: 6,936,176,324 jumps (bb entries).
--15790-- scheduler: 69,361/376,007,102 major/minor sched events.
--15790--    sanity: 69362 cheap, 351 expensive checks.
--15790--    exectx: 1,543 lists, 1,087 contexts (avg 0 per list)
--15790--    exectx: 213,864,694 searches, 217,952,068 full compares (1,019 per 1000)
--15790--    exectx: 26 cmp2, 3 cmp4, 0 cmpAll
--15790--  errormgr: 9 supplist searches, 360 comparisons during search
--15790--  errormgr: 4 errlist searches, 6 comparisons during search
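To make the distinction I am relying on explicit: a one-time library cache shows up as a constant "still reachable" figure no matter how many requests run, while a real per-request leak scales linearly with the iteration count. A toy accounting model of that (plain C, no Kerberos involved; the 672-byte and per-request figures are just taken from the reports in this thread for illustration):

```c
#include <stddef.h>

/* Toy model of the "still reachable" total valgrind reports at exit.
 * One-time library setup (mechanism tables, error tables, ...) is
 * allocated on the first request and kept for the life of the process;
 * a genuine leak adds bytes on *every* request.  Only the first pattern
 * is harmless. */
static size_t still_reachable(int requests, size_t leaked_per_request)
{
    size_t live = 0;               /* bytes never freed before exit */
    int cache_initialised = 0;

    for (int i = 0; i < requests; i++) {
        if (!cache_initialised) {
            live += 672;           /* one-time caches: constant cost */
            cache_initialised = 1;
        }
        live += leaked_per_request; /* 0 when every acquire is released */
    }
    return live;
}
```

With leaked_per_request = 0 the total stays at 672 bytes whether you run 1 or 10000 requests, which is what the two summaries above show. Dan's earlier reports, where 31 iterations left 31 blocks behind (e.g. 372 bytes in 31 blocks, i.e. 12 bytes each), correspond to a non-zero per-request term.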

Markus

----- Original Message ----- 
From: "Dan Searle" <dan.searle at censornet.com>
To: "Markus Moeller" <huaraz at moeller.plus.com>
Cc: <krbdev at mit.edu>
Sent: Thursday, November 12, 2009 9:50 AM
Subject: Re: More memory leaks (1.6.4-beta1 release)


> Hi,
>
> Since you say CentOS has a version of MIT Kerberos 5 which shows no leaks, 
> can you provide me with a link to their source code, or be more specific 
> about which version of CentOS you were testing? I really need a solution 
> to this problem very quickly. I've tried to find CentOS's RPMs for MIT 
> Kerberos but have so far failed.
>
> Dan...
>
> Markus Moeller wrote:
>> "Marcus Watts" <mdw at umich.edu> wrote in message 
>> news:E1N8GYL-0005yn-Kd at bruson.ifs.umich.edu...
>>
>>>> Date:    Wed, 11 Nov 2009 13:57:01 GMT
>>>> To:      krbdev at mit.edu
>>>> From:    Dan Searle <dan.searle at censornet.com>
>>>> Subject: More memory leaks (1.6.4-beta1 release)
>>>>
>>>> Hi,
>>>>
>>>> I've been debugging the squid_kerb_auth helper some more and have found
>>>> yet more memory leaks. I missed these last round because I ignored the
>>>> still reachable blocks (malloc'ed blocks which still have valid
>>>> pointers).
>>>>
>>>> I've ignored the instances (loss records) with just 1 block, or what
>>>> seem to be a "fixed" number of still reachable blocks; i.e. I've only
>>>> included here the loss records which seem to scale with the number of
>>>> iterations the authentication helper goes through, and so are a threat
>>>> to a long-running process...
>>>>
>>>> ==5314== 372 bytes in 31 blocks are still reachable in loss record 76 of 82
>>>> ==5314==    at 0x4022AB8: malloc (vg_replace_malloc.c:207)
>>>> ==5314==    by 0x4032BE5: gssint_g_set_entry_add (util_set.c:61)
>>>> ==5314==    by 0x4033B1B: g_save (util_validate.c:114)
>>>> ==5314==    by 0x404097A: krb5_gss_acquire_cred (acquire_cred.c:645)
>>>> ==5314==    by 0x403D23F: krb5_gss_accept_sec_context (accept_sec_context.c:302)
>>>> ==5314==    by 0x404B192: k5glue_accept_sec_context (krb5_gss_glue.c:434)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x40510A7: spnego_gss_accept_sec_context (spnego_mech.c:1113)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x8049C91: main (squid_kerb_auth.c:515)
>>>>
>>>> ==5314== 372 bytes in 31 blocks are still reachable in loss record 77 of 82
>>>> ==5314==    at 0x4022AB8: malloc (vg_replace_malloc.c:207)
>>>> ==5314==    by 0x40A006A: krb5_ktfile_resolve (kt_file.c:207)
>>>> ==5314==    by 0x409D900: krb5_kt_resolve (ktbase.c:129)
>>>> ==5314==    by 0x409DB38: krb5_kt_default (ktdefault.c:41)
>>>> ==5314==    by 0x4041EC1: krb5_gss_acquire_cred (acquire_cred.c:171)
>>>> ==5314==    by 0x403D23F: krb5_gss_accept_sec_context (accept_sec_context.c:302)
>>>> ==5314==    by 0x404B192: k5glue_accept_sec_context (krb5_gss_glue.c:434)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x40510A7: spnego_gss_accept_sec_context (spnego_mech.c:1113)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x8049C91: main (squid_kerb_auth.c:515)
>>>>
>>>> ==5314== 527 bytes in 31 blocks are still reachable in loss record 78 of 82
>>>> ==5314==    at 0x4021BDE: calloc (vg_replace_malloc.c:397)
>>>> ==5314==    by 0x40A013A: krb5_ktfile_resolve (kt_file.c:223)
>>>> ==5314==    by 0x409D900: krb5_kt_resolve (ktbase.c:129)
>>>> ==5314==    by 0x409DB38: krb5_kt_default (ktdefault.c:41)
>>>> ==5314==    by 0x4041EC1: krb5_gss_acquire_cred (acquire_cred.c:171)
>>>> ==5314==    by 0x403D23F: krb5_gss_accept_sec_context (accept_sec_context.c:302)
>>>> ==5314==    by 0x404B192: k5glue_accept_sec_context (krb5_gss_glue.c:434)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x40510A7: spnego_gss_accept_sec_context (spnego_mech.c:1113)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x8049C91: main (squid_kerb_auth.c:515)
>>>>
>>>> ==5314== 2,852 bytes in 31 blocks are still reachable in loss record 81 of 82
>>>> ==5314==    at 0x4022AB8: malloc (vg_replace_malloc.c:207)
>>>> ==5314==    by 0x403F3C4: krb5_gss_acquire_cred (acquire_cred.c:497)
>>>> ==5314==    by 0x403D23F: krb5_gss_accept_sec_context (accept_sec_context.c:302)
>>>> ==5314==    by 0x404B192: k5glue_accept_sec_context (krb5_gss_glue.c:434)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x40510A7: spnego_gss_accept_sec_context (spnego_mech.c:1113)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x8049C91: main (squid_kerb_auth.c:515)
>>>>
>>>> ==5314== 256,308 bytes in 31 blocks are still reachable in loss record 82 of 82
>>>> ==5314==    at 0x4022AB8: malloc (vg_replace_malloc.c:207)
>>>> ==5314==    by 0x40A008A: krb5_ktfile_resolve (kt_file.c:211)
>>>> ==5314==    by 0x409D900: krb5_kt_resolve (ktbase.c:129)
>>>> ==5314==    by 0x409DB38: krb5_kt_default (ktdefault.c:41)
>>>> ==5314==    by 0x4041EC1: krb5_gss_acquire_cred (acquire_cred.c:171)
>>>> ==5314==    by 0x403D23F: krb5_gss_accept_sec_context (accept_sec_context.c:302)
>>>> ==5314==    by 0x404B192: k5glue_accept_sec_context (krb5_gss_glue.c:434)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x40510A7: spnego_gss_accept_sec_context (spnego_mech.c:1113)
>>>> ==5314==    by 0x4034178: gss_accept_sec_context (g_accept_sec_context.c:196)
>>>> ==5314==    by 0x8049C91: main (squid_kerb_auth.c:515)
>>>>
>>>> I've also debugged the squid_kerb_auth helper code and made sure it
>>>> matches calls to gss_accept_sec_context() with calls to
>>>> gss_delete_sec_context(), which it does. So these still reachable leaks
>>>> do appear to be bugs in the MIT Kerberos libs.
>>>>
>>>> Can anyone shed any light on these? As they stand, they are making MIT
>>>> Kerberos unusable. Regards, Dan...
>>>>
>>>> -- 
>>>> Dan Searle
>>>>
>>>> CensorNet Ltd - professional & affordable Web & E-mail filtering
>>>> email: dan.searle at censornet.com web: www.censornet.com
>>>> tel: 0845 230 9590 / fax: 0845 230 9591 / support: 0845 230 9592
>>>> snail: Vallon House, Vantage Court Office Park, Winterbourne,
>>>>        Bristol, BS16 1GW, UK.
>>>>
>>>> CensorNet Ltd is a registered company in England & Wales No. 05518629
>>>> VAT registration number 901-2048-78
>>>> Any views expressed in this email communication are those of the
>>>> individual sender, except where the sender specifically states them to
>>>> be the views of a member of Censornet Ltd.  Censornet Ltd. does not
>>>> represent, warrant or guarantee that the integrity of this
>>>> communication has been maintained nor that the communication is free
>>>> of errors or interference.
>>>>
>>>>
>>>> ------------------------------------------------------------------------------------
>>>> Scanned for viruses, spam and offensive content by CensorNet MailSafe
>>>>
>>>> Try CensorNet free for 14 days. Provide Internet access on your terms.
>>>> Visit www.censornet.com for more information.
>>>>
>>>> _______________________________________________
>>>> krbdev mailing list             krbdev at mit.edu
>>>> https://mailman.mit.edu/mailman/listinfo/krbdev
>>>>
>>> Presumably your iteration count in this sample was 31.
>>> 3 of your 5 loss records are associated with calls to
>>> krb5_ktfile_resolve.  Calls to krb5_ktfile_resolve/krb5_kt_default
>>> should be balanced with calls to krb5_ktfile_close.
>>> The relevant logic is in krb5/src/lib/gssapi/krb5/acquire_cred.c
>>> so that's presumably where the missing krb5_ktfile_close call ought
>>> to be.
>>>
>>> Both your remaining loss records come directly from gssapi,
>>> specifically krb5_gss_acquire_cred.  It looks like calls to that
>>> should be paired up with calls to krb5_gss_release_cred.
>>> That could happen at the end of krb5_gss_accept_sec_context
>>> if verifier_cred_handle is passed as 0; otherwise it looks like
>>> the application (squid?) should be responsible.
>>>
>>
>> This is a standalone application. All interaction with squid is via stdin 
>> (base64 encoded token) and stdout.
>>
>>
>>> Presumably the
>>> squid code should actually be calling gss_release_cred.
>>> It's possible there's a bug here in the MIT code, but it would be
>>> useful to first establish the credential isn't in fact being
>>> passed out to squid.
>>>
>>>
>>
>> In the past I have tested several MIT/Heimdal versions on different Linux 
>> distributions. Some distributions have fixed MIT memory leaks, others 
>> haven't. I think my squid_kerb_auth code is correct in freeing allocated 
>> memory (at least I have seen one or two Linux distributions, e.g. CentOS, 
>> which don't show leaks).
>>
>>
>>> -Marcus Watts
>>> _______________________________________________
>>> krbdev mailing list             krbdev at mit.edu
>>> https://mailman.mit.edu/mailman/listinfo/krbdev
>>>
>>>
>> Regards
>> Markus
>>
>> _______________________________________________
>> krbdev mailing list             krbdev at mit.edu
>> https://mailman.mit.edu/mailman/listinfo/krbdev
>>
>>
>>
>
>
> -- 
> Dan Searle
>
> CensorNet Ltd - professional & affordable Web & E-mail filtering
> email: dan.searle at censornet.com web: www.censornet.com
> tel: 0845 230 9590 / fax: 0845 230 9591 / support: 0845 230 9592
> snail: Vallon House, Vantage Court Office Park, Winterbourne,
>       Bristol, BS16 1GW, UK.
>
>
>
> 




