KDC Audit project

Dmitri Pal dpal at redhat.com
Tue Jan 8 15:52:31 EST 2013


On 01/08/2013 11:48 AM, Benjamin Kaduk wrote:
> On Mon, 7 Jan 2013, Dmitri Pal wrote:
>
>> On 01/07/2013 02:25 PM, Zhanna Tsitkov wrote:
>>> On Jan 7, 2013, at 2:02 PM, Dmitri Pal wrote:
>>>
>>>> There should be recognized and known keys, like: the type of the event
>>>> (e.g. KDC start/stop), maybe a subtype (start or stop), a timestamp,
>>>> the principal the operation is performed with, etc.
>>>>
>>>> It will be up to the plugin to decide what to do with the data.
> With one-interface-per-event, the plugin knows exactly what it has to do 
> with the data, there is no room for ambiguity and no need to attempt a 
> possibly-incorrect "best effort" treatment of unknown data.

I think you miss the point.
The interface has two sides: the producer (the Kerberos code) and the
consumer (the plugin).
Say that in the initial implementation the producer is capable of
emitting events A, B, and C.
Each event carries a specific set of data.
The first event has fields A.a, A.b, A.c; the next has B.x, B.y; and
the third has C.k, C.l, C.m, C.n.

In general, field A.a and field B.x can be the same thing.

So you can create a static interface like the one proposed, with three
methods and hard-coded arguments, or you can create a single call that
passes key-value pairs. For you, the amount of work at the beginning is
pretty much the same.
But what if the provider later needs to pass an additional piece of data?
In a static interface there is no room for that: you have to rev the
interface or create a new call, and as a result it becomes very hard to
expose more information over time. In the interface that I propose, the
plugin would recognize the fields it was coded to recognize and would
have a generic handler for fields it does not know about. This means
that you do not need to rev the producer and the consumer at the same
time. They can be updated independently, especially if they are
developed in the context of different projects. The default consumer
plugin provided by MIT should probably write events to a file, but I
envision the following plugins right away:
1. A simple syslog plugin
2. A plugin that would put data into CEE format and then send it to syslog
3. A plugin that would send data to the systemd journal
4. A plugin that would use AMQP to send the event somewhere
Maybe something else.
The independence of the consumer from the provider, and the ability to
capture any information regardless of whether the plugin is capable of
recognizing it, are the key requirements for a successful
event-capturing plugin. This is why a generic interface is much
preferable here to a static one.
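To make the contrast concrete, here is a rough C sketch of the two
shapes the interface could take (all names here are hypothetical, not
from the actual proposal):

    #include <stddef.h>
    #include <time.h>

    /* Static shape: one call per event, arguments hard-coded.  Passing
     * a new piece of data later means revving the interface. */
    struct audit_plugin_static {
        void (*kdc_start)(time_t stamp, const char *version);
        void (*kdc_stop)(time_t stamp);
        void (*as_req)(time_t stamp, const char *client,
                       const char *server);
    };

    /* Generic shape: a single call that passes key-value pairs.  The
     * producer can add pairs at any time; a consumer records pairs it
     * does not recognize instead of dropping them. */
    struct audit_kv {
        const char *key;
        const char *value;
    };

    struct audit_plugin_generic {
        void (*event)(const char *event_type,
                      const struct audit_kv *pairs, size_t npairs);
    };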

If you look at the latest state of the art in event logging (the CEE
initiative, for example) you will see that the people working on
event-logging initiatives and standards tend to agree that it is better
to have a generic interface and publish descriptions of the possible
emitted events than to create interfaces that hard-code parameters on
the assumption that this helps the consumption of those events in some
way. It does not.
The goal is to let the application emit information without any
obstacles. It is important to capture the information; the actual
interpretation might lag behind in the event-processing software.


>
>>>> Such an approach would allow evolving the interface and adding more data to
>>>> the events over time without breaking the existing plugins.
>>>> The approach listed on the page would make it very hard to evolve the
>>>> interface on both sides; we effectively create a "one shot, do it right"
>>>> interface, which is always hard to accomplish.
> The structure of the plugin interface is such that an existing plugin can 
> continue to do what it originally did, even if new events/interfaces are 
> added.  If a plugin does not supply a handler for an event, it just does 
> not handle that event.

But it can't handle the case where an existing event needs to carry a
new piece of data.

>>>> A generic interface is a bit more work but existing libraries help to
>>>> reduce the cost of development.
> My sense is that the generic interface would put a heavy burden on the 
> plugin module author to make an attempt to cover "all possible cases".

No, you cover the same cases as you listed. Nothing is different,
except that you also create a default handler for events and keys that
you do not know about. The result is that the data gets recorded/passed
on even if the plugin knows nothing about it. In your case the new data
will not be consumable until the plugin is revved to understand it.
This is the biggest problem with static interfaces.
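As a rough sketch of what such a default handler could look like on the
consumer side (hypothetical names again):

    #include <stdio.h>
    #include <string.h>

    /* The plugin handles the keys it was coded for and falls back to a
     * generic record for the rest, so new producer fields survive
     * without revving the plugin. */
    static void handle_pair(FILE *log, const char *key, const char *value)
    {
        if (strcmp(key, "client_principal") == 0)
            fprintf(log, "client=%s\n", value);        /* known key */
        else if (strcmp(key, "timestamp") == 0)
            fprintf(log, "time=%s\n", value);          /* known key */
        else
            fprintf(log, "extra.%s=%s\n", key, value); /* unknown: keep it */
    }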

> Are you saying that it would be more work for us the framework author and 
> less work for the plugin module author?

IMO the amount of work is pretty much the same, as long as you decide
how you pass the KVPs to the generic call. If you create a custom
encoding it will be hard to deal with. If you use JSON or a generic
ASN.1 encoding there will be no problems, and the amount of work will
be very limited.

If you really want to, you can implement the functions listed in the
proposal as wrappers around the generic interface on the provider side.
On the consumer side, IMO the default implementation should be:
serialize the event as is and dump it into a file.
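Something like this, reusing the hypothetical types from the sketch
above (the file path and the escaping-free JSON are illustrative only):

    #include <stdio.h>

    /* Provider side: a per-event helper from the proposal reduced to
     * one call into the generic entry point. */
    void audit_kdc_start(struct audit_plugin_generic *p, const char *version)
    {
        struct audit_kv pairs[] = {
            { "event_subtype", "start" },
            { "version",       version },
        };
        p->event("kdc_lifecycle", pairs, 2);
    }

    /* Consumer side: default handler that serializes the event as is
     * (one JSON-like object per line; real code would escape the
     * strings) and dumps it into a file. */
    void default_event(const char *event_type,
                       const struct audit_kv *pairs, size_t npairs)
    {
        FILE *f = fopen("/var/log/krb5kdc-audit.log", "a");
        size_t i;

        if (f == NULL)
            return;
        fprintf(f, "{\"event\":\"%s\"", event_type);
        for (i = 0; i < npairs; i++)
            fprintf(f, ",\"%s\":\"%s\"", pairs[i].key, pairs[i].value);
        fprintf(f, "}\n");
        fclose(f);
    }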



>
>>> As a matter of fact, we have discussed exactly this approach inside the
>>> group.  However, it was suggested that a too-generalized API is not a
>>> good idea because of possible confusion while debugging and/or
>>> collecting information to be reported.  Hence the one-API-per-event
>>> approach.
>>> We will definitely revisit this topic. Thanks for the comment!
>> It depends on the helpers you provide.
>> If you use JSON, it is easy to print and visualize, so it might be the
>> best approach of all.
> I had the impression that audit tracing was intended to be more structured 
> than just a simple logfile, so I am not sure that the ease of printing and 
> visualization is the most relevant feature.  

Once you have a variable set of KVPs you have structured logging. All
the rest is encoding, and should be left to the projects that consume
MIT Kerberos and integrate with it.
I suspect that embedded systems would try to do something very specific.


> If audit records are to be 
> treated as more of a database to be queried than a static logfile, then 
> having very structured data is useful.  Of course, structured data can be 
> encoded/serialized in JSON or other ways, but the benefit of serializing 
> structured data just to decode it again is not clear.

Once you have structured data passed over the interface, you can leave
it to the plugin implementation to decide how this data needs to be:
a) Filtered
b) Encoded
c) Recorded
MIT can provide a very simple plugin that does No_filter/JSON/File,
leaving other options to other implementations and welcoming
contributions.
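Roughly, again with hypothetical names and reusing the audit_kv type
from the earlier sketch:

    #include <stddef.h>

    /* A consumer plugin decomposed into the three stages above; the
     * simple MIT plugin would wire in a pass-through filter, a JSON
     * encoder, and a file recorder. */
    struct audit_consumer {
        /* Filter: return nonzero to keep the event. */
        int (*filter)(const char *event_type,
                      const struct audit_kv *pairs, size_t npairs);
        /* Encode: render the pairs into a buffer (JSON, CEE, ...). */
        int (*encode)(const struct audit_kv *pairs, size_t npairs,
                      char *buf, size_t buflen);
        /* Record: deliver the encoded event (file, syslog, journal,
         * AMQP, ...). */
        int (*record)(const char *encoded);
    };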



-- 
Thank you,
Dmitri Pal

Sr. Engineering Manager for IdM portfolio
Red Hat Inc.






