General thoughts about external authentication and related TL data

Dmitri Pal dpal at
Fri Sep 2 17:28:41 EDT 2011


As I promised, here are some thoughts about the OTP policy data.

First of all, to avoid problems in the future, we IMO need to dream as
big as we can. I completely understand that we have a very tight
schedule, so I want to have a good design worked out first, and then a
minimal amount of work done now to support and enable the
implementation of that design in the future.

The ultimate goal is to support different external authentication
methods implemented by different vendors/communities at the same time.
* It should be easy to migrate from one OTP solution to another. This
means there should be a way to run multiple OTP solutions at the
same time and gradually switch from one to the other.
* Different subsets of users should be able to use different OTP
solutions, as they have different goals and security attributes
(especially in government cases).
* Mergers and acquisitions should go smoothly when the two companies
use two different OTP methodologies.

There are probably more use cases like this, but IMO this is enough to
justify the need for multiple authentication methods. In addition, the
same authentication method can be implemented in different ways. For
example, AFAIK the HOTP/TOTP standards do not define the transport
protocol, so it can be HTTP, RADIUS, AMQP, or something else like
simple TCP or UDP. The actual transport is an implementation detail of
the specific solution, but the point is that for method identification
it is not enough to say that it is HOTP or TOTP or RSA or Yubikey. In
addition, a vendor can support multiple different authentication
sequences, so this should be reflected in the method identification too.

So to summarize at this point:
* We need to support multiple different authentication methods
* These methods need to be identified (named)
* Generic names are not enough. They should be specific enough to
select the right OTP-type+implementation-type+sequence-type
combination.
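To make the naming idea concrete, here is a minimal sketch in Python of composing and parsing such a method identifier. The separator character and all the component names (hotp, radius, single-challenge, etc.) are purely illustrative assumptions, not part of any existing specification.

```python
# Hypothetical sketch: a method identifier combining OTP type,
# implementation (transport) type, and sequence type into one
# unique name. All concrete names here are illustrative only.

def make_method_id(otp_type, impl_type, seq_type):
    """Compose a unique method name from its three components."""
    for part in (otp_type, impl_type, seq_type):
        if "/" in part:
            raise ValueError("'/' is reserved as the separator")
    return "/".join((otp_type, impl_type, seq_type))

def parse_method_id(method_id):
    """Split a method name back into (otp_type, impl_type, seq_type)."""
    otp_type, impl_type, seq_type = method_id.split("/")
    return otp_type, impl_type, seq_type

# The same OTP algorithm over two different transports yields two
# distinct methods, so both can coexist during a migration.
a = make_method_id("hotp", "radius", "single-challenge")
b = make_method_id("hotp", "http", "single-challenge")
assert a != b
```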

Now a side step to the actual architecture of the future KDC-based
solution. Say there are two different methods that proxy
authentication to two different external servers. Each of these
implementations would have to establish a secure connection to an
external server. This means that each implementation would need to
keep some configuration and some security credentials somewhere on the
system. These configurations and credentials should be isolated from
each other. If the proxy implementations are just KDC plugins, such
isolation can't be easily accomplished. It would suffer from the same
security problems as the NSS and PAM stacks, which led to the birth of
SSSD. We should plan for a similar architecture from day 1. We should
also not forget that in the general case several KDC worker processes
would be configured.
So here is how I see the architecture (I should probably draw a
picture; let me know if it makes sense to do so).

* The user starts an authentication
* The user's credential is sent to the KDC via the FAST tunnel
* The KDC is made capable of doing asynchronous operations via plugins
* There is a special proxy plugin that we will create
* This proxy plugin will detect what kind of authentication method
should be used for the account by querying the back-end data store
* Based on the selected method it will send the data to the
corresponding method responder
* A method responder is a process started on the same box to handle a
specific OTP-type+implementation-type+sequence-type method.
* A method responder will consist of four different parts:
a) Transport socket
b) Common processing loop tied to the communication socket
c) Generic provider interface that the loop will call
d) Loadable shared library implementing the actual provider. The
idea is to take vendor client components and wrap them in a specific
interface we will define. This effort is conceptually similar to
the libverto work that is now nearly complete.
* The responder will get the request and related data and invoke a
specific interface call
* The wrapper library will call the actual vendor/method specific call
or operation
* The operation will do its magic, communicating with the external server.
* The external server will respond, and the corresponding processing
will happen within the vendor/method-specific client library
* As a result, the responder will be told what response it needs to
send back to the proxy plugin within the specific instance of the KDC
worker
* The proxy plugin in the KDC will get the reply from the responder
and interact with the KDC to either:
a) Complete authentication successfully
b) Fail authentication
c) Continue the authentication sequence with a new challenge
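The four responder parts above can be sketched as follows. This is a minimal in-process sketch in Python with invented names (VendorProvider, Responder, verify); the real transport socket is elided and modeled as a plain function call so the structure stays visible, and nothing here is real KDC or vendor API.

```python
# Hypothetical sketch of a method responder's internal structure.

class VendorProvider:
    """d) Loadable vendor library wrapped in the interface we define."""
    def verify(self, user, otp):
        # Stands in for the vendor client library talking to the
        # external authentication server.
        return otp == "123456"

class Responder:
    def __init__(self, provider):
        self.provider = provider      # c) generic provider interface

    def handle(self, request):
        """b) Common processing loop body: one request in, one reply out."""
        ok = self.provider.verify(request["user"], request["otp"])
        return {"result": "success" if ok else "fail"}

# a) The transport socket is elided; handle() is what its read
# loop would call for each incoming request from the proxy plugin.
responder = Responder(VendorProvider())
```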

There are several open policy-related questions here.
1) How does the client know what to prompt the user for? The current
answer is: it knows because it is either configured that way or gets
its policies from some kind of central server that can determine how
the user should be prompted and pass this information to the client
software. In either case it is outside the scope of the Kerberos
protocol (at least for now...)
2) The proxy plugin needs to know where (to which responder) to send
the information. Here are the factors that need to be considered in
making this decision:
* What authentication credential and method did the client actually
invoke? I am a little bit fuzzy about the FAST protocol here, but I
assume that multiple FAST methods will eventually be implemented, and
there might be different configurations of how data is sent inside the
OTP packet inside the FAST tunnel.
* What stage of the authentication are we at, for multi-stage
authentication sequences?
* What method is configured for the specific account, including
private method data? It seems that the private data should be viewed
as a blob; however, in the general case there should be 2 blobs of
data: one associated with the authenticating principal and another in
the method-specific record/entry in the back-end store.
Within the proxy plugin there should be another internal callback
that will drive the business logic.
I see it as follows:
1) The proxy plugin will get the 3 pieces of information listed above
and pass them to the callback
2) The callback will have to decide which responder to talk to and
what information to send to it
3) The proxy plugin will get back from the callback the following
info: where to send and what to send.
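The callback's contract could be sketched like this. All field names, the socket path convention, and the function signature are assumptions for illustration; the point is only that the callback alone interprets the account blob and returns (where to send, what to send).

```python
# Hypothetical sketch of the business-logic callback.

def routing_callback(fast_method, auth_stage, account_blob):
    """Decide which responder to contact and what payload to hand it.

    fast_method  -- which FAST/OTP packaging the client used
    auth_stage   -- stage number within a multi-stage sequence
    account_blob -- per-account KVPs; only this callback interprets them
    """
    method_id = account_blob["method"]   # the one required key
    # Invented convention: one UNIX socket per responder, named
    # after the method ID.
    destination = "/var/run/otp/%s.sock" % method_id.replace("/", "-")
    payload = {
        "stage": auth_stage,
        "fast_method": fast_method,
        "private": {k: v for k, v in account_blob.items()
                    if k != "method"},
    }
    return destination, payload
```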

So now we are getting close to the nature of Linus's question about the
configuration data in the back end.

1) The proxy plugin calls a storage driver interface to get the data
configured for a specific account. It uses the principal as input. As
a result it gets a blob of data associated with the user account. The
contents of this blob are specific to the method. The only part that
should be able to understand what it contains is the business logic
callback mentioned above. This blob should be treated as a list of
KVPs, and the administrative interface should treat it as such too.
Examples of the data that can be configured here are the mapped userid
for the external system, the token ID, or something else. The only
required piece of data is the method name - a unique string that
identifies the method.
2) The proxy plugin calls a storage driver interface to get the data
configured for the method. This data stays the same regardless of
which account is currently authenticating. The result of the call is
another blob of data holding another set of KVPs. This set of KVPs
will have keys and values that allow the proxy plugin to send data to
a specific responder: for example a UNIX socket name, a pipe name, or
maybe some other data that will help the actual client implementation
connect to the external server.
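The two blobs above could look like this. The "key=value" line encoding is purely an assumption (the message leaves the wire format unspecified), as are all the key names; the sketch only shows the account blob and method blob being decoded into KVP sets.

```python
# Hypothetical sketch: decoding the two KVP blobs. The encoding
# (one "key=value" pair per line) is an illustrative assumption.

def decode_kvp_blob(blob):
    """Turn a raw blob (bytes) into a dict of key/value pairs."""
    pairs = {}
    for line in blob.decode("utf-8").splitlines():
        if line:
            key, _, value = line.partition("=")
            pairs[key] = value
    return pairs

# Per-account blob: interpreted only by the business-logic callback.
account = decode_kvp_blob(b"method=hotp/radius/single-challenge\n"
                          b"external_userid=jdoe\n"
                          b"token_id=T42\n")

# Per-method blob: the same for every account using this method.
method = decode_kvp_blob(b"socket=/var/run/otp/hotp.sock\n"
                         b"timeout=5\n")
```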

So let us get into the specifics of the current implementation. Since
there are no proxy plugin and responders yet, and plugins are
currently integrated directly into the KDC, I assume there will be
only one OTP plugin possible at a time. There is no need to determine
where to send the data, so the common per-method blob is not yet
needed. We just need the per-account data, but it is more generic than
an OTP token identity. I do not think we should deal with method
selection now, as it would add more complexity to the configuration
data than it should have. IMO having all these plugins inside the KDC
is the wrong approach long term (but see below).

A successful OTP authentication for now follows this process on the KDC.

  (-1) The KDC is configured with a specific OTP plugin.

  (1) The kdb is searched for an OTP method account specific data blob
      (KRB5_TL_ACCOUNT_OTP_BLOB) matching the principal used.

  (2) The only registered OTP plugin is invoked with the blob
  (3) The result from (2) is returned.

  One new tl-data type is defined for the krbExtraData field in the
  Kerberos database, KRB5_TL_ACCOUNT_OTP_BLOB.
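Step (1) amounts to scanning the principal's tl-data list for the new type. A sketch, assuming an in-memory shape of (type, contents) pairs; the tag value 100 is a placeholder, not the real assigned number for KRB5_TL_ACCOUNT_OTP_BLOB.

```python
# Hypothetical sketch of the tl-data lookup in step (1).

KRB5_TL_ACCOUNT_OTP_BLOB = 100   # placeholder tag value

def find_tl_data(tl_list, wanted_type):
    """Return the first tl-data entry of the wanted type, or None."""
    for tl_type, contents in tl_list:
        if tl_type == wanted_type:
            return contents
    return None

# Invented per-principal tl-data list with one unrelated entry.
principal_tl = [(1, b"something-else"),
                (KRB5_TL_ACCOUNT_OTP_BLOB, b"method=hotp\ntoken_id=T42\n")]

blob = find_tl_data(principal_tl, KRB5_TL_ACCOUNT_OTP_BLOB)
```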

A successful OTP authentication in future follows this process on the KDC.

  (-1) The KDC is configured with proxy plugin and responders are started.

  (1) The kdb is searched for an OTP method account specific data blob
      (KRB5_TL_ACCOUNT_OTP_BLOB) matching the principal used.

  (2) The method ID is extracted from the blob KRB5_TL_ACCOUNT_OTP_BLOB
  (3) The kdb is searched for an OTP method generic data blob
      (KRB5_TL_METHOD_OTP_BLOB) matching the method ID used.

  (4) KRB5_TL_METHOD_OTP_BLOB is decomposed and the data is proxied to the right responder

  (5) The responder interacts with the external server and responds
  (6) The result from (5) is sent to the client.

I do not like the idea of passing any data from the client about the
token ID. The server should be able to figure it out based on the
data stored in the account blob.

For now, if you want the OTP plugins to actually coexist and a
specific method to be selectable on the fly, you might create a very
thin proxy plugin that wraps the methods you already implemented and
picks the right plugin based on the method ID extracted from
KRB5_TL_ACCOUNT_OTP_BLOB. That would be a very good interim solution
before we introduce the next level of complexity described above. In
this case:

  (-1) The KDC is configured with thin proxy wrapper plugin

  (1) The kdb is searched for an OTP method account specific data blob
      (KRB5_TL_ACCOUNT_OTP_BLOB) matching the principal used.

  (2) The method ID is extracted from the blob KRB5_TL_ACCOUNT_OTP_BLOB

  (3) The right method is called, passing in the rest of the blob

  (4) The method does the work
  (5) The result from (4) is sent to the client.
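Steps (2)-(4) of this interim scheme could be sketched as a simple dispatch table. Everything here is assumed for illustration: the existing plugins are modeled as plain functions, and the method ID is assumed to sit at the front of the account blob, separated from the rest by a newline.

```python
# Hypothetical sketch of the thin interim proxy plugin.

def hotp_plugin(blob_rest):
    return "success"          # stand-in for the real HOTP plugin

def totp_plugin(blob_rest):
    return "success"          # stand-in for the real TOTP plugin

# Registry of the already-implemented plugins, keyed by method ID.
PLUGINS = {"hotp": hotp_plugin, "totp": totp_plugin}

def thin_proxy(account_blob):
    """Steps (2)-(4): extract the method ID, dispatch, pass the rest."""
    method_id, _, rest = account_blob.partition("\n")
    plugin = PLUGINS.get(method_id)
    if plugin is None:
        return "fail"         # unknown method -> fail authentication
    return plugin(rest)
```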


Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.

