Overall shape of Python-based test framework
ghudson at MIT.EDU
Mon Feb 22 13:53:09 EST 2010
With 1.8 winding up, I'm trying to get cracking on a Python-based test
framework. The sooner we have it, the sooner we can start using it
for 1.9 work.
We've had some internal discussions about how this should look. What
I would prefer is to create a library which can be used by individual
Python test programs scattered around the tree near the functionality
they test, much like the C unit tests are. The general workflow would be:
1. Developer adds new functionality or fixes bug in code which can
only be tested in a running Kerberos environment.
2. Developer creates C test program to exercise code (assuming
running environment), or identifies existing commands which can
exercise it (kinit, etc.).
3. Developer creates Python script which uses the library to set up
the krb5 environment, executes the C test program or existing
commands, and tears down the environment.
4. Developer adds a check-unix rule to execute the Python test script.
5. At some point in the future, the test fails. Developer runs the
test program with a special flag or environment variable to
facilitate running the test commands under a debugger. (Haven't
worked out the exact process.)
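The steps above can be sketched in Python. This is only an illustration of the shape such a test script might take; the `K5Realm` class and the `KRB5_TEST_DEBUG` variable are assumptions invented for this sketch, not an existing or proposed API:

```python
import os
import shlex
import shutil
import subprocess
import tempfile

class K5Realm:
    """Hypothetical library helper: set up a throwaway krb5 environment
    in a temporary directory and tear it down afterward (step 3)."""

    def __init__(self):
        self.tmpdir = tempfile.mkdtemp(prefix='krb5test')
        # A real library would also create a KDC database and start the
        # daemons here; this sketch only writes a minimal profile.
        self.krb5_conf = os.path.join(self.tmpdir, 'krb5.conf')
        with open(self.krb5_conf, 'w') as f:
            f.write('[libdefaults]\n    default_realm = KRBTEST.COM\n')
        self.env = dict(os.environ, KRB5_CONFIG=self.krb5_conf)

    def run(self, args):
        # Step 5: with a special environment variable set, print the
        # command instead of running it, so the developer can rerun it
        # by hand under a debugger.
        if 'KRB5_TEST_DEBUG' in os.environ:
            print('would run:', ' '.join(shlex.quote(a) for a in args))
            return ''
        return subprocess.check_output(args, env=self.env).decode()

    def destroy(self):
        shutil.rmtree(self.tmpdir)

# Exercise the C test program or an existing command (kinit, etc.);
# /bin/echo stands in for that command here.
realm = K5Realm()
try:
    out = realm.run(['/bin/echo', 'exercise the code under test'])
finally:
    realm.destroy()
```

The check-unix rule from step 4 would then just invoke this script with the tree's Python interpreter.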
I like two things about this model: first, I think step 3 is
inherently easier than inserting a test into a "box of tests" like the
dejagnu test suite. Second, I think step 5 is inherently easier than
convincing a "box of tests" to execute a particular command from a
particular test under a debugger, because running an individual test
script already narrows the work to that test.
The cost is that the tests are not collected into one place, meaning:
* If any test starts failing, the whole test suite fails, and it
becomes a little more difficult to execute other tests (although
being able to run "make check" in a subdir helps).
* Because of the previous point, if you're doing work on a branch
which deliberately breaks a whole raft of tests, you can't as easily
choose which order to work on fixing the tests.
* You can't produce reports and charts for a QA manager.
* You can't as easily set up expensive resources and reuse them for a
series of tests.
I don't consider these to be significant issues for us because (1)
we're pretty good at keeping the test suite working, (2) we haven't
done much in the way of "break the world" development in my
experience, (3) we aren't big enough to have a QA manager, and (4) we
don't have any expensive resources to set up (setting up a Kerberos
environment is very fast as long as the automation is properly
designed without using sleeps).
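On point (4), "properly designed without using sleeps" means polling for readiness with a deadline rather than sleeping a fixed interval after starting a daemon. A minimal sketch of that technique; `wait_for_port` is an illustrative helper, not part of any proposed library:

```python
import socket
import time

def wait_for_port(port, host='127.0.0.1', timeout=10.0):
    """Poll until a daemon accepts connections on (host, port), instead
    of sleeping a fixed amount after starting it.  Returns True as soon
    as a connection succeeds, False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return True
        except OSError:
            time.sleep(0.05)  # brief retry interval, not a blind sleep
    return False

# Demo: a local listener stands in for a freshly started KDC.
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen(1)
port = listener.getsockname()[1]
print(wait_for_port(port))   # True once the port accepts connections
listener.close()
```

A fast daemon becomes ready in milliseconds under this scheme, while a fixed `sleep(2)` pays the worst case on every run.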