[Cad] Formally Verifying a B-Rep kernel

Peter Boyer peter.b.boyer at gmail.com
Tue Nov 10 10:02:55 EST 2015


This is a wonderful discussion, David!

If we are only looking to verify topological correctness, the Euler
Operators provide the necessary tools to maintain the Euler Characteristic
with every topology change. Martti Mäntylä's excellent book, "An
Introduction to Solid Modeling", discusses this topic in depth.
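
For manifold models, the invariant those operators preserve is the
Euler-Poincaré formula V - E + F = 2(S - G) + R, with S shells, G genus,
and R inner (ring) loops. A minimal consistency check, sketched here in
Python with purely illustrative names, might look like:

    from dataclasses import dataclass

    @dataclass
    class BRepCounts:
        # Illustrative counters only; a real kernel would derive these by
        # walking its vertex/edge/face/loop/shell tables.
        vertices: int
        edges: int
        faces: int
        rings: int    # inner (hole) loops on faces
        shells: int
        genus: int    # through-holes / handles

    def satisfies_euler_poincare(c):
        # V - E + F = 2(S - G) + R must still hold after every operator.
        return (c.vertices - c.edges + c.faces
                == 2 * (c.shells - c.genus) + c.rings)

    # A cube: 8 vertices, 12 edges, 6 faces, no rings, one shell, genus 0.
    assert satisfies_euler_poincare(BRepCounts(8, 12, 6, 0, 1, 0))
    # A block with a rectangular through-hole: two ring loops, genus 1.
    assert satisfies_euler_poincare(BRepCounts(16, 24, 10, 2, 1, 1))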

The space of topologically correct models, however, is extremely large in
comparison to the space of topologically correct, geometrically correct
(e.g., non-self-intersecting, non-negative-volume) models. Geometric
correctness checks appear to be the "magic" of implementing such a kernel.
I would be surprised (and delighted!) to see a proof-based system capable
of reasoning about a comparably complex issue.
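
Even the simplest geometric sanity check already needs the embedding, not
just the connectivity. For a closed, consistently oriented triangle mesh,
for instance, the signed volume follows from the divergence theorem, and a
negative value flags an inside-out solid. A rough sketch, with purely
illustrative names:

    def signed_volume(vertices, triangles):
        # Sum the signed volumes of tetrahedra formed by each triangle and
        # the origin; the total is negative for a reversed boundary.
        total = 0.0
        for i, j, k in triangles:
            ax, ay, az = vertices[i]
            bx, by, bz = vertices[j]
            cx, cy, cz = vertices[k]
            # Scalar triple product a . (b x c) is six times the volume.
            total += (ax * (by * cz - bz * cy)
                      - ay * (bx * cz - bz * cx)
                      + az * (bx * cy - by * cx))
        return total / 6.0

    # An outward-oriented unit tetrahedron has volume 1/6.
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
    assert abs(signed_volume(verts, tris) - 1 / 6) < 1e-12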

Furthermore, we are typically limited to fixed-precision arithmetic for
operations that require infinite precision. This has a multiplicative
effect on the complexity of implementing, e.g., boolean operations. One
might suggest using rational arithmetic to represent all geometry, but I
think this rapidly limits us to simple surface and curve representations
(like lines and planes).
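
For the linear cases the rational route really is exact. A tiny, purely
illustrative sketch using Python's fractions, intersecting a line with a
plane:

    from fractions import Fraction as Q

    def line_plane_intersection(p0, p1, normal, offset):
        # Exact intersection of the line through p0, p1 with the plane
        # normal . x = offset. With Fraction coordinates the parameter t
        # solves a linear equation with rational coefficients, so the
        # result is exact.
        d = [b - a for a, b in zip(p0, p1)]
        denom = sum(n * di for n, di in zip(normal, d))
        if denom == 0:
            return None  # line is parallel to the plane
        t = (offset - sum(n * a for n, a in zip(normal, p0))) / denom
        return tuple(a + t * di for a, di in zip(p0, d))

    hit = line_plane_intersection(
        (Q(0), Q(0), Q(0)), (Q(1), Q(3), Q(7)),   # line through two points
        (Q(1), Q(1), Q(1)), Q(1))                 # plane x + y + z = 1
    assert sum(hit) == 1   # exactly on the plane, no epsilon needed

A line against a sphere, by contrast, already calls for a square root that
no rational type can represent, and free-form surfaces only get worse.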

I should say that I'm very interested to see a system that combines the
control of free-form surfaces with the simplicity of verification that
comes with simpler geometric representations.

On a side note, I wouldn't suggest we use the size of the ACIS binaries as
a heuristic for the complexity of implementing a kernel. There are so many
years of development in that system, and it was never written with the
intent of minimizing the size of the kernel. You might also find that the
core libraries are rather small in comparison to the entire system.

~Peter





On Mon, Nov 9, 2015 at 4:12 PM, Andres Erbsen <andreser at mit.edu> wrote:

> Hi,
>
> Quoting Andrew Zukoski (2015-11-08 20:27:33)
> > Interesting idea! I can think of two challenges; one is the sheer scale
> > of most kernel codebases (e.g., the ACIS binaries are around 100 MB).
> > The other is that writing the specification for a kernel would itself be
> > a significant task and at least at first glance would have to make heavy
> > use of rich, dependent types. The state of a geometric program that has
> > 'gone wrong' seems much harder to encode than a null pointer dereference.
> >
> > I wonder if there is a way to scope down the surface area that would
> > need to be formally verified to still have a significant impact on the
> > development of a rich kernel. At least a few folks are investigating this:
> > [1]http://www.researchgate.net/publication/221610369_A_Formally_Verified_Geometric_Modelling_Core
> > , but I'm not aware of any broad uptake or reusable components that have
> > been verified.
>
> I will comment on the two challenges in the opposite order: starting
> with specification and moving on to proof engineering.
>
> Disclaimer: I do not know much about B-rep cad implementation internals;
> I am also a firm believer in the utility of formal verification.
>
> If the issue is computing a representation of the intersection (resp.
> union) of two objects represented using B-reps, then here is a possible
> specification for that: a point is inside the output object iff it is in
> both (resp. either) of the input objects. I am assuming defining when a
> point is inside an object is not hard (e.g., a ray from that point
> intersects with the surface some odd number of times).
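>
> A rough, purely illustrative sketch of that membership test for a closed,
> triangulated boundary, assuming a ray chosen to miss edges and vertices
> (the degenerate cases a careful implementation would have to handle):
>
>     def _sub(u, v):
>         return tuple(x - y for x, y in zip(u, v))
>
>     def _cross(u, v):
>         return (u[1] * v[2] - u[2] * v[1],
>                 u[2] * v[0] - u[0] * v[2],
>                 u[0] * v[1] - u[1] * v[0])
>
>     def _dot(u, v):
>         return sum(x * y for x, y in zip(u, v))
>
>     def ray_hits_triangle(origin, direction, a, b, c, eps=1e-12):
>         # Moller-Trumbore ray/triangle test; counts only hits with t > 0.
>         e1, e2 = _sub(b, a), _sub(c, a)
>         p = _cross(direction, e2)
>         det = _dot(e1, p)
>         if abs(det) < eps:
>             return False  # ray parallel to the triangle's plane
>         s = _sub(origin, a)
>         u = _dot(s, p) / det
>         q = _cross(s, e1)
>         v = _dot(direction, q) / det
>         t = _dot(e2, q) / det
>         return u >= 0 and v >= 0 and u + v <= 1 and t > eps
>
>     def point_in_solid(point, vertices, triangles,
>                        direction=(0.57735, 0.57734, 0.57736)):
>         # An odd number of crossings along a generic ray means "inside".
>         hits = sum(ray_hits_triangle(point, direction,
>                                      *(vertices[i] for i in tri))
>                    for tri in triangles)
>         return hits % 2 == 1
>
>     # Unit tetrahedron: one query point inside, one outside.
>     verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
>     tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
>     assert point_in_solid((0.1, 0.1, 0.1), verts, tris)
>     assert not point_in_solid((1.0, 1.0, 1.0), verts, tris)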
>
> I am not saying that all operations will be that easy to specify (or
> even that this specification is sufficient for all needs), but I am also
> optimistic that verifying similar sanity properties can be helpful in
> improving the reliability of the software.
>
> Generalizing from there, the operations supported by both B-reps and
> F-reps can quite meaningfully be verified by proving that they are
> equivalent: if F1, F2 and B1,B2 represent the same objects (as F-reps
> and B-reps respectively), then f_B(B1,B2) represents the same object as
> f_F(F1, F2). This of course does not help you any if the operation is
> hard to define in one of the representations.
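>
> A far weaker, but executable, stand-in for that equivalence is a random
> sampling check. The sketch below is purely illustrative and treats the
> kernel's point-membership query as a black box:
>
>     import random
>
>     # F-reps as membership predicates; the boolean operations are trivial
>     # on this side of the equivalence.
>     def frep_box(lo, hi):
>         return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))
>
>     def frep_intersection(f1, f2):
>         return lambda p: f1(p) and f2(p)
>
>     def agree_on_samples(inside_brep_result, f1, f2, n=10000, seed=0):
>         # The B-rep result of an intersection must classify sampled points
>         # exactly as the corresponding F-rep expression does.
>         rng = random.Random(seed)
>         reference = frep_intersection(f1, f2)
>         samples = (tuple(rng.uniform(-2, 4) for _ in range(3))
>                    for _ in range(n))
>         return all(inside_brep_result(p) == reference(p) for p in samples)
>
>     # Demo with a correct stand-in for the kernel's output membership test.
>     f1 = frep_box((0, 0, 0), (2, 2, 2))
>     f2 = frep_box((1, 1, 1), (3, 3, 3))
>     kernel_output = frep_box((1, 1, 1), (2, 2, 2))  # the true intersection
>     assert agree_on_samples(kernel_output, f1, f2)
>
> A proof would of course quantify over all points and all well-formed
> inputs instead of sampling, but even this catches the gross mistakes.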
>
> The cited paper is based on a methodology from two decades ago,
> implemented using tools from 10 years ago. I would expect that today's
> proof automation can scale up from this (20 years ago it was news to
> verify a small finite state machine, now there are verified web servers,
> OS kernels and quadcopter controllers). Which is not to say that it is
> trivial to do or does not require expertise; yet I continue to be
> optimistic.
>
> And this leads us to what I think is the main thing that makes
> verification hard: the lack of a "standard library" of verified
> primitives, be they geometric or algorithmic. Often it has not been the
> complexity of the algorithm or the elusiveness of the specification that
> has left good software unverified; rather, it is that the path of least
> resistance towards the next product/paper is to use readily available
> non-verified building blocks instead of spending more time on creating
> verified equivalents. In which cases this greedy approach actually saves
> net time (after accounting for debugging/support/disaster management and
> such) is a very open question.
>
> I think that verifying the fundamental operations that "everything else"
> depends on is quite often a good idea, especially when they are notorious for
> being prone to implementation error. The common subset of F-rep and
> B-rep operations is one possible choice of this core. Yet the practical
> utility of verifying this obviously depends on what kind of expertise
> the people working on this will have and how much the correctness of the
> core actually matters (i.e., if it is wrong, how much money/time is
> lost).
>
> Those are my two cents.
>
> Best,
> Andres
>
> >
> > Andrew
> >
> >
> > On Sun, Nov 8, 2015 at 5:13 PM, David Kaufman
> > <[2]davidgilkaufman at gmail.com> wrote:
> >
> >     My understanding from our discussion last week is that writing a
> >     B-Rep kernel is difficult primarily because there are tons of edge
> >     cases for how objects can intersect each other and it's really hard
> >     to get them all correct/test for all of them. If this is the main
> >     challenge then my intuition is that it would be best solved by
> >     building a formally verified B-Rep kernel (and that perhaps it
> >     would take much less time to write that way). Do any of you know if
> >     anyone has tried to write a formally verified B-Rep kernel or why
> >     not?
> >
> >
> >
> > References:
> >
> > [1] http://www.researchgate.net/publication/221610369_A_Formally_Verified_Geometric_Modelling_Core
> > [2] mailto:davidgilkaufman at gmail.com
>

